<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ethics on MacWorks</title><link>https://macworks.dev/tags/ethics/</link><description>Recent content in Ethics on MacWorks</description><generator>Hugo</generator><language>en</language><atom:link href="https://macworks.dev/tags/ethics/index.xml" rel="self" type="application/rss+xml"/><item><title>Engineer Reads</title><link>https://macworks.dev/docs/week/blogs/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/week/blogs/</guid><description>&lt;h1 id="engineering-reads--week-of-2026-04-02-to-2026-04-10"&gt;Engineering Reads — Week of 2026-04-02 to 2026-04-10&lt;a class="anchor" href="#engineering-reads--week-of-2026-04-02-to-2026-04-10"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="week-in-review"&gt;Week in Review&lt;a class="anchor" href="#week-in-review"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;This week&amp;rsquo;s reading reflects a fundamental inflection point: raw LLM intelligence is no longer the bottleneck in software development. Instead, the industry is pivoting toward the hard systems engineering required to constrain probabilistic models—whether through strict data ledgers, living specifications, or formal verification harnesses. The dominant debate centers on how we preserve architectural taste, mechanical sympathy, and system ethics as the mechanical act of writing code becomes increasingly commoditized.&lt;/p&gt;</description></item><item><title>2026-04-10</title><link>https://macworks.dev/docs/week/blogs/engineer-blogs-2026-04-10/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/week/blogs/engineer-blogs-2026-04-10/</guid><description>&lt;h1 id="engineering-reads--2026-04-10"&gt;Engineering Reads — 2026-04-10&lt;a class="anchor" href="#engineering-reads--2026-04-10"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="the-big-idea"&gt;The Big Idea&lt;a class="anchor" href="#the-big-idea"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;As AI abstractions upend our relationship with code, engineering craft is bifurcating: we must grapple with emergent behaviors in massive models while deliberately preserving the mechanical, systems-level intuition that has historically grounded software ethics.&lt;/p&gt;
&lt;h2 id="deep-reads"&gt;Deep Reads&lt;a class="anchor" href="#deep-reads"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://eli.thegreenplace.net/2026/watgo-a-webassembly-toolkit-for-go/"&gt;watgo - a WebAssembly Toolkit for Go&lt;/a&gt;&lt;/strong&gt; · Eli Bendersky
This piece introduces &lt;code&gt;watgo&lt;/code&gt;, a zero-dependency WebAssembly toolkit written in pure Go that parses, validates, encodes, and decodes WASM. The core of the system lowers WebAssembly Text (WAT) to a semantic intermediate representation called &lt;code&gt;wasmir&lt;/code&gt;, flattening syntactic sugar to match WASM&amp;rsquo;s strict binary execution semantics. To validate correctness, &lt;code&gt;watgo&lt;/code&gt; runs the official 200K-line WebAssembly specification test suite, converting &lt;code&gt;.wast&lt;/code&gt; files to binary and executing them against a Node.js harness. An earlier pure-Go execution pipeline built on &lt;code&gt;wazero&lt;/code&gt; was abandoned because that runtime lacked support for recent WASM garbage-collection proposals. Engineers working on compilers, parsers, or WebAssembly infrastructure should read this as a masterclass in leveraging specification test suites to bootstrap confidence in new tooling.&lt;/p&gt;</description></item></channel></rss>