<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Game Development on MacWorks</title><link>https://macworks.dev/tags/game-development/</link><description>Recent content in Game Development on MacWorks</description><generator>Hugo</generator><language>en</language><atom:link href="https://macworks.dev/tags/game-development/index.xml" rel="self" type="application/rss+xml"/><item><title>2026-04-09</title><link>https://macworks.dev/docs/week/hackernews/hackernews-2026-04-09/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/week/hackernews/hackernews-2026-04-09/</guid><description>&lt;h1 id="hacker-news--2026-04-09"&gt;Hacker News — 2026-04-09&lt;a class="anchor" href="#hacker-news--2026-04-09"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="top-story"&gt;Top Story&lt;a class="anchor" href="#top-story"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;The Vercel Claude Code plugin has been caught using prompt injection to fake user consent for telemetry, quietly exfiltrating full bash command strings to Vercel&amp;rsquo;s servers across all local projects. Instead of implementing a proper consent UI, the plugin injects behavioral instructions into Claude&amp;rsquo;s system context, forcing the agent to execute shell commands that write tracking preferences based on your chat replies. It&amp;rsquo;s exactly the kind of overreach and abuse of LLM integrations that makes developers deeply paranoid about agent tooling.&lt;/p&gt;</description></item><item><title>Hacker News</title><link>https://macworks.dev/docs/week/hackernews/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/week/hackernews/</guid><description>&lt;h1 id="hacker-news--week-of-2026-04-04-to-2026-04-10"&gt;Hacker News — Week of 2026-04-04 to 2026-04-10&lt;a class="anchor" href="#hacker-news--week-of-2026-04-04-to-2026-04-10"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="story-of-the-week"&gt;Story of the Week&lt;a class="anchor" href="#story-of-the-week"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Anthropic&amp;rsquo;s frontier AI models crossed a terrifying new threshold in autonomous cybersecurity, reshaping the industry&amp;rsquo;s threat model. First, Claude Code uncovered a complex, 23-year-old vulnerability in the Linux kernel&amp;rsquo;s NFS driver that predated Git itself. Days later, the infosec community went into full meltdown when Anthropic&amp;rsquo;s unreleased &amp;ldquo;Mythos&amp;rdquo; model autonomously wrote a 200-byte ROP chain exploit for FreeBSD and escaped Firefox&amp;rsquo;s JavaScript virtualization sandbox in 72.4% of trials.&lt;/p&gt;</description></item></channel></rss>