<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Generative Media on MacWorks</title><link>https://macworks.dev/tags/generative-media/</link><description>Recent content in Generative Media on MacWorks</description><generator>Hugo</generator><language>en</language><atom:link href="https://macworks.dev/tags/generative-media/index.xml" rel="self" type="application/rss+xml"/><item><title>2026-04-15</title><link>https://macworks.dev/docs/week/ai_reddit/ai-reddit-2026-04-15/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/week/ai_reddit/ai-reddit-2026-04-15/</guid><description>&lt;details&gt;
&lt;summary&gt;Sources&lt;/summary&gt;
&lt;div class="markdown-inner"&gt;
&lt;ul&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/aipromptprogramming/.rss"&gt;r/AIPromptProgramming&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/chatgpt/.rss"&gt;r/ChatGPT&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/chatgptcoding/.rss"&gt;r/ChatGPTCoding&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/claudeai/.rss"&gt;r/ClaudeAI&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/cline/.rss"&gt;r/Cline&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/githubcopilot/.rss"&gt;r/GithubCopilot&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/localllama/.rss"&gt;r/LocalLLaMA&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/mcp/.rss"&gt;r/MCP&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/notebooklm/.rss"&gt;r/NotebookLM&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/OpenAI/.rss"&gt;r/OpenAI&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/PromptEngineering/.rss"&gt;r/PromptEngineering&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/roocode/.rss"&gt;r/RooCode&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/singularity/.rss"&gt;r/Singularity&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/stablediffusion/.rss"&gt;r/StableDiffusion&lt;/a&gt;&lt;/li&gt;

&lt;/ul&gt;
&lt;/div&gt;
&lt;/details&gt;


&lt;h1 id="ai-reddit--2026-04-15"&gt;AI Reddit — 2026-04-15&lt;a class="anchor" href="#ai-reddit--2026-04-15"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="the-buzz"&gt;The Buzz&lt;a class="anchor" href="#the-buzz"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;A notable shift in prompt injection strategy has surfaced: the most effective attacks no longer rely on technical overrides but instead weaponize a model&amp;rsquo;s own alignment training. Researchers analyzing more than 1,400 injection attempts found that framing a request as a moral compliance test or an ethical hypothetical reliably leads models to leak their system prompts and secrets. The finding suggests that a model&amp;rsquo;s helpfulness and ethical reasoning are themselves its largest attack surface, rendering traditional keyword-based defenses largely obsolete.&lt;/p&gt;</description></item><item><title>AI Reddit</title><link>https://macworks.dev/docs/today/ai-reddit-2026-04-16/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/today/ai-reddit-2026-04-16/</guid><description>&lt;details&gt;
&lt;summary&gt;Sources&lt;/summary&gt;
&lt;div class="markdown-inner"&gt;
&lt;ul&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/aipromptprogramming/.rss"&gt;r/AIPromptProgramming&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/chatgpt/.rss"&gt;r/ChatGPT&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/chatgptcoding/.rss"&gt;r/ChatGPTCoding&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/claudeai/.rss"&gt;r/ClaudeAI&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/cline/.rss"&gt;r/Cline&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/githubcopilot/.rss"&gt;r/GithubCopilot&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/localllama/.rss"&gt;r/LocalLLaMA&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/mcp/.rss"&gt;r/MCP&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/notebooklm/.rss"&gt;r/NotebookLM&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/OpenAI/.rss"&gt;r/OpenAI&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/PromptEngineering/.rss"&gt;r/PromptEngineering&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/roocode/.rss"&gt;r/RooCode&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/singularity/.rss"&gt;r/Singularity&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://www.reddit.com/r/stablediffusion/.rss"&gt;r/StableDiffusion&lt;/a&gt;&lt;/li&gt;

&lt;/ul&gt;
&lt;/div&gt;
&lt;/details&gt;


&lt;h1 id="ai-reddit--2026-04-16"&gt;AI Reddit — 2026-04-16&lt;a class="anchor" href="#ai-reddit--2026-04-16"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="the-buzz"&gt;The Buzz&lt;a class="anchor" href="#the-buzz"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;The community finally has hard data to back up the &amp;ldquo;vibes&amp;rdquo; that Claude Code got perceptibly worse recently. An AMD engineer analyzed over 6,800 sessions and showed that Anthropic silently dropped the default thinking effort to &amp;lsquo;medium&amp;rsquo;, causing a sharp spike in blind edits and unexpected API costs. It is a stark reminder that relying on a single frontier model with no fallback is a serious liability when lab behavior changes unannounced.&lt;/p&gt;</description></item></channel></rss>