<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Social-Engineering on MacWorks</title><link>https://macworks.dev/tags/social-engineering/</link><description>Recent content in Social-Engineering on MacWorks</description><generator>Hugo</generator><language>en</language><atom:link href="https://macworks.dev/tags/social-engineering/index.xml" rel="self" type="application/rss+xml"/><item><title>Week 14 Summary</title><link>https://macworks.dev/docs/month/simonwillison/weekly-2026-W14/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/month/simonwillison/weekly-2026-W14/</guid><description>&lt;h1 id="simon-willison--week-of-2026-03-30-to-2026-04-03"&gt;Simon Willison — Week of 2026-03-30 to 2026-04-03&lt;a class="anchor" href="#simon-willison--week-of-2026-03-30-to-2026-04-03"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="highlight-of-the-week"&gt;Highlight of the Week&lt;a class="anchor" href="#highlight-of-the-week"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;This week marked a monumental shift in the open-source security landscape: the era of &amp;ldquo;AI slop&amp;rdquo; security reports ended abruptly, replaced by a wave of high-quality, AI-generated vulnerability discoveries. High-profile maintainers of the Linux kernel, curl, and HAProxy are reporting an overwhelming influx of legitimate bugs found by AI agents, fundamentally altering the economics of exploit development and forcing open-source projects to adapt rapidly to the surge in valid bug reports.&lt;/p&gt;</description></item><item><title>2026-04-03</title><link>https://macworks.dev/docs/archives/simonwillison/simonwillison-2026-04-03/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/archives/simonwillison/simonwillison-2026-04-03/</guid><description>&lt;h1 id="simon-willison--2026-04-03"&gt;Simon Willison — 2026-04-03&lt;a class="anchor" href="#simon-willison--2026-04-03"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="highlight"&gt;Highlight&lt;a class="anchor" href="#highlight"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;The overarching theme today is a sudden, step-function improvement in AI-driven vulnerability research. Major open-source maintainers are simultaneously reporting that the era of &amp;ldquo;AI slop&amp;rdquo; security reports has ended, replaced by a flood of highly accurate, AI-generated bug discoveries that are drastically changing the economics of exploit development.&lt;/p&gt;
&lt;h2 id="posts"&gt;Posts&lt;a class="anchor" href="#posts"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Vulnerability Research Is Cooked&lt;/strong&gt; · &lt;a href="https://simonwillison.net/2026/Apr/3/vulnerability-research-is-cooked/#atom-everything"&gt;Source&lt;/a&gt;
Highlighting Thomas Ptacek&amp;rsquo;s commentary, Simon notes that frontier models are uniquely suited to exploit development thanks to their baked-in knowledge of bug classes, ability to hold large amounts of source code in context, and pattern-matching capabilities. Because LLMs never tire of constraint-solving for exploitability, agents pointed at a source tree and set loose to hunt for zero-days are poised to drastically alter the security landscape. Simon is tracking this trend closely enough that he just created a dedicated &lt;code&gt;ai-security-research&lt;/code&gt; tag to follow it.&lt;/p&gt;</description></item></channel></rss>