Simon Willison — 2026-04-03

Highlight

The overarching theme today is a sudden, step-function improvement in AI-driven vulnerability research. Major open-source maintainers are simultaneously reporting that the era of “AI slop” security reports has ended, replaced by a tsunami of accurate, AI-generated bug discoveries that is drastically changing the economics of exploit development.

Posts

Vulnerability Research Is Cooked · Source Highlighting Thomas Ptacek’s commentary, Simon notes that frontier models are uniquely suited to exploit development thanks to their baked-in knowledge of bug classes, their ability to hold large amounts of source code in context, and their pattern-matching strength. Because LLMs never get bored constraint-solving for exploitability, agents simply pointed at source trees and told to hunt for zero-days are set to drastically alter the security landscape. Simon is tracking the trend closely enough that he just created a dedicated ai-security-research tag to follow it.

Quoting Greg Kroah-Hartman · Source Linux kernel maintainer Greg Kroah-Hartman observes that while early AI security reports were laughably wrong “slop,” a switch flipped about a month ago. Now, major open-source projects are receiving high-quality, real security reports generated by AI.

Quoting Daniel Stenberg · Source Echoing the Linux kernel team, cURL lead developer Daniel Stenberg reports spending intense hours every day managing this new influx. He notes the challenge has shifted from a tsunami of AI slop to a tsunami of genuine, high-quality security reports.

Quoting Willy Tarreau · Source HAProxy lead developer Willy Tarreau provides stark numbers for the trend: kernel security reports have jumped from 2-3 a week a few years ago to 5-10 a day recently, and most of them are correct. The influx is severe enough that they have had to bring in more maintainers, and they are now seeing identical bugs found by multiple people using different tools.

The Axios supply chain attack used individually targeted social engineering · Source Simon highlights a postmortem from the Axios team regarding a malware dependency introduced via targeted social engineering. Attackers impersonated a company founder, set up a highly convincing branded Slack workspace, and tricked a maintainer into installing a Remote Access Trojan (RAT) via a fake MS Teams update during a scheduled call. Simon warns that the time pressure of joining a last-minute meeting makes this kind of “just click yes” prompt a highly effective scam, one all OSS maintainers need to be aware of.

Can JavaScript Escape a CSP Meta Tag Inside an Iframe? · Source While trying to build his own version of Claude Artifacts, Simon researched ways to apply Content Security Policy (CSP) headers to sandboxed iframes without needing a separate domain to host the files. He discovered that injecting <meta http-equiv="Content-Security-Policy"...> tags at the top of the iframe content works securely and cannot be overridden by subsequent untrusted JavaScript.
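A minimal sketch of the technique, assuming the host page assembles the iframe document itself (the specific CSP directives and the `buildSandboxedDocument` helper here are illustrative, not Simon's actual code): the key property is that the `<meta>` policy appears in the `<head>` before any untrusted markup, so it is already in force when that content is parsed.

```javascript
// Illustrative policy; a real artifacts-style sandbox would tune
// script-src/style-src to whatever it actually needs to allow.
const CSP =
  "default-src 'none'; script-src 'unsafe-inline'; style-src 'unsafe-inline'";

// Wrap untrusted HTML in a document whose first <head> element is the
// CSP <meta> tag. Because the policy precedes the untrusted content,
// later scripts cannot relax or replace it.
function buildSandboxedDocument(untrustedHtml) {
  return [
    "<!DOCTYPE html>",
    "<html><head>",
    `<meta http-equiv="Content-Security-Policy" content="${CSP}">`,
    "</head><body>",
    untrustedHtml, // untrusted markup only appears after the policy
    "</body></html>",
  ].join("\n");
}

// In the host page this would feed a sandboxed iframe, e.g.:
//   iframe.srcdoc = buildSandboxedDocument(userHtml);
```

The assembly order is the whole trick: a `default-src 'none'` baseline blocks network fetches from the embedded content without needing a separate hosting domain.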

The cognitive impact of coding agents · Source A brief note on media formatting: a 48-second short-form vertical video clipped from Simon’s 1 hour 40 minute podcast appearance with Lenny Rachitsky went viral, attracting over 1.1 million views on Twitter.

Project Pulse

Simon’s radar is heavily tuned to open-source security right now, specifically the dual threats of highly targeted social engineering against maintainers and the sudden viability of AI agents discovering zero-days at scale. Beyond the commentary, we get a quick glimpse into his ongoing web experiments, noting that he is actively figuring out sandboxed iframes to build an open-source clone of Claude Artifacts.


Categories: Blogs, AI, Tech