
The Death of Sora and the AI-Driven Engineering Boom — 2026-03-25

Highlights

The generative video dream hit a massive wall today as Sora officially shuttered its app, validating skeptics who doubted both its commercial viability and its usefulness on the path to AGI. Counterintuitively, the widespread deployment of AI coding tools is triggering a real-time Jevons paradox: rather than eliminating jobs, the lowered cost of software production has pushed engineering demand to three-year highs. Meanwhile, the geopolitical AI landscape is shifting rapidly as China leverages cheap, open-source models and embodied AI to build massive data flywheels, exposing the vulnerabilities of the US's reliance on closed frontier models.

Top Stories

  • The Demise of Sora: The official Sora app announced its shutdown today, thanking creators while promising future details on preserving user work. Critics were quick to claim vindication, with Gary Marcus arguing that the failure proves scaling massive compute doesn’t yield AGI and that the product inherently lacked commercial viability, while Art Keller condemned the entire company as fraudulent. (Source)
  • AI Triggers an Engineering Boom: Aaron Levie highlighted a real-time Jevons paradox in which AI-driven cost reductions in software are enabling non-tech companies to tackle digital projects they previously couldn't afford. This has led to a massive surge in demand for technical oversight, with open engineering roles hitting a three-year high of over 67,000 globally. (Source)
  • China’s Open-Source AI Edge: A Reuters report highlighted by Rohan Paul notes that China is aggressively utilizing cheap open-source models to create massive data flywheels. By widely deploying these models across manufacturing and robotics, China is capturing highly valuable, hard-to-fake embodied AI data, directly challenging the US’s closed-model strategy. (Source)
  • The Problem with LLM Personalization: Andrej Karpathy critiqued current LLM memory features across all major models, noting that they tend to aggressively overfit to old context window data via RAG. The result is a distracting “trying too hard” effect where trivial past queries are perpetually surfaced as deep interests. (Source)
  • Perplexity Computer Impresses: Jack Raines highlighted the high utility of the newly released “Perplexity Computer” for handling tedious coding tasks. Despite his earlier criticism of the company’s maneuvers, Raines noted that the tool successfully one-shotted a complex data re-labeling job, outperforming Claude. (Source)
  • Gas Town Achieves Stability: Steve Yegge announced that Gas Town is now stable, with developers already building tangible applications on the platform. As proof of its utility, DoltHub’s CEO successfully built a new SQLite Dolt backend using it in just a few days. (Source)

Articles Worth Reading

The Jevons Paradox in Software Production (Source) Aaron Levie sharply diagnoses a counterintuitive market trend: AI is making software incrementally cheaper to produce, which is exponentially increasing the broader economy’s appetite to build it. As marketing teams, life science researchers, and small businesses leverage AI to automate workflows, the bottleneck shifts to the human engineers who must prompt, review, and maintain these agents when they inevitably drift. Lenny Rachitsky backs this up with hard data, noting an acceleration of engineering job openings—now at 67,000 globally—proving that the doom-mongering about the death of the software engineer was wildly premature.
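The mechanism Levie describes can be shown with a toy constant-elasticity demand curve. All numbers below are illustrative assumptions, not figures from the article: the point is only that when demand is elastic, a drop in unit cost raises total spend, and with it the need for engineers to review and maintain what gets built.

```python
# Toy illustration of the Jevons paradox in software production.
# All numbers are hypothetical, chosen only to show the mechanism:
# when unit cost falls, demand can grow faster than the savings,
# so total spend (and the engineering work behind it) rises.

def total_demand(unit_cost, elasticity=1.5, k=1000.0):
    """Constant-elasticity demand: quantity = k * cost^(-elasticity).
    Elasticity > 1 means a price drop increases total spend."""
    return k * unit_cost ** -elasticity

before_cost, after_cost = 10.0, 2.0       # AI cuts the cost of a "unit" of software 5x
q_before = total_demand(before_cost)      # projects the economy builds at the old cost
q_after = total_demand(after_cost)        # projects it builds at the new cost

spend_before = q_before * before_cost
spend_after = q_after * after_cost

print(f"projects built: {q_before:.0f} -> {q_after:.0f}")
print(f"total spend:    {spend_before:.0f} -> {spend_after:.0f}")
```

Under these assumed numbers, output rises more than tenfold and total spend more than doubles even though each project is five times cheaper, which is exactly the shape of the hiring data Rachitsky cites.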

China’s Embodied AI Data Flywheel (Source) This analysis dissects a critical strategic divergence between US and Chinese AI development. While top US labs remain fixated on monolithic, highly expensive closed frontier models, Chinese firms are flooding the zone with cheap, open-source models heavily deployed across factories, logistics, and robotics. This decentralized deployment gives China a massive structural advantage in collecting the real-world, non-synthetic action data required for embodied AI, creating a powerful feedback loop that closed ecosystems will struggle to replicate. Dan Jeffries rightly laments that the US wrote the open-source playbook but has foolishly abandoned it, to its own detriment.

The Overfitting Flaw in LLM Memory (Source) Andrej Karpathy identifies a highly relatable failure mode in modern conversational AI: personalization currently feels more like hyper-fixation. Because LLMs are biased during training to heavily weight provided context, test-time memory features cause the model to overfit on trivial past interactions. A throwaway question from two months ago can permanently skew the model’s responses, demonstrating that current implementations of long-term memory are rigid and lack the natural decay required for fluid, non-distracting interactions.