The Dial-Up Era of Agents and the Frontier of World Models — 2026-03-22#
Highlights#
Today’s discourse reveals a striking dichotomy: we are frustrated by the clunky, high-latency “dial-up” phase of coding agents even as massive architectural frontiers are being pushed in world modeling and vision. While developers mourn the loss of their “flow state” to agentic context switching, researchers at AMI and Meta FAIR are quietly dismantling the assumption that LLMs alone will lead us to AGI. The consensus is clear: we are vastly underestimating the long-term trajectory of this technology while overestimating its current polish.
Top Stories#
- Meta FAIR shatters the motion-geometry tradeoff: For years, we have accepted the tradeoff that video models hallucinate geometry while image models remain blind to motion. Meta FAIR just proved this is purely an architectural bug rather than a hard theoretical limit, paving the way for significantly more coherent world generation. (Source)
- The “Dial-Up” era of coding agents: Prominent voices note that despite the hype, we are incredibly early in agent adoption, comparable to the nascent 2010 cloud market. While the competitive advantage of using these agents is so strong that non-adopters will struggle to stay afloat, current utilization remains mostly stuck in the chatbot era. (Source)
- Perplexity’s Computer builds an autonomous trading dashboard: A developer successfully leveraged Perplexity’s Computer to build an automated dashboard that evaluates whether they should be trading. It scores the current market environment across five pillars—volatility, trend, breadth, momentum, and macro factors—combining them to offer a weighted recommendation on market positioning. (Source)
- Stanislas Dehaene drops consciousness course: For those mapping human cognition to artificial neural networks, Stan Dehaene has published six 90-minute lectures on consciousness. The series covers ignition, working memory, and neural manifolds, featuring historical context alongside unpublished experimental results. (Source)
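The five-pillar scoring approach behind the trading dashboard can be sketched roughly as follows. The pillar weights, thresholds, score scale, and function names here are illustrative assumptions; the source only states that five pillars are scored and combined into a weighted recommendation.

```python
# Hypothetical sketch of a pillar-weighted "should I be trading?" score.
# Weights and thresholds are invented for illustration, not from the source.

PILLAR_WEIGHTS = {
    "volatility": 0.25,
    "trend": 0.25,
    "breadth": 0.20,
    "momentum": 0.20,
    "macro": 0.10,
}

def composite_score(scores: dict[str, float]) -> float:
    """Combine per-pillar scores (each 0-100) into a weighted composite."""
    return sum(PILLAR_WEIGHTS[name] * scores[name] for name in PILLAR_WEIGHTS)

def recommendation(score: float) -> str:
    """Map the composite score to a coarse positioning call (assumed bands)."""
    if score >= 70:
        return "favorable: normal position sizing"
    if score >= 40:
        return "mixed: reduce size, be selective"
    return "unfavorable: stand aside"

# Example day: each pillar scored 0-100 by its own indicator logic.
today = {"volatility": 40, "trend": 65, "breadth": 55, "momentum": 60, "macro": 50}
score = composite_score(today)
print(f"{score:.2f} -> {recommendation(score)}")
```

The key design choice in any scheme like this is keeping the pillars on a common scale before weighting, so no single noisy indicator dominates the recommendation.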
Articles Worth Reading#
Saining Xie’s 7-Hour Epic: Escaping Silicon Valley’s LLM Illusion (Source) In an unprecedented 7-hour podcast interview, Saining Xie details why he twice turned down Ilya Sutskever before co-founding AMI Labs with Yann LeCun. Xie argues the tech industry is dangerously “LLM-pilled,” treating text-based language models as the sole path to AGI while neglecting the high-dimensional, noisy realities of the physical world. He advocates for true “predictive brains” based on world models, positing that replicating a squirrel’s survival and physical reasoning skills is a vastly harder problem than building an AI that can write code or pass exams. It is a refreshing, deeply philosophical defense of open research, of understanding over forced “impact,” and of fundamental representation learning over brute-force benchmark chasing.
The Loss of Developer Flow State (Source) Awni Hannun sharply articulates a hidden human cost of the current AI coding paradigm: the death of the “flow state”. Because agent latencies are still painfully high, developers are forced into constant, jarring context switching rather than locking in on a tough problem for uninterrupted hours. This serves as a vital reality check that we are merely in the “dial-up era” of agents. Until response times drop dramatically, the human-computer symbiotic experience will remain fragmented and distracting.