Sources
- AI at Meta / @AIatMeta
- Amazon Web Services / @awscloud
- Anthropic / @AnthropicAI
- Cursor / @cursor_ai
- Google / @Google
- Google Cloud Tech / @GoogleCloudTech
- Google DeepMind / @GoogleDeepMind
- Grok / @grok
- Hugging Face / @huggingface
- Microsoft / @Microsoft
- OpenAI / @OpenAI
- OpenClaw🦞 / @openclaw
- Sequoia Capital / @sequoia
- Tesla / @Tesla
- Andreessen Horowitz / @a16z
- Waymo / @Waymo
- xAI / @xai
- Y Combinator / @ycombinator
Company@X — 2026-04-06
Signal of the Day
Anthropic revealed its run-rate revenue has skyrocketed to $30 billion, up from $9 billion at the end of 2025, signaling extraordinary enterprise demand for Claude. To support this rapid scaling, the company signed an agreement with Google and Broadcom to secure multiple gigawatts of next-generation TPU capacity starting in 2027.
Key Announcements
AWS · Source AWS introduced Amazon Quick, a cross-application AI layer designed to connect apps, understand context, and execute actions across a user’s workflow. The product signals Amazon’s push into persistent, enterprise-wide AI agents that turn queries into automated task completion rather than just generating text.
OpenClaw · Source Open-source agent harness OpenClaw shipped version 2026.4.5, rolling out built-in video and music generation and an experimental “/dreaming” system for long-term memory consolidation. The project also announced that Anthropic has effectively cut it off by blocking standard Claude subscriptions from covering third-party harnesses. As a result, developers are being pushed to rely on API keys or pivot to alternative models such as OpenAI’s updated GPT-5.4, Qwen, or MiniMax.
NVIDIA · Source NVIDIA launched a quantized version of the multimodal Gemma 4 31B model on Hugging Face utilizing NVFP4 compression. The Blackwell-optimized compression reduces the model’s weight footprint by 4x while maintaining 99.7% of its baseline accuracy on GPQA. This drastically changes the local compute economics, allowing a frontier-class, 256K-context model to run everyday tasks efficiently on 24GB consumer GPUs.
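The memory arithmetic behind that claim can be sketched quickly. This is a rough estimate, not from the source: it assumes a 16-bit (BF16) baseline and an effective ~4.5 bits per weight for NVFP4 (4-bit values plus per-block scale factors), and counts weights only, ignoring KV cache and activations.

```python
# Back-of-envelope weight-memory estimate for a 31B-parameter model.
# Assumptions (hypothetical, not from the announcement):
#   - baseline precision: BF16 (16 bits/weight)
#   - NVFP4 effective precision: ~4.5 bits/weight (values + block scales)

PARAMS = 31e9  # Gemma 4 31B parameter count

def weight_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Weight memory in GB (1 GB = 1e9 bytes) at a given precision."""
    return params * bits_per_weight / 8 / 1e9

bf16_gb = weight_gb(16.0)   # ~62 GB: far beyond a 24 GB consumer GPU
nvfp4_gb = weight_gb(4.5)   # ~17 GB: fits, with headroom for KV cache

print(f"BF16 : {bf16_gb:.1f} GB")
print(f"NVFP4: {nvfp4_gb:.1f} GB ({bf16_gb / nvfp4_gb:.1f}x smaller)")
```

Under these assumptions the weights drop from roughly 62 GB to about 17 GB, which is why a 31B model that is hopeless on a 24 GB card at full precision becomes practical once compressed.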
Cursor · Source Cursor introduced “warp decode,” a rebuild of how Mixture-of-Experts (MoE) models generate tokens on Blackwell GPUs. The architectural change yields 1.84x faster inference and higher output accuracy. Cursor noted this directly accelerates its internal training loops for the Composer model, enabling it to ship improved versions far more frequently.
Google Cloud · Source Google introduced Veo 3.1 Lite via the Gemini API and Google AI Studio. The model supports both text-to-video and image-to-video generation at less than half the cost of Veo 3.1 Fast, demonstrating an aggressive pricing strategy to capture the high-volume developer market.
Also Noted
- Google Developers (Source): Released ADK for Go 1.0, an agent development kit featuring native OpenTelemetry integration, a plugin system, and human-in-the-loop security protocols.
- OpenAI (Source): Launched the OpenAI Safety Fellowship to fund and support independent research on AI safety and alignment.
- Y Combinator (Source): Welcomed Harshita Arora, former co-founder of fleet infrastructure company AtoB, as the firm’s newest General Partner.
- Bud (Source): The startup formerly known as Orchids rebranded to Bud and revealed it has crossed seven figures in ARR with its autonomous app-building agent.
- Google AI Studio (Source): Shipped updates to Lyria 3 featuring a “composer mode” that lets users construct prompts to describe, preview, and export music generation to code.