<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Vibe Coding on MacWorks</title><link>https://macworks.dev/tags/vibe-coding/</link><description>Recent content in Vibe Coding on MacWorks</description><generator>Hugo</generator><language>en</language><atom:link href="https://macworks.dev/tags/vibe-coding/index.xml" rel="self" type="application/rss+xml"/><item><title>Week 14 Summary</title><link>https://macworks.dev/docs/month/tech_news_cn/weekly-2026-W14/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/month/tech_news_cn/weekly-2026-W14/</guid><description>&lt;h1 id="chinese-tech--week-of-2026-03-31-to-2026-04-03"&gt;Chinese Tech — Week of 2026-03-31 to 2026-04-03&lt;a class="anchor" href="#chinese-tech--week-of-2026-03-31-to-2026-04-03"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="week-in-review"&gt;Week in Review&lt;a class="anchor" href="#week-in-review"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;The dominant theme across the Chinese tech ecosystem this week was the sudden acceleration of AI agent workflows, unexpectedly catalyzed by Anthropic&amp;rsquo;s massive source code leak. While frontier labs transition from consumer-facing demos to highly profitable enterprise infrastructure, the developer community is fiercely debating the right architectural boundaries for autonomous agents. Meanwhile, a noticeable counter-culture is emerging in consumer tech, with users rejecting hyper-processed AI outputs in favor of analog imperfections and human &amp;ldquo;taste.&amp;rdquo;&lt;/p&gt;</description></item><item><title>2026-04-03</title><link>https://macworks.dev/docs/archives/tech_news_cn/tech-news-cn-2026-04-03/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://macworks.dev/docs/archives/tech_news_cn/tech-news-cn-2026-04-03/</guid><description>&lt;h1 id="chinese-tech-daily--2026-04-03"&gt;Chinese Tech Daily — 2026-04-03&lt;a class="anchor" href="#chinese-tech-daily--2026-04-03"&gt;#&lt;/a&gt;&lt;/h1&gt;
&lt;h2 id="top-story"&gt;Top Story&lt;a class="anchor" href="#top-story"&gt;#&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.infoq.cn/article/X1c6ZllztrQhGEIoYrBR"&gt;Google&amp;rsquo;s release of the Gemma 4 open-source model series&lt;/a&gt; marks a pivotal shift toward true &amp;ldquo;local AI&amp;rdquo; by moving to the commercially permissive Apache 2.0 license. The lineup ranges from edge-optimized E2B and E4B models, capable of running completely offline on smartphones and Raspberry Pi devices, to highly efficient 26B MoE and 31B dense models that rival much larger models on complex reasoning benchmarks. By equipping these models with native function calling, multimodal inputs, and 128K+ context windows tailored for autonomous agent workflows, Google is drastically lowering the barrier to on-device AI integration while preserving data sovereignty.&lt;/p&gt;</description></item></channel></rss>