# Simon Willison — 2026-04-12
&lt;h2 id="highlight"&gt;Highlight&lt;a class="anchor" href="#highlight"&gt;#&lt;/a&gt;&lt;/h2&gt;
Simon shares a practical, single-command recipe for running local speech-to-text transcription on macOS using the Gemma 4 model and Apple's MLX framework. It is a prime example of his ongoing exploration of making local, multimodal LLMs frictionless and accessible with modern Python packaging tools like `uv`.
&lt;h2 id="posts"&gt;Posts&lt;a class="anchor" href="#posts"&gt;#&lt;/a&gt;&lt;/h2&gt;
**[Gemma 4 audio with MLX]** · [Source](https://simonwillison.net/2026/Apr/12/mlx-audio/#atom-everything)
Thanks to a tip from Rahim Nathwani, Simon demonstrates a quick `uv run` recipe for transcribing audio locally using the 10.28 GB Gemma 4 E2B model via `mlx-vlm`. He tested the pipeline on a 14-second voice memo; while it misheard a couple of words ("front" instead of "right"), Simon conceded the errors were understandable given the audio itself. The post highlights how easy it has become to try heavyweight local AI models on Apple Silicon without complex environment setup.
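The recipe boils down to a single command along these lines. This is a minimal sketch rather than Simon's exact invocation: the Hugging Face model identifier, audio filename, and prompt wording are illustrative assumptions, and the `--audio` flag reflects recent `mlx-vlm` releases that added audio input support.

```bash
# Ephemeral run: uv resolves mlx-vlm into a throwaway environment,
# so nothing is installed permanently on the machine.
# The model ID below is a placeholder for the ~10.28 GB Gemma 4 E2B
# checkpoint, which is downloaded and cached on first use.
uv run --with mlx-vlm python -m mlx_vlm.generate \
  --model mlx-community/gemma-4-e2b-it \
  --max-tokens 200 \
  --prompt "Transcribe the following speech segment in English:" \
  --audio voice-memo.m4a
```

The appeal is that `uv run --with` handles dependency resolution, the virtual environment, and cleanup in one step; after the initial model download, subsequent runs start almost instantly.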