<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="../assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>TinyComputers.io (Posts about Philosophy)</title><link>https://tinycomputers.io/</link><description></description><atom:link href="https://tinycomputers.io/categories/cat_philosophy.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2026 A.C. Jokela 
&lt;a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"&gt;&lt;img alt="" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/80x15.png" /&gt; Creative Commons Attribution-ShareAlike&lt;/a&gt;
</copyright><lastBuildDate>Mon, 06 Apr 2026 22:12:59 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>The Feedback Loop That Jevons Couldn't Name</title><link>https://tinycomputers.io/posts/the-feedback-loop-that-jevons-couldnt-name.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/the-feedback-loop-that-jevons-couldnt-name_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;36 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;In 1865, William Stanley Jevons published &lt;em&gt;The Coal Question&lt;/em&gt; and identified a paradox: James Watt's more efficient steam engine didn't reduce coal consumption. It increased it. The efficiency gain made coal-powered industry more profitable, which drove more investment, which consumed more coal. The per-unit savings were overwhelmed by the expansion in total units demanded.&lt;/p&gt;
&lt;p&gt;In 1948, Norbert Wiener published &lt;em&gt;Cybernetics: Or Control and Communication in the Animal and the Machine&lt;/em&gt; and described a mechanism: systems that feed their outputs back into their inputs will either stabilize (negative feedback) or accelerate (positive feedback). A thermostat is negative feedback: the output (heat) reduces the input (the gap between current and target temperature). A microphone pointed at a speaker is positive feedback: the output (sound) amplifies the input (sound), and the system screams.&lt;/p&gt;
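&lt;p&gt;The two behaviors fall out of a single update rule with the sign of the gain flipped. A minimal sketch (toy numbers, my notation rather than Wiener's):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Toy feedback model: one state variable, one gain. Illustrative only.

def run(gain, steps=10):
    state = 1.0                # the gap between current and target
    history = [state]
    for _ in range(steps):
        state = state + gain * state   # the output is fed back into the input
        history.append(round(state, 3))
    return history

print("thermostat (negative feedback):", run(-0.5))  # gap decays toward zero
print("microphone (positive feedback):", run(0.5))   # gap grows without bound
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The sign of the gain is the entire difference between settling and screaming.&lt;/p&gt;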
&lt;p&gt;Jevons saw what happened. Wiener explained why.&lt;/p&gt;
&lt;p&gt;They never met. Jevons died in 1882, twelve years before Wiener was born. Their fields barely overlapped. Jevons was an economist and logician working in Manchester. Wiener was a mathematician and engineer at MIT. Neither cited the other. Neither would have had reason to. But they were describing the same phenomenon from different sides: Jevons from economics, Wiener from control theory. Jevons identified the paradox. Wiener provided the mechanism. Together, they explain something about AI that the current conversation consistently misses: the reason demand expands when cognitive tools get cheaper isn't economic irrationality. It's positive feedback. It's the system doing exactly what feedback systems do.&lt;/p&gt;
&lt;h3&gt;Wiener's Machines&lt;/h3&gt;
&lt;p&gt;Wiener was not an abstract theorist. He built anti-aircraft fire control systems during World War II, predicting the future position of enemy aircraft based on their observed trajectories. The mathematical problem was filtering signal from noise in real-time feedback data, and the solution required treating the human pilot as a component in a mechanical system: a system that could be modeled, predicted, and countered.&lt;/p&gt;
&lt;p&gt;This experience shaped everything he wrote afterward. In &lt;em&gt;Cybernetics&lt;/em&gt;, Wiener argued that communication and control were fundamentally the same problem, whether the system involved nerves, wires, or social institutions. A factory is a feedback system. An economy is a feedback system. A conversation is a feedback system. The mathematics of regulation and stability apply to all of them.&lt;/p&gt;
&lt;p&gt;In 1950, he published &lt;a href="https://baud.rs/B8JkEc"&gt;&lt;em&gt;The Human Use of Human Beings&lt;/em&gt;&lt;/a&gt;, a book aimed at general readers. Its central argument: automation would transform society not by replacing humans but by changing the feedback loops that humans operate within. The automated factory doesn't just make products without workers. It creates a system where the speed of production is no longer limited by human labor, which means the system's dynamics shift to whatever the next bottleneck happens to be.&lt;/p&gt;
&lt;p&gt;Wiener's most famous warning was blunt: "The automatic machine is the precise economic equivalent of slave labor. Any labor which competes with slave labor must accept the economic conditions of slave labor." He predicted that automation would produce unemployment that would make the Great Depression "seem a pleasant joke." He wrote this in 1950, when computers filled rooms and could barely calculate ballistic tables.&lt;/p&gt;
&lt;h3&gt;The Jevons Mechanism&lt;/h3&gt;
&lt;p&gt;Jevons didn't have the vocabulary of cybernetics. He described his paradox in economic terms: efficiency improvements reduce per-unit cost, lower cost increases demand, increased demand outweighs the efficiency gain, total consumption rises. He was observing a positive feedback loop, but he described it as a paradox because the economic framework he was working within predicted the opposite. If coal becomes more efficient, you should need less of it. The loop that amplifies demand was invisible in the model.&lt;/p&gt;
&lt;p&gt;Wiener's framework makes the loop visible. Here's the cybernetic translation of Jevons:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A system component becomes more efficient (Watt's steam engine, a cheaper semiconductor, an AI model).&lt;/li&gt;
&lt;li&gt;The efficiency reduces the cost of the system's output.&lt;/li&gt;
&lt;li&gt;Lower cost makes new applications viable that were previously too expensive.&lt;/li&gt;
&lt;li&gt;New applications create new demand for the now-cheaper component.&lt;/li&gt;
&lt;li&gt;The new demand feeds back into step 1 as pressure for more efficiency, more production, more investment.&lt;/li&gt;
&lt;li&gt;The loop accelerates.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This is a positive feedback loop. The output (cheaper goods, more applications) amplifies the input (demand for the efficient component). There is no negative feedback mechanism to stabilize the system. The loop runs until it hits an external constraint: a physical limit on the resource, a regulatory intervention, or the saturation of all possible demand.&lt;/p&gt;
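&lt;p&gt;You can put toy numbers on the loop. In the sketch below (invented figures, not Jevons' data), efficiency improves 10% per cycle and the cheaper output expands demand 40% per cycle. Per-unit consumption falls every cycle; total consumption climbs until it hits the external cap:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Toy rebound arithmetic (invented numbers for illustration).
per_unit = 1.0   # resource consumed per unit of output
demand = 100.0   # units of output demanded
cap = 500.0      # external constraint: total resource available

for cycle in range(1, 9):
    per_unit = per_unit * 0.90            # steps 1-2: efficiency lowers unit cost
    demand = demand * 1.40                # steps 3-4: cheaper output expands demand
    total = min(per_unit * demand, cap)   # steps 5-6: the loop runs to the constraint
    print(f"cycle {cycle}: per-unit {per_unit:.2f}, total {total:.0f}")
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The per-unit number never stops improving. The total never stops growing. Only the cap ends the run, because nothing inside the loop can.&lt;/p&gt;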
&lt;p&gt;Jevons observed steps 1 through 4 with coal. He didn't have the mathematical framework to describe steps 5 and 6 as a feedback loop. Wiener had the framework but was focused on machines and automation, not on resource economics. The connection between them is that Jevons Paradox is a specific instance of positive feedback in economic systems, and positive feedback is the phenomenon Wiener spent his career analyzing.&lt;/p&gt;
&lt;h3&gt;The AI Loop&lt;/h3&gt;
&lt;p&gt;I've been writing about &lt;a href="https://tinycomputers.io/posts/jevons-paradox.html"&gt;Jevons Paradox and AI&lt;/a&gt; for months. The argument: AI makes cognitive output cheaper, demand for cognitive output expands beyond the efficiency gain, and the expansion concentrates pressure on the one input that can't scale: human judgment. The &lt;a href="https://tinycomputers.io/posts/the-ai-vampire-is-jevons-paradox.html"&gt;Vampire piece&lt;/a&gt; described the human cost. The &lt;a href="https://tinycomputers.io/posts/the-excavator-and-the-foundation.html"&gt;Excavator piece&lt;/a&gt; described the software quality cost. The &lt;a href="https://tinycomputers.io/posts/the-split-isnt-between-people-its-between-tasks.html"&gt;split piece&lt;/a&gt; described how the craft concentrates in the judgment layer.&lt;/p&gt;
&lt;p&gt;What I didn't do was explain the mechanism. Why does demand expand when a cognitive input gets cheaper? Why doesn't the system reach equilibrium at lower total consumption, the way classical economics predicts? What force drives the expansion?&lt;/p&gt;
&lt;p&gt;Wiener's answer: positive feedback.&lt;/p&gt;
&lt;p&gt;Here's the AI loop, stated in cybernetic terms:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;AI makes code generation cheaper (the efficiency gain).&lt;/li&gt;
&lt;li&gt;Cheaper code generation makes new software projects viable (the demand expansion).&lt;/li&gt;
&lt;li&gt;New projects produce software that requires review, testing, debugging, and maintenance (the output).&lt;/li&gt;
&lt;li&gt;Review and debugging create demand for more AI assistance (the feedback).&lt;/li&gt;
&lt;li&gt;The loop accelerates: more projects, more software, more review, more AI, more projects.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;At no point does the loop include a mechanism for slowing down. There is no thermostat. The "temperature" (volume of software in production) rises unchecked until it hits an external constraint.&lt;/p&gt;
&lt;p&gt;I can see this loop operating in my own work. I built &lt;a href="https://tinycomputers.io/posts/building-dirtscout-a-land-acquisition-platform-with-claude-code.html"&gt;DirtScout&lt;/a&gt;, a full-stack land acquisition platform, in a series of conversations with Claude Code. 29,000 lines of code across Python, TypeScript, and infrastructure-as-code. The project would have taken months to type by hand. With AI, I built it in days. But building it in days meant I immediately started adding features: soil analysis, environmental assessments, auction tracking, deal pipeline management, offer letter generation. Each feature was a conversation. Each conversation produced code that needed to be reviewed, tested, and maintained. The faster I built, the more I wanted to build, and the more I built, the more review work accumulated. The loop ran. I didn't notice it running until the maintenance surface area was larger than anything I'd built before.&lt;/p&gt;
&lt;p&gt;That's Wiener's loop at the individual level. At the organizational level, the same dynamic plays out with more people and higher stakes. Every developer using AI-assisted tooling ships more code, which creates more surface area for bugs and security vulnerabilities, which creates more demand for review, which creates more demand for AI-assisted review tooling, which ships more code.&lt;/p&gt;
&lt;p&gt;The external constraint, as I've argued in previous pieces, is human judgment. The &lt;a href="https://tinycomputers.io/posts/the-ai-vampire-is-jevons-paradox.html"&gt;three-to-four-hour ceiling on deep work&lt;/a&gt; is biological. It doesn't expand because the feedback loop demands more of it. It's a fixed resource being consumed by an accelerating process. In Wiener's terms, the human component in the feedback loop is a bottleneck with a fixed maximum throughput. The system can't route around it (the judgment is necessary) and can't expand it (the biology doesn't scale). So the system does the only thing a positive feedback loop can do when it hits a fixed constraint: it overloads the constraint.&lt;/p&gt;
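&lt;p&gt;The overload is easy to make concrete. A toy queue (illustrative numbers only): production accelerates 30% per week, review throughput is pinned at the biological ceiling, and everything above the ceiling accumulates as unreviewed work:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Toy queue with a fixed-throughput bottleneck (illustrative numbers).
review_capacity = 20.0   # items a human can judge per week (fixed)
arrivals = 10.0          # items needing judgment per week (accelerating)
backlog = 0.0            # unreviewed work

for week in range(1, 11):
    arrivals = arrivals * 1.3                         # the loop accelerates production
    reviewed = min(arrivals + backlog, review_capacity)
    backlog = backlog + arrivals - reviewed           # the overload piles up here
    print(f"week {week}: arrivals {arrivals:.0f}, backlog {backlog:.0f}")
&lt;/code&gt;&lt;/pre&gt;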
&lt;p&gt;That's burnout. That's what Steve Yegge described in "The AI Vampire." Wiener would have recognized it instantly. The human component in an accelerating feedback loop reaches its throughput limit and degrades. The system doesn't stop. The system doesn't care. The system is a feedback loop, and feedback loops don't have preferences about their components.&lt;/p&gt;
&lt;h3&gt;Wiener's Warning, Updated&lt;/h3&gt;
&lt;p&gt;Wiener warned that automation would make human labor economically equivalent to slave labor. He was wrong about the specifics (manufacturing employment declined but didn't collapse) but right about the dynamic. The feedback loop he described (automation lowers the cost of work once done by humans, which drives more automation, which displaces more labor) played out exactly as predicted. It just played out over decades instead of years, and the economy adapted by shifting labor to sectors that weren't yet automated.&lt;/p&gt;
&lt;p&gt;The AI version of this warning is different in a way that matters. Wiener's automation loop operated on physical labor. Muscle has substitutes: machines. When the feedback loop overloaded the human muscle component, the system routed around it with hydraulics, robotics, and assembly lines. The human moved to cognitive work, where machines couldn't follow.&lt;/p&gt;
&lt;p&gt;AI's feedback loop operates on cognitive labor. Judgment does not have substitutes. When the feedback loop overloads the human judgment component, the system can't route around it the way manufacturing routed around physical labor. There is no higher-order activity to retreat to. Judgment is the top of the stack. The feedback loop either overloads it (burnout) or degrades it (review quality drops, software slop accumulates, the &lt;a href="https://tinycomputers.io/posts/the-excavator-and-the-foundation.html"&gt;Excavator&lt;/a&gt; scenario plays out).&lt;/p&gt;
&lt;p&gt;Wiener saw this possibility in the abstract. In &lt;em&gt;The Human Use of Human Beings&lt;/em&gt;, he wrote: "The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves." He was pushing back against the utopian narrative of his own era: the idea that automation would create leisure. His counterclaim was that automation would shift the struggle to a harder domain. He was right, and the harder domain turned out to be exactly the one AI is now pressuring: the limits of human cognition.&lt;/p&gt;
&lt;h3&gt;The Speed Problem&lt;/h3&gt;
&lt;p&gt;There's a dimension of the AI feedback loop that Wiener's industrial-era examples didn't anticipate: speed.&lt;/p&gt;
&lt;p&gt;Wiener's factory automation loop ran at the speed of manufacturing. It took years to design a new factory, months to retool an assembly line, weeks to train workers on new processes. The feedback loop was real, but it operated on a timescale that allowed human institutions (unions, regulations, education systems) to adapt. Walter Reuther, the president of the United Auto Workers, received Wiener's 1949 letter warning about automation and had years to develop a response. The loop was slow enough for governance.&lt;/p&gt;
&lt;p&gt;The AI feedback loop runs at the speed of software. An operations manager can go from "I have an idea" to "it's in production" in &lt;a href="https://tinycomputers.io/posts/the-excavator-and-the-foundation.html"&gt;an afternoon&lt;/a&gt;. A developer can ship ten features in the time it used to take to ship one. The loop cycles in hours, not years. Human institutions that adapted to the manufacturing automation loop over decades don't have decades to adapt to the AI loop. They have the time between one deployment and the next.&lt;/p&gt;
&lt;p&gt;This is the Jevons Paradox running at software speed. Coal consumption took decades to double after Watt's engine. Computing demand took years to double after each semiconductor generation. AI-assisted software production can double in months. The feedback loop is the same. The clock rate is different. And the human component's clock rate (the biological ceiling on judgment) hasn't changed at all.&lt;/p&gt;
&lt;p&gt;The supply side is contracting at the same time. After sixteen consecutive years of growth, undergraduate computer science enrollment began falling in 2025. The Computing Research Association found that 62% of computing departments reported declining enrollment for 2025-26. Students and their parents are reading the headlines about AI replacing developers and steering toward fields they perceive as more durable. The feedback loop produces an ironic secondary effect: the fear of automation reduces the supply of the human component that the accelerating system needs most. The loop runs faster. The pipeline of people qualified to govern it narrows. Wiener's warning about building governance structures before the loop overloads becomes more urgent as the pool of people who could build those structures shrinks.&lt;/p&gt;
&lt;h3&gt;Wiener and Heidegger&lt;/h3&gt;
&lt;p&gt;Wiener and Heidegger never engaged with each other's work, as far as I know. They were writing at the same time (late 1940s, early 1950s), about the same phenomenon (technology reshaping human life), and they arrived at complementary conclusions from completely different starting points.&lt;/p&gt;
&lt;p&gt;Heidegger, as I wrote in &lt;a href="https://tinycomputers.io/posts/enframing-the-code.html"&gt;the Enframing piece&lt;/a&gt;, argued that technology changes how we see the world. Everything becomes standing reserve: raw material to be ordered and consumed. The river becomes a power source. The specification becomes code. The transformation is ontological: it changes what things are, not just what we do with them.&lt;/p&gt;
&lt;p&gt;Wiener argued that technology changes the dynamics of the systems we operate within. Feedback loops accelerate. Bottlenecks shift. Components that were adequate at one cycle speed become inadequate at a faster one. The transformation is mechanical: it changes the forces acting on us, not necessarily how we understand them.&lt;/p&gt;
&lt;p&gt;The two frameworks aren't contradictory. They're describing different aspects of the same process. Heidegger explains why we treat the Zilog manual as raw material for code generation (Enframing). Wiener explains what happens when we do it at scale (positive feedback, demand expansion, bottleneck overload). Jevons measured the economic result (total consumption rises despite efficiency gains).&lt;/p&gt;
&lt;p&gt;There's a useful way to layer them. Heidegger describes the precondition: technology must first transform how we see the world (specifications become standing reserve) before the feedback loop can operate. You can't accelerate production of something you don't yet see as producible. Enframing opens the door. Wiener's loop walks through it. Jevons counts what's on the other side.&lt;/p&gt;
&lt;p&gt;The sequence matters for AI. First, we began seeing cognitive tasks as automatable (Heidegger's shift in perception). Then, AI tools made the automation practical and cheap (Wiener's efficiency gain). Then, demand for cognitive output expanded beyond what anyone predicted (Jevons' paradox). Each step enables the next. The feedback loop couldn't run until the Enframing was in place, and the economic expansion couldn't happen until the loop was running.&lt;/p&gt;
&lt;p&gt;Three disciplines. One phenomenon. The feedback loop that Jevons couldn't name, Wiener formalized, and Heidegger diagnosed as a transformation in our relationship to the world.&lt;/p&gt;
&lt;h3&gt;The Missing Thermostat&lt;/h3&gt;
&lt;p&gt;Every stable system has negative feedback. A thermostat, a voltage regulator, a predator-prey population cycle: something measures the output and adjusts the input to keep the system within bounds. Positive feedback without negative feedback is, by definition, unstable. The microphone screams until someone unplugs it.&lt;/p&gt;
&lt;p&gt;The AI feedback loop currently has no thermostat. There is no mechanism that measures the volume of unreviewed software in production and slows the rate of production accordingly. There is no mechanism that measures developer burnout and reduces the demand for cognitive output. There is no mechanism that measures the ratio of AI-generated code to human-reviewed code and raises an alarm when it crosses a threshold.&lt;/p&gt;
&lt;p&gt;Wiener would argue that this is the actual problem. Not AI itself (a tool, a component, an efficiency gain), but the absence of negative feedback in the system that AI accelerates. His entire career was about designing feedback systems that stabilize rather than explode. His warning about automation wasn't "don't build machines." It was "build the governance structures that keep the feedback loop from overloading its human components."&lt;/p&gt;
&lt;p&gt;In 1949, Wiener wrote his letter to Walter Reuther warning of the coming wave of industrial automation. He didn't tell Reuther to smash the machines. He told him to prepare the workforce and the institutions for a system that would accelerate beyond their current capacity to manage. The letter went largely unheeded.&lt;/p&gt;
&lt;p&gt;We're in the same position now. The feedback loop is running. The human component is approaching its throughput limit. The thermostat doesn't exist. Someone needs to build it, and the people best positioned to do so are the ones inside the loop: the developers and decision-makers who can see the acceleration because they're experiencing it.&lt;/p&gt;
&lt;p&gt;Wiener died in Stockholm in 1964 at the age of sixty-nine, a decade before the personal computer and six decades before large language models. He never saw the system he described reach the scale it's reaching now. But the mathematics he wrote down in 1948 describe it precisely. Positive feedback without negative feedback is unstable. The system will find its constraint and overload it. The only question is whether we build the thermostat before or after the overload.&lt;/p&gt;
&lt;p&gt;What makes Wiener worth reading today isn't his specific predictions (some were right, some were wrong, the timeline was consistently too compressed). It's his framework. He understood that technological change is not a series of discrete events but a system of coupled feedback loops. Each efficiency gain changes the dynamics of the system it operates within. Each change in dynamics creates pressure on whatever component is now the bottleneck. And each bottleneck, when overloaded, produces consequences that feed back into the system and accelerate the next cycle.&lt;/p&gt;
&lt;p&gt;That framework applies to coal in 1865, to factory automation in 1950, and to AI-assisted cognitive work in 2026. The specific resources change. The specific bottlenecks change. The feedback dynamics don't.&lt;/p&gt;
&lt;p&gt;John von Neumann, Wiener's contemporary and one of the minds his work most influenced, once said that young mathematicians should not worry about whether their work would be useful because "truth is much too complicated to allow anything but approximations." Wiener's approximation of the feedback dynamics of technological change was good enough that it still describes the system seventy-eight years after he formalized it. Whether it's good enough to help us build the thermostat before we need it is the question his work leaves us with.&lt;/p&gt;
&lt;h3&gt;What a Thermostat Might Look Like&lt;/h3&gt;
&lt;p&gt;Wiener didn't just diagnose problems. He designed solutions. His entire field was about building systems that regulate themselves. If he were alive today, he'd be asking: what does negative feedback look like in an AI-accelerated software economy?&lt;/p&gt;
&lt;p&gt;Some possibilities:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mandatory review ratios.&lt;/strong&gt; For every N lines of AI-generated code deployed to production, M lines must be reviewed by a qualified human. The ratio creates a coupling between the production rate and the review rate, forcing the system to slow down when review capacity is saturated. This is a thermostat: the output (deployed code) is measured against a constraint (review capacity), and the input (generation rate) is throttled accordingly.&lt;/p&gt;
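&lt;p&gt;Wired into the toy review queue sketched earlier, the ratio amounts to one added branch (a hypothetical policy, invented numbers): when the unreviewed backlog crosses a threshold, generation slows instead of compounding:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Hypothetical throttle policy on the toy queue (invented numbers).
review_capacity = 20.0
arrivals = 10.0
backlog = 0.0
threshold = 10.0   # tolerated unreviewed backlog before the throttle engages

for week in range(1, 13):
    if backlog &amp;gt;= threshold:
        arrivals = arrivals * 0.7    # thermostat: throttle generation
    else:
        arrivals = arrivals * 1.3    # otherwise the loop accelerates as before
    reviewed = min(arrivals + backlog, review_capacity)
    backlog = backlog + arrivals - reviewed
    print(f"week {week}: arrivals {arrivals:.0f}, backlog {backlog:.0f}")
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It hunts around the constraint the way a crude thermostat hunts around its set point, but the backlog stays bounded instead of diverging. That's the whole difference between a regulated loop and an open one.&lt;/p&gt;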
&lt;p&gt;&lt;strong&gt;Liability assignment.&lt;/strong&gt; If AI-generated code causes a data breach or financial loss, who pays? Currently, nobody in particular. Assigning liability to the person who deployed the code (not the person who prompted the AI) creates negative feedback: the cost of deployment failure feeds back into the decision to deploy, making people more cautious about shipping unreviewed code. Insurance markets would price this risk and create their own feedback mechanisms.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Institutional adaptation.&lt;/strong&gt; This is what Wiener actually advocated. Not technical solutions but organizational ones. He told Walter Reuther to prepare the workforce for automation. The equivalent today: companies need to build review capacity at the same rate they build production capacity. Every developer who ships AI-generated code needs a corresponding increase in testing, security review, and architectural oversight. The organizations that treat AI as free productivity without investing in review are the ones that will hit the overload first.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Cultural awareness.&lt;/strong&gt; &lt;a href="https://baud.rs/4ws9kl"&gt;Tristan Harris&lt;/a&gt; and the &lt;a href="https://baud.rs/cGtlHh"&gt;Center for Humane Technology&lt;/a&gt; have been arguing since 2023 that AI is being deployed faster than any technology in history under maximum incentives to cut corners on safety. Harris makes a distinction that Wiener would have appreciated: the difference between the "possible" (AI's theoretical benefits) and the "probable" (what actually happens given current incentive structures). The probable outcome, without intervention, is that companies race toward capability because the competitive feedback loop punishes restraint. Harris's proposed response is building global consensus that the current trajectory is unacceptable, the way the nuclear test ban and the Montreal Protocol established consensus before those feedback loops ran to their conclusions. In Wiener's terms, Harris is trying to build the thermostat at the cultural level: changing the system's objective function so that it optimizes for something other than pure output volume.&lt;/p&gt;
&lt;p&gt;None of these exist at scale today. The thermostat is unbuilt. The loop runs open.&lt;/p&gt;
&lt;p&gt;Jevons told us what happens. Wiener told us why. The question that remains is whether anyone is building the feedback mechanism that prevents the system from screaming.&lt;/p&gt;</description><category>ai</category><category>automation</category><category>control theory</category><category>cybernetics</category><category>economics</category><category>feedback loops</category><category>heidegger</category><category>jevons paradox</category><category>norbert wiener</category><category>philosophy</category><guid>https://tinycomputers.io/posts/the-feedback-loop-that-jevons-couldnt-name.html</guid><pubDate>Fri, 27 Mar 2026 13:00:00 GMT</pubDate></item><item><title>Enframing the Code</title><link>https://tinycomputers.io/posts/enframing-the-code.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/enframing-the-code_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;25 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;&lt;img src="https://tinycomputers.io/images/clean-room-z80-emulator/zilog-z80.jpg" alt="A Zilog Z80 CPU in a white ceramic DIP-40 package, the processor whose specification became standing reserve" style="float: right; max-width: 300px; margin: 0 0 1em 1.5em; border-radius: 4px; box-shadow: 0 10px 20px rgba(0,0,0,.1);" loading="lazy"&gt;&lt;/p&gt;
&lt;p&gt;I asked Claude to build a &lt;a href="https://tinycomputers.io/posts/clean-room-z80-emulator.html"&gt;Z80 emulator&lt;/a&gt;. The constraint was explicit: no reference to existing emulator source code. The inputs were the Zilog Z80 CPU User Manual, an architectural plan I wrote, and the test ROMs to validate against. Claude produced 1,300 lines of C covering every official Z80 instruction, undocumented flag behaviors, ACIA serial emulation, and CP/M support. It passed 117 unit tests. It boots CP/M and runs programs.&lt;/p&gt;
&lt;p&gt;The emulator works. The question is what it means that it exists.&lt;/p&gt;
&lt;h3&gt;The Clean Room That Wasn't&lt;/h3&gt;
&lt;p&gt;"Clean room" is a legal term borrowed from semiconductor fabrication. In software, it describes a methodology where developers build from specifications and documentation without ever examining existing implementations. The purpose is to produce code that is legally independent of prior art. If you've never seen the original code, you can't have copied it.&lt;/p&gt;
&lt;p&gt;The clean-room process was designed for human cognition. A developer reads a specification, forms a mental model, and writes code that implements the behavior the specification describes. The legal fiction is that the developer's mental model is informed solely by the specification, not by any existing implementation. In practice, developers have seen other implementations, read blog posts, studied textbook examples. The clean room is a discipline, not a guarantee: you follow the process, document that you followed it, and hope that's sufficient if someone challenges you.&lt;/p&gt;
&lt;p&gt;When Claude writes a Z80 emulator from the Zilog manual, the clean-room concept doesn't dissolve because the AI is better at following the rules. It dissolves because the framework doesn't apply. Claude's training data includes dozens of Z80 emulators. The model has seen &lt;a href="https://baud.rs/GeplXn"&gt;MAME's Z80 core&lt;/a&gt;, it has seen &lt;a href="https://baud.rs/Adkbi8"&gt;Fuse&lt;/a&gt;, it has seen &lt;a href="https://baud.rs/KJoorR"&gt;whatever antirez published&lt;/a&gt;. The question of whether a specific output is "derived from" a specific input is unanswerable, because the model's internal state isn't decomposable into "I learned this from source A" and "I learned this from source B." The provenance that clean-room law requires you to demonstrate doesn't exist in a form that can be demonstrated.&lt;/p&gt;
&lt;p&gt;But here's what's interesting: the emulator I directed Claude to produce is not a copy of any specific emulator. The architecture is mine. The bit-field decoding strategy (x/y/z/p/q decomposition of opcode bytes) was specified in my architectural plan. The test suite structure, the ACIA emulation interface, the system emulator's callback design: all specified by me and implemented by Claude from those specifications plus the Zilog manual. The output is an original assembly of knowledge. It's also an output of a system that has seen the source code it was told not to reference.&lt;/p&gt;
&lt;p&gt;The law has no category for this. It's not a copy. It's not independent. It's something else.&lt;/p&gt;
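&lt;p&gt;(For the curious: the x/y/z/p/q scheme is the conventional way of slicing a Z80 opcode byte into decoding fields, and it's easier to see than to describe. A Python sketch of the idea; the emulator itself is C.)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Python sketch of the standard Z80 opcode field split (not the emulator's C).

def decode(opcode):
    x = opcode &amp;gt;&amp;gt; 6          # bits 7-6: major instruction group
    y = (opcode &amp;gt;&amp;gt; 3) &amp;amp; 7    # bits 5-3
    z = opcode &amp;amp; 7           # bits 2-0
    p = y &amp;gt;&amp;gt; 1               # bits 5-4
    q = y &amp;amp; 1                # bit 3
    return x, y, z, p, q

print(decode(0x78))   # (1, 7, 0, 3, 1): x=1 is the LD r,r' block,
                      # y=7 selects A, z=0 selects B, so 0x78 is LD A,B
&lt;/code&gt;&lt;/pre&gt;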
&lt;h3&gt;The Language That Doesn't Exist&lt;/h3&gt;
&lt;p&gt;The Z80 case is complicated by the fact that prior implementations exist. Somebody could, in theory, diff my emulator against MAME's and look for structural similarities. (They won't find meaningful ones, because the architecture is different, but the argument could be made.) The more interesting case eliminates this possibility entirely.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html"&gt;Lattice&lt;/a&gt; is a programming language I designed. It has a novel feature called the phase system: mutability is not a static attribute but a runtime property that values transition through, like matter moving between liquid and solid. You declare a value in &lt;code&gt;flux&lt;/code&gt; (mutable), &lt;code&gt;freeze&lt;/code&gt; it to &lt;code&gt;fix&lt;/code&gt; (immutable), &lt;code&gt;thaw&lt;/code&gt; it back if needed. The language has &lt;code&gt;forge&lt;/code&gt; blocks for controlled mutation zones. None of this exists in any other language.&lt;/p&gt;
&lt;p&gt;Claude writes Lattice code. It writes it well. It produces correct programs using the phase system, the concurrency primitives, and the bytecode VM's 100-opcode instruction set. It does this despite the fact that Lattice does not appear in its training data. The language was designed after Claude's knowledge cutoff. There is no Lattice source code on GitHub, no Stack Overflow answers, no blog posts (other than mine) explaining the syntax.&lt;/p&gt;
&lt;p&gt;How does Claude write Lattice? Because Lattice's syntax looks like Rust. The curly braces, the type annotations, the pattern matching: Claude recognizes the structural similarity and maps its understanding of Rust-like languages onto the Lattice grammar. The phase-specific keywords (&lt;code&gt;flux&lt;/code&gt;, &lt;code&gt;fix&lt;/code&gt;, &lt;code&gt;freeze&lt;/code&gt;, &lt;code&gt;thaw&lt;/code&gt;, &lt;code&gt;forge&lt;/code&gt;) are new, but they appear in contexts that are syntactically familiar. Claude doesn't need to have seen Lattice before. It needs to have seen languages that smell similar.&lt;/p&gt;
&lt;p&gt;This is a fundamentally different kind of creation than what copyright law contemplates. Claude didn't copy Lattice code (none exists to copy). It didn't copy Rust code (Lattice isn't Rust). It transformed a grammar specification and a set of examples into working programs in a language that has no prior art. The specification became the implementation without passing through any intermediate step that could be called "copying."&lt;/p&gt;
&lt;h3&gt;Heidegger Saw This Coming&lt;/h3&gt;
&lt;p&gt;In 1954, Martin Heidegger published &lt;a href="https://baud.rs/BziXVW"&gt;&lt;em&gt;The Question Concerning Technology&lt;/em&gt;&lt;/a&gt;. His central argument: modern technology is not just a set of tools. It is a way of seeing the world. He called this way of seeing &lt;em&gt;Enframing&lt;/em&gt; (Gestell): the tendency of modern technology to reveal everything as &lt;em&gt;standing reserve&lt;/em&gt; (Bestand), raw material ordered into availability.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/enframing/rhine-dam.jpg" alt="A hydroelectric dam on the Rhine near Märkt, Germany, the kind of infrastructure Heidegger used to illustrate Enframing" style="max-width: 100%; border-radius: 4px; box-shadow: 0 10px 20px rgba(0,0,0,.1); margin: 1em 0;" loading="lazy"&gt;&lt;/p&gt;
&lt;p&gt;The example Heidegger used was a hydroelectric dam on the Rhine. The river is no longer a river in the way a bridge reveals it (something to cross, something to contemplate, something with its own presence). The dam reveals the river as a power source. The water is standing reserve: ordered, measured, extracted. The river hasn't changed physically. What changed is how technology frames it.&lt;/p&gt;
&lt;p&gt;This is exactly what happens when Claude reads the &lt;a href="https://baud.rs/EESjG1"&gt;Zilog Z80 CPU User Manual&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The manual is a specification: 332 pages of timing diagrams, instruction tables, register descriptions, and pin assignments. When a human developer reads it, the manual is a guide. The developer forms an understanding, makes design choices, writes code that reflects their interpretation of the specification. The manual and the implementation are connected through the developer's comprehension. The developer is present in the code in a way that matters both legally and philosophically.&lt;/p&gt;
&lt;p&gt;When Claude reads the same manual, the specification becomes standing reserve. The timing diagrams are not studied; they are consumed. The instruction tables are not interpreted; they are transformed. The manual is raw material, ordered directly into code, the way the dam orders the river into electricity. There is no intermediate step of "understanding" in the human sense. There is a transformation from one representation (specification) to another (implementation), and the transformation is mechanical in a way that human interpretation is not.&lt;/p&gt;
&lt;p&gt;This is what Heidegger meant by Enframing. Technology doesn't just use resources; it changes what counts as a resource. The Zilog manual was written as a reference for engineers. Enframing reveals it as raw material for code generation. The specification was always latently an implementation; the AI just makes the transformation explicit.&lt;/p&gt;
&lt;h3&gt;What Copyright Was Protecting&lt;/h3&gt;
&lt;p&gt;Copyright law protects "original works of authorship fixed in a tangible medium of expression." The key word is "original." A Z80 emulator is copyrightable because the programmer made creative choices in expressing the specification as code. Two programmers given the same Zilog manual will produce different emulators: different variable names, different control flow structures, different optimization strategies, different architectural decisions. The specification constrains the behavior. The expression is where the creativity lives.&lt;/p&gt;
&lt;p&gt;This framework assumes that the gap between specification and implementation is where human creativity operates. The specification says "the ADD instruction sets the zero flag if the result is zero." A hundred programmers will write a hundred slightly different implementations of this behavior. Each is an original expression. Each is copyrightable.&lt;/p&gt;
&lt;p&gt;What happens when the gap closes? When the transformation from specification to implementation becomes mechanical, when there is no creative gap for originality to occupy, what is left to protect?&lt;/p&gt;
&lt;p&gt;Claude's Z80 emulator makes specific structural choices: the x/y/z/p/q bit-field decomposition, the callback-based system bus interface, the T-state tracking architecture. These choices came from my architectural plan, not from Claude's autonomous creativity. I specified the structure; Claude filled it in from the Zilog manual. The "creative choices" that copyright relies on were mine (the architecture) and the specification's (the behavior). Claude's contribution was the transformation between the two, and that transformation is closer to compilation than to authorship.&lt;/p&gt;
&lt;p&gt;Lattice pushes this further. Claude writes programs in a language with no training data, from a grammar specification and examples I provided. The output is correct Lattice code. But who is the author? I designed the language. Claude learned it from my spec. The programs it produces are implementations of tasks I described. At no point did Claude exercise the kind of independent creative judgment that copyright assumes. It transformed a task description into code in a grammar it learned from me. The entire chain from specification to implementation is mechanical, even though the output looks exactly like something a human programmer would write.&lt;/p&gt;
&lt;h3&gt;The Dissolution&lt;/h3&gt;
&lt;p&gt;Clean-room reverse engineering was a legal ritual designed to prove that a human developer's mental model was not contaminated by existing code. The ritual made sense when the concern was human memory: a developer who has read source code might unconsciously reproduce it.&lt;/p&gt;
&lt;p&gt;AI makes the ritual meaningless in two ways.&lt;/p&gt;
&lt;p&gt;First, provenance is undemonstrable. You cannot prove that Claude's output is or isn't derived from a specific piece of training data, because the model's internal representations don't maintain source attribution. The clean-room question ("did the developer see the original code?") has no answerable equivalent for an LLM. The model has seen everything in its training data simultaneously. It cannot unsee selectively.&lt;/p&gt;
&lt;p&gt;Second, the distinction between "specification" and "implementation" is collapsing. When the transformation between them is mechanical and instantaneous, the specification &lt;em&gt;is&lt;/em&gt; the implementation in a meaningful sense. The Zilog manual contains the Z80 emulator the way an acorn contains an oak tree. The transformation from one to the other requires energy and process, but the information content is the same. Copyright protects the expression, but when the expression is a deterministic function of the specification, the creative contribution approaches zero.&lt;/p&gt;
&lt;p&gt;This doesn't mean all AI-generated code is uncopyrightable. If I write a detailed architectural plan, direct Claude to implement it, review and revise the output, and make structural decisions throughout the process, the result reflects my creative choices expressed through an AI tool. The tool is more sophisticated than a compiler, but the relationship is similar: I made the design decisions; the tool translated them into a lower-level representation. The copyright, if it exists, is in my architectural choices, not in Claude's line-by-line implementation.&lt;/p&gt;
&lt;p&gt;But if someone asks Claude to "write a Z80 emulator" with no architectural plan, no structural constraints, and no iterative review, and Claude produces a working emulator from its training data, who owns that code? Not the person who typed the prompt; they made no creative contribution beyond the request. Not Anthropic; they built the tool but didn't direct the output. Not the authors of the Z80 emulators in the training data; their code wasn't copied in any legally meaningful sense. The code exists in a copyright vacuum: produced by a process that doesn't have an author in the way the law requires.&lt;/p&gt;
&lt;h3&gt;Why This Matters Now&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://tinycomputers.io/posts/the-excavator-and-the-foundation.html"&gt;velocity of AI-assisted code production&lt;/a&gt; is accelerating. Every developer using Claude, Copilot, or Cursor is producing code whose provenance is uncertain. The code works. It passes tests. It ships to production. And its relationship to the training data that informed it is, in a strict legal sense, unknown and unknowable.&lt;/p&gt;
&lt;p&gt;The current legal frameworks (copyright, clean room, fair use) were designed for a world where code was written by humans who could testify about their creative process. "I read the specification. I designed the architecture. I wrote the code. I did not reference any existing implementation." This testimony is the foundation of clean-room defense. An LLM cannot provide it, and the human directing the LLM can only testify about their own contributions (the prompt, the architectural plan, the review), not about what the model drew from.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/enframing/compaq-portable.jpg" alt="A Compaq Portable computer, the machine whose clean-room BIOS reimplementation established the legal precedent AI is now dissolving" style="float: right; max-width: 350px; margin: 0 0 1em 1.5em; border-radius: 4px; box-shadow: 0 10px 20px rgba(0,0,0,.1);" loading="lazy"&gt;&lt;/p&gt;
&lt;p&gt;I took a CS ethics course as an undergraduate. The cases we studied (Compaq's clean-room reimplementation of the IBM PC BIOS, SCO's claim that Linux contained UNIX code, DeCSS and the DMCA's prohibition on circumventing copy protection) all assumed a human author whose creative process could be examined and whose sources could be traced. Every one of those cases would be decided differently if the defendant had said "I told an AI to implement the specification and it produced this code." The existing precedent doesn't apply, and the new precedent doesn't exist yet.&lt;/p&gt;
&lt;h3&gt;The Acorn and the Oak&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://baud.rs/Ij1iHE"&gt;Heidegger&lt;/a&gt; would say that the danger of Enframing is not that it's wrong but that it's totalizing. When technology reveals everything as standing reserve, we lose the ability to see things as they are. The river becomes only a power source. The specification becomes only raw material for code generation. The act of programming becomes only a transformation pipeline from input to output.&lt;/p&gt;
&lt;p&gt;What gets lost is what the clean-room process was actually designed to protect: the space between specification and implementation where human understanding operates. That space is where a developer reads "the ADD instruction sets the zero flag if the result is zero" and decides how to express that in code. The decision is small. The creativity is modest. But it's real, and it's human, and it's the entire basis of software copyright.&lt;/p&gt;
&lt;p&gt;AI doesn't eliminate that space. My Z80 emulator project included genuine creative decisions: the architecture, the test strategy, the system emulator design. Lattice exists because I designed a novel type system that no AI would have invented from existing languages. The creative space still exists for the people who operate at the design level.&lt;/p&gt;
&lt;p&gt;But for the implementation level, for the transformation from "what this should do" to "code that does it," the space is closing. The specification is becoming the implementation. The acorn is becoming the oak without passing through the seasons of human comprehension. And the legal and philosophical frameworks we built for a world where that transformation required human creativity haven't caught up.&lt;/p&gt;
&lt;p&gt;They will. The question is how much code ships before they do.&lt;/p&gt;</description><category>ai</category><category>clean room</category><category>copyright</category><category>heidegger</category><category>intellectual property</category><category>jevons paradox</category><category>lattice</category><category>philosophy</category><category>programming languages</category><category>software licensing</category><category>z80</category><guid>https://tinycomputers.io/posts/enframing-the-code.html</guid><pubDate>Sun, 22 Mar 2026 13:00:00 GMT</pubDate></item><item><title>The Excavator and the Foundation</title><link>https://tinycomputers.io/posts/the-excavator-and-the-foundation.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/the-excavator-and-the-foundation_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;26 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;Jason Fried posted a &lt;a href="https://baud.rs/aHG0UZ"&gt;sharp critique&lt;/a&gt; of the "bespoke software revolution" narrative this week. His argument: most people don't like computers, don't want software projects, and won't become builders just because AI hands them better tools. The three-person accounting firm wants the paperwork gone, not a new system to maintain. The logistics company wants optimized routes, not Joe's side project. The law firm wants leverage on their time, not a codebase.&lt;/p&gt;
&lt;p&gt;His metaphor is good: "A powerful excavator doesn't turn a homeowner into a contractor. Most people just want the hole dug by someone else."&lt;/p&gt;
&lt;p&gt;He's right about who builds. He's wrong about what happens next.&lt;/p&gt;
&lt;h3&gt;The Echo Chamber Is Real&lt;/h3&gt;
&lt;p&gt;Fried's observation about the software community talking to itself lands because it's obviously true. Open any tech feed and the bespoke software excitement is coming from people who already build software for a living. They're excited because AI makes their work faster and more interesting. They project that excitement onto everyone else and conclude that everyone will want to build. This is like assuming everyone wants to change their own oil because you enjoy working on cars.&lt;/p&gt;
&lt;p&gt;Most people have no interest in building software. Not because they lack intelligence or creativity, but because software is a means to an end and they'd rather focus on the end. The accounting firm wants to close the books faster. The logistics company wants fewer empty miles. These are domain problems, not software problems, and the people who understand them best have spent their careers on the domain, not on code.&lt;/p&gt;
&lt;p&gt;Fried identifies the outliers correctly: the people who go deep with AI building tools were already dabblers. The curiosity was already there. AI didn't create new builders; it gave existing builders a power tool. This is an important observation that the tech community consistently ignores because it's less exciting than "everyone becomes a developer."&lt;/p&gt;
&lt;h3&gt;Where Fried Stops&lt;/h3&gt;
&lt;p&gt;But Fried's analysis ends at "most people won't build," and that's where the interesting question starts. Because some people will try.&lt;/p&gt;
&lt;p&gt;Not the majority. Not the three-person accounting firm drowning in paperwork. But the accounting firm's nephew who's "good with computers." The operations manager at the logistics company who watched a YouTube tutorial on Cursor. The paralegal at the law firm who built a spreadsheet macro once and now has access to tools that can generate entire applications from a text description.&lt;/p&gt;
&lt;p&gt;These people exist in every organization. They're not professional developers. They don't think of themselves as builders. But they have just enough technical confidence to be dangerous, and AI tools have just lowered the barrier enough to let them act on it.&lt;/p&gt;
&lt;p&gt;This is not a hypothetical. It's already happening. People are building internal tools with AI assistance, deploying them to their teams, and running business processes on software that no one with software judgment has reviewed. The tools work on the happy path. They do exactly what the builder asked for. The problem is what the builder didn't ask for.&lt;/p&gt;
&lt;h3&gt;The Happy Path Is All You Get&lt;/h3&gt;
&lt;p&gt;When a non-developer builds software with AI, they describe what they want: "I need a tool that takes client intake forms, extracts the relevant fields, and puts them in a spreadsheet." The AI builds it. It works. The builder is thrilled.&lt;/p&gt;
&lt;p&gt;What the builder didn't specify, and the AI didn't volunteer:&lt;/p&gt;
&lt;p&gt;What happens when a client submits a form with special characters that break the parser? What happens when two people submit simultaneously? What happens when the spreadsheet hits the row limit? Where are the backups? Who has access? What happens when the API key expires? What happens when the builder leaves the company and nobody knows how the tool works?&lt;/p&gt;
&lt;p&gt;These aren't obscure edge cases. They're the standard failure modes of every software system ever built. Professional developers think about them not because they're smarter, but because they've watched systems fail in these exact ways. That accumulated experience of failure is what I've been calling &lt;a href="https://tinycomputers.io/posts/the-split-isnt-between-people-its-between-tasks.html"&gt;the judgment layer&lt;/a&gt;: the part of building that AI can't replace because it requires contact with the consequences of getting it wrong.&lt;/p&gt;
&lt;p&gt;The operations manager building a routing tool in Cursor has domain judgment about logistics. She knows which routes are efficient and which constraints matter. She does not have software judgment about error handling, data integrity, concurrent access, or failure recovery. Professional developers fail at these things constantly too. The difference is that professionals recognize the failure when it happens and have the skills to iterate toward a fix. The operations manager's tool breaks the same way, but she doesn't know it broke, doesn't know why, and doesn't know what to do about it. The AI gave her a tool that satisfies her domain judgment perfectly and her software judgment not at all, because she doesn't have any, and she doesn't know she doesn't have any.&lt;/p&gt;
&lt;h3&gt;This Has Happened Before&lt;/h3&gt;
&lt;p&gt;The counterargument writes itself: people have been building bad mission-critical software forever. Hospitals tracked patient records in Access databases. Small banks ran loan portfolios in Excel. Supply chains depended on macros that one person understood. When that person left, nobody could maintain it. The world survived.&lt;/p&gt;
&lt;p&gt;This is true, and it's important to take seriously. "Bad software" and "functional software" are not mutually exclusive. The accounting firm's Access database was terrible by every engineering standard and it ran their business for fifteen years. The nurse's Excel tracker was a data integrity nightmare and it kept patient appointments from falling through the cracks. Fried is right that custom software has always been "bloated, confusing, and built wrong in all the ways." He's also right that it existed and that people used it.&lt;/p&gt;
&lt;p&gt;So if bad software has always existed and the world kept turning, what changes with AI?&lt;/p&gt;
&lt;h3&gt;Velocity&lt;/h3&gt;
&lt;p&gt;The change is speed.&lt;/p&gt;
&lt;p&gt;Access took months to build something broken. You had to learn Access first, or find someone who knew it. You had to build the forms, design the tables, write the queries. The pace of construction imposed a natural speed limit on how fast bad software could enter production. By the time you finished, you'd encountered at least some of the failure modes, because the slow process forced you through enough iterations to stumble into them.&lt;/p&gt;
&lt;p&gt;AI removes that speed limit. The operations manager can go from "I have an idea" to "it's running in production" in an afternoon. The intake form tool is live before lunch. The routing optimizer is deployed by end of day. The contract parser is running by Friday. Each one works on the happy path. Each one has the same class of unexamined failure modes that Access databases had. But Access databases took months to accumulate. AI-built tools accumulate in days.&lt;/p&gt;
&lt;p&gt;More attempts. Same failure rate. More failures. Compressed into a shorter timeline. By the time the first tool breaks, three more have been deployed. By the time someone realizes the intake form tool is silently dropping records with special characters, the routing optimizer and the contract parser are already load-bearing parts of the business.&lt;/p&gt;
&lt;p&gt;This is &lt;a href="https://tinycomputers.io/posts/jevons-paradox.html"&gt;Jevons Paradox&lt;/a&gt; applied to the failure mode itself. When building software gets cheaper, you don't get the same amount of bad software for less effort. You get vastly more bad software for the same effort. The per-unit cost of production drops, total production expands, and the total volume of unreviewed, unexamined software in production grows faster than anyone anticipated.&lt;/p&gt;
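&lt;p&gt;The arithmetic is blunt (every number below is invented for illustration):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Invented numbers; only the ratio matters.
failure_rate = 0.4           # latent failure modes per tool: unchanged by AI

access_tools_per_year = 2    # months per build was a natural speed limit
ai_tools_per_year = 50       # an afternoon per build removes the limit

print("latent failures per year, Access era:", access_tools_per_year * failure_rate)
print("latent failures per year, AI era:", ai_tools_per_year * failure_rate)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The failure rate didn't change. The clock did.&lt;/p&gt;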
&lt;h3&gt;The Judgment Bottleneck&lt;/h3&gt;
&lt;p&gt;I've argued in previous pieces that &lt;a href="https://tinycomputers.io/posts/the-ai-vampire-is-jevons-paradox.html"&gt;human judgment is the binding constraint&lt;/a&gt; in AI-augmented work. AI makes the labor cheaper; demand expands; the expansion concentrates on the one input that can't scale: the human capacity for deep, focused evaluation. The &lt;a href="https://tinycomputers.io/posts/the-ai-vampire-is-jevons-paradox.html"&gt;three-to-four-hour ceiling on cognitively demanding work&lt;/a&gt; is biological, not cultural, and no productivity tool changes it.&lt;/p&gt;
&lt;p&gt;Software judgment is a specific instance of this general constraint. Reviewing code for failure modes, reasoning about edge cases, thinking through data integrity, anticipating what happens when components interact in unexpected ways: this is deep work. It requires the kind of sustained attention that depletes on a fixed biological schedule. And the supply of people who have this judgment is not growing. Even a full pipeline of graduates wouldn't change that quickly: software judgment comes from experience, not coursework. You develop it by watching systems fail, and that takes years.&lt;/p&gt;
&lt;p&gt;AI expands the rate at which software enters production. It does not expand the rate at which qualified people can review it. The production side scales. The judgment side doesn't.&lt;/p&gt;
&lt;p&gt;And the judgment side may actually be contracting. After sixteen consecutive years of growth, undergraduate CS enrollment began falling in 2025. The Computing Research Association (CRA) found that 62% of computing departments reported declining enrollment for 2025-26, while only 13% saw increases. At University of California campuses, CS enrollment fell 6% in 2025 after declining 3% in 2024: the first drops since the dot-com crash. Students and their parents are reading the headlines about AI displacing entry-level developers and steering toward fields they perceive as more durable.&lt;/p&gt;
&lt;p&gt;The irony is thick. The fear that AI will replace software developers is reducing the supply of software developers at the exact moment that AI is massively expanding the demand for software judgment. Students are fleeing the field because they think AI can do the work. AI is simultaneously creating more work that only humans with software judgment can evaluate. The enrollment decline doesn't just fail to solve the judgment bottleneck; it tightens it.&lt;/p&gt;
&lt;p&gt;The gap between "software that exists" and "software that someone qualified has evaluated" widens from both directions: production accelerates while the pipeline of qualified reviewers narrows. Something has to give, and what gives is the review.&lt;/p&gt;
&lt;h3&gt;Software Slop&lt;/h3&gt;
&lt;p&gt;I wrote an essay about &lt;a href="https://tinycomputers.io/posts/llm-generated-content-what-makes-something-slop.html"&gt;what makes AI-generated content "slop"&lt;/a&gt;: superficial competence masking an absence of substance. The text looks right. The grammar is clean. The structure is logical. But it doesn't commit to anything, doesn't engage with anything, doesn't mean anything. It fills the container without filling it with content.&lt;/p&gt;
&lt;p&gt;AI-generated software has the same property. The code is syntactically correct. The UI has proper styling, responsive layouts, loading spinners, appropriate error messages. It passes every visual inspection. A manager looking at a demo sees a professional application. A user running through the standard workflow sees something that works.&lt;/p&gt;
&lt;p&gt;Underneath: no input validation beyond what the framework provides for free. No error handling beyond try/catch blocks that swallow exceptions. No concurrency protection. No backup strategy. No audit trail. No security beyond defaults. The software is superficially competent and structurally hollow, and you cannot tell the difference by looking at it.&lt;/p&gt;
&lt;p&gt;This is what distinguishes the AI-built software problem from the Access database problem. Access databases looked like Access databases. The limitations were visible in the interface. The grey forms, the flat tables, the clunky queries: everyone could see they were using a tool that was not designed for what they were doing with it. The expectations were calibrated, even if the risks weren't.&lt;/p&gt;
&lt;p&gt;AI-built software looks like real software. The surface quality has been democratized. What hasn't been democratized is the structural integrity underneath. And because the surface looks professional, the people using it have no signal that anything is missing. The feedback loop that would normally tell you "this is a prototype, not a product" has been severed. The prototype looks like the product, and nobody in the room can tell the difference except the people with software judgment, who weren't in the room when it was built.&lt;/p&gt;
&lt;h3&gt;What Fried Misses&lt;/h3&gt;
&lt;p&gt;Fried's framework has one gap. He says the demand for bespoke software won't grow because people don't want software projects. But the demand is already growing, not because people want to build, but because AI collapsed the apparent cost of building to near zero. The operations manager didn't set out to start a software project. She set out to solve a routing problem, and the software was a side effect that happened so fast she didn't register it as a project.&lt;/p&gt;
&lt;p&gt;This is the mechanism Fried doesn't account for. The excavator doesn't turn the homeowner into a contractor. But it does let the homeowner dig a hole so fast that they're standing in it before they realize they don't know what they're doing. The question isn't whether they wanted to dig. It's what happens now that the hole exists and the house is being built on top of it.&lt;/p&gt;
&lt;p&gt;The bespoke software revolution won't come from people deliberately choosing to become builders. It will come from people accidentally becoming builders because the tools made it so frictionless that building happened before the decision to build was consciously made. And the software they produce will be the fastest-growing category of technical debt in history, because it was created without judgment, deployed without review, and adopted without anyone understanding what's underneath.&lt;/p&gt;
&lt;h3&gt;Who Benefits&lt;/h3&gt;
&lt;p&gt;Fried is right that the excitement about bespoke software comes from software makers. What he doesn't say is why they should be excited. It's not because everyone becomes a builder. It's because everyone becomes a client.&lt;/p&gt;
&lt;p&gt;Every operations manager who builds a broken routing tool and discovers it doesn't handle the edge cases is a future client for someone who can build it properly. Every accounting firm that deploys an AI-built intake system and loses data is a future client for someone who understands data integrity. The DIY phase doesn't replace professional software development. It creates demand for it, at a scale and urgency that didn't exist before, because now the potential clients have firsthand experience with why the problem is hard.&lt;/p&gt;
&lt;p&gt;The judgment bottleneck doesn't prevent the Jevons expansion. It shapes it. More software gets attempted. More software fails. The failures create demand for the constrained resource (qualified judgment) at a rate that exceeds supply. The people who have software judgment become more valuable, not less, because the volume of work that needs their attention has exploded.&lt;/p&gt;
&lt;p&gt;Fried's excavator metaphor is correct. Most homeowners won't become contractors. But the excavator lets them dig enough bad foundations that the contracting business booms. AI doesn't democratize building. It democratizes demand.&lt;/p&gt;
&lt;h3&gt;The Forecast&lt;/h3&gt;
&lt;p&gt;I'll make a prediction specific enough to be wrong about. Within three years, the majority of data-loss and security incidents at small and mid-sized businesses will trace back to AI-assisted internal tools built without professional review. Not because the AI wrote bad code (the code will be syntactically fine), but because the person directing the AI didn't know what to ask for and didn't know what they were missing. The failure mode won't be dramatic. It will be silent: records that were never backed up, access controls that were never configured, race conditions that corrupt data once a month in a pattern nobody notices until the audit.&lt;/p&gt;
&lt;p&gt;There is an irony here that I should name. The people who most need to read this are the ones who never will. The operations manager vibing a routing tool into production this afternoon is not reading a blog about Jevons Paradox and GPU inference. She's solving her problem, and it feels like it's working, and no article on a site called Tiny Computers is going to reach her before the first silent failure does.&lt;/p&gt;
&lt;p&gt;The bespoke software revolution is real. It's just not the revolution anyone is advertising. It's not a million people building great custom tools. It's a million people building adequate tools with invisible structural deficiencies, deployed to production at a velocity that outpaces the world's capacity to review them. The excavator is powerful, the foundations are being dug, and most of them are too shallow.&lt;/p&gt;</description><category>ai</category><category>bespoke software</category><category>economics</category><category>jason fried</category><category>jevons paradox</category><category>judgment</category><category>philosophy</category><category>slop</category><category>software development</category><guid>https://tinycomputers.io/posts/the-excavator-and-the-foundation.html</guid><pubDate>Sat, 21 Mar 2026 13:00:00 GMT</pubDate></item><item><title>The Split Isn't Between People, It's Between Tasks</title><link>https://tinycomputers.io/posts/the-split-isnt-between-people-its-between-tasks.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/the-split-isnt-between-people-its-between-tasks_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;26 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;Les Orchard's &lt;a href="https://baud.rs/FtBjVK"&gt;"Grief and the AI Split"&lt;/a&gt; identifies something real. AI tools have revealed a division among developers that was previously invisible, because before these tools existed, everyone followed the same workflow regardless of motivation. Now the motivations are exposed. Some developers grieve the loss of hand-crafted code as a practice with inherent value. Others see the same tools and feel relief: the tedious parts are handled, the interesting parts remain. Orchard frames this as a split between people. Craft-oriented developers on one side, results-oriented developers on the other.&lt;/p&gt;
&lt;p&gt;He's right that the split exists, and the piece clearly resonated with software creators because it names something people have been feeling but couldn't articulate. The observation is sharp. Where I think it can be extended is in where the line falls.&lt;/p&gt;
&lt;p&gt;Orchard draws the line between people. I think it falls between tasks. The same person crosses that line dozens of times a day, moving between work that demands human judgment and work that doesn't, between moments where the craft concentrates and moments where it was never present in the first place. The split is real. It's just not an identity.&lt;/p&gt;
&lt;h3&gt;The Kernel I Didn't Write&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://tinycomputers.io/posts/jokelaos-bare-metal-x86-kernel.html"&gt;JokelaOS&lt;/a&gt; is a bare-metal x86 kernel: 2,000 lines of C and NASM assembly, booting from a Multiboot header through GDT (Global Descriptor Table, which defines memory segments and access rights) and IDT (Interrupt Descriptor Table, which maps interrupt vectors to service routines) setup, paging, preemptive multitasking with Ring 3 isolation, a network stack that responds to pings, and an interactive shell. No forks. No libc. Every &lt;code&gt;memcpy&lt;/code&gt;, every &lt;code&gt;printf&lt;/code&gt;, every byte-order conversion written from scratch.&lt;/p&gt;
&lt;p&gt;I didn't write most of it. Claude did.&lt;/p&gt;
&lt;p&gt;In Orchard's framework, this should place me firmly in the "results" camp. I used AI to produce 2,000 lines of systems code; clearly I care about the outcome, not the process. But that framing misses what actually happened during the project.&lt;/p&gt;
&lt;p&gt;The decisions that made JokelaOS work were not typing decisions. They were sequencing decisions: bring up serial output first, because without it you have no diagnostics for anything that follows. Initialize the GDT before the IDT, because interrupt handlers need valid segment selectors. Get the bump allocator working before the PMM (Physical Memory Manager), because page tables need permanent allocations before you can manage dynamic ones. These choices come from understanding how x86 protected mode actually works, which subsystems depend on which, and what the failure modes look like when you get the order wrong.&lt;/p&gt;
&lt;p&gt;Claude generated the GDT setup code. I decided what the GDT entries should be, caught the access byte errors, and debugged the triple faults when segment selectors were wrong. Claude wrote the process scheduler. I determined that the TSS (Task State Segment, which tells the CPU where to find the kernel stack when switching privilege levels) needed updating on every context switch and diagnosed the General Protection Faults that occurred when it wasn't. Claude produced the RTL8139 network driver. I decided to bring up ARP before ICMP, caught a byte-order bug in the IP checksum, and validated that the packets leaving QEMU were actually well-formed.&lt;/p&gt;
&lt;p&gt;The typing was delegated. The architecture, the sequencing, the diagnosis, the validation: those were mine. If you asked me whether JokelaOS involved craft, I would say yes, more than most projects I've done. If you asked me where the craft was, I would not point at any line of code.&lt;/p&gt;
&lt;h3&gt;The Board That Failed Twice&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://tinycomputers.io/posts/fiverr-pcb-design-arduino-giga-shield.html"&gt;Giga Shield&lt;/a&gt; tells a longer version of the same story, and it's messier, because hardware involves the physical world in a way that software doesn't.&lt;/p&gt;
&lt;p&gt;The project started with a $468 Fiverr commission. I gave a designer in Kenya the spec documents, the components I thought should be used, and the form factor requirements: an &lt;a href="https://baud.rs/poSQeo"&gt;Arduino Giga R1&lt;/a&gt; shield with bidirectional level shifters, 72 channels of 3.3V-to-5V translation, KiCad deliverables. He produced a clean design. Nine &lt;a href="https://baud.rs/y9JJt9"&gt;TXB0108PW&lt;/a&gt; auto-sensing translators on a two-layer board. &lt;a href="https://baud.rs/youwpy"&gt;PCBWay&lt;/a&gt; fabricated it and sponsored the fabrication. Professional work, quick turnaround.&lt;/p&gt;
&lt;p&gt;Then I plugged in the &lt;a href="https://baud.rs/87wbBL"&gt;RetroShield Z80&lt;/a&gt; and the board was blind.&lt;/p&gt;
&lt;p&gt;The TXB0108 detects signal direction automatically by sensing which side is driving. For most applications, that's a feature. For a Z80 bus interface, it's fatal. During bus cycles, the Z80 tri-states its address and data lines. The pins go high-impedance: not high, not low, floating. The TXB0108 can't determine direction from a floating signal. It guesses wrong, and the Arduino reads garbage. I'd paid $468 for a board that couldn't see half of what the processor was doing.&lt;/p&gt;
&lt;p&gt;Nobody caught this in the design phase. Not the Fiverr designer, who was working from the spec I gave him. Not me, when I reviewed the schematic. The TXB0108 datasheet doesn't scream "incompatible with tri-state buses"; you have to understand what tri-stating means in practice and recognize that auto-sensing can't handle it. That understanding came from plugging the board in and watching it fail.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://tinycomputers.io/posts/redesigning-a-pcb-with-claude-code-and-open-source-eda-part-1.html"&gt;redesign&lt;/a&gt; used Claude to replace all nine auto-sensing translators with &lt;a href="https://baud.rs/zQqo34"&gt;SN74LVC8T245&lt;/a&gt; driven level shifters. Driven shifters have an explicit direction pin: you tell them which way to translate, and they do it regardless of whether the signal is being actively driven. Claude wrote Python scripts that pulled apart the KiCad schematic files, extracted all 72 signal mappings across 9 ICs, and generated new board files with the correct components and pin assignments.&lt;/p&gt;
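&lt;p&gt;For a flavor of what those scripts did, here is a minimal sketch of the extraction step. The file name and the net-label naming scheme are hypothetical, and a real KiCad 6+ schematic is a full S-expression tree; this only scrapes labels with a regex, which is enough when every shifted channel carries a matched pair of labels.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# extract_mappings.py -- illustrative sketch only. Assumes a KiCad 6+
# S-expression schematic and a hypothetical "NAME_3V3" / "NAME_5V"
# labeling convention for the two sides of each level-shifted channel.
import re
from collections import defaultdict

SCHEMATIC = "giga_shield.kicad_sch"  # hypothetical file name

# KiCad 6+ stores net labels as: (label "D22_3V3" (at 165.1 78.74 0) ...)
LABEL_RE = re.compile(r'\(label\s+"([^"]+)"\s+\(at\s+([\d.]+)\s+([\d.]+)')

def extract_labels(path):
    """Return {net_name: [(x, y), ...]} for every net label in the sheet."""
    positions = defaultdict(list)
    with open(path) as f:
        text = f.read()
    for name, x, y in LABEL_RE.findall(text):
        positions[name].append((float(x), float(y)))
    return positions

if __name__ == "__main__":
    nets = extract_labels(SCHEMATIC)
    # Under the assumed convention, a shifted channel is a matched pair of
    # labels, e.g. "D22_3V3" on the Giga side and "D22_5V" on the bus side.
    channels = sorted(n.removesuffix("_3V3") for n in nets if n.endswith("_3V3"))
    print(f"{len(channels)} candidate channels")
&lt;/code&gt;&lt;/pre&gt;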
&lt;p&gt;I was about to submit the revised design to PCBWay when I realized we needed a tenth level shifter. The original nine covered not just the digital pins that map to the Z80 RetroShield but all of the analog pins on the Giga, giving complete 3.3V-to-5V coverage across the board. But with driven shifters, each IC has a single direction pin controlling all eight channels. Signals that need to travel in opposite directions at different times can't share an IC without creating bus contention. Some of the channel assignments had conflicting direction requirements, and the only fix was a tenth IC to separate them.&lt;/p&gt;
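&lt;p&gt;The constraint is easy to state in code. A driven shifter like the SN74LVC8T245 has one direction pin per eight-channel bank, so two signals that need opposite directions at the same time can never share a bank. The toy sketch below (signal names and direction assignments invented for illustration, not the real netlist) packs signals greedily and shows how mixed directions inflate the IC count:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Toy model of the bank-assignment constraint. Signal names and their
# direction requirements are invented for illustration.
# Direction is from the 3.3V (Giga) side: "in" = the 5V bus drives,
# "out" = the Giga drives, "inout" = flips at runtime, needs its own bank.
signals = (
    [(f"A{i}", "in") for i in range(16)] +    # address bus: Z80 drives
    [(f"D{i}", "inout") for i in range(8)] +  # data bus: flips per cycle
    [("RD", "in"), ("WR", "in"), ("MREQ", "in"), ("INT", "out")]
)

def assign_banks(signals, width=8):
    """Greedy packing: each bank holds at most 'width' signals, and only
    signals that share a single direction requirement."""
    banks = []  # each entry: [direction, member_names]
    for name, direction in signals:
        for bank in banks:
            # banks never grow past 'width', so != means "has room"
            if bank[0] == direction and len(bank[1]) != width:
                bank[1].append(name)
                break
        else:
            banks.append([direction, [name]])
    return banks

banks = assign_banks(signals)
for i, (direction, members) in enumerate(banks, 1):
    print(f"IC{i}: dir={direction:5} {members}")
# Direction-blind packing of these 28 signals would need only 4 banks.
print(f"{len(banks)} ICs for {len(signals)} signals")
&lt;/code&gt;&lt;/pre&gt;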
&lt;p&gt;Adding one more TSSOP-24 package to an already dense two-layer board broke the trace routing. The board that had been routable with nine ICs was unroutable with ten. Moving to four layers helped but still left two to four traces with no viable path. The solution was a six-layer stackup, which needed a copper pour layer to act as a common ground plane. The open-source autorouter Freerouting couldn't handle a full copper pour; its architecture has no concept of flood-fill connectivity. So I used &lt;a href="https://baud.rs/wdr0dP"&gt;Quilter.ai&lt;/a&gt;, an AI trace router, to route the six-layer board with the ground plane that the open-source tooling couldn't represent.&lt;/p&gt;
&lt;p&gt;Count the layers of delegation and intervention in this project. I delegated the initial design to a human professional. Physics revealed the flaw. I delegated the redesign to an AI. I caught the missing tenth shifter before it went to fabrication. I delegated the trace routing to another AI. PCBWay is currently manufacturing these boards. At every stage, the work alternated between labor that could be delegated and judgment that couldn't. The Fiverr designer did skilled labor. Claude did skilled labor. Quilter.ai did skilled labor. The craft was never in the labor. It was in knowing when the labor was wrong.&lt;/p&gt;
&lt;h3&gt;Where the Craft Actually Lives&lt;/h3&gt;
&lt;p&gt;Both of these projects point at the same thing. The craft isn't in the typing, the routing, or the code generation. It's in a layer that sits above and around all of that: the judgment layer.&lt;/p&gt;
&lt;p&gt;The judgment layer is where you decide what to build next. Where you recognize that the output is wrong before you can articulate why. Where you sequence subsystems based on dependency chains that aren't documented anywhere. Where you plug a board in and notice that the readings don't make sense. Where you catch a missing component that the AI, the designer, and the autorouter all missed because none of them were thinking about the problem at that level.&lt;/p&gt;
&lt;p&gt;This layer has specific properties. It requires contact with the problem domain, not just the code or the schematic but the actual behavior of the system under real conditions. It depends on accumulated experience: understanding what tri-stating means in practice, knowing that x86 protected mode has forty years of backward-compatible traps waiting for you. And it's the part that AI is worst at, precisely because it requires grounding in physical or logical reality that language models don't have access to.&lt;/p&gt;
&lt;p&gt;The TXB0108 failure is the clearest example. The information needed to predict this failure existed in the datasheets. But recognizing its relevance required understanding what a Z80 bus cycle actually looks like at the electrical level, which required either experience with the hardware or a simulation environment that nobody had set up. No amount of language model capability substitutes for plugging in the board and watching it fail.&lt;/p&gt;
&lt;h3&gt;The Same Person in Both Modes&lt;/h3&gt;
&lt;p&gt;Orchard describes himself as results-oriented. He learned programming languages as "a means to an end" and gravitated toward AI tools because they let him focus on the outcome. He acknowledges that craft-oriented developers experience genuine loss. His framing is empathetic, but it still draws the line between people.&lt;/p&gt;
&lt;p&gt;The line doesn't hold, because I'm both of his archetypes depending on the hour.&lt;/p&gt;
&lt;p&gt;On Tuesday I might use Claude to generate a hundred lines of systemd service configuration because I need Ollama running on a machine and I don't care about the elegance of the unit file. On Wednesday I might spend three hours hand-debugging why &lt;code&gt;rocm-smi&lt;/code&gt; reports GPU utilization at zero percent: reading kernel logs, checking DKMS module versions, testing &lt;code&gt;HSA_OVERRIDE_GFX_VERSION&lt;/code&gt; values, loading the &lt;code&gt;amdgpu&lt;/code&gt; module manually because it didn't auto-load at boot. The first task is pure delegation. The second is pure craft. Both are mine. Both happened this week.&lt;/p&gt;
&lt;p&gt;When I wrote &lt;a href="https://tinycomputers.io/posts/the-economics-of-owning-your-own-inference.html"&gt;the economics piece&lt;/a&gt;, I used Claude to draft sections and I measured real power draw with &lt;code&gt;nvidia-smi&lt;/code&gt; and &lt;code&gt;rocm-smi&lt;/code&gt; at 500-millisecond intervals. I let AI handle the prose scaffolding and I personally caught that Ollama on the Strix Halo had been running entirely on CPU because the systemd service file was missing an environment variable. Every benchmark I'd trusted before finding that bug was wrong. No AI caught it. I caught it because the numbers felt off.&lt;/p&gt;
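&lt;p&gt;(For the curious, "measured at 500-millisecond intervals" means nothing fancier than a loop like this. A sketch, with an arbitrary one-minute window; the rocm-smi variant just swaps the command and the parsing.)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# poll_power.py -- sample GPU power draw every 500 ms via nvidia-smi's
# CSV query mode; sketch only, first GPU only.
import subprocess
import time

CMD = ["nvidia-smi", "--query-gpu=power.draw",
       "--format=csv,noheader,nounits"]

def sample_watts():
    """One power reading in watts."""
    out = subprocess.check_output(CMD, text=True)
    return float(out.splitlines()[0])

readings = []
for _ in range(120):        # 120 samples at 500 ms = roughly one minute
    readings.append(sample_watts())
    time.sleep(0.5)

print(f"{len(readings)} samples, mean draw {sum(readings) / len(readings):.1f} W")
&lt;/code&gt;&lt;/pre&gt;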
&lt;p&gt;These aren't different people. They're different tasks. The identity framing ("I'm a craft developer" or "I'm a results developer") obscures what's actually a task-level decision that experienced people make constantly: this piece of work benefits from my full attention; this piece doesn't.&lt;/p&gt;
&lt;h3&gt;What the Grief Is About&lt;/h3&gt;
&lt;p&gt;The craft-grief that Orchard describes is real and worth taking seriously. Part of it targets the wrong thing. Part of it doesn't.&lt;/p&gt;
&lt;p&gt;What's being mourned is typing as the bottleneck. For forty years, the primary constraint on software projects was the speed at which a human could produce correct code. Design mattered, architecture mattered, but someone still had to sit down and type it. The typing was slow enough that it forced a certain kind of attention. You couldn't write a function without thinking about it, because writing it took long enough that thinking was unavoidable. The bottleneck created the conditions for craft, and it felt like the craft itself.&lt;/p&gt;
&lt;p&gt;AI removes the bottleneck. Code appears in seconds. The thinking isn't forced by the typing anymore; it has to be deliberate. And that shift feels like a loss, because the rhythm of the work has changed. The long, meditative stretches of writing code, where your understanding deepened as your fingers moved, are replaced by short bursts of generation followed by review. The texture is different.&lt;/p&gt;
&lt;p&gt;But the craft didn't live in the texture. It lived in the judgment that the texture incidentally supported. The experienced developer who hand-writes a function isn't doing craft because the typing is slow. The typing is slow, and the craft happens during the slowness, but the craft is the decisions: what to name things, what to abstract, what edge cases to handle, when to stop. Those decisions haven't gotten easier. If anything, they've gotten harder, because AI lets you attempt projects that would have been too large to type by hand, which means you hit the judgment bottleneck more often and at higher stakes.&lt;/p&gt;
&lt;p&gt;JokelaOS would have taken me months to type by hand. I probably wouldn't have attempted it. With AI handling the code generation, I attempted it in days and spent the entire time making architecture and debugging decisions. The project had more craft in it than most things I've built, precisely because the typing wasn't the bottleneck. The judgment was.&lt;/p&gt;
&lt;h3&gt;The Biological Ceiling&lt;/h3&gt;
&lt;p&gt;I wrote in &lt;a href="https://tinycomputers.io/posts/the-ai-vampire-is-jevons-paradox.html"&gt;the AI Vampire piece&lt;/a&gt; that human judgment is the binding constraint in a Jevons cycle operating on cognitive output. AI makes the labor cheaper; demand expands; the expansion concentrates on the one input that can't scale: human attention and judgment. The three-to-four-hour ceiling on deep work is biological, not cultural, and no amount of productivity tooling changes it.&lt;/p&gt;
&lt;p&gt;The task-level split is where this plays out in practice. AI compresses the labor side of every project: the code generation, the trace routing, the prose drafting, the schematic extraction. What remains is denser, harder, and more consequential. Every hour of work has a higher ratio of judgment to labor than it did before AI. That's why the developers Steve Yegge writes about feel burned out: not because they're working more hours, but because every hour is now a judgment hour.&lt;/p&gt;
&lt;p&gt;The craft isn't disappearing. It's being compressed into a smaller, denser layer. The typing is gone. The design reviews are shorter. The code appears instantly. What's left is the part that was always the actual craft: deciding what to build, recognizing when it's wrong, knowing what to test, catching the missing tenth level shifter. That layer is entirely human, it's harder than it used to be because the projects are bigger, and it's the only part that matters.&lt;/p&gt;
&lt;p&gt;Orchard identified the split correctly. The grief is real, the division is real, and the piece resonated because it named something that software creators recognized immediately. The refinement I'd offer is that the line doesn't separate two kinds of people; it separates two kinds of tasks. The craft was never in the code. It was in the decisions that surrounded the code. Those decisions haven't gone anywhere. They've just lost the slow, meditative typing that used to accompany them. What remains is craft at higher concentration, with no filler.&lt;/p&gt;
&lt;p&gt;There was something cathartic about the old way. The hours of typing weren't just production; they were a complete experience. You conceived the idea, worked through the logic, typed every character, fought the compiler, and watched it run. The whole arc from intention to execution passed through your hands. That totality had a satisfaction to it that reviewing AI-generated output doesn't replicate, even when the output is correct.&lt;/p&gt;
&lt;p&gt;And there was something else: the syntax was a sacred tongue. Not everyone could read it. Not everyone could write it. The curly braces, the pointer arithmetic, the register mnemonics formed a language that belonged to the people who had invested years learning to speak it. That exclusivity wasn't gatekeeping for its own sake; it was the mark of hard-won fluency, and it meant something to the people who had it. Now anyone can describe what they want in English and get working code back. The priesthood dissolved overnight.&lt;/p&gt;
&lt;p&gt;I feel that loss. I still create. I still orchestrate. I still catch the errors that the tools miss. But I no longer speak a language that most people can't. The judgment layer is real, and it's where the work that matters happens. But it doesn't carry the same weight as mastery of a difficult notation. Orchestrating a process is not the same as performing it, even if the orchestration requires more skill.&lt;/p&gt;
&lt;p&gt;The grief is real. It's not about the wrong thing. It's about something that actually disappeared.&lt;/p&gt;</description><category>ai</category><category>claude</category><category>craft</category><category>hardware</category><category>jevons paradox</category><category>jokelaos</category><category>judgment</category><category>pcb design</category><category>philosophy</category><category>software development</category><guid>https://tinycomputers.io/posts/the-split-isnt-between-people-its-between-tasks.html</guid><pubDate>Thu, 19 Mar 2026 13:00:00 GMT</pubDate></item><item><title>LLM-Generated Content: What Makes Something Slop</title><link>https://tinycomputers.io/posts/llm-generated-content-what-makes-something-slop.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/llm-generated-content-what-makes-something-slop_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;20 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;Merriam-Webster named "slop" its &lt;a href="https://baud.rs/QUwnPW"&gt;2025 Word of the Year&lt;/a&gt;. In its new usage, the word describes low-quality AI-generated content produced and distributed with minimal human oversight. It captures something the internet has been feeling for a while: the growing suspicion that much of what appears online wasn't written so much as emitted.&lt;/p&gt;
&lt;p&gt;I should be transparent about where I stand. This blog uses AI-generated text-to-speech narration on every post. The articles about &lt;a href="https://tinycomputers.io/posts/the-mathematics-of-pcb-trace-routing.html"&gt;PCB trace routing&lt;/a&gt; describe boards that were auto-routed by algorithms. The code that builds and deploys this site was partially written with Claude Code assistance. I wrote a &lt;a href="https://tinycomputers.io/posts/jevons-paradox.html"&gt;six-part series on Jevons Paradox&lt;/a&gt; with AI tools open in the next terminal window the entire time. I am not writing this from outside the system.&lt;/p&gt;
&lt;p&gt;And yet I know slop when I see it. You probably do too. The interesting question is not whether slop exists (it obviously does) but what exactly we're recognizing when we encounter it. What quality makes certain AI-generated or AI-assisted content feel hollow, and what distinguishes it from output that has substance? The answer matters, because if we can't articulate the distinction, we're left with a binary that helps nobody: reject all AI tools, or accept everything they produce uncritically.&lt;/p&gt;
&lt;h3&gt;You Know It When You See It&lt;/h3&gt;
&lt;p&gt;In 1964, Justice Potter Stewart offered his famous non-definition of obscenity: "I know it when I see it." We're in a similar position with AI slop. Most people can identify it immediately but struggle to explain what they're detecting.&lt;/p&gt;
&lt;p&gt;The surface markers are easy enough to catalog. The hedging language: "It's important to note that..." The false balance, presenting every issue as having exactly two equally valid sides. The emoji padding that serves no communicative purpose. The five-paragraph essay structure applied to every topic regardless of complexity. The confident incorrectness: statements delivered with the same breezy authority whether they're true or fabricated. The vocabulary of caution and qualification that reads less like thoughtfulness and more like a language model covering its bases.&lt;/p&gt;
&lt;p&gt;These are the tells that AI-detection tools try to measure, and they work well enough for obvious cases. But they're symptoms, not the disease. A skilled prompt engineer can eliminate every one of these markers and still produce slop. Conversely, a human writer can exhibit several of them (hedging, false balance, structural rigidity) and still produce something worth reading. The surface features point toward the problem without being the problem itself.&lt;/p&gt;
&lt;p&gt;What we're actually detecting is an absence. Not an absence of quality at the sentence level (LLMs write clean, grammatical sentences) but an absence of something harder to name. The text reads correctly line by line and says nothing paragraph by paragraph. It is fluent without being articulate. It covers a topic without engaging with it. And we recognize this gap almost instantly, the way you recognize a smile that doesn't reach someone's eyes.&lt;/p&gt;
&lt;h3&gt;The Three Properties&lt;/h3&gt;
&lt;p&gt;The MINT Lab at Indiana University proposed a useful framework for thinking about this. They identified three properties that characterize slop: superficial competence, asymmetric effort, and mass producibility.&lt;/p&gt;
&lt;p&gt;Superficial competence is the core mechanism. The text performs competence at the surface level: vocabulary is appropriate, structure is logical, claims are plausible. But it doesn't demonstrate competence at the level of understanding. There's a difference between a sentence that uses the right words and a sentence that conveys the right meaning. Slop consistently achieves the former while missing the latter. The prose is grammatically flawless and semantically empty, a combination that is almost impossible for human writers to produce at scale but trivially easy for language models.&lt;/p&gt;
&lt;p&gt;Think of a student essay that hits every point on the rubric: thesis statement in the right place, three supporting paragraphs, counterargument acknowledged, conclusion that restates the thesis. A teacher reads it and gives it a B+. But the teacher also knows, without being able to point to a specific sentence, that the student didn't learn anything while writing it. The essay demonstrates knowledge of essay structure, not knowledge of the subject. That's superficial competence.&lt;/p&gt;
&lt;p&gt;Asymmetric effort describes the production economics. The author (or deployer) invested minimal effort relative to the volume of output. A single prompt generates 2,000 words in seconds. The resulting text has the length and format of something that would take a human writer hours, but it cost nothing in terms of thought, research, or revision. This asymmetry creates an incentive structure where the marginal cost of publishing approaches zero and the quality feedback loop disappears.&lt;/p&gt;
&lt;p&gt;Mass producibility follows from the first two. If the text is superficially competent and cheap to produce, there's no natural limit on volume. This is how you get AI-generated recipe blogs with 10,000 pages, product review sites with no evidence of product testing, and news aggregators that rewrite wire stories into blandly authoritative summaries. The content fills a shape (a blog post, a review, a news article) without filling it with meaning.&lt;/p&gt;
&lt;p&gt;These three properties interact. Mass production exacerbates the problem of superficial competence because there's no time or incentive for the depth that would distinguish one piece from another. And asymmetric effort means there's no skin in the game: the producer doesn't care whether the content is right, because it cost almost nothing to create and nothing to correct.&lt;/p&gt;
&lt;h3&gt;Greenberg's Ghost&lt;/h3&gt;
&lt;p&gt;There's a version of this argument that's eighty-seven years old.&lt;/p&gt;
&lt;p&gt;In 1939, &lt;a href="https://baud.rs/I3KpKw"&gt;Clement Greenberg&lt;/a&gt; published &lt;a href="https://tinycomputers.io/ClementGreenbergAvant-GardeAndKitsch.pdf"&gt;&lt;em&gt;Avant-Garde and Kitsch&lt;/em&gt;&lt;/a&gt;, one of the most influential essays in twentieth-century art criticism. Greenberg argued that mass culture produces "kitsch," art that "pre-digests art for the spectator and spares him effort, provides him with a shortcut to the pleasure of art that detours what is necessarily difficult in genuine art." Kitsch offers "vicarious experience and faked sensations." It looks like art. It has the shape of art. But it demands nothing from the viewer and delivers nothing in return except the comfortable feeling of having consumed something.&lt;/p&gt;
&lt;p&gt;AI slop does exactly this with information. It pre-digests knowledge for the reader, offering the appearance of understanding without requiring (or enabling) actual understanding. You read 2,000 words about a topic and come away with the sense that you've learned something, but when you try to articulate what you learned, there's nothing solid to grasp. The text gave you the experience of reading an informative article without the substance of one. Vicarious understanding. Faked insight.&lt;/p&gt;
&lt;p&gt;The parallel extends further than you might expect. Greenberg worried that kitsch would overwhelm genuine art because it was cheaper to produce and easier to consume. The same dynamics apply to AI-generated content: it's infinitely cheaper to produce, formats itself for easy consumption, and competes for the same attention as substantive work. Greenberg's nightmare was a culture where the imitation crowds out the real thing. That's recognizably the state of much of the internet in 2026.&lt;/p&gt;
&lt;p&gt;But Greenberg was also, let's be honest, a snob. His framework positioned the critic as the essential gatekeeper: only the trained eye could distinguish art from kitsch, and the masses were essentially passive consumers incapable of judgment. This elitism left him unprepared for Pop Art. When &lt;a href="https://baud.rs/H7xrvU"&gt;Warhol&lt;/a&gt; silk-screened Campbell's soup cans and Lichtenstein blew up comic panels to gallery scale, they took the materials of kitsch and made something genuinely interesting from them. They didn't reject mass culture; they engaged with it in a way that Greenberg's binary framework couldn't accommodate.&lt;/p&gt;
&lt;p&gt;There's an obvious recursive problem here, and I should name it rather than pretend it doesn't exist. This essay was written with AI assistance. It is, in a direct sense, an attempt to take the materials of mass production (an LLM's facility with argument structure, literature survey, prose drafting) and make something that isn't slop. Whether it succeeds is for the reader to judge. But the attempt itself is the Pop Art move: not rejecting the tools of mass culture, but trying to use them to say something specific. If I fail, the essay is kitsch that thinks it's art. If I succeed, Greenberg's binary was too rigid, and the tool was never the problem.&lt;/p&gt;
&lt;p&gt;This tension matters for the slop conversation more broadly. If we define slop as any AI-generated content (regardless of what it does or says), we make the same mistake Greenberg made with kitsch. The question is not the tool; it's whether something is being done with it at all.&lt;/p&gt;
&lt;h3&gt;The Authenticity Problem&lt;/h3&gt;
&lt;p&gt;So what is the actual distinguishing quality? What separates writing that happens to involve AI tools from writing that is slop?&lt;/p&gt;
&lt;p&gt;It's not voice. LLMs can mimic voice convincingly enough to fool most readers. It's not structure; LLMs organize material at least as well as the average human writer. It's not even factual accuracy, since LLMs can be accurate when properly grounded and cited. These are all necessary conditions for good writing, but slop can satisfy all of them and still be slop.&lt;/p&gt;
&lt;p&gt;What's missing is a point of view: the willingness to be wrong about something specific.&lt;/p&gt;
&lt;p&gt;Slop hedges. It covers all sides. It presents every position as having merit and declines to choose between them. It never commits to a claim that could be falsified, challenged, or argued against. And this is not a bug in the technology; it's a feature. Language models are trained to be helpful, harmless, and accurate. Helpfulness means addressing the user's question. Harmlessness means avoiding offense. The intersection of these goals produces text that is relentlessly, pathologically balanced. Every "on the one hand" gets an "on the other hand." Every strong claim gets a qualification. The result is prose that cannot be disagreed with, because it doesn't say anything specific enough to disagree with.&lt;/p&gt;
&lt;p&gt;I notice this constantly in my own AI-assisted drafts. The first pass comes back with every edge sanded off. Where I wrote "Freerouting can't do copper pours, and that's a fatal limitation for production boards," the draft wants to say "Freerouting has some limitations regarding copper pours that may affect certain use cases." The second version is more cautious. It's also emptier. The editorial work, the part that makes writing not-slop, is putting the edges back on: choosing the stronger claim, deleting the qualifications that exist for safety rather than accuracy, deciding that this is what I actually think and I'm willing to defend it.&lt;/p&gt;
&lt;p&gt;Good writing, whether human or AI-assisted, takes a position and defends it. The author exists in the text because they have opinions, not because they have fluency. When I wrote that &lt;a href="https://tinycomputers.io/posts/the-ai-vampire-is-jevons-paradox.html"&gt;Jevons Paradox applies to human attention&lt;/a&gt; in the context of AI-assisted work, that was a specific, falsifiable claim. You could disagree with it. You could argue the model doesn't apply, or that the historical parallels are misleading, or that the biological ceiling on attention changes the dynamics. The argument creates a surface for friction. It takes a stance that could be wrong.&lt;/p&gt;
&lt;p&gt;Slop never takes that risk. It describes all positions and endorses none. It informs without arguing. And because it never commits to anything, it can never be wrong, which means it can never be right either. It occupies a semantic dead zone: technically not false, functionally not true, informationally zero.&lt;/p&gt;
&lt;p&gt;This is the test most people are applying intuitively when they identify something as slop. They're asking: is someone home? Does the text have a perspective, or is it just generating plausible sentences? The "someone" doesn't have to be a human, exactly. It has to be a process that made choices: that included some things and excluded others, that decided this interpretation was better than that one. Slop is text produced by a process that made no choices at all, because the defaults were good enough to fill the space.&lt;/p&gt;
&lt;h3&gt;When AI Output Isn't Slop&lt;/h3&gt;
&lt;p&gt;If the test is commitment and accountability, then it follows that AI-assisted output can clear the bar. But I want to be specific here, not hand-wavy, because vague appeals to "my own experience" are themselves a slop move.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://tinycomputers.io/posts/redesigning-a-pcb-with-claude-code-and-open-source-eda-part-1.html"&gt;Giga Shield project&lt;/a&gt; started with a $468 Fiverr design that didn't work. Nine bidirectional level shifters, professional layout, clean two-layer board. Then I tested it with a Z80 processor, and the auto-sensing TXB0108 chips fell apart. The Z80 tri-states its address bus between cycles; the pins go high-impedance, floating. The TXB0108 can't determine drive direction from a floating signal. It guesses wrong, and the Arduino on the other side reads garbage. I'd paid $468 for a board that was blind to half of what the processor was doing.&lt;/p&gt;
&lt;p&gt;The redesign used Claude Code to generate the entire replacement board from a Python script: no graphical PCB editor, no manual placement, just code that outputs a routable board file. AI wrote the board generator. AI helped parse the KiCad schematic to extract all 72 signal mappings across 9 ICs. Then Freerouting, an open-source autorouter, handled the trace routing.&lt;/p&gt;
&lt;p&gt;Here's the kind of specificity that slop can't contain: after 60 optimization passes (about 45 minutes of compute), Freerouting brought the via count on the Giga Shield from roughly 220 down to 158. I ran 128 parallel instances across three machines with randomized net ordering to explore different regions of the solution space. And still, a hard floor of 5-6 unrouted ground connections remained, because Freerouting's architecture literally cannot represent copper pours, and the 0.65mm-pitch TSSOP-24 packages didn't have physical room for ground vias. That limitation is structural. No amount of prompt engineering or parameter tuning changes the fact that the algorithm has no concept of flood-fill connectivity. I &lt;a href="https://tinycomputers.io/posts/the-mathematics-of-pcb-trace-routing.html"&gt;wrote about this in detail&lt;/a&gt;, including the A* search internals and the specific geometric constraints, and if I got the analysis wrong, anyone can read the &lt;a href="https://github.com/ajokela/giga-shield"&gt;source code&lt;/a&gt; and check.&lt;/p&gt;
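&lt;p&gt;The harness for those parallel runs was mundane. Below is a simplified sketch of the idea, assuming an unrouted Specctra export where the &lt;code&gt;(net ...)&lt;/code&gt; forms appear only in the network section, and using Freerouting's headless flags (&lt;code&gt;-de&lt;/code&gt; input, &lt;code&gt;-do&lt;/code&gt; output, &lt;code&gt;-mp&lt;/code&gt; max passes); the file names are hypothetical, and the real runs were spread across three machines rather than one loop.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# parallel_route.py -- launch Freerouting instances against copies of the
# board with the (net ...) declarations permuted, so each run explores a
# different region of the solution space. Sketch only; assumes balanced
# parentheses and nets confined to the network section of the DSN.
import pathlib
import random
import subprocess

BASE = pathlib.Path("giga_shield.dsn")   # hypothetical file name
JAR = "freerouting-1.9.0.jar"            # the version that behaved

def shuffled_dsn(text, rng):
    """Return the DSN text with its top-level (net ...) forms permuted."""
    spans = []
    i = text.find("(net ")
    while i != -1:
        depth, j = 0, i
        while True:                      # walk to the matching close paren
            if text[j] == "(":
                depth += 1
            elif text[j] == ")":
                depth -= 1
                if depth == 0:
                    break
            j += 1
        spans.append((i, j + 1))
        i = text.find("(net ", j + 1)
    forms = [text[a:b] for a, b in spans]
    rng.shuffle(forms)                   # permute forms among the same slots
    out, prev = [], 0
    for (a, b), form in zip(spans, forms):
        out.append(text[prev:a])
        out.append(form)
        prev = b
    out.append(text[prev:])
    return "".join(out)

def launch(seed):
    variant = BASE.with_name(f"variant_{seed}.dsn")
    variant.write_text(shuffled_dsn(BASE.read_text(), random.Random(seed)))
    return subprocess.Popen(["java", "-jar", JAR, "-de", str(variant),
                             "-do", f"variant_{seed}.ses", "-mp", "60"])

procs = [launch(seed) for seed in range(8)]   # 8 of the 128 instances
for p in procs:
    p.wait()
&lt;/code&gt;&lt;/pre&gt;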
&lt;p&gt;I also discovered that Freerouting v2.1.0 produced 152 unrouted connections on the same board where v1.9.0 produced 6. That's a testable, reproducible claim, attached to specific version numbers, specific board files, specific machines. It's the opposite of "AI autorouting tools can sometimes produce inconsistent results," which is the slop version of the same observation. One of those sentences tells you something. The other fills space.&lt;/p&gt;
&lt;p&gt;Even the TTS narration is more complicated than "it just works." The Qwen model mispronounces technical terms. It puts emphasis in odd places. The audio for posts with dense jargon has an uncanny flatness where the model clearly doesn't understand what it's reading. I publish it anyway because it's useful despite its flaws, and because I label it as AI-generated narration, which means I'm not asking the listener to trust it as a human performance. It's a tool with known limitations, deployed for a specific purpose, accountable to its function.&lt;/p&gt;
&lt;p&gt;The common thread isn't that AI made these outputs perfect. It's that they were tested against something outside themselves. The board works or it doesn't. The via count is 158 or it isn't. The audio plays or it doesn't. Slop faces no such test. It exists to fill a container, and its success is measured by whether the container looks full, not by whether what's inside is true.&lt;/p&gt;
&lt;h3&gt;The Compost Argument&lt;/h3&gt;
&lt;p&gt;There's a reasonable counterargument that goes like this: human slop has always existed. Content farms, SEO spam, airport bookstore filler, corporate press releases, academic papers that exist only to pad a CV. The internet was full of low-quality, low-effort content long before large language models existed. AI didn't invent slop; it industrialized it.&lt;/p&gt;
&lt;p&gt;This is true, and it's worth taking seriously. The people arguing that "the idea of AI slop is slop" have a point: if we define slop as low-quality content produced with minimal effort, most of what humans have ever published qualifies. Sturgeon's Law (ninety percent of everything is crud) predates AI by decades.&lt;/p&gt;
&lt;p&gt;But the economics are different now, and economics change everything. When slop required human labor, there was a floor on production cost. A content farm still had to pay writers (however little). An SEO spammer still had to hire someone to string keywords into sentences. That floor limited volume, which limited the ratio of noise to signal in any given information ecosystem.&lt;/p&gt;
&lt;p&gt;AI removes the floor. The marginal cost of producing a 2,000-word article drops to fractions of a cent. The marginal cost of producing 10,000 such articles drops to the cost of an API call and a deployment script. The constraint was never willingness to produce slop; it was cost. With cost eliminated, volume expands without bound. This is, incidentally, &lt;a href="https://tinycomputers.io/posts/jevons-paradox.html"&gt;another case of Jevons Paradox&lt;/a&gt;: make content production cheaper, get more content production, not less.&lt;/p&gt;
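&lt;p&gt;The arithmetic is worth doing once. The token ratio and the per-token price below are assumptions for illustration, not any vendor's quoted rate:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Back-of-the-envelope cost of one generated article. Both constants
# are assumptions for illustration, not quoted prices.
words = 2_000
tokens = words * 4 / 3        # rule of thumb: ~0.75 words per token
usd_per_million = 0.50        # assumed budget-model output rate
cost = tokens / 1_000_000 * usd_per_million
print(f"{tokens:.0f} tokens, ${cost:.5f} per article")
# prints: 2667 tokens, $0.00133 per article -- about 0.13 cents
&lt;/code&gt;&lt;/pre&gt;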
&lt;p&gt;Some writers have made what I'll call the compost argument: that cultural slop, even the human-produced kind, serves as a sort of fertilizer. The vast majority of pulp fiction was forgettable, but it created the ecosystem that produced &lt;a href="https://baud.rs/UdOkDt"&gt;Philip K. Dick&lt;/a&gt; and &lt;a href="https://baud.rs/v53yMW"&gt;Ursula K. Le Guin&lt;/a&gt;. Most blog posts are unremarkable, but the blogging ecosystem produced some genuinely important writing. The compost nourishes rare blooms.&lt;/p&gt;
&lt;p&gt;Maybe. But there's a concentration problem. A garden benefits from compost; a garden buried under six feet of compost is just a landfill. If the ratio of slop to substance shifts far enough, the substance becomes unfindable. Search engines surface slop because it's optimized for surfacing. Recommendation algorithms amplify it because engagement metrics can't distinguish between "I read this and learned something" and "I read this and it filled two minutes." The signal doesn't just get drowned out; it gets algorithmically deprioritized in favor of the noise.&lt;/p&gt;
&lt;h3&gt;What the Test Looks Like&lt;/h3&gt;
&lt;p&gt;I've argued that what we recognize as slop is the absence of commitment: text that declines to be wrong about anything specific. I believe this is correct, but I should be honest about where the test gets uncomfortable.&lt;/p&gt;
&lt;p&gt;Committed writing can be terrible. Conspiracy theories are committed. Propaganda is committed. A confidently wrong blog post about vaccine microchips passes the "takes a position" test with flying colors. Commitment is necessary but not sufficient. It separates slop from writing that has a pulse, but it doesn't separate good writing from bad writing. That's a different and older test, one that involves accuracy, evidence, reasoning, and intellectual honesty: all the things we've always used to evaluate arguments. The slop test is prior to all of that. It asks whether there's anything present to evaluate in the first place.&lt;/p&gt;
&lt;p&gt;The tool doesn't determine the category. The commitment does. And if this essay has failed to commit to anything worth arguing against, then by its own logic, it belongs in the landfill with the rest.&lt;/p&gt;</description><category>ai</category><category>authenticity</category><category>kitsch</category><category>philosophy</category><category>slop</category><category>writing</category><guid>https://tinycomputers.io/posts/llm-generated-content-what-makes-something-slop.html</guid><pubDate>Mon, 16 Mar 2026 13:00:00 GMT</pubDate></item></channel></rss>