<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="../assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>TinyComputers.io (Posts about demand expansion)</title><link>https://tinycomputers.io/</link><description></description><atom:link href="https://tinycomputers.io/categories/demand-expansion.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2026 A.C. Jokela 
&lt;a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"&gt;&lt;img alt="" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/80x15.png" /&gt; Creative Commons Attribution-ShareAlike&lt;/a&gt;
</copyright><lastBuildDate>Mon, 06 Apr 2026 22:12:57 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>The AI Vampire Is Jevons Paradox</title><link>https://tinycomputers.io/posts/the-ai-vampire-is-jevons-paradox.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/the-ai-vampire-is-jevons-paradox_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;15 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;&lt;img src="https://tinycomputers.io/images/ai-vampire-jevons/burne-jones-the-vampire-1897.jpg" alt="The Vampire, an 1897 painting by Philip Burne-Jones depicting a pale woman draped over a prostrate man, the visual origin of the vampire as metaphor for extraction" style="float: right; max-width: 40%; margin: 0 0 1em 1.5em; border-radius: 4px; box-shadow: 0 30px 40px rgba(0,0,0,.1);"&gt;&lt;/p&gt;
&lt;p&gt;Steve Yegge's &lt;a href="https://baud.rs/dJwDgQ"&gt;"The AI Vampire"&lt;/a&gt; has been circulating among developers and managers for the past few weeks, and it's striking a nerve. The core argument: AI makes you dramatically more productive (Yegge estimates 10x or more) but companies capture the entire surplus. You don't get a shorter workday. You get 10x the output at the same hours, with the cognitive load compressed into pure decision-making. The result is burnout on a scale the industry hasn't seen before. His prescription is blunt: calculate your $/hr, work three to four hours a day, and refuse to let the vampire drain you dry.&lt;/p&gt;
&lt;p&gt;It's a compelling piece, written with Yegge's characteristic directness and self-awareness. And it describes something real. But as I read it, I kept seeing something he doesn't name, a pattern I've been writing about for months.&lt;/p&gt;
&lt;p&gt;This is the fourth piece in what has become a series on Jevons Paradox and AI economics. The &lt;a href="https://tinycomputers.io/posts/jevons-paradox.html"&gt;first&lt;/a&gt; traced the paradox through the semiconductor industry. The &lt;a href="https://tinycomputers.io/posts/the-jevons-counter-thesis-why-ai-displacement-scenarios-underweight-demand-expansion.html"&gt;second&lt;/a&gt; argued that AI displacement scenarios systematically undercount demand expansion. The &lt;a href="https://tinycomputers.io/posts/moores-law-for-intelligence-what-happens-when-thinking-gets-cheap.html"&gt;third&lt;/a&gt; explored what happens when the cost of intelligence follows a Moore's Law trajectory. Along the way, I responded to &lt;a href="https://tinycomputers.io/posts/something-big-is-happening-a-critique.html"&gt;Matt Shumer's displacement argument&lt;/a&gt; with the same framework.&lt;/p&gt;
&lt;p&gt;Those pieces all looked at the macro picture: markets expanding, new industries forming, total economic activity growing. Yegge is describing the micro picture. What it actually feels like to be a human worker inside a Jevons expansion. And what he's describing, whether he uses the term or not, is Jevons Paradox operating on human attention.&lt;/p&gt;
&lt;h3&gt;The Jevons Pattern, One More Time&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/ai-vampire-jevons/meunier-descent-of-miners-1882.jpg" alt="Descent of the Miners into the Shaft, an 1882 painting by Constantin Meunier showing coal miners descending into a mine, the human beings at the point of production in the original Jevons cycle" style="max-width: 100%; margin: 0 0 1.5em 0; border-radius: 4px; box-shadow: 0 30px 40px rgba(0,0,0,.1);"&gt;&lt;/p&gt;
&lt;p&gt;The pattern is simple enough to state in a sentence: when a critical input gets cheaper, demand expands beyond the efficiency gain. Total consumption of the input rises, not falls.&lt;/p&gt;
&lt;p&gt;Coal got cheaper per unit of useful work. Total coal consumption surged as new applications became viable. Transistors got cheaper per unit of compute. Total compute spending grew by orders of magnitude. Bandwidth got cheaper per unit of data. Total data consumption exploded. The per-unit savings are overwhelmed by the explosion in total units demanded.&lt;/p&gt;
&lt;p&gt;In my previous pieces, I applied this at the macro level. Cognitive output gets cheaper through AI. New industries emerge. Demand for cognitive work expands. The economy restructures around abundant, cheap intelligence. That argument is about markets, GDP, and employment categories: the aerial view.&lt;/p&gt;
&lt;p&gt;But Jevons has always had a micro counterpart. When coal got cheaper, individual mines didn't shut down early; they ran harder, longer, extracting more because the economics now justified it. When compute got cheaper, individual developers didn't write less code; they wrote vastly more, because the constraints that had limited what was practical evaporated. The expansion creates pressure at every level of the system, not just at the top.&lt;/p&gt;
&lt;p&gt;The macro story is about new markets forming. The micro story is about what happens to the people at the point of production, the ones whose labor is the input that just got cheaper.&lt;/p&gt;
&lt;h3&gt;What Yegge Is Actually Describing&lt;/h3&gt;
&lt;p&gt;Yegge's framework centers on a value-capture trap. He presents two scenarios:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Scenario A:&lt;/strong&gt; AI makes you 10x more productive. Your company captures the surplus. You now produce 10x the output at the same salary and hours. The company benefits. You burn out.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Scenario B:&lt;/strong&gt; You recognize the $/hr math. If you were worth $150/hr before AI and now produce 10x the output, your effective rate should be $1,500/hr, or equivalently, you should work one-tenth the hours for the same salary. You work three to four hours a day, produce what used to take a full day, and keep your sanity.&lt;/p&gt;
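&lt;p&gt;As a quick sanity check, here is Scenario B's arithmetic as a minimal Python sketch. The 10x multiplier and the $150/hr baseline are Yegge's figures; the eight-hour workday is my assumption for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Scenario B arithmetic: if AI multiplies your output, either your
# effective rate rises or the hours needed for the same output fall.
base_rate = 150.0      # $/hr before AI (Yegge's figure)
multiplier = 10.0      # productivity gain (Yegge's estimate)
hours_before = 8.0     # assumed pre-AI workday

effective_rate = base_rate * multiplier        # $1,500/hr
same_output_hours = hours_before / multiplier  # 0.8 hr for a former full day

print(f"effective rate: ${effective_rate:,.0f}/hr")
print(f"hours for yesterday's output: {same_output_hours:.1f}")
&lt;/code&gt;&lt;/pre&gt;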
&lt;p&gt;He frames this as a choice between being exploited and being strategic. And he's honest about the difficulty of Scenario B; most people can't negotiate a three-hour workday, most companies won't accept it, and the competitive dynamics push relentlessly toward Scenario A.&lt;/p&gt;
&lt;p&gt;Yegge's most vivid metaphor is that "AI has turned us all into Jeff Bezos." At Amazon, Bezos sat atop a machine that handled volume (logistics, warehousing, customer service, shipping) while he focused exclusively on high-leverage decisions. AI does the same thing for individual workers. It absorbs the volume work (the boilerplate code, the routine analysis, the standard responses) and leaves you with a residue of pure judgment calls. Every decision is consequential. Every hour is cognitively expensive.&lt;/p&gt;
&lt;p&gt;He also has an important moment of self-awareness. Yegge acknowledges that his own experience (forty years of engineering, unlimited AI tokens, deep familiarity with the tools) represents "unrealistic beauty standards" for the average developer. He's the equivalent of the fitness influencer whose workout routine is their full-time job. Most people don't have his context, his autonomy, or his leverage to negotiate Scenario B.&lt;/p&gt;
&lt;p&gt;And he identifies a crucial accelerant: the startup gold rush. AI has made it cheap enough to launch a company that "a million founders are chasing the same six ideas." This intensifies competition, which intensifies the pressure to push the output dial higher, which feeds the vampire.&lt;/p&gt;
&lt;h3&gt;The Jevons Connection&lt;/h3&gt;
&lt;p&gt;Here's what Yegge is describing in Jevons terms.&lt;/p&gt;
&lt;p&gt;AI makes cognitive output dramatically cheaper. Jevons predicts that demand won't fall in response; it will increase. That's exactly what happens. Companies don't say "same output, fewer hours." They say "10x the output, same hours." The efficiency gain doesn't reduce consumption of the input. It increases consumption. This is the paradox, and it is playing out precisely as the model predicts.&lt;/p&gt;
&lt;p&gt;But there's something different about this Jevons cycle, something that doesn't have a precedent in the historical cases.&lt;/p&gt;
&lt;p&gt;Coal doesn't get tired. Transistors don't burn out. Bandwidth doesn't need a nap. Every prior Jevons cycle involved an inert input. You could mine more coal, fabricate more chips, lay more fiber. When demand expanded, supply expanded to meet it, and the system found a new equilibrium at higher volume. The input didn't resist. It didn't have a biological ceiling.&lt;/p&gt;
&lt;p&gt;Human attention does.&lt;/p&gt;
&lt;p&gt;AI creates a concentration effect that Yegge describes precisely: it absorbs high-volume, routine work and leaves humans with a residue of pure judgment. The judgment work is, by definition, the most cognitively expensive kind of work, the kind that requires deep focus, contextual understanding, and the willingness to be wrong. And demand for this judgment work expands Jevons-style as AI makes the overall process cheaper. More projects get launched. More code gets written. More decisions need to be made. The volume of judgment calls scales with the volume of output, even as AI handles everything else.&lt;/p&gt;
&lt;p&gt;The problem is that the biological supply of deep, focused judgment is fixed. The deep work literature (Cal Newport and others have documented this extensively) converges on roughly three to four hours per day as the upper bound for sustained, cognitively demanding work. This isn't a cultural preference or a lifestyle choice. It's a constraint imposed by neurobiology. Attention is a depletable resource that recovers on a fixed biological schedule.&lt;/p&gt;
&lt;p&gt;This is the first Jevons cycle where expanding demand hits a hard biological ceiling on the input.&lt;/p&gt;
&lt;p&gt;Yegge's startup observation is also a Jevons phenomenon. AI made starting a company cheaper, so the number of startups exploded. More startups means more competition. More competition means more pressure to maximize output per person. The expansion creates its own acceleration, a feedback loop where cheaper cognitive output produces more ventures, which produce more demand for cognitive output, which increases the pressure on the humans in the loop.&lt;/p&gt;
&lt;p&gt;And the "unrealistic beauty standards" problem has a Jevons name too: it's the efficiency benchmark effect. In every Jevons cycle, the most efficient user of the cheaper input sets the competitive pace for everyone else. The factory that adopted steam power first forced every competitor to adopt it or die. The company that adopted AI first forces every competitor to match its output-per-employee or lose. Yegge, with his forty years and unlimited tokens, is the equivalent of the first factory with a Watt engine. His output level becomes the standard against which everyone is measured, even though most people can't replicate his efficiency.&lt;/p&gt;
&lt;h3&gt;Where the Ceiling Matters&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/ai-vampire-jevons/coal-thrusters-trapper-1854.jpg" alt="Two coal thrusters and a trapper in a British coal mine, from J. C. Cobden's White Slaves of England, 1854, the human cost of running an input at maximum extraction" style="float: left; max-width: 40%; margin: 0 1.5em 1em 0; border-radius: 4px; box-shadow: 0 30px 40px rgba(0,0,0,.1);"&gt;&lt;/p&gt;
&lt;p&gt;In every prior Jevons cycle, the resolution was supply expansion. Coal demand surged; mine more coal. Compute demand surged; fabricate more chips. Bandwidth demand surged; lay more fiber. The system found equilibrium at higher volume because the input could scale.&lt;/p&gt;
&lt;p&gt;Human cognitive capacity doesn't scale. You can't mine more judgment. You can't fabricate more attention. The three-to-four-hour ceiling on deep work isn't going to move because a company's OKRs demand it.&lt;/p&gt;
&lt;p&gt;This means a Jevons expansion in demand for human judgment has to resolve differently than prior cycles. There are really only three paths:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Better tooling that reduces the judgment burden.&lt;/strong&gt; AI gets good enough to handle more decisions autonomously, pushing the human-in-the-loop threshold higher. The frontier of what requires human judgment retreats as AI capability advances. This is already happening; the boundary between "AI can handle this" and "a human needs to decide" is moving rapidly. But it's not moving fast enough to outpace the demand expansion, which is why Yegge's burnout observation is accurate right now even if the long-term trajectory favors less human involvement.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Organizational restructuring.&lt;/strong&gt; More people, fewer high-stakes decisions each. Instead of one developer making judgment calls on 10x the output, you have three developers each handling a manageable portion. This is the "hire more" response, and it pushes back against the cost-reduction motive that drives Scenario A. Companies that pursue this path may produce better outcomes but at higher cost, which competitive dynamics tend to punish.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Cultural pushback.&lt;/strong&gt; Yegge's $/hr formula. Workers internalize the fixed-supply economics of their own attention, price it accordingly, and refuse to let demand expansion drain it below sustainable levels. This is individually rational but collectively difficult; it requires either enough leverage to negotiate, or enough cultural shift to change expectations.&lt;/p&gt;
&lt;p&gt;Yegge's $/hr formula is, in Jevons terms, an attempt to set equilibrium for a fixed-supply resource. It is the cognitive equivalent of OPEC production quotas, an effort to prevent the price of a scarce input from being driven to zero by unconstrained demand. And like OPEC quotas, it works only if enough participants enforce it.&lt;/p&gt;
&lt;h3&gt;What This Means for the Macro Picture&lt;/h3&gt;
&lt;p&gt;I want to be honest about what Yegge's observation adds to the framework I've been building.&lt;/p&gt;
&lt;p&gt;My previous pieces argued that when cognitive output gets cheaper, demand expansion will create new economic activity that exceeds the displacement. I stand by that argument. But I underweighted the human-in-the-loop constraint. The demand expansion is real: new markets form, new companies launch, total economic activity grows. But every unit of that expanded activity still requires some quantum of human judgment, and that judgment runs on biological hardware with a fixed daily capacity.&lt;/p&gt;
&lt;p&gt;This doesn't invalidate the macro Jevons argument. Demand will expand. New industries will form. Total employment will restructure, not collapse. But the human attention constraint acts as a speed governor on the expansion. The economy can't scale cognitive output infinitely by just pushing the existing workforce harder, because the existing workforce has a biological ceiling on the input that matters most.&lt;/p&gt;
&lt;p&gt;This argues for Yegge's three-to-four-hour workday not as a lifestyle aspiration but as something closer to an economic inevitability, the natural equilibrium point for a Jevons cycle operating on a fixed-supply input. When demand for an input exceeds the maximum sustainable rate of supply, the system must either find a substitute (AI handling more decisions autonomously), expand the supplier base (more workers, shorter hours each), or accept a constrained equilibrium (the three-hour workday). Some combination of all three is likely.&lt;/p&gt;
&lt;p&gt;The interesting implication is that the Jevons expansion and the burnout crisis are not contradictory phenomena. They're the same phenomenon viewed from different vantage points. The macro analyst sees demand expanding and new economic activity forming. The individual worker sees an unsustainable cognitive load. Both are correct. They're describing different aspects of the same system adjusting to a radically cheaper input.&lt;/p&gt;
&lt;h3&gt;The Vampire and the Paradox&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/ai-vampire-jevons/nosferatu-count-orlok-1922.jpg" alt="Max Schreck as Count Orlok in Nosferatu, 1922, the vampire as an image of relentless, impersonal extraction" style="float: right; max-width: 300px; margin: 0 0 1em 1.5em; border-radius: 4px; box-shadow: 0 30px 40px rgba(0,0,0,.1);"&gt;&lt;/p&gt;
&lt;p&gt;Matt Shumer &lt;a href="https://tinycomputers.io/posts/something-big-is-happening-a-critique.html"&gt;worries about displacement&lt;/a&gt;, losing your job to AI. Steve Yegge worries about what happens to the people who aren't displaced, who keep their jobs but get vampired. Both are describing real phenomena. Neither is the whole picture.&lt;/p&gt;
&lt;p&gt;The Jevons framework encompasses both. Demand expansion creates new work, answering Shumer's displacement concern: the economy doesn't contract, it restructures. But the expansion concentrates cognitive load on the humans who remain in the loop, confirming Yegge's burnout observation, because the one input AI can't replace is the one input that can't scale.&lt;/p&gt;
&lt;p&gt;Shumer's error is modeling only the displacement side. Yegge's error is modeling only the extraction side. The full picture includes both: an economy producing vastly more cognitive output, creating genuinely new economic activity, while simultaneously pushing the humans at the center of it toward a biological wall.&lt;/p&gt;
&lt;p&gt;The vampire is real. It's also, like every Jevons cycle, a signal that something genuinely new is being created, that demand is expanding into territory that didn't exist before. The burnout isn't incidental to the expansion. It's a symptom of it. And like every prior Jevons cycle, the system will find an equilibrium, not because anyone plans it, but because a fixed-supply input eventually forces one. The question is how much damage the vampire does before we get there.&lt;/p&gt;</description><category>ai</category><category>burnout</category><category>critique</category><category>demand expansion</category><category>economics</category><category>jevons paradox</category><category>labor</category><category>productivity</category><category>steve yegge</category><category>technology</category><guid>https://tinycomputers.io/posts/the-ai-vampire-is-jevons-paradox.html</guid><pubDate>Wed, 04 Mar 2026 14:00:00 GMT</pubDate></item><item><title>Something Big Is Happening, And Something Big Is Missing</title><link>https://tinycomputers.io/posts/something-big-is-happening-a-critique.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/something-big-is-happening-a-critique_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;18 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;Matt Shumer's &lt;a href="https://baud.rs/POg6A7"&gt;"Something Big Is Happening"&lt;/a&gt; has been making the rounds, forwarded by founders, reposted by VCs, shared by worried parents and recent graduates. If you haven't read it, the core argument is straightforward: AI capabilities are advancing at an unprecedented pace, the public doesn't appreciate how fast things are moving, and roughly half of entry-level white-collar jobs will be displaced within one to five years. He frames this as a personal warning to the non-technical people in his life, drawing an explicit parallel to February 2020, the moment before COVID when the warnings were there but most people weren't listening.&lt;/p&gt;
&lt;p&gt;It is a well-written, earnest piece, and it resonated for a reason. The capability gains are real. The perception gap is real. The practical advice is genuinely useful. Shumer deserves credit for engaging seriously with a question that most people in his position (CEO of an AI company) have financial incentives to either hype or deflect.&lt;/p&gt;
&lt;p&gt;But the piece has a hole in the center of it, and it's the same hole that appears in nearly every AI displacement argument I've encountered. I've written about this through the lens of &lt;a href="https://tinycomputers.io/posts/jevons-paradox.html"&gt;Jevons Paradox&lt;/a&gt;, explored it as a &lt;a href="https://tinycomputers.io/posts/the-jevons-counter-thesis-why-ai-displacement-scenarios-underweight-demand-expansion.html"&gt;direct counter-thesis to displacement scenarios&lt;/a&gt;, and examined what happens when you &lt;a href="https://tinycomputers.io/posts/moores-law-for-intelligence-what-happens-when-thinking-gets-cheap.html"&gt;apply Moore's Law to the cost of intelligence itself&lt;/a&gt;. The pattern is consistent, and Shumer's piece reproduces the analytical error at its core: it models what AI replaces without modeling what AI creates.&lt;/p&gt;
&lt;h3&gt;The Steelman&lt;/h3&gt;
&lt;p&gt;Before critiquing the piece, I want to present its strongest version in good faith, because Shumer gets several important things right, and dismissing the argument wholesale would be intellectually lazy.&lt;/p&gt;
&lt;p&gt;The capability curve is real. METR benchmarks show the length of tasks AI can complete doubling roughly every seven months, possibly accelerating. Shumer cites this data, and it's legitimate. I've experienced the curve firsthand. Over the past year and a half, I've built a &lt;a href="https://tinycomputers.io/posts/open-sourcing-a-high-performance-rust-based-ballistics-engine.html"&gt;high-performance Rust-based ballistics engine&lt;/a&gt; and &lt;a href="https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html"&gt;Lattice, an entire programming language&lt;/a&gt; with a novel phase-based type system, working across GPT-4, GPT-4o, Claude Haiku, Opus, and most recently Opus 4.6 with Claude Code. The progression itself is the data point. Early models could help with fragments. Today's frontier models reason across thousands of lines of interconnected code, tracking type systems, managing compiler passes, understanding how changes in one module ripple through the rest. These aren't toy demos. They're production-quality projects where the AI operated at the architectural level. The capability gap between late 2024 and early 2026 is genuinely striking.&lt;/p&gt;
&lt;p&gt;The perception gap is real too. Shumer makes a point that doesn't get enough attention: most people's experience with AI is limited to free-tier models that lag frontier capabilities by twelve months or more. Someone who tried ChatGPT once in 2024 and found it mediocre is extrapolating from hardware that's already obsolete. The gap between the free experience and the paid frontier experience is larger than most people realize, and Shumer is right to flag it.&lt;/p&gt;
&lt;p&gt;The self-improvement feedback loops are real. OpenAI has stated that GPT-5.3 Codex was "instrumental in creating itself." Anthropic's training pipeline uses prior Claude models to evaluate training examples. Dario Amodei predicts AI autonomously building next-generation versions within one to two years. These aren't speculative claims; they're descriptions of current practice, and they compress the improvement timeline.&lt;/p&gt;
&lt;p&gt;Shumer's practical advice is sound: use paid tools, select the best available models, spend an hour a day experimenting, build financial resilience, develop adaptability as a core skill. This is good counsel regardless of how the macro picture unfolds.&lt;/p&gt;
&lt;p&gt;And the urgency is not manufactured. Whatever you think the economic consequences will be, the pace of capability improvement is unprecedented in the history of technology. Shumer is right that most people are not paying attention. Where he goes wrong is in what he concludes from that observation.&lt;/p&gt;
&lt;h3&gt;The Substitution Fallacy&lt;/h3&gt;
&lt;p&gt;Here is Shumer's core analytical error, and the one that most critiques of his piece also miss.&lt;/p&gt;
&lt;p&gt;He treats "AI can do X" as equivalent to "AI will replace all humans doing X." His piece moves through a list of job categories (legal work, financial analysis, software engineering, medical analysis, customer service) and for each one, the logic is: AI can now perform this work at a level that rivals human professionals, therefore the humans performing this work are at risk. Implicit in this framing is the assumption that the economy stays roughly the same size, with machines doing work that humans used to do. The number of legal analyses needed stays constant. The number of financial models stays constant. The amount of software stays constant. AI just does it cheaper.&lt;/p&gt;
&lt;p&gt;This is the substitution frame, and it has been wrong by orders of magnitude at every prior technological inflection point.&lt;/p&gt;
&lt;p&gt;I explored this in detail in my &lt;a href="https://tinycomputers.io/posts/the-jevons-counter-thesis-why-ai-displacement-scenarios-underweight-demand-expansion.html"&gt;Jevons counter-thesis&lt;/a&gt;. The mechanism is straightforward: when a critical input becomes dramatically cheaper, the addressable market for everything that uses that input expands. New use cases emerge that were previously uneconomical. Existing use cases scale to populations that were previously priced out. Total consumption of the now-cheaper input rises even as the per-unit cost falls.&lt;/p&gt;
&lt;p&gt;The numbers on latent demand are not speculative. Roughly 80% of Americans who need legal help cannot afford it. Personalized tutoring is a luxury good; $50 to $100 per hour puts it out of reach for the average family. Custom software development, at $50,000 or more per engagement, is inaccessible to most small businesses. Personalized financial planning is available only to households with six-figure investable assets. These aren't hypothetical markets. They are documented, unmet demand suppressed by the cost of the human intelligence required to serve them.&lt;/p&gt;
&lt;p&gt;When Shumer writes that his lawyer friend finds AI "rivals junior associates" for contract review and legal research, the Jevons question is: what happens when legal analysis costs one-hundredth what it costs today? The answer isn't "lawyers lose their jobs." It's "hundreds of millions of people who currently have zero legal representation suddenly have access to it." The total volume of legal analysis performed doesn't shrink. It explodes. Whether that explosion employs as many human lawyers as today is a genuine question, but it's a very different question from "AI replaces lawyers," and Shumer's piece never asks it.&lt;/p&gt;
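&lt;p&gt;A minimal sketch makes the volume argument concrete. The 80%-priced-out figure is from above; the population and usage numbers are assumptions chosen only to show the shape of the expansion, not a forecast:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Illustrative Jevons arithmetic for legal analysis.
population = 100_000_000     # people needing legal help (assumption)
served_now = 0.20            # 80% priced out today (figure from the text)
uses_per_person_now = 1      # analyses per served person (assumption)

analyses_now = population * served_now * uses_per_person_now

# After a 100x cost reduction, everyone is priced in, and cheap
# access invites more use per person (both assumptions).
served_ai = 1.0
uses_per_person_ai = 3

analyses_ai = population * served_ai * uses_per_person_ai

print(f"volume of legal analysis: {analyses_ai / analyses_now:.0f}x")
# The Jevons quantity here is volume, not revenue: total analysis
# performed explodes even though each unit costs 1/100th as much.
&lt;/code&gt;&lt;/pre&gt;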
&lt;p&gt;The same logic applies to every category on his list. If financial modeling becomes 100x cheaper, every small business gets CFO-grade analysis, a market expansion of orders of magnitude relative to the current financial services industry. If software development becomes 100x cheaper, the barrier between "person with an idea" and "working application" functionally disappears, and the total volume of software produced doesn't shrink; it expands to include millions of applications that nobody would build at current costs.&lt;/p&gt;
&lt;h3&gt;The Pandemic Analogy Problem&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/something-big-critique/taylor-power-loom-1862.jpg" alt="W.G. Taylor's Patent Power Loom Calico Machine, an 1862 engraving showing Victorian-era visitors in top hats and crinolines admiring an industrial power loom, technology as spectacle, observed without full understanding of its economic consequences" style="float: right; max-width: 40%; margin: 0 0 1em 1.5em; border-radius: 4px; box-shadow: 0 30px 40px rgba(0,0,0,.1);"&gt;&lt;/p&gt;
&lt;p&gt;"Think back to February 2020." It's an emotionally effective opening, and it does exactly what Shumer intends; it activates the memory of a time when the warnings were there but most people didn't act until it was too late. As a rhetorical device, it works. As an analytical framework, it's misleading.&lt;/p&gt;
&lt;p&gt;COVID was a pure externality. It destroyed without creating. A virus doesn't generate new economic activity as it spreads. It imposes costs, disrupts supply chains, and kills people. The appropriate response was defensive: stockpile supplies, get vaccinated, stay home. The framing of individual survival (how do &lt;em&gt;I&lt;/em&gt; get through this) was correct for a pandemic because a pandemic doesn't create opportunity. It just destroys.&lt;/p&gt;
&lt;p&gt;Technology transitions are categorically different. They create as they destroy, and historically, the creation overwhelms the destruction. The better analogy (the one Shumer doesn't use) is the semiconductor revolution. Computing destroyed millions of clerical, typist, switchboard operator, and filing clerk jobs. It also created the software industry, the internet economy, the mobile ecosystem, social media, cloud computing, e-commerce logistics, and millions of roles that had no conceptual precursor in the prior economy. Total employment didn't shrink. It restructured and grew.&lt;/p&gt;
&lt;p&gt;The pandemic analogy does something else that's analytically costly: it frames the correct response as individual survival. How do &lt;em&gt;I&lt;/em&gt; prepare? How do &lt;em&gt;I&lt;/em&gt; stay ahead? This is the right question for a virus. It is the wrong question for a technology transition, where the correct frame is not "how do I survive displacement" but "what new things become possible?" Shumer's advice (use the tools, build adaptability, experiment daily) is good advice. But it's embedded in a survivalist frame that misses the larger economic picture. The person who learned to build websites in 1995 wasn't surviving the death of typesetting. They were participating in the creation of something that would be orders of magnitude larger than the industry it disrupted.&lt;/p&gt;
&lt;h3&gt;A Founder's Experience Is Not the Economy&lt;/h3&gt;
&lt;p&gt;"I describe what I want built, in plain English, and it just appears."&lt;/p&gt;
&lt;p&gt;I believe him. I've had similar experiences. When I built the ballistics engine and Lattice, there were moments where the workflow felt qualitatively different from anything I'd experienced in over three decades of writing software. The capability is real and it's striking.&lt;/p&gt;
&lt;p&gt;But Shumer is generalizing from the thinnest part of the adoption curve. A startup founder building prototypes with frontier AI tools is the absolute highest-leverage, lowest-friction use case for current technology. There are no compliance departments. No regulatory review. No integration with legacy systems built on COBOL. No liability frameworks that require a human signature. No union contracts. No procurement cycles measured in fiscal years.&lt;/p&gt;
&lt;p&gt;The gap between "a founder can build a prototype in an afternoon" and "a hospital deploys AI-driven diagnostics at scale" is measured in years, not months. Regulatory friction, institutional inertia, liability requirements, and cultural resistance are real. The FDA doesn't move at startup speed. Neither do insurance companies, government agencies, or school districts. These aren't trivial obstacles; they're the mechanisms through which society manages risk, and they exist for reasons.&lt;/p&gt;
&lt;p&gt;I don't want to overweight this argument. Institutional friction can be overstated, and appeals to regulation can become a way of avoiding engagement with the underlying capability shift. The important point is narrower: Shumer extrapolates from his personal productivity gain to "nothing done on a computer is safe," and that's an extrapolation error. A founder experiencing sudden personal leverage and projecting that curve onto civilization is a recognizable pattern in tech commentary. It's usually too bullish on the timeline and too narrow on the mechanism.&lt;/p&gt;
&lt;h3&gt;What Gets Created&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/something-big-critique/kaypro-ii-jerusalem-1984.jpg" alt="A boy using a Kaypro II computer running CP/M in Jerusalem, 1984, at the beginning of a cost curve that would eventually put a supercomputer in every pocket" style="float: right; max-width: 300px; margin: 0 0 1em 1.5em; border-radius: 4px; box-shadow: 0 30px 40px rgba(0,0,0,.1);"&gt;&lt;/p&gt;
&lt;p&gt;This is the biggest gap in Shumer's piece, and the biggest gap in most commentary on AI and employment. He spends thousands of words on what AI can replace. He spends zero words on what AI makes possible for the first time.&lt;/p&gt;
&lt;p&gt;I examined this through the &lt;a href="https://tinycomputers.io/posts/moores-law-for-intelligence-what-happens-when-thinking-gets-cheap.html"&gt;Moore's Law for Intelligence&lt;/a&gt; framework: the 10x / 100x / 1,000x staircase of what becomes viable as the cost per unit of machine intelligence drops. The historical pattern from semiconductors is unambiguous: each order-of-magnitude cost reduction didn't just make existing applications cheaper. It created entirely new categories of economic activity that were literally unimaginable at the prior price point.&lt;/p&gt;
&lt;p&gt;Nobody in 1975 predicted Instagram, Uber, or Spotify. Not because they required new physics; they required compute cheap enough to fit in a pocket. The applications were latent, waiting for the cost curve to reach them.&lt;/p&gt;
&lt;p&gt;The same is true for intelligence. We can identify the structural conditions for demand expansion even if we can't predict the specific applications:&lt;/p&gt;
&lt;p&gt;Small businesses with CFO-grade financial analysts, not because they hire CFOs, but because AI makes that analysis accessible at $50 per month instead of $200,000 per year. Personalized tutoring for every student, not an incremental improvement on existing education, but a qualitative shift in how learning works for the 95% of families who can't afford human tutors. Legal help for the 80% of Americans currently priced out. Preventive medicine embedded in every checkup, where an AI has read every relevant paper published in the last decade and cross-referenced it against the patient's complete history.&lt;/p&gt;
&lt;p&gt;And the nature of software engineering itself is changing, not replacing engineers but redefining the skill. The workflow is already shifting from "write code line by line" to "describe architecture, direct implementation, review output." At 100x cheaper inference, small teams build products that previously required departments. At 1,000x cheaper, the barrier between having an idea and having working software effectively disappears. That's not displacement of engineers; it's an explosion in the total volume of software that gets built, and it requires people who know what to build and why.&lt;/p&gt;
&lt;p&gt;We can't predict the Instagram of cheap cognition. But we can observe that the structural conditions (massive latent demand, rapidly falling costs, intense competition distributing gains to consumers) are identical to the conditions that preceded every prior wave of demand-driven economic expansion.&lt;/p&gt;
&lt;h3&gt;The Speed Question: Where Shumer Is Strongest&lt;/h3&gt;
&lt;p&gt;The legitimate uncertainty in Shumer's argument isn't whether displacement will happen. It will. The question is whether it happens faster than demand expansion can absorb.&lt;/p&gt;
&lt;p&gt;Prior Jevons cycles unfolded over decades. Agricultural mechanization displaced 90% of farm workers over a century. Computerization restructured white-collar work over roughly forty years. If AI compresses displacement into two to three years, the question of whether demand expansion keeps pace becomes genuinely urgent. This is where Shumer's urgency has teeth, and it's the argument I take most seriously.&lt;/p&gt;
&lt;p&gt;I was honest about this in both the &lt;a href="https://tinycomputers.io/posts/the-jevons-counter-thesis-why-ai-displacement-scenarios-underweight-demand-expansion.html"&gt;counter-thesis&lt;/a&gt; and the &lt;a href="https://tinycomputers.io/posts/moores-law-for-intelligence-what-happens-when-thinking-gets-cheap.html"&gt;Moore's Law piece&lt;/a&gt;: the speed of this transition is unprecedented, and historical analogy doesn't fully resolve the timing question. The transitional pain for people whose livelihoods depend on cognitive labor that AI can replicate is real and potentially severe.&lt;/p&gt;
&lt;p&gt;But notice the asymmetry in Shumer's framing. Disruption happens at AI speed: step-function capability jumps, immediate adoption, rapid displacement. Demand expansion, on the other hand, is treated as essentially static or non-existent. The economy absorbs the shock and contracts. End of story.&lt;/p&gt;
&lt;p&gt;This asymmetry isn't supported by the evidence. The smartphone created a trillion-dollar app economy in under five years. Cloud computing spawned tens of thousands of SaaS companies within a decade. When a critical input becomes 100x cheaper, entrepreneurs move fast, because the profit opportunity is enormous. Shumer's own experience is evidence of this: he's a founder building products at unprecedented speed using AI tools. Scale that behavior across millions of entrepreneurs who suddenly have access to capabilities that were previously reserved for well-funded teams, and the demand side moves faster than any prior technology transition.&lt;/p&gt;
&lt;p&gt;The honest answer is that we don't know whether demand expansion will keep pace with displacement. That's a genuine uncertainty. But Shumer presents it as a foregone conclusion in one direction, displacement wins, full stop, and that's not an evidence-based position. It's a bet against the strongest empirical pattern in economic history.&lt;/p&gt;
&lt;h3&gt;What to Take from This&lt;/h3&gt;
&lt;p&gt;Shumer's practical advice to individuals is sound even if his macro analysis is incomplete. Use the tools. Build adaptability. Experiment daily. Don't ignore the capability curve; it's real, it's fast, and it will restructure how cognitive work gets done.&lt;/p&gt;
&lt;p&gt;But don't mistake a substitution-only model for the full picture. The most consistent empirical pattern in economic history is that when a critical input gets dramatically cheaper, total consumption increases and the economy restructures around the cheaper input. Coal. Transistors. Bandwidth. Lighting. Every time, the predictions that efficiency would destroy demand were wrong, not because displacement didn't happen, but because demand expansion overwhelmed it. Betting that this pattern has finally broken requires an extraordinary burden of proof that Shumer's piece (eloquent, urgent, and emotionally resonant as it is) does not meet.&lt;/p&gt;
&lt;p&gt;Something big &lt;em&gt;is&lt;/em&gt; happening. What's missing from the conversation is the other half of it.&lt;/p&gt;</description><category>ai</category><category>critique</category><category>demand expansion</category><category>displacement</category><category>economics</category><category>jevons paradox</category><category>labor</category><category>technology</category><category>white-collar</category><guid>https://tinycomputers.io/posts/something-big-is-happening-a-critique.html</guid><pubDate>Sun, 01 Mar 2026 14:00:00 GMT</pubDate></item><item><title>Moore's Law for Intelligence: What Happens When Thinking Gets Cheap</title><link>https://tinycomputers.io/posts/moores-law-for-intelligence-what-happens-when-thinking-gets-cheap.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/moores-law-for-intelligence-what-happens-when-thinking-gets-cheap_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;24 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;&lt;img src="https://tinycomputers.io/images/moores-law-intelligence/silicon-wafer.jpg" alt="A silicon wafer with an array of integrated circuit dies, the physical foundation of Moore's Law" style="float: right; max-width: 40%; margin: 0 0 1em 1.5em; border-radius: 4px;"&gt;&lt;/p&gt;
&lt;p&gt;I have written about &lt;a href="https://tinycomputers.io/posts/jevons-paradox.html"&gt;Jevons Paradox&lt;/a&gt; twice now, once through the history of the semiconductor industry, and once as a &lt;a href="https://tinycomputers.io/posts/the-jevons-counter-thesis-why-ai-displacement-scenarios-underweight-demand-expansion.html"&gt;broader examination&lt;/a&gt; of what happens when the cost of a critical economic input collapses. The pattern is consistent: demand expands to overwhelm the savings. Coal. Transistors. Bandwidth. Lighting.&lt;/p&gt;
&lt;p&gt;Those pieces looked at the pattern itself. This one is different. I want to run a thought experiment forward, not backward.&lt;/p&gt;
&lt;p&gt;I've also spent a lot of time on this site looking backward at computing history, watching &lt;a href="https://tinycomputers.io/posts/stewart-cheifet-and-his-computer-chronicles.html"&gt;Stewart Cheifet walk viewers through the early personal computer revolution&lt;/a&gt; on &lt;em&gt;The Computer Chronicles&lt;/em&gt;, examining how &lt;a href="https://tinycomputers.io/posts/language-manipulators-what-a-1983-episode-of-the-computer-chronicles-got-right-and-wrong-about-word-processing.html"&gt;word processing went from a curiosity to a necessity&lt;/a&gt; in a single decade, tracing &lt;a href="https://tinycomputers.io/posts/george-morrow-pioneer-of-personal-computing.html"&gt;George Morrow's&lt;/a&gt; role in making personal computing real, and following &lt;a href="https://tinycomputers.io/posts/cpm-history-and-legacy.html"&gt;CP/M's arc&lt;/a&gt; from operating system of the future to historical footnote. I've &lt;a href="https://tinycomputers.io/posts/cpm-on-physical-retroshield-z80.html"&gt;run CP/M on physical RetroShield hardware&lt;/a&gt;, explored the &lt;a href="https://tinycomputers.io/posts/motorola-68000-processor-and-the-ti-89-graphing-calculator.html"&gt;Motorola 68000&lt;/a&gt; that powered a generation of machines, and dug into &lt;a href="https://tinycomputers.io/posts/infocom-zork-history.html"&gt;how Infocom turned text adventures into a business&lt;/a&gt; at a time when 64K of RAM was generous. That immersion in where computing came from is exactly what makes the forward question so vivid, because at every stage, the people living through the transition couldn't see what was coming next. The engineers building CP/M didn't anticipate DOS. The engineers building DOS didn't anticipate the web. The engineers building the web didn't anticipate the iPhone. The pattern is always the same: cheaper compute enables things that were unimaginable at the prior cost.&lt;/p&gt;
&lt;p&gt;The question isn't "will AI destroy jobs?" or "is the doom scenario wrong?" The question is: &lt;strong&gt;what becomes possible when thinking gets cheap?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Because AI compute is following a cost curve that looks remarkably like the early decades of Moore's Law. And if that continues (if the cost per unit of machine intelligence drops by an order of magnitude every few years) the consequences extend far beyond making today's chatbots cheaper to run.&lt;/p&gt;
&lt;h3&gt;The Cost Curve&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/moores-law-intelligence/moores-law-transistor-count.png" alt="Microprocessor transistor counts from 1971 to 2011 plotted on a logarithmic scale, showing Moore's Law doubling trend" style="max-width: 100%; margin: 0 0 1.5em 0; border-radius: 4px;"&gt;&lt;/p&gt;
&lt;p&gt;Moore's Law, in its original formulation, described the doubling of transistors per integrated circuit roughly every two years. But the economic consequence that mattered wasn't transistor density; it was cost per unit of compute. From the 1960s through the 2010s, the cost per FLOP declined at a compound rate that delivered roughly a 10x improvement every four to five years. A computation that cost $1 million in 1975 cost $1 by 2010. That decline didn't just make existing applications cheaper. It created entirely new categories of computing that were inconceivable at the prior cost structure.&lt;/p&gt;
&lt;p&gt;AI inference costs are now following a similar trajectory, but faster. OpenAI's text-davinci-003, released in late 2022, cost $20 per million tokens. GPT-4o mini, released in mid-2024, delivers substantially better performance at $0.15 per million input tokens, a 99% cost reduction in under two years. Claude, Gemini, and open-source models have followed similar curves. DeepSeek entered the market in early 2025 with pricing that undercut Western frontier models by roughly 90%, compressing the timeline further through competitive pressure.&lt;/p&gt;
&lt;p&gt;The GPU hardware underneath these models is on its own Moore's Law trajectory. GPU price-performance in FLOP/s per dollar doubles approximately every 2.5 years for ML-class hardware. Architectural improvements in transformers, mixture-of-experts routing, quantization, speculative decoding, and distillation compound on top of the hardware gains. The result is a cost curve where the effective price of a unit of machine reasoning is falling faster than the price of a transistor did during the semiconductor industry's most explosive growth phase.&lt;/p&gt;
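&lt;p&gt;A quick sketch, using only the figures quoted above, shows how steep the model-level curve is relative to the hardware curve. The 1.75-year gap between the two model releases is an approximation:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import math

# Token prices: text-davinci-003 (late 2022) vs GPT-4o mini (mid 2024).
p0, p1, years = 20.0, 0.15, 1.75    # $ per million tokens; ~21 months apart
factor = p0 / p1                    # ~133x cheaper
annual = factor ** (1 / years)      # implied per-year improvement
print(f"{factor:.0f}x cheaper, roughly {annual:.0f}x per year")

# GPU price-performance doubling every 2.5 years, for comparison.
print(f"GPU FLOP/s per dollar: {2 ** (1 / 2.5):.2f}x per year")

# Hardware alone would take ~8.3 years to deliver a 10x improvement.
print(f"hardware-only 10x: {math.log2(10) * 2.5:.1f} years")
&lt;/code&gt;&lt;/pre&gt;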
&lt;p&gt;This matters because we know, empirically, what happens when the cost of a foundational input follows an exponential decline. We have sixty years of data on it. The compute industry went from a few thousand mainframes serving governments and large corporations to billions of devices in every pocket, every appliance, every traffic light. Total spending on computing didn't shrink as costs fell; it expanded by orders of magnitude, because each 10x cost reduction unlocked categories of use that didn't exist at the prior price point.&lt;/p&gt;
&lt;p&gt;The thought experiment is straightforward: apply that pattern to intelligence itself.&lt;/p&gt;
&lt;h3&gt;Today's Price Points Create Today's Use Cases&lt;/h3&gt;
&lt;p&gt;At current pricing (roughly $3 per million input tokens for a frontier model like Claude Sonnet), AI is economically viable for a specific class of applications. Customer support automation. Code assistance. Document summarization. Marketing copy. Translation. These are the use cases where the value generated per token comfortably exceeds the cost per token, and where the interaction pattern involves relatively short exchanges.&lt;/p&gt;
&lt;p&gt;But there are vast categories of potential use where current pricing makes the math uncomfortable or impossible. Consider:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Continuous monitoring and analysis.&lt;/strong&gt; A financial analyst who wants an AI to continuously watch SEC filings, earnings calls, patent applications, and news feeds across 500 companies (analyzing each document in full, cross-referencing against historical patterns, and generating alerts) would consume billions of tokens per month. At current prices, this costs tens of thousands of dollars monthly. At 100x cheaper, it costs the price of a SaaS subscription.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Full-codebase reasoning.&lt;/strong&gt; This one is already arriving. Anthropic's Claude Opus 4.6, working through Claude Code, can operate at the repository level, reading files, understanding architecture, running tests, and making changes across an entire codebase in a single session. I've used it to build a &lt;a href="https://tinycomputers.io/posts/open-sourcing-a-high-performance-rust-based-ballistics-engine.html"&gt;high-performance Rust-based ballistics engine&lt;/a&gt; and to develop &lt;a href="https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html"&gt;Lattice, an entire programming language&lt;/a&gt; with a &lt;a href="https://tinycomputers.io/posts/from-tree-walker-to-bytecode-vm-compiling-lattice.html"&gt;bytecode VM compiler&lt;/a&gt;, projects where the AI wasn't autocompleting fragments but reasoning across thousands of lines of interconnected code, tracking type systems, managing compiler passes, and understanding how changes in one module ripple through the rest. The constraint today isn't capability; it's cost. These sessions consume large volumes of tokens, which means they're viable for serious engineering work but not yet cheap enough to run continuously on every commit, every pull request, every deployment. At 100x cheaper, that changes. At 1,000x cheaper, every codebase has an always-on collaborator that has read everything and forgets nothing.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Personalized education at scale.&lt;/strong&gt; A truly personalized AI tutor that adapts to a student's learning style, tracks their understanding across subjects, reviews their homework in detail, explains mistakes with patience, and adjusts its teaching strategy over months, this requires sustained, high-volume token consumption per student. Multiply by millions of students and the current cost structure breaks. At 100x cheaper, it's viable for a school district. At 1,000x cheaper, it's viable for an individual family.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Preventive medicine.&lt;/strong&gt; Analyzing a patient's complete medical history, genetic data, lifestyle information, lab results, and the current research literature to generate genuinely personalized health recommendations (not the generic advice a five-minute doctor's visit produces, but the kind of comprehensive analysis that currently only concierge medicine patients paying $10,000+ per year receive). At current token prices, this is prohibitively expensive for routine use. At 100x cheaper, it could be embedded in every annual checkup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ambient intelligence.&lt;/strong&gt; The concept of AI that runs continuously in the background of your life (understanding your calendar, email, documents, and goals, proactively surfacing relevant information, drafting responses, scheduling meetings, flagging conflicts) requires sustained inference at volumes that would cost hundreds of dollars per day at current prices. At 1,000x cheaper, it costs less than your phone bill.&lt;/p&gt;
&lt;p&gt;These aren't science fiction scenarios. They're applications of current model capabilities at price points that don't yet exist. The models can already do most of this work. The cost curve is the bottleneck.&lt;/p&gt;
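&lt;p&gt;To put numbers on the first of those scenarios, here is the back-of-the-envelope math for continuous monitoring. Only the $3-per-million-token price is from above; the document volumes are assumptions I've made for illustration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Cost sketch for continuous monitoring across 500 companies.
companies = 500
docs_per_company = 200     # filings, calls, patents, news per month (assumption)
tokens_per_doc = 100_000   # full analysis plus cross-referencing (assumption)

tokens = companies * docs_per_company * tokens_per_doc   # 10B tokens/month
price_per_million = 3.00   # $ per million input tokens (current frontier)

monthly_now = tokens / 1_000_000 * price_per_million
print(f"cost today: ${monthly_now:,.0f}/month")            # $30,000/month
print(f"at 100x cheaper: ${monthly_now / 100:,.0f}/month") # $300/month
&lt;/code&gt;&lt;/pre&gt;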
&lt;h3&gt;The 10x / 100x / 1,000x Framework&lt;/h3&gt;
&lt;p&gt;Moore's Law didn't deliver its benefits in a smooth, continuous flow. It came in thresholds, price points at which qualitatively new applications became viable. The pattern with AI compute is likely to follow the same staircase function.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;At 10x cheaper&lt;/strong&gt; (plausible within 1-2 years): AI becomes viable for tasks that are currently "almost worth it." Small businesses that can't justify $500/month for AI tooling find it worthwhile at $50/month. Individual professionals (accountants, lawyers, doctors, engineers) integrate AI into their daily workflow not as an occasional tool but as a constant companion. The volume of AI-mediated work increases dramatically, but the character of work doesn't fundamentally change. This is the equivalent of the minicomputer era: the same kind of computing, available to more people.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;At 100x cheaper&lt;/strong&gt; (plausible within 3-5 years): The applications listed above become economically viable. Continuous analysis, full-codebase reasoning, personalized education, preventive medicine at scale. At this price point, AI stops being a tool you use and starts being infrastructure you run on. Every document you write gets reviewed. Every decision you make gets a second opinion. Every student gets a tutor. Every patient gets a diagnostician. The total volume of inference consumed per capita increases by far more than 100x, because new use cases emerge that weren't contemplated at the prior price. This is the personal computer moment: qualitatively new categories of use.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;At 1,000x cheaper&lt;/strong&gt; (plausible within 5-8 years): Intelligence becomes ambient and disposable. You don't think about whether to use AI for a task any more than you think about whether to use electricity for a task. Every appliance, every vehicle, every building, every piece of infrastructure has embedded reasoning running continuously. Your home understands your patterns and adapts. Your car negotiates traffic in real time not just with sensors but with models that predict the behavior of every other vehicle. Agricultural equipment analyzes soil conditions at the individual plant level. Supply chains optimize in real time across thousands of variables. This is the smartphone moment: computing so cheap and pervasive that it becomes invisible.&lt;/p&gt;
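&lt;p&gt;For what it's worth, those three windows are roughly what a log-linear cost curve predicts. Assuming the effective price of inference falls 10x about every 1.7 years (my assumption, not a measured rate), the thresholds land like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import math

TENX_PERIOD = 1.7   # years per 10x cost reduction (assumption)

for target in (10, 100, 1000):
    years = math.log10(target) * TENX_PERIOD
    print(f"{target}x cheaper in about {years:.1f} years")
# 10x in ~1.7 years, 100x in ~3.4, 1,000x in ~5.1 -- consistent
# with the 1-2 / 3-5 / 5-8 year windows above.
&lt;/code&gt;&lt;/pre&gt;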
&lt;h3&gt;The Compounding Effect&lt;/h3&gt;
&lt;p&gt;There's a dynamic in AI cost reduction that didn't exist with traditional Moore's Law: cheaper inference enables better models, which enables even cheaper inference.&lt;/p&gt;
&lt;p&gt;When inference is expensive, researchers are constrained in how they can train and evaluate models. Each experiment costs real money. Each architecture search consumes significant compute budgets. When inference costs drop, researchers can run more experiments, evaluate more architectures, and discover more efficient approaches, which further reduces costs. Distillation (training a smaller model to mimic a larger one) becomes more practical when the larger model is cheaper to run at scale. Synthetic data generation (using AI to create training data for other AI) becomes more economical. The cost reduction compounds on itself.&lt;/p&gt;
&lt;p&gt;This is already happening. GPT-4 was used to generate synthetic training data for GPT-4o. Claude's training pipeline uses prior Claude models to evaluate and filter training examples. Google's Gemini models help design the next generation of TPU chips that will run future Gemini models. The AI equivalent of "using computers to design better computers" arrived in year three of the current wave, decades earlier in the relative timeline than it took the semiconductor industry to reach the same recursive dynamic.&lt;/p&gt;
&lt;p&gt;The implication is that the cost curve isn't just declining; it's declining at an accelerating rate because each improvement enables the next one. The semiconductor industry saw this acceleration plateau after about fifty years as it approached physical limits of silicon. AI has no equivalent physical constraint on the horizon. The limits are architectural and algorithmic, and those limits have been falling faster than hardware limits ever did.&lt;/p&gt;
&lt;h3&gt;What the Semiconductor Analogy Actually Predicts&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/moores-law-intelligence/cray-1.jpg" alt="A Cray-1 supercomputer on display, showing its distinctive cylindrical tower design with bench seating and exposed cooling plumbing" style="float: right; max-width: 45%; margin: 0 0 1em 1.5em; border-radius: 4px;"&gt;&lt;/p&gt;
&lt;p&gt;In 1975, a Cray-1 supercomputer delivered about 160 MFLOPS and cost \$8 million. In 2025, an iPhone delivers roughly 2 TFLOPS of neural engine performance and costs \$800. That's a 12,500x performance increase at a 10,000x cost decrease, a net improvement of roughly 100 million times in price-performance over fifty years.&lt;/p&gt;
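&lt;p&gt;The arithmetic is easy to verify from the round numbers above:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Back-of-the-envelope check of the price-performance figures above.
cray_flops, cray_cost = 160e6, 8_000_000     # 160 MFLOPS, 8 million USD (1975)
iphone_flops, iphone_cost = 2e12, 800        # ~2 TFLOPS, 800 USD (2025)

performance_gain = iphone_flops / cray_flops            # 12,500x
cost_reduction = cray_cost / iphone_cost                # 10,000x
price_performance = performance_gain * cost_reduction   # 125,000,000x

print(f"{performance_gain:,.0f}x the performance at 1/{cost_reduction:,.0f} the cost")
print(f"net price-performance improvement: {price_performance:,.0f}x")
&lt;/code&gt;&lt;/pre&gt;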
&lt;p&gt;Nobody in 1975 predicted Instagram, Uber, Google Maps, or Spotify. Not because these applications required fundamentally new physics; they just required compute that was cheap enough to run in a device that fit in your pocket. The applications were latent, waiting for the cost curve to reach them.&lt;/p&gt;
&lt;p&gt;The history is instructive at each threshold. When a capable computer crossed below \$20,000 in the early 1980s, it unlocked small business accounting, the same work mainframes did, just for smaller organizations. When it crossed below \$2,000 in the mid-1990s, it unlocked home computing, and with it the web browser, email, and e-commerce. When capable compute crossed below \$200 in the smartphone era, it unlocked ride-sharing, mobile payments, and social media, none of which had any conceptual precursor at the \$20,000 price point. Each 10x reduction didn't just expand the existing market. It created a market that was literally unimaginable at the prior price.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/moores-law-intelligence/ibm-system-360.jpg" alt="An IBM System/360 Model 30 mainframe computer with its distinctive red cabinet and operator control panel" style="float: right; max-width: 45%; margin: 0 0 1em 1.5em; border-radius: 4px;"&gt;&lt;/p&gt;
&lt;p&gt;The same principle applies to intelligence. We are in the mainframe era of AI. The applications we see today (chatbots, code assistants, image generators) are the equivalent of payroll processing and scientific computation on 1960s mainframes. They are real and valuable, but they represent a tiny fraction of what becomes possible when the cost drops by five or six orders of magnitude.&lt;/p&gt;
&lt;p&gt;What are the Instagram and Uber equivalents of cheap intelligence? By definition, we can't fully predict them. But we can identify the structural conditions that will enable them:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When intelligence costs less than attention, delegation becomes default.&lt;/strong&gt; Today, the cognitive cost of formulating a good prompt, evaluating the output, and iterating often exceeds the cost of just doing the task yourself. As models get cheaper, faster, and better at understanding context, the threshold shifts. Eventually, not delegating a cognitive task to AI becomes the irrational choice, the way not using a calculator for arithmetic became irrational.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When intelligence costs less than data storage, everything gets analyzed.&lt;/strong&gt; Today, most data that organizations collect is never analyzed. It's stored, archived, and forgotten, because the cost of human analysis exceeds the expected value of the insights. When AI analysis is effectively free, every dataset gets examined. Every log file gets reviewed. Every customer interaction gets analyzed for patterns. The volume of insight generated from existing data increases by orders of magnitude.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When intelligence costs less than communication overhead, organizations restructure.&lt;/strong&gt; This is already starting. A significant fraction of white-collar work is coordination: meetings, emails, status updates, project management. These exist because humans need to synchronize their mental models of shared projects. AI tools are already compressing this layer: meeting summaries that eliminate the need for half the attendees, project dashboards that maintain themselves, codebases where an AI agent tracks the state of every open issue so developers don't have to sit through standup. When AI can maintain a comprehensive, always-current model of a project's state, much of the coordination overhead that justifies entire job categories (project managers, program managers, business analysts, internal consultants) begins to evaporate. An organization that currently needs 50 people to coordinate a complex project might need 10, with AI handling the information synthesis that previously required human intermediaries. That's a genuine productivity gain. It's also 40 people who need to find something else to do, and the honest answer is that we don't yet know how fast the demand side creates new roles to absorb them.&lt;/p&gt;
&lt;h3&gt;The Demand Expansion Is the Story&lt;/h3&gt;
&lt;p&gt;The instinct when hearing "AI gets 1,000x cheaper" is to think about cost savings. That's the substitution frame: doing the same things for less money. And yes, that will happen. But the semiconductor analogy tells us that cost savings are the boring part of the story.&lt;/p&gt;
&lt;p&gt;When compute got 1,000x cheaper between 1980 and 2000, the interesting story wasn't that scientific simulations got cheaper to run. It was that entirely new industries (PC software, internet services, mobile apps, social media, cloud computing) emerged to consume orders of magnitude more compute than the entire prior industry had used. The efficiency gain on existing applications was dwarfed by the demand expansion from new applications.&lt;/p&gt;
&lt;p&gt;The same will likely be true for intelligence. Consider bandwidth as a parallel case. In 1995, a 28.8 kbps modem made email and basic web pages viable. Nobody was streaming video; it was physically impossible at that bandwidth, not merely expensive. By 2005, broadband had made streaming music viable. By 2015, streaming 4K video was routine. By 2025, cloud gaming and real-time video conferencing were infrastructure-level assumptions. Total bandwidth consumption didn't decline as it got cheaper. It increased by roughly a million times, because each generation of cost reduction enabled applications that consumed orders of magnitude more bandwidth than the previous generation's entire output.&lt;/p&gt;
&lt;p&gt;The interesting story isn't that customer support gets cheaper. It's the applications that are currently impossible (not difficult, not expensive, but literally impossible at current price points) that become not just possible but routine.&lt;/p&gt;
&lt;p&gt;A world where every small business has a CFO-grade financial analyst. Where every patient has a diagnostician who has read every relevant paper published in the last decade. Where every student has a tutor who knows exactly where they're struggling and why. Where every local government has the analytical capacity currently reserved for federal agencies.&lt;/p&gt;
&lt;p&gt;And the nature of building software itself is changing in ways that go beyond "engineers with better tools." For most of computing history, writing code meant a human translating intent into syntax, line by line, function by function. AI assistance started as autocomplete: suggesting the next line, filling in boilerplate. But that phase is already ending. Today, with tools like Claude Code, the workflow has inverted. The human describes what they want (an architecture, a feature, a behavior) and the AI writes the implementation across files, runs the tests, and iterates on failures. The engineer's role shifts from writing code to directing and reviewing it, from syntax to judgment. At 10x cheaper, this is how professional developers work. At 100x cheaper, it's how small teams build products that previously required departments. At 1,000x cheaper, the barrier between "person with an idea" and "working software" functionally disappears. The entire concept of what it means to be a software engineer is being rewritten in real time, not by replacing engineers, but by redefining the skill from "can you write this code?" to "do you know what to build and why?"&lt;/p&gt;
&lt;p&gt;These aren't efficiency improvements on existing systems. They're new capabilities that create new categories of economic activity, new forms of organization, and new kinds of products and services that don't have current analogs, just as social media, ride-sharing, and cloud computing had no analogs in the mainframe era.&lt;/p&gt;
&lt;h3&gt;The Question That Matters&lt;/h3&gt;
&lt;p&gt;I should be honest about what I don't know. The displacement scenarios for white-collar labor are not fantasy. AI is already capable enough to handle work that was solidly middle-class professional territory two years ago: document review, financial analysis, code generation, customer support, content production. The scenarios where this accelerates faster than the economy can absorb are plausible, and anyone who dismisses them outright isn't paying attention. When a technology can replicate cognitive labor at a fraction of the cost, the transitional pain for the people whose livelihoods depend on that labor is real and potentially severe. The speed matters: prior technology transitions unfolded over decades, and AI's compression of that timeline into years is a genuine uncertainty that historical analogy doesn't fully resolve.&lt;/p&gt;
&lt;p&gt;But there is a question that displacement scenarios consistently underweight, and it's the one I explored in my &lt;a href="https://tinycomputers.io/posts/the-jevons-counter-thesis-why-ai-displacement-scenarios-underweight-demand-expansion.html"&gt;Jevons counter-thesis&lt;/a&gt;: what happens on the demand side? Every model that projects mass unemployment from cheap AI is implicitly assuming that the economy remains roughly the same size, with machines doing the work humans used to do. That's the substitution frame. And the substitution frame has been wrong at every prior technological inflection point, not slightly wrong, but wrong by orders of magnitude.&lt;/p&gt;
&lt;p&gt;The semiconductor industry's answer, delivered over sixty years of data, is unambiguous. Every order-of-magnitude cost reduction generated more economic activity, more employment, and more total compute consumption than the one before it. The economy didn't shrink as compute got cheaper. It restructured around cheap compute and grew. Roughly 80% of Americans who need legal help can't afford it today. Personalized tutoring is a luxury good. Custom software is out of reach for most small businesses. These aren't speculative markets; they're documented unmet demand suppressed by the cost of human intelligence. When that cost collapses, the demand doesn't stay static.&lt;/p&gt;
&lt;p&gt;The honest answer is that both things will happen simultaneously. Jobs will be displaced, some permanently. And new categories of economic activity will emerge that are currently inconceivable, just as social media and cloud computing were inconceivable in the mainframe era. The question is which force dominates, and how fast the transition occurs. I think the historical pattern favors demand expansion, but I hold that view with the humility of someone who knows the speed of this particular transition is unprecedented.&lt;/p&gt;
&lt;p&gt;AI inference costs are following the same curve as semiconductors, possibly faster. The tokens-per-dollar ratio will improve by orders of magnitude. And when it does, the applications that emerge will make today's AI use cases look as quaint as running payroll on a room-sized mainframe.&lt;/p&gt;
&lt;p&gt;The thought experiment ends where all Jevons stories end: with more consumption, not less. More intelligence deployed, not less. More economic activity built on cheap cognition, not less. The cost curve is the enabling condition. What gets built on top of it is the part we can't fully predict, and historically, that's always been the most interesting part.&lt;/p&gt;</description><category>ai</category><category>compute costs</category><category>demand expansion</category><category>economics</category><category>inference</category><category>jevons paradox</category><category>moore's law</category><category>semiconductors</category><category>technology</category><category>tokens</category><guid>https://tinycomputers.io/posts/moores-law-for-intelligence-what-happens-when-thinking-gets-cheap.html</guid><pubDate>Sat, 28 Feb 2026 14:00:00 GMT</pubDate></item><item><title>The Jevons Counter-Thesis: Why AI Displacement Scenarios Underweight Demand Expansion</title><link>https://tinycomputers.io/posts/the-jevons-counter-thesis-why-ai-displacement-scenarios-underweight-demand-expansion.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/the-jevons-counter-thesis-why-ai-displacement-scenarios-underweight-demand-expansion_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;34 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;&lt;img src="https://tinycomputers.io/images/jevons-counter-thesis/jevons-portrait.jpg" alt="Portrait of William Stanley Jevons, the English economist who first described the paradox of efficiency-driven demand expansion in 1865" style="float: right; max-width: 300px; margin: 0 0 1em 1.5em; border-radius: 4px; box-shadow: 0 30px 40px rgba(0,0,0,.1);"&gt;&lt;/p&gt;
&lt;p&gt;Citrini Research recently published a piece called &lt;a href="https://baud.rs/global-intel-crisis"&gt;"The 2028 Global Intelligence Crisis"&lt;/a&gt;, a thought experiment modeling a scenario in which AI-driven white-collar displacement triggers a cascading economic crisis. In their telling, AI replaces workers, spending drops, firms invest more in AI to protect margins, AI improves, and the cycle repeats. They call it the "Intelligence Displacement Spiral" and project a 57% peak-to-trough drawdown in the S&amp;amp;P 500. No natural brake. No soft landing.&lt;/p&gt;
&lt;p&gt;It is a well-constructed stress test, and worth reading on its own terms. But the scenario achieves its conclusion by modeling only the displacement side of an efficiency revolution while treating the demand-expansion side as essentially zero. This is the core analytical gap, and it is precisely the gap that Jevons Paradox addresses.&lt;/p&gt;
&lt;p&gt;I have &lt;a href="https://tinycomputers.io/posts/jevons-paradox.html"&gt;written about Jevons Paradox before&lt;/a&gt; in the context of the semiconductor industry, specifically how improvements in energy efficiency from the transistor through GPUs have consistently driven &lt;em&gt;more&lt;/em&gt; total energy consumption, not less, by making computing cheap enough to permeate every corner of the economy. The same framework applies to AI and cognitive labor, and the Citrini piece is a useful foil for exploring why.&lt;/p&gt;
&lt;h3&gt;What Jevons Paradox Actually Says&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/jevons-counter-thesis/coal-question-cover.jpg" alt="Title page of The Coal Question by W. Stanley Jevons, second edition, 1866, published by Macmillan and Co." style="float: left; max-width: 200px; margin: 0 1.5em 1em 0; border-radius: 4px; box-shadow: 0 30px 40px rgba(0,0,0,.1);"&gt;&lt;/p&gt;
&lt;p&gt;In 1865, the English economist William Stanley Jevons observed something counterintuitive about coal consumption in Britain. James Watt's steam engine had made coal use dramatically more efficient; you could extract far more useful work per ton of coal than before. The intuitive expectation was that Britain would use less coal. The opposite happened. Total coal consumption surged, because the efficiency gains made coal-powered activities so much cheaper that entirely new applications emerged. Factories that couldn't justify coal-powered machinery at the old efficiency levels now could. Industries that had never used steam power adopted it. The per-unit savings were overwhelmed by the explosion in total units demanded.&lt;/p&gt;
&lt;p&gt;This pattern has recurred across nearly every major input cost reduction in economic history. Semiconductor efficiency improved by roughly a trillionfold over six decades, and total spending on computing did not decline; it expanded from a niche military and scientific expenditure to a multi-trillion-dollar global industry. Bandwidth costs collapsed through the 1990s and 2000s, and total bandwidth consumption didn't decrease; it increased by orders of magnitude as streaming video, social media, cloud computing, and mobile internet emerged. LED lighting uses roughly 90% less energy than incandescent bulbs, and total global illumination has increased, not decreased, as cheap lighting enabled new architectural designs, 24-hour commercial operations, and decorative applications that were uneconomical before.&lt;/p&gt;
&lt;p&gt;The mechanism is straightforward: when a critical input becomes dramatically cheaper, the addressable market for everything that uses that input expands. New use cases emerge that were previously uneconomical. Existing use cases scale to populations that were previously priced out. The total consumption of the now-cheaper input rises even as the per-unit cost falls.&lt;/p&gt;
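&lt;p&gt;The mechanism can be stated precisely with the textbook constant-elasticity demand model (my simplification; Jevons argued in prose): after a price drop, total spending on the input rises whenever the demand elasticity exceeds one. A short sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Rebound arithmetic under constant-elasticity demand: quantity scales
# as price**(-elasticity). A deliberate simplification for illustration.
def spend_ratio(price_drop_factor, elasticity):
    """New total spending relative to old, after price falls by the factor."""
    new_price = 1.0 / price_drop_factor
    new_quantity = new_price ** (-elasticity)
    return new_price * new_quantity  # spending = price * quantity

# Inelastic demand: a 10x price drop shrinks total spending.
print(spend_ratio(10, 0.5))   # 0.316...
# Elastic demand, the Jevons case: the same drop grows total spending.
print(spend_ratio(10, 1.5))   # 3.162...
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Every historical example above is a story of the second case: once new use cases enter the market, effective elasticity climbs well past one.&lt;/p&gt;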
&lt;p&gt;The Citrini piece implicitly models AI as a &lt;em&gt;substitution&lt;/em&gt; technology: it replaces human cognitive labor, and that's the end of the transaction. Jevons Paradox suggests AI is simultaneously, and perhaps primarily, an &lt;em&gt;expansion&lt;/em&gt; technology: it makes cognitive services so cheap that demand for them can grow faster than the displacement effect.&lt;/p&gt;
&lt;h3&gt;Latent Demand Is Enormous and Unmeasured&lt;/h3&gt;
&lt;p&gt;The Citrini scenario treats the economy as having a fixed quantity of cognitive work. AI absorbs that work, the workers who performed it lose their income, and aggregate demand collapses. But the reason cognitive work costs what it does is that human intelligence has been scarce and expensive. This scarcity has suppressed enormous categories of demand that simply don't show up in current GDP accounting because they've never been economically feasible.&lt;/p&gt;
&lt;p&gt;Consider education. The average American family cannot afford personalized tutoring. A human tutor at \$50-100 per hour is a luxury good. If AI reduces the cost of competent, personalized educational support to near zero, the addressable market isn't the current tutoring market; it's every student in the country. That is a market expansion of potentially 50x or more relative to the existing tutoring industry. The humans who previously worked as tutors are displaced, yes, but the economic activity generated by tens of millions of students receiving personalized education (and the downstream productivity gains from a better-educated workforce) is a new demand category that didn't exist before.&lt;/p&gt;
&lt;p&gt;The same logic applies across dozens of sectors. Legal services: roughly 80% of Americans who need legal help cannot afford it. Personalized financial planning: currently available only to households with six-figure investable assets. Preventive health analysis: limited by the number of available clinicians. Custom software for small businesses: a \$50,000 engagement is out of reach for a business generating \$300,000 in annual revenue. Architecture and design services for middle-income homeowners. Personalized nutrition and fitness programming. Translation and localization for businesses that currently operate only in one language.&lt;/p&gt;
&lt;p&gt;These are not speculative categories. They are documented, unmet needs constrained by the cost of the human intelligence required to serve them. When AI collapses those costs, the question is whether the demand expansion across all of these categories (and others we haven't imagined) can offset the displacement in existing roles. The Citrini piece assumes the answer is no without modeling the question. Jevons Paradox and the historical base rate suggest the answer is more likely yes.&lt;/p&gt;
&lt;h3&gt;The Citrini Piece's Own Evidence Supports Jevons&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/jevons-counter-thesis/uk-coal-production.png" alt="UK coal production from 1860 to 2010 showing the dramatic surge in output that Jevons predicted despite improving efficiency, where production quadrupled from 75 million tonnes in 1860 to nearly 300 million tonnes by 1913" style="max-width: 100%; margin: 1em 0; border-radius: 4px; box-shadow: 0 30px 40px rgba(0,0,0,.1);"&gt;&lt;/p&gt;
&lt;p&gt;The piece acknowledges two canonical examples of technological displacement that didn't produce net job losses: ATMs (bank teller employment rose for 20 years after their introduction, because cheaper branch operations led to more branches) and the internet (travel agencies, the Yellow Pages, and brick-and-mortar retail were disrupted, but entirely new industries emerged in their place). It then dismisses both by asserting that "AI is different because it improves at the very tasks humans would redeploy to."&lt;/p&gt;
&lt;p&gt;But this is precisely what critics said at every prior inflection point. When mechanized looms were introduced in the early 19th century, displaced textile workers could not "redeploy" to weaving; the machines did the very thing they were trained for. What actually happened was that radically cheaper cloth created demand for fashion, retail distribution, global trade logistics, cotton cultivation, and marketing, categories that scarcely existed at the prior cost structure. The weavers didn't get their old jobs back. They moved into an economy that had been restructured around abundant, cheap textiles, and that economy was far larger than the one it replaced.&lt;/p&gt;
&lt;p&gt;The Citrini piece's own scenario contains evidence of Jevons-style expansion that it frames exclusively as destruction. The section on agentic commerce describes consumers using AI agents that eliminate friction: price-matching across platforms, renegotiating subscriptions, rebooking travel. The article frames this as the death of intermediation moats. But it is equally a story of market expansion. When an AI agent assembles a complete travel itinerary faster and cheaper than Expedia, the result isn't just that Expedia loses revenue. It's that people who previously found trip planning too cumbersome or too expensive now take trips. Total travel volume can increase even as per-trip intermediation costs fall.&lt;/p&gt;
&lt;p&gt;The DoorDash example is even more explicit. The article describes vibe-coded competitors passing 90-95% of delivery fees to drivers, and AI agents shopping across twenty platforms for the best deal. Delivery becomes cheaper for consumers and more remunerative for drivers. The article frames this as destruction of DoorDash's moat. From a Jevons perspective, it's a textbook demand expansion setup: cheaper delivery means more people order delivery, more restaurants offer delivery, and total delivery volume grows.&lt;/p&gt;
&lt;h3&gt;The Feedback Loop Has a Natural Brake&lt;/h3&gt;
&lt;p&gt;The article's most powerful rhetorical device is the claim that the Intelligence Displacement Spiral has "no natural brake." This is the critical assertion on which the entire doom scenario depends, and it is the assertion most directly challenged by Jevons Paradox.&lt;/p&gt;
&lt;p&gt;The natural brake is price-driven demand expansion. As AI makes cognitive services cheaper, consumers gain access to goods and services they couldn't previously afford. This is true even for displaced workers operating at lower income levels. A former product manager earning \$45,000 as an Uber driver cannot afford a human financial advisor, but can access AI-driven financial planning at near-zero cost. They cannot afford a human tutor for their children, but can access AI tutoring. They cannot afford custom software to start a small business, but can build an application using AI tools. The consumption basket shifts: less spending on expensive human-mediated services, more consumption of cheap AI-mediated services that were previously unattainable.&lt;/p&gt;
&lt;p&gt;This doesn't make the individual worse off on net; it partially offsets the income decline through dramatically lower cost of living for intelligence-intensive services. The article's "Ghost GDP" concept (output that shows up in national accounts but doesn't circulate through the real economy) assumes that the efficiency gains accrue entirely to capital owners. But the article itself documents intense competition. Dozens of vibe-coded delivery startups competing for share. Agentic shoppers forcing prices down across every category. Stablecoin payment rails bypassing card interchange fees. In competitive markets, efficiency gains don't stay with producers; they flow to consumers through lower prices. That flow is the transmission mechanism through which Jevons effects operate, and the article describes it vividly while somehow not recognizing it as a countervailing force.&lt;/p&gt;
&lt;h3&gt;The OpEx Substitution Framing Conceals the Demand Side&lt;/h3&gt;
&lt;p&gt;The article makes an astute observation that AI investment increased even as the economy contracted, because companies were substituting AI OpEx for labor OpEx. A company spending \$100M on employees and \$5M on AI shifts to \$70M on employees and \$20M on AI, so total spending falls while AI spending rises. This explains why the AI infrastructure complex continued performing even as the broader economy deteriorated.&lt;/p&gt;
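&lt;p&gt;Spelled out with the article's own numbers:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# The OpEx substitution from the example above, made explicit.
before = {"labor": 100_000_000, "ai": 5_000_000}
after = {"labor": 70_000_000, "ai": 20_000_000}

total_before = sum(before.values())   # 105,000,000
total_after = sum(after.values())     #  90,000,000

print(f"total OpEx change: {total_after / total_before - 1:.0%}")   # -14%
print(f"AI OpEx multiple: {after['ai'] / before['ai']:.0f}x")       # 4x
&lt;/code&gt;&lt;/pre&gt;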
&lt;p&gt;This is a credible supply-side analysis. But it omits the demand-side consequence. If a company produces the same output with fewer workers at lower total cost, competitive pressure pushes the price of that output down. Falling prices expand the addressable market. A SaaS product that cost \$500,000 annually and was affordable only to the Fortune 500 now costs \$50,000 and is accessible to mid-market companies. A consulting engagement that cost \$2 million and was reserved for large enterprises now costs \$200,000 and is available to growth-stage companies. The total number of transactions can grow even as per-transaction revenue falls.&lt;/p&gt;
&lt;p&gt;The article models a world where output stays constant, prices stay constant, costs drop, and the entire surplus accrues to shareholders. In practice, the intense competition the article itself describes (incumbents in knife-fights with each other and with upstart challengers) is precisely the mechanism that prevents this. Competition distributes efficiency gains through lower prices, and lower prices expand markets.&lt;/p&gt;
&lt;h3&gt;The Intelligence Premium Unwind Is Also a Jevons Story&lt;/h3&gt;
&lt;p&gt;The article's most compelling framing is that human intelligence has been the scarce input in the economy for all of modern history, and AI is unwinding that premium. Every institution (the labor market, mortgage underwriting, the tax code) was designed for a world where human cognition was expensive and irreplaceable. As AI makes intelligence abundant, these institutions crack.&lt;/p&gt;
&lt;p&gt;Jevons Paradox applied to this framing produces a different conclusion. When intelligence becomes abundant and cheap, the economy doesn't just produce the same cognitive output more efficiently; it restructures around consuming vastly more intelligence. We don't merely replicate the existing quantity of analysis, decisions, creative output, and coordination at lower cost. We produce orders of magnitude more of it.&lt;/p&gt;
&lt;p&gt;The article's own data point supports this: by March 2027 in their scenario, the median American was consuming 400,000 tokens per day, a 10x increase from the end of 2026. The article cites this as evidence of disruption, but it is fundamentally a Jevons data point. People are consuming &lt;em&gt;more&lt;/em&gt; intelligence, not less. That consumption drives economic activity; someone is building the products and services that consume those tokens, maintaining the infrastructure, curating quality, arbitrating edge cases, and inventing new applications.&lt;/p&gt;
&lt;p&gt;The question is whether that new economic activity employs enough people at high enough wages to offset the displacement. The article assumes it doesn't. History suggests it tends to, though the transition period can be painful and the new employment categories often look nothing like the old ones.&lt;/p&gt;
&lt;h3&gt;The GDP Composition Argument Cuts Both Ways&lt;/h3&gt;
&lt;p&gt;The article makes much of the fact that 70% of US GDP is consumer spending, and that white-collar workers drive a disproportionate share of that spending. When those workers lose income, the consumption base collapses, and GDP follows. This is mechanically sound as far as it goes.&lt;/p&gt;
&lt;p&gt;But Jevons Paradox suggests that the &lt;em&gt;composition&lt;/em&gt; of GDP shifts during efficiency revolutions, not just the level. When agricultural mechanization displaced 90% of farm workers over the course of a century, it did not produce a permanent 90% unemployment rate. GDP restructured around manufacturing and services, categories that were economically marginal when most human labor was occupied with food production. The displaced agricultural workers didn't return to farming. They moved into an economy where cheap food freed up income and labor for other activities.&lt;/p&gt;
&lt;p&gt;The analogous question for AI is: when cognitive labor becomes cheap, what does the economy restructure around? The Citrini piece doesn't attempt to answer this, which is understandable, since predicting the specific industries of the future is a fool's errand. But the &lt;em&gt;pattern&lt;/em&gt; is well-established. Cheap food led to a manufacturing economy. Cheap manufacturing led to a services economy. Cheap cognitive services lead to something else. The article's scenario assumes the chain terminates with "cheap cognitive services lead to nothing," which is historically unprecedented.&lt;/p&gt;
&lt;p&gt;One plausible direction: the economy shifts toward activities where physical presence, human trust, and embodied experience carry a premium precisely &lt;em&gt;because&lt;/em&gt; cognitive tasks are commoditized. Healthcare delivery (not diagnosis, but care), skilled trades, experiential services, community-oriented businesses, and creative work that is valued specifically for its human origin. These are not futuristic speculations; they are existing sectors where human presence is intrinsic to the value proposition. As AI deflates the cost of cognitive services, the &lt;em&gt;relative&lt;/em&gt; value of irreducibly human activities increases, and spending may shift toward them.&lt;/p&gt;
&lt;p&gt;Another direction, potentially larger: entirely new categories of economic activity that we cannot yet name, because they only become viable when intelligence is cheap and abundant. The internet didn't just make existing activities more efficient; it created social media, the gig economy, e-commerce logistics, content creation as a profession, and cloud computing as an industry. None of these were predicted in advance. The equivalent AI-native industries may already be emerging in nascent form, invisible to a GDP accounting framework built for the prior economic structure.&lt;/p&gt;
&lt;h3&gt;Where the Speed Concern Is Legitimate&lt;/h3&gt;
&lt;p&gt;The strongest element of the Citrini scenario is the speed argument. Prior Jevons cycles unfolded over decades, long enough for institutions, education systems, and labor markets to adapt. The article's timeline compresses displacement into roughly 18-24 months, far faster than the demand-expansion side can respond.&lt;/p&gt;
&lt;p&gt;This is a legitimate concern, and it's where the Jevons counter-argument is weakest. If displacement is fast and demand expansion is slow, the interim period can be genuinely severe, even if the long-run equilibrium is positive. Policy response, education, and institutional adaptation all operate on timescales measured in years, not quarters.&lt;/p&gt;
&lt;p&gt;However, the article makes an asymmetric assumption on this point. It models disruption happening at AI speed: step-function capability jumps, immediate corporate adoption, rapid layoff cycles. But it models demand expansion as essentially static, only emerging when government intervention eventually arrives. This ignores that entrepreneurial response to dramatically cheaper inputs has historically been fast. The smartphone created a trillion-dollar app economy in under five years. Cloud computing spawned tens of thousands of SaaS companies within a decade. When a critical input becomes 100x cheaper, entrepreneurs move quickly to build products that exploit the new cost structure, because the profit opportunity is enormous.&lt;/p&gt;
&lt;p&gt;The article's scenario includes dozens of vibe-coded delivery startups appearing rapidly, which is itself evidence of fast entrepreneurial response to cheaper intelligence. It just doesn't extend that observation to other sectors.&lt;/p&gt;
&lt;h3&gt;Where the Counter-Argument Must Be Honest&lt;/h3&gt;
&lt;p&gt;Jevons Paradox is not a universal law. It describes a tendency (a strong historical pattern), not an iron guarantee. The Citrini piece's most potent rebuttal is that prior Jevons cycles involved specific resource inputs (coal, compute, bandwidth, lighting), while AI targets the general-purpose input of intelligence itself. If AI can perform not only existing cognitive tasks but also the &lt;em&gt;new&lt;/em&gt; tasks that would emerge from demand expansion, then the rebound effect could be muted or eliminated entirely. A coal-fired loom couldn't design fashion or run a retail chain. But an AI that can code, analyze, write, plan, and reason might well be capable of staffing the very industries that Jevons expansion would create.&lt;/p&gt;
&lt;p&gt;This is a genuine uncertainty, and intellectual honesty requires acknowledging it. The question reduces to whether human judgment, taste, coordination, creativity, physical presence, and social trust constitute a durable residual demand (activities where humans remain preferred or necessary even when AI is technically capable) or whether those too get absorbed over time. The honest answer is that we don't know.&lt;/p&gt;
&lt;p&gt;What we do know is that the historical base rate strongly favors Jevons over the doom loop. Every prior prediction that a general-purpose technology would produce permanent mass unemployment (mechanized agriculture, factory automation, computerization, the internet) has been wrong, and wrong for the same reason: the predictions modeled displacement without modeling demand expansion. The Citrini piece, for all its sophistication, repeats that analytical pattern.&lt;/p&gt;
&lt;h3&gt;The Bottom Line&lt;/h3&gt;
&lt;p&gt;The Citrini piece is worth reading as a risk scenario. The transitional pain it describes is plausible, and portfolio construction should account for it. But as a base case for the future of the economy, it requires assuming that the most consistent empirical pattern in economic history (that radically cheaper inputs generate demand that exceeds displacement) has finally broken. That's a bet against a very long track record.&lt;/p&gt;
&lt;p&gt;For more on the mechanics of Jevons Paradox and how it has played out across the semiconductor industry from vacuum tubes to modern AI accelerators, see my earlier piece: &lt;a href="https://tinycomputers.io/posts/jevons-paradox.html"&gt;Jevons Paradox and the Semiconductor Industry&lt;/a&gt;.&lt;/p&gt;</description><category>ai</category><category>citrini research</category><category>demand expansion</category><category>economics</category><category>efficiency</category><category>jevons paradox</category><category>labor displacement</category><category>macroeconomics</category><category>technology</category><category>white-collar employment</category><guid>https://tinycomputers.io/posts/the-jevons-counter-thesis-why-ai-displacement-scenarios-underweight-demand-expansion.html</guid><pubDate>Tue, 24 Feb 2026 14:00:00 GMT</pubDate></item></channel></rss>