In 1865, William Stanley Jevons published The Coal Question and identified a paradox: James Watt's more efficient steam engine didn't reduce coal consumption. It increased it. The efficiency gain made coal-powered industry more profitable, which drove more investment, which consumed more coal. The per-unit savings were overwhelmed by the expansion in total units demanded.
In 1948, Norbert Wiener published Cybernetics: Or Control and Communication in the Animal and the Machine and described a mechanism: systems that feed their outputs back into their inputs will either stabilize (negative feedback) or accelerate (positive feedback). A thermostat is negative feedback: the output (heat) reduces the input (the gap between current and target temperature). A microphone pointed at a speaker is positive feedback: the output (sound) amplifies the input (sound), and the system screams.
Jevons saw what happened. Wiener explained why.
They never met. Jevons died in 1882, twelve years before Wiener was born. Their fields barely overlapped. Jevons was an economist and logician working in Manchester. Wiener was a mathematician and engineer at MIT. Neither cited the other. Neither would have had reason to. But they were describing the same phenomenon from different sides: Jevons from economics, Wiener from control theory. Jevons identified the paradox. Wiener provided the mechanism. Together, they explain something about AI that the current conversation consistently misses: the reason demand expands when cognitive tools get cheaper isn't economic irrationality. It's positive feedback. It's the system doing exactly what feedback systems do.
Wiener's Machines
Wiener was not an abstract theorist. He built anti-aircraft fire control systems during World War II, predicting the future position of enemy aircraft based on their observed trajectories. The mathematical problem was filtering signal from noise in real-time feedback data, and the solution required treating the human pilot as a component in a mechanical system: a system that could be modeled, predicted, and countered.
This experience shaped everything he wrote afterward. In Cybernetics, Wiener argued that communication and control were fundamentally the same problem, whether the system involved nerves, wires, or social institutions. A factory is a feedback system. An economy is a feedback system. A conversation is a feedback system. The mathematics of regulation and stability apply to all of them.
In 1950, he published The Human Use of Human Beings, a book aimed at general readers. Its central argument: automation would transform society not by replacing humans but by changing the feedback loops that humans operate within. The automated factory doesn't just make products without workers. It creates a system where the speed of production is no longer limited by human labor, which means the system's dynamics shift to whatever the next bottleneck happens to be.
Wiener's most famous warning was blunt: "The automatic machine is the precise economic equivalent of slave labor. Any labor which competes with slave labor must accept the economic conditions of slave labor." He predicted that automation would produce unemployment that would make the Great Depression "seem a pleasant joke." He wrote this in 1950, when computers filled rooms and could barely calculate ballistic tables.
The Jevons Mechanism
Jevons didn't have the vocabulary of cybernetics. He described his paradox in economic terms: efficiency improvements reduce per-unit cost, lower cost increases demand, increased demand outweighs the efficiency gain, total consumption rises. He was observing a positive feedback loop, but he described it as a paradox because the economic framework he was working within predicted the opposite. If coal becomes more efficient, you should need less of it. The loop that amplifies demand was invisible in the model.
Wiener's framework makes the loop visible. Here's the cybernetic translation of Jevons:
1. A system component becomes more efficient (Watt's steam engine, a cheaper semiconductor, an AI model).
2. The efficiency gain reduces the cost of the system's output.
3. Lower cost makes new applications viable that were previously too expensive.
4. New applications create new demand for the now-cheaper component.
5. The new demand feeds back into step 1 as pressure for more efficiency, more production, more investment.
6. The loop accelerates.
This is a positive feedback loop. The output (cheaper goods, more applications) amplifies the input (demand for the efficient component). There is no negative feedback mechanism to stabilize the system. The loop runs until it hits an external constraint: a physical limit on the resource, a regulatory intervention, or the saturation of all possible demand.
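The loop is short enough to state in code. Below is a minimal sketch in Python; the constants are invented for illustration, and the only claim is directional: total consumption rises whenever demand expands faster than efficiency improves, which is exactly the condition Jevons observed with coal.

```python
# Minimal sketch of the Jevons loop as positive feedback.
# All constants are illustrative; only the direction of the trend matters.

efficiency = 1.0  # output produced per unit of coal (arbitrary units)
demand = 100.0    # units of output the economy wants

for cycle in range(6):
    consumption = demand / efficiency  # total coal burned this cycle
    print(f"cycle {cycle}: efficiency={efficiency:.2f}, "
          f"consumption={consumption:.1f}")
    efficiency *= 1.2  # step 1: the component gets 20% more efficient
    demand *= 1.5      # steps 3-4: cheaper output unlocks 50% more demand

# Consumption rises every cycle even though efficiency improves every
# cycle, because demand growth (x1.5) outruns the efficiency gain (x1.2).
# Nothing in the loop pushes back.
```

Set the demand multiplier below the efficiency multiplier and consumption falls every cycle: that's the outcome classical economics predicted, and the one Jevons didn't find.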
Jevons observed steps 1 through 4 with coal. He didn't have the mathematical framework to describe steps 5 and 6 as a feedback loop. Wiener had the framework but was focused on machines and automation, not on resource economics. The connection between them is that Jevons Paradox is a specific instance of positive feedback in economic systems, and positive feedback is the phenomenon Wiener spent his career analyzing.
The AI Loop
I've been writing about Jevons Paradox and AI for months. The argument: AI makes cognitive output cheaper, demand for cognitive output expands beyond the efficiency gain, and the expansion concentrates pressure on the one input that can't scale: human judgment. The Vampire piece described the human cost. The Excavator piece described the software quality cost. The split piece described how the craft concentrates in the judgment layer.
What I didn't do was explain the mechanism. Why does demand expand when a cognitive input gets cheaper? Why doesn't the system reach equilibrium at lower total consumption, the way classical economics predicts? What force drives the expansion?
Wiener's answer: positive feedback.
Here's the AI loop, stated in cybernetic terms:
- AI makes code generation cheaper (the efficiency gain).
- Cheaper code generation makes new software projects viable (the demand expansion).
- New projects produce software that requires review, testing, debugging, and maintenance (the output).
- Review and debugging create demand for more AI assistance (the feedback).
- The loop accelerates: more projects, more software, more review, more AI, more projects.
At no point does the loop include a mechanism for slowing down. There is no thermostat. The "temperature" (volume of software in production) rises without limit until it hits an external constraint.
I can see this loop operating in my own work. I built DirtScout, a full-stack land acquisition platform, in a series of conversations with Claude Code: 29,000 lines of code across Python, TypeScript, and infrastructure-as-code. The project would have taken months to type by hand. With AI, I built it in days. But building it in days meant I immediately started adding features: soil analysis, environmental assessments, auction tracking, deal pipeline management, offer letter generation. Each feature was a conversation. Each conversation produced code that needed to be reviewed, tested, and maintained. The faster I built, the more I wanted to build, and the more I built, the more review work accumulated. The loop ran. I didn't notice it running until the maintenance surface area was larger than anything I'd built before.
That's Wiener's loop at the individual level. At the organizational level, the same dynamic plays out with more people and higher stakes. Every developer using AI-assisted tooling ships more code, which creates more surface area for bugs and security vulnerabilities, which creates more demand for review, which creates more demand for AI-assisted review tooling, which ships more code.
The external constraint, as I've argued in previous pieces, is human judgment. The three-to-four-hour ceiling on deep work is biological. It doesn't expand because the feedback loop demands more of it. It's a fixed resource being consumed by an accelerating process. In Wiener's terms, the human component in the feedback loop is a bottleneck with a fixed maximum throughput. The system can't route around it (the judgment is necessary) and can't expand it (the biology doesn't scale). So the system does the only thing a positive feedback loop can do when it hits a fixed constraint: it overloads the constraint.
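As a sketch, the overload is a queue. The numbers below are invented; the shape is the point: production compounds, review throughput stays flat, and the difference accumulates as an unreviewed backlog.

```python
# Minimal sketch: an accelerating producer feeding a fixed-throughput
# reviewer. Constants are illustrative.

production = 10.0       # units of code shipped this cycle
review_capacity = 12.0  # fixed: what human judgment can absorb per cycle
backlog = 0.0           # unreviewed code piling up in production

for cycle in range(8):
    backlog += production                     # new output joins the queue
    backlog -= min(backlog, review_capacity)  # the human reviews what they can
    print(f"cycle {cycle}: production={production:.0f}, backlog={backlog:.0f}")
    production *= 1.3  # the positive feedback loop accelerates output
```

Once production crosses review_capacity, the backlog grows without bound. The human in this loop is the review_capacity constant, and it doesn't grow just because the loop demands that it grow.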
That's burnout. That's what Steve Yegge described in "The AI Vampire." Wiener would have recognized it instantly. The human component in an accelerating feedback loop reaches its throughput limit and degrades. The system doesn't stop. The system doesn't care. The system is a feedback loop, and feedback loops don't have preferences about their components.
Wiener's Warning, Updated
Wiener warned that automation would make human labor economically equivalent to slave labor. He was wrong about the specifics (manufacturing employment declined but didn't collapse) but right about the dynamic. The feedback loop he described (automation reduces labor cost, which drives more automation, which further reduces labor cost) played out exactly as predicted. It just played out over decades instead of years, and the economy adapted by shifting labor to sectors that weren't yet automated.
The AI version of this warning is different in a way that matters. Wiener's automation loop operated on physical labor. Muscle has substitutes: machines. When the feedback loop overloaded the human muscle component, the system routed around it with hydraulics, robotics, and assembly lines. The human moved to cognitive work, where machines couldn't follow.
AI's feedback loop operates on cognitive labor. Judgment does not have substitutes. When the feedback loop overloads the human judgment component, the system can't route around it the way manufacturing routed around physical labor. There is no higher-order activity to retreat to. Judgment is the top of the stack. The feedback loop either overloads it (burnout) or degrades it (review quality drops, software slop accumulates, the Excavator scenario plays out).
Wiener saw this possibility in the abstract. In The Human Use of Human Beings, he wrote: "The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves." He was pushing back against the utopian narrative of his own era: the idea that automation would create leisure. His counterclaim was that automation would shift the struggle to a harder domain. He was right, and the harder domain turned out to be exactly the one AI is now pressuring: the limits of human cognition.
The Speed Problem
There's a dimension of the AI feedback loop that Wiener's industrial-era examples didn't anticipate: speed.
Wiener's factory automation loop ran at the speed of manufacturing. It took years to design a new factory, months to retool an assembly line, weeks to train workers on new processes. The feedback loop was real, but it operated on a timescale that allowed human institutions (unions, regulations, education systems) to adapt. Walter Reuther, the president of the United Auto Workers, received Wiener's 1949 letter warning about automation and had years to develop a response. The loop was slow enough for governance.
The AI feedback loop runs at the speed of software. An operations manager can go from "I have an idea" to "it's in production" in an afternoon. A developer can ship ten features in the time it used to take to ship one. The loop cycles in hours, not years. Human institutions that adapted to the manufacturing automation loop over decades don't have decades to adapt to the AI loop. They have the time between one deployment and the next.
This is the Jevons Paradox running at software speed. Coal consumption took decades to double after Watt's engine. Computing demand took years to double after each semiconductor generation. AI-assisted software production can double in months. The feedback loop is the same. The clock rate is different. And the human component's clock rate (the biological ceiling on judgment) hasn't changed at all.
The supply side is contracting at the same time. After sixteen consecutive years of growth, undergraduate computer science enrollment began declining in 2025. The Computing Research Association found that 62% of computing departments reported declining enrollment for 2025-26. Students and their parents are reading the headlines about AI replacing developers and steering toward fields they perceive as more durable. The feedback loop produces an ironic secondary effect: the fear of automation reduces the supply of the human component that the accelerating system needs most. The loop runs faster. The pipeline of people qualified to govern it narrows. Wiener's warning about building governance structures before the loop overloads becomes more urgent as the pool of people who could build those structures shrinks.
Wiener and Heidegger
Wiener and Heidegger never engaged with each other's work, as far as I know. They were writing at the same time (late 1940s, early 1950s), about the same phenomenon (technology reshaping human life), and they arrived at complementary conclusions from completely different starting points.
Heidegger, as I wrote in the Enframing piece, argued that technology changes how we see the world. Everything becomes standing reserve: raw material to be ordered and consumed. The river becomes a power source. The specification becomes code. The transformation is ontological: it changes what things are, not just what we do with them.
Wiener argued that technology changes the dynamics of the systems we operate within. Feedback loops accelerate. Bottlenecks shift. Components that were adequate at one cycle speed become inadequate at a faster one. The transformation is mechanical: it changes the forces acting on us, not necessarily how we understand them.
The two frameworks aren't contradictory. They're describing different aspects of the same process. Heidegger explains why we treat the Zilog manual as raw material for code generation (Enframing). Wiener explains what happens when we do it at scale (positive feedback, demand expansion, bottleneck overload). Jevons measured the economic result (total consumption rises despite efficiency gains).
There's a useful way to layer them. Heidegger describes the precondition: technology must first transform how we see the world (specifications become standing reserve) before the feedback loop can operate. You can't accelerate production of something you don't yet see as producible. Enframing opens the door. Wiener's loop walks through it. Jevons counts what's on the other side.
The sequence matters for AI. First, we began seeing cognitive tasks as automatable (Heidegger's shift in perception). Then, AI tools made the automation practical and cheap (Wiener's efficiency gain). Then, demand for cognitive output expanded beyond what anyone predicted (the Jevons Paradox). Each step enables the next. The feedback loop couldn't run until the Enframing was in place, and the economic expansion couldn't happen until the loop was running.
Three disciplines. One phenomenon. The feedback loop that Jevons couldn't name, Wiener formalized, and Heidegger diagnosed as a transformation in our relationship to the world.
The Missing Thermostat
Every stable system has negative feedback. A thermostat, a voltage regulator, a predator-prey population cycle: something measures the output and adjusts the input to keep the system within bounds. Positive feedback without negative feedback is, by definition, unstable. The microphone screams until someone unplugs it.
The AI feedback loop currently has no thermostat. There is no mechanism that measures the volume of unreviewed software in production and slows the rate of production accordingly. There is no mechanism that measures developer burnout and reduces the demand for cognitive output. There is no mechanism that measures the ratio of AI-generated code to human-reviewed code and raises an alarm when it crosses a threshold.
Wiener would argue that this is the actual problem. Not AI itself (a tool, a component, an efficiency gain), but the absence of negative feedback in the system that AI accelerates. His entire career was about designing feedback systems that stabilize rather than explode. His warning about automation wasn't "don't build machines." It was "build the governance structures that keep the feedback loop from overloading its human components."
Wiener's 1949 letter to Reuther made exactly this argument. He didn't tell Reuther to smash the machines. He told him to prepare the workforce and the institutions for a system that would accelerate beyond their current capacity to manage. The letter went largely unheeded.
We're in the same position now. The feedback loop is running. The human component is approaching its throughput limit. The thermostat doesn't exist. Someone needs to build it, and the people best positioned to do so are the ones inside the loop: the developers and decision-makers who can see the acceleration because they're experiencing it.
Wiener died in Stockholm in 1964 at the age of sixty-nine, two decades before the personal computer and six decades before large language models. He never saw the system he described reach the scale it's reaching now. But the mathematics he wrote down in 1948 describe it precisely. Positive feedback without negative feedback is unstable. The system will find its constraint and overload it. The only question is whether we build the thermostat before or after the overload.
What makes Wiener worth reading today isn't his specific predictions (some were right, some were wrong, the timeline was consistently too compressed). It's his framework. He understood that technological change is not a series of discrete events but a system of coupled feedback loops. Each efficiency gain changes the dynamics of the system it operates within. Each change in dynamics creates pressure on whatever component is now the bottleneck. And each bottleneck, when overloaded, produces consequences that feed back into the system and accelerate the next cycle.
That framework applies to coal in 1865, to factory automation in 1950, and to AI-assisted cognitive work in 2026. The specific resources change. The specific bottlenecks change. The feedback dynamics don't.
John von Neumann, Wiener's contemporary and one of the minds his work most influenced, once said that young mathematicians should not worry about whether their work would be useful because "truth is much too complicated to allow anything but approximations." Wiener's approximation of the feedback dynamics of technological change was good enough that it still describes the system seventy-eight years after he formalized it. Whether it's good enough to help us build the thermostat before we need it is the question his work leaves us with.
What a Thermostat Might Look Like
Wiener didn't just diagnose problems. He designed solutions. His entire field was about building systems that regulate themselves. If he were alive today, he'd be asking: what does negative feedback look like in an AI-accelerated software economy?
Some possibilities:
Mandatory review ratios. For every N lines of AI-generated code deployed to production, M lines must be reviewed by a qualified human. The ratio creates a coupling between the production rate and the review rate, forcing the system to slow down when review capacity is saturated. This is a thermostat: the output (deployed code) is measured against a constraint (review capacity), and the input (generation rate) is throttled accordingly. A minimal code sketch of this mechanism follows the list.
Liability assignment. If AI-generated code causes a data breach or financial loss, who pays? Currently, nobody in particular. Assigning liability to the person who deployed the code (not the person who prompted the AI) creates negative feedback: the cost of deployment failure feeds back into the decision to deploy, making people more cautious about shipping unreviewed code. Insurance markets would price this risk and create their own feedback mechanisms.
Institutional adaptation. This is what Wiener actually advocated. Not technical solutions but organizational ones. He told Walter Reuther to prepare the workforce for automation. The equivalent today: companies need to build review capacity at the same rate they build production capacity. Every developer who ships AI-generated code needs a corresponding increase in testing, security review, and architectural oversight. The organizations that treat AI as free productivity without investing in review are the ones that will hit the overload first.
Cultural awareness. Tristan Harris and the Center for Humane Technology have been arguing since 2023 that AI is being deployed faster than any technology in history under maximum incentives to cut corners on safety. Harris makes a distinction that Wiener would have appreciated: the difference between the "possible" (AI's theoretical benefits) and the "probable" (what actually happens given current incentive structures). The probable outcome, without intervention, is that companies race toward capability because the competitive feedback loop punishes restraint. Harris's proposed response is building global consensus that the current trajectory is unacceptable, the way the nuclear test ban and the Montreal Protocol established consensus before those feedback loops ran to their conclusions. In Wiener's terms, Harris is trying to build the thermostat at the cultural level: changing the system's objective function so that it optimizes for something other than pure output volume.
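Here's what the first of those possibilities might look like as a control loop. This is a minimal sketch under stated assumptions, not a real tool: the ratio, the constants, and the throttle policy are all invented, and a production version would have to define "reviewed" far more carefully. What matters is the structure: deployment is gated on review, and saturation feeds back to slow generation.

```python
# Minimal sketch of a review-ratio thermostat. All names and constants
# are hypothetical; only the feedback structure matters.

RATIO = 0.5             # human-reviewed lines required per generated line (M/N)
review_capacity = 12.0  # fixed human review throughput per cycle
generated = 30.0        # lines the AI produces this cycle
pending = 0.0           # generated code waiting on its review quota

for cycle in range(8):
    pending += generated
    deployable = min(pending, review_capacity / RATIO)  # review gates deployment
    pending -= deployable
    print(f"cycle {cycle}: generated={generated:.0f}, "
          f"deployed={deployable:.0f}, pending={pending:.0f}")
    # Negative feedback: a pending backlog signals saturation and slows
    # generation; slack lets it grow again.
    generated *= 0.7 if pending > 0 else 1.3
```

Run it and the deployment rate oscillates around the setpoint (review_capacity / RATIO) instead of running away, the hunting behavior of a thermostat. Compare the open loop sketched earlier, where the same producer buries the same reviewer.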
None of these exist at scale today. The thermostat is unbuilt. The loop runs open.
Jevons told us what happens. Wiener told us why. The question that remains is whether anyone is building the feedback mechanism that prevents the system from screaming.