Jason Fried posted a sharp critique of the "bespoke software revolution" narrative this week. His argument: most people don't like computers, don't want software projects, and won't become builders just because AI hands them better tools. The three-person accounting firm wants the paperwork gone, not a new system to maintain. The logistics company wants optimized routes, not Joe's side project. The law firm wants leverage on their time, not a codebase.

His metaphor is good: "A powerful excavator doesn't turn a homeowner into a contractor. Most people just want the hole dug by someone else."

He's right about who builds. He's wrong about what happens next.

The Echo Chamber Is Real

Fried's observation about the software community talking to itself lands because it's obviously true. Open any tech feed and the bespoke software excitement is coming from people who already build software for a living. They're excited because AI makes their work faster and more interesting. They project that excitement onto everyone else and conclude that everyone will want to build. This is like assuming everyone wants to change their own oil because you enjoy working on cars.

Most people have no interest in building software. Not because they lack intelligence or creativity, but because software is a means to an end and they'd rather focus on the end. The accounting firm wants to close the books faster. The logistics company wants fewer empty miles. These are domain problems, not software problems, and the people who understand them best have spent their careers on the domain, not on code.

Fried identifies the outliers correctly: the people who go deep with AI building tools were already dabblers. The curiosity was already there. AI didn't create new builders; it gave existing builders a power tool. This is an important observation that the tech community consistently ignores because it's less exciting than "everyone becomes a developer."

Where Fried Stops

But Fried's analysis ends at "most people won't build," and that's where the interesting question starts. Because some people will try.

Not the majority. Not the three-person accounting firm drowning in paperwork. But the accounting firm's nephew who's "good with computers." The operations manager at the logistics company who watched a YouTube tutorial on Cursor. The paralegal at the law firm who built a spreadsheet macro once and now has access to tools that can generate entire applications from a text description.

These people exist in every organization. They're not professional developers. They don't think of themselves as builders. But they have just enough technical confidence to be dangerous, and AI tools have just lowered the barrier enough to let them act on it.

This is not a hypothetical. It's already happening. People are building internal tools with AI assistance, deploying them to their teams, and running business processes on software that no one with software judgment has reviewed. The tools work on the happy path. They do exactly what the builder asked for. The problem is what the builder didn't ask for.

The Happy Path Is All You Get

When a non-developer builds software with AI, they describe what they want: "I need a tool that takes client intake forms, extracts the relevant fields, and puts them in a spreadsheet." The AI builds it. It works. The builder is thrilled.

What the builder didn't specify, and the AI didn't volunteer:

What happens when a client submits a form with special characters that break the parser? What happens when two people submit simultaneously? What happens when the spreadsheet hits the row limit? Where are the backups? Who has access? What happens when the API key expires? What happens when the builder leaves the company and nobody knows how the tool works?

These aren't obscure edge cases. They're the standard failure modes of every software system ever built. Professional developers think about them not because they're smarter, but because they've watched systems fail in these exact ways. That accumulated experience of failure is what I've been calling the judgment layer: the part of building that AI can't replace because it requires contact with the consequences of getting it wrong.

The operations manager building a routing tool in Cursor has domain judgment about logistics. She knows which routes are efficient and which constraints matter. She does not have software judgment about error handling, data integrity, concurrent access, or failure recovery. Professional developers fail at these things constantly too. The difference is that professionals recognize the failure when it happens and have the skills to iterate toward a fix. The operations manager's tool breaks the same way, but she doesn't know it broke, doesn't know why, and doesn't know what to do about it. The AI gave her a tool that satisfies her domain judgment perfectly and her software judgment not at all, because she doesn't have any, and she doesn't know she doesn't have any.

This Has Happened Before

The counterargument writes itself: people have been building bad mission-critical software forever. Hospitals tracked patient records in Access databases. Small banks ran loan portfolios in Excel. Supply chains depended on macros that one person understood. When that person left, nobody could maintain it. The world survived.

This is true, and it's important to take seriously. "Bad software" and "functional software" are not mutually exclusive. The accounting firm's Access database was terrible by every engineering standard and it ran their business for fifteen years. The nurse's Excel tracker was a data integrity nightmare and it kept patient appointments from falling through the cracks. Fried is right that custom software has always been "bloated, confusing, and built wrong in all the ways." He's also right that it existed and that people used it.

So if bad software has always existed and the world kept turning, what changes with AI?

Velocity

The change is speed.

Building something broken in Access took months. You had to learn Access first, or find someone who knew it. You had to build the forms, design the tables, write the queries. The pace of construction imposed a natural speed limit on how fast bad software could enter production. By the time you finished, you'd encountered at least some of the failure modes, because the slow process forced you through enough iterations to stumble into them.

AI removes that speed limit. The operations manager can go from "I have an idea" to "it's running in production" in an afternoon. The intake form tool is live before lunch. The routing optimizer is deployed by end of day. The contract parser is running by Friday. Each one works on the happy path. Each one has the same class of unexamined failure modes that Access databases had. But Access databases took months to accumulate. AI-built tools accumulate in days.

More attempts. Same failure rate. More failures. Compressed into a shorter timeline. By the time the first tool breaks, three more have been deployed. By the time someone realizes the intake form tool is silently dropping records with special characters, the routing optimizer and the contract parser are already load-bearing parts of the business.

This is Jevons Paradox applied to the failure mode itself. When building software gets cheaper, you don't get the same amount of bad software for less effort. You get vastly more bad software for the same effort. The per-unit cost of production drops, total production expands, and the total volume of unreviewed, unexamined software in production grows faster than anyone anticipated.

The Judgment Bottleneck

I've argued in previous pieces that human judgment is the binding constraint in AI-augmented work. AI makes the labor cheaper; demand expands; the expansion concentrates on the one input that can't scale: the human capacity for deep, focused evaluation. The three-to-four-hour ceiling on cognitively demanding work is biological, not cultural, and no productivity tool changes it.

Software judgment is a specific instance of this general constraint. Reviewing code for failure modes, reasoning about edge cases, thinking through data integrity, anticipating what happens when components interact in unexpected ways: this is deep work. It requires the kind of sustained attention that depletes on a fixed biological schedule. And the supply of people who have this judgment is not growing. Computer science enrollment is up, but software judgment comes from experience, not coursework. You develop it by watching systems fail, and that takes years.

AI expands the rate at which software enters production. It does not expand the rate at which qualified people can review it. The production side scales. The judgment side doesn't.

And the judgment side may actually be contracting. After sixteen consecutive years of growth, undergraduate CS enrollment turned negative in 2025. The Computing Research Association (CRA) found that 62% of computing departments reported declining enrollment for 2025-26, while only 13% saw increases. At University of California campuses, CS enrollment fell 6% in 2025 after declining 3% in 2024: the first drops since the dot-com crash. Students and their parents are reading the headlines about AI displacing entry-level developers and steering toward fields they perceive as more durable.

The irony is thick. The fear that AI will replace software developers is reducing the supply of software developers at the exact moment that AI is massively expanding the demand for software judgment. Students are fleeing the field because they think AI can do the work. AI is simultaneously creating more work that only humans with software judgment can evaluate. The enrollment decline doesn't just fail to solve the judgment bottleneck; it tightens it.

The gap between "software that exists" and "software that someone qualified has evaluated" widens from both directions: production accelerates while the pipeline of qualified reviewers narrows. Something has to give, and what gives is the review.

Software Slop

I wrote an essay about what makes AI-generated content "slop": superficial competence masking an absence of substance. The text looks right. The grammar is clean. The structure is logical. But it doesn't commit to anything, doesn't engage with anything, doesn't mean anything. It fills the container without filling it with content.

AI-generated software has the same property. The code is syntactically correct. The UI has proper styling, responsive layouts, loading spinners, appropriate error messages. It passes every visual inspection. A manager looking at a demo sees a professional application. A user running through the standard workflow sees something that works.

Underneath: no input validation beyond what the framework provides for free. No error handling beyond try/catch blocks that swallow exceptions. No concurrency protection. No backup strategy. No audit trail. No security beyond defaults. The software is superficially competent and structurally hollow, and you cannot tell the difference by looking at it.

This is what distinguishes the AI-built software problem from the Access database problem. Access databases looked like Access databases. The limitations were visible in the interface. The grey forms, the flat tables, the clunky queries: everyone could see they were using a tool that was not designed for what they were doing with it. The expectations were calibrated, even if the risks weren't.

AI-built software looks like real software. The surface quality has been democratized. What hasn't been democratized is the structural integrity underneath. And because the surface looks professional, the people using it have no signal that anything is missing. The feedback loop that would normally tell you "this is a prototype, not a product" has been severed. The prototype looks like the product, and nobody in the room can tell the difference except the people with software judgment, who weren't in the room when it was built.

What Fried Misses

Fried's framework has one gap. He says the demand for bespoke software won't grow because people don't want software projects. But the demand is already growing, not because people want to build, but because AI collapsed the apparent cost of building to near zero. The operations manager didn't set out to start a software project. She set out to solve a routing problem, and the software was a side effect that happened so fast she didn't register it as a project.

This is the mechanism Fried doesn't account for. The excavator doesn't turn the homeowner into a contractor. But it does let the homeowner dig a hole so fast that they're standing in it before they realize they don't know what they're doing. The question isn't whether they wanted to dig. It's what happens now that the hole exists and the house is being built on top of it.

The bespoke software revolution won't come from people deliberately choosing to become builders. It will come from people accidentally becoming builders because the tools made it so frictionless that building happened before the decision to build was consciously made. And the software they produce will be the fastest-growing category of technical debt in history, because it was created without judgment, deployed without review, and adopted without anyone understanding what's underneath.

Who Benefits

Fried is right that the excitement about bespoke software comes from software makers. What he doesn't say is why they should be excited. It's not because everyone becomes a builder. It's because everyone becomes a client.

Every operations manager who builds a broken routing tool and discovers it doesn't handle the edge cases is a future client for someone who can build it properly. Every accounting firm that deploys an AI-built intake system and loses data is a future client for someone who understands data integrity. The DIY phase doesn't replace professional software development. It creates demand for it, at a scale and urgency that didn't exist before, because now the potential clients have firsthand experience with why the problem is hard.

The judgment bottleneck doesn't prevent the Jevons expansion. It shapes it. More software gets attempted. More software fails. The failures create demand for the constrained resource (qualified judgment) at a rate that exceeds supply. The people who have software judgment become more valuable, not less, because the volume of work that needs their attention has exploded.

Fried's excavator metaphor is correct. Most homeowners won't become contractors. But the excavator lets them dig enough bad foundations that the contracting business booms. AI doesn't democratize building. It democratizes demand.

The Forecast

I'll make a prediction specific enough to be wrong about. Within three years, the majority of data-loss and security incidents at small and mid-sized businesses will trace back to AI-assisted internal tools built without professional review. Not because the AI wrote bad code (the code will be syntactically fine), but because the person directing the AI didn't know what to ask for and didn't know what they were missing. The failure mode won't be dramatic. It will be silent: records that were never backed up, access controls that were never configured, race conditions that corrupt data once a month in a pattern nobody notices until the audit.

There is an irony here that I should name. The people who most need to read this are the ones who never will. The operations manager vibing a routing tool into production this afternoon is not reading a blog about Jevons Paradox and GPU inference. She's solving her problem, and it feels like it's working, and no article on a site called Tiny Computers is going to reach her before the first silent failure does.

The bespoke software revolution is real. It's just not the revolution anyone is advertising. It's not a million people building great custom tools. It's a million people building adequate tools with invisible structural deficiencies, deployed to production at a velocity that outpaces the world's capacity to review them. The excavator is powerful, the foundations are being dug, and most of them are too shallow.