Les Orchard's "Grief and the AI Split" identifies something real. AI tools have revealed a division among developers that was previously invisible, because before these tools existed, everyone followed the same workflow regardless of motivation. Now the motivations are exposed. Some developers grieve the loss of hand-crafted code as a practice with inherent value. Others see the same tools and feel relief: the tedious parts are handled, the interesting parts remain. Orchard frames this as a split between people. Craft-oriented developers on one side, results-oriented developers on the other.
He's right that the split exists, and the piece clearly resonated with software creators because it names something people have been feeling but couldn't articulate. The observation is sharp. Where I think it can be extended is in where the line falls.
Orchard draws the line between people. I think it falls between tasks. The same person crosses that line dozens of times a day, moving between work that demands human judgment and work that doesn't, between moments where the craft concentrates and moments where it was never present in the first place. The split is real. It's just not an identity.
The Kernel I Didn't Write
JokelaOS is a bare-metal x86 kernel: 2,000 lines of C and NASM assembly, booting from a Multiboot header through GDT (Global Descriptor Table, which defines memory segments and access rights) and IDT (Interrupt Descriptor Table, which maps interrupt vectors to service routines) setup, paging, preemptive multitasking with Ring 3 isolation, a network stack that responds to pings, and an interactive shell. No forks. No libc. Every memcpy, every printf, every byte-order conversion written from scratch.
I didn't write most of it. Claude did.
In Orchard's framework, this should place me firmly in the "results" camp. I used AI to produce 2,000 lines of systems code; clearly I care about the outcome, not the process. But that framing misses what actually happened during the project.
The decisions that made JokelaOS work were not typing decisions. They were sequencing decisions: bring up serial output first, because without it you have no diagnostics for anything that follows. Initialize the GDT before the IDT, because interrupt handlers need valid segment selectors. Get the bump allocator working before the PMM (Physical Memory Manager), because page tables need permanent allocations before you can manage dynamic ones. These choices come from understanding how x86 protected mode actually works, which subsystems depend on which, and what the failure modes look like when you get the order wrong.
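That sequencing is really a dependency graph, and a valid bring-up order falls out of a topological sort. A sketch using Python's stdlib `graphlib` — the subsystem names and edges here are my shorthand for the dependencies described above, not identifiers from the JokelaOS source:

```python
from graphlib import TopologicalSorter

# Each subsystem lists what must already be up before it can initialize.
deps = {
    "serial":     [],                  # first: everything else needs diagnostics
    "gdt":        ["serial"],
    "idt":        ["gdt", "serial"],   # handlers need valid segment selectors
    "bump_alloc": ["serial"],
    "pmm":        ["bump_alloc"],      # page tables need permanent allocations
    "paging":     ["pmm", "idt"],
    "scheduler":  ["paging", "idt"],
}

# static_order() yields every node after all of its prerequisites.
order = list(TopologicalSorter(deps).static_order())
```

Get the edges wrong — say, IDT before GDT — and the sort still succeeds; it's encoding the dependencies correctly that takes the understanding of protected mode.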
Claude generated the GDT setup code. I decided what the GDT entries should be, caught the access byte errors, and debugged the triple faults when segment selectors were wrong. Claude wrote the process scheduler. I determined that the TSS (Task State Segment, which tells the CPU where to find the kernel stack when switching privilege levels) needed updating on every context switch and diagnosed the General Protection Faults that occurred when it wasn't. Claude produced the RTL8139 network driver. I decided to bring up ARP before ICMP, caught a byte-order bug in the IP checksum, and validated that the packets leaving QEMU were actually well-formed.
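That class of byte-order bug is concrete enough to sketch. The Internet checksum (RFC 1071) is the one's-complement sum of the header taken as 16-bit big-endian words; sum the same bytes as little-endian words and you get a plausible-looking but wrong value. A minimal Python version, using a standard textbook IPv4 header rather than anything from the JokelaOS stack:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071: one's-complement sum of the data as 16-bit big-endian words."""
    if len(data) % 2:                  # pad odd-length input with a zero byte
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                 # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Textbook IPv4 header with its checksum field (bytes 10-11) zeroed:
header = bytes.fromhex("45000073000040004011" "0000" "c0a80001c0a800c7")
checksum = internet_checksum(header)   # 0xB861 for this header

# A receiver verifies by checksumming the header *including* the field;
# a correct header sums to zero:
full = header[:10] + checksum.to_bytes(2, "big") + header[12:]
assert internet_checksum(full) == 0

# Unpack with "<H" (little-endian) instead of "!H" and you get a different,
# wrong value -- the byte-order bug class caught in the RTL8139 bring-up.
```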
The typing was delegated. The architecture, the sequencing, the diagnosis, the validation: those were mine. If you asked me whether JokelaOS involved craft, I would say yes, more than most projects I've done. If you asked me where the craft was, I would not point at any line of code.
The Board That Failed Twice
The Giga Shield tells a longer version of the same story, and it's messier, because hardware involves the physical world in a way that software doesn't.
The project started with a $468 Fiverr commission. I gave a designer in Kenya the spec documents, the components I thought should be used, and the form factor requirements: an Arduino Giga R1 shield with bidirectional level shifters, 72 channels of 3.3V-to-5V translation, KiCad deliverables. He produced a clean design: nine TXB0108PW auto-sensing translators on a two-layer board. PCBWay sponsored and handled the fabrication. Professional work, quick turnaround.
Then I plugged in the RetroShield Z80 and the board was blind.
The TXB0108 detects signal direction automatically by sensing which side is driving. For most applications, that's a feature. For a Z80 bus interface, it's fatal. During bus cycles, the Z80 tri-states its address and data lines. The pins go high-impedance: not high, not low, floating. The TXB0108 can't determine direction from a floating signal. It guesses wrong, and the Arduino reads garbage. I'd paid $468 for a board that couldn't see half of what the processor was doing.
Nobody caught this in the design phase. Not the Fiverr designer, who was working from the spec I gave him. Not me, when I reviewed the schematic. The TXB0108 datasheet doesn't scream "incompatible with tri-state buses"; you have to understand what tri-stating means in practice and recognize that auto-sensing can't handle it. That understanding came from plugging the board in and watching it fail.
The redesign used Claude to replace all nine auto-sensing translators with SN74LVC8T245 driven level shifters. Driven shifters have an explicit direction pin: you tell them which way to translate, and they do it regardless of whether the signal is being actively driven. Claude wrote Python scripts that pulled apart the KiCad schematic files, extracted all 72 signal mappings across 9 ICs, and generated new board files with the correct components and pin assignments.
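KiCad's modern file formats are S-expression trees, so scripts like those reduce to a pattern: tokenize, parse, walk, collect. Here is a toy version of that pattern on an invented fragment — the real `.kicad_sch` grammar is far richer, and the `(net ...)` property below is made up for illustration, not KiCad's actual schema:

```python
import re

def parse_sexp(text: str):
    """Minimal S-expression reader: parens, quoted strings, bare atoms."""
    tokens = re.findall(r'\(|\)|"[^"]*"|[^\s()]+', text)
    def read(i):
        items = []
        while i < len(tokens):
            tok = tokens[i]
            if tok == "(":
                node, i = read(i + 1)
                items.append(node)
            elif tok == ")":
                return items, i + 1
            else:
                items.append(tok.strip('"'))
                i += 1
        return items, i
    return read(0)[0]

def pin_map(tree):
    """Walk the tree and collect pin-number -> net-name pairs."""
    mapping = {}
    def walk(node):
        if isinstance(node, list):
            if node and node[0] == "pin" and len(node) > 2:
                for child in node[2:]:
                    if isinstance(child, list) and child[:1] == ["net"]:
                        mapping[node[1]] = child[1]
            for child in node:
                walk(child)
    walk(tree)
    return mapping

# Invented fragment standing in for one translator IC's schematic entry:
fragment = '(symbol (pin "1" (net "Z80_A0")) (pin "2" (net "Z80_D7")))'
```

Scaled up across nine ICs and 72 signals, the same walk-and-collect shape produced the mapping tables that drove the regenerated board files.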
I was about to submit the revised design to PCBWay when I realized we needed a tenth level shifter. The original nine covered not just the digital pins that map to the Z80 RetroShield but all of the analog pins on the Giga, giving complete 3.3V-to-5V coverage across the board. But with driven shifters, each IC has a single direction pin controlling all eight channels. Signals that need to travel in opposite directions at different times can't share an IC without creating bus contention. Some of the channel assignments had conflicting direction requirements, and the only fix was a tenth IC to separate them.
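The tenth-IC constraint is mechanical enough to check in code: channels can share an SN74LVC8T245 only if they share a direction requirement, because one DIR pin controls all eight channels. A sketch with invented group sizes — the real 72-signal assignment lives in the KiCad files:

```python
from collections import Counter
from math import ceil

def shifters_needed(signal_dirs: dict[str, str], channels_per_ic: int = 8) -> int:
    """ICs required when every channel on an IC must share one direction
    requirement (a single DIR pin drives all eight channels)."""
    groups = Counter(signal_dirs.values())
    return sum(ceil(n / channels_per_ic) for n in groups.values())

# Auto-sensing TXB0108s impose no grouping: 72 channels pack into 9 ICs.
uniform = {f"sig{i}": "auto" for i in range(72)}

# Driven shifters split the same 72 channels by direction requirement.
# These group sizes are invented for illustration:
driven = ({f"out{i}": "a_to_b" for i in range(40)}
          | {f"in{i}": "b_to_a" for i in range(26)}
          | {f"bus{i}": "dir_from_mcu" for i in range(6)})
```

Because the group sizes don't divide evenly into eights, the channel count that fit on nine auto-sensing ICs needs ten driven ones.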
Adding one more TSSOP-24 package to an already dense two-layer board broke the trace routing. The board that had been routable with nine ICs was unroutable with ten. Moving to four layers helped but still left two to four traces with no viable path. The solution was a six-layer stackup, which needed a copper pour layer to act as a common ground plane. The open-source autorouter Freerouting couldn't handle a full copper pour; its architecture has no concept of flood-fill connectivity. So I used Quilter.ai, an AI trace router, to route the six-layer board with the ground plane that the open-source tooling couldn't represent.
Count the layers of delegation and intervention in this project. I delegated the initial design to a human professional. Physics revealed the flaw. I delegated the redesign to an AI. I caught the missing tenth shifter before it went to fabrication. I delegated the trace routing to another AI. PCBWay is currently manufacturing these boards. At every stage, the work alternated between labor that could be delegated and judgment that couldn't. The Fiverr designer did skilled labor. Claude did skilled labor. Quilter.ai did skilled labor. The craft was never in the labor. It was in knowing when the labor was wrong.
Where the Craft Actually Lives
Both of these projects point at the same thing. The craft isn't in the typing, the routing, or the code generation. It's in a layer that sits above and around all of that: the judgment layer.
The judgment layer is where you decide what to build next. Where you recognize that the output is wrong before you can articulate why. Where you sequence subsystems based on dependency chains that aren't documented anywhere. Where you plug a board in and notice that the readings don't make sense. Where you catch a missing component that the AI, the designer, and the autorouter all missed because none of them were thinking about the problem at that level.
This layer has specific properties. It requires contact with the problem domain, not just the code or the schematic but the actual behavior of the system under real conditions. It depends on accumulated experience: understanding what tri-stating means in practice, knowing that x86 protected mode has forty years of backward-compatible traps waiting for you. And it's the part that AI is worst at, precisely because it requires grounding in physical or logical reality that language models don't have access to.
The TXB0108 failure is the clearest example. The information needed to predict this failure existed in the datasheets. But recognizing its relevance required understanding what a Z80 bus cycle actually looks like at the electrical level, which required either experience with the hardware or a simulation environment that nobody had set up. No amount of language model capability substitutes for plugging in the board and watching it fail.
The Same Person in Both Modes
Orchard describes himself as results-oriented. He learned programming languages as "a means to an end" and gravitated toward AI tools because they let him focus on the outcome. He acknowledges that craft-oriented developers experience genuine loss. His framing is empathetic, but it still draws the line between people.
The line doesn't hold, because I'm both of his archetypes depending on the hour.
On Tuesday I might use Claude to generate a hundred lines of systemd service configuration because I need Ollama running on a machine and I don't care about the elegance of the unit file. On Wednesday I might spend three hours hand-debugging why rocm-smi reports GPU utilization at zero percent: reading kernel logs, checking DKMS module versions, testing HSA_OVERRIDE_GFX_VERSION values, loading the amdgpu module manually because it didn't auto-load at boot. The first task is pure delegation. The second is pure craft. Both are mine. Both happened this week.
When I wrote the economics piece, I used Claude to draft sections and I measured real power draw with nvidia-smi and rocm-smi at 500-millisecond intervals. I let AI handle the prose scaffolding and I personally caught that Ollama on the Strix Halo had been running entirely on CPU because the systemd service file was missing an environment variable. Every benchmark I'd trusted before finding that bug was wrong. No AI caught it. I caught it because the numbers felt off.
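The fix for that class of bug is a one-variable change in the unit file. A hypothetical sketch of the shape, not the actual service on that machine — the variable and value GPU inference needs depend on the runtime and hardware, and the placeholder below is illustrative only:

```ini
# /etc/systemd/system/ollama.service.d/override.conf  (hypothetical drop-in)
[Service]
# Without the right environment here, the daemon can silently fall back to
# CPU inference. HSA_OVERRIDE_GFX_VERSION is one ROCm example; the value
# is a placeholder, not a recommendation.
Environment="HSA_OVERRIDE_GFX_VERSION=x.y.z"
```

After `systemctl daemon-reload` and a service restart, `rocm-smi` should show utilization moving off zero — which is exactly the check that no benchmark harness did for me.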
These aren't different people. They're different tasks. The identity framing ("I'm a craft developer" or "I'm a results developer") obscures what's actually a task-level decision that experienced people make constantly: this piece of work benefits from my full attention; this piece doesn't.
What the Grief Is About
The craft-grief that Orchard describes is real and worth taking seriously. Part of it targets the wrong thing. Part of it doesn't.
What's being mourned is typing as the bottleneck. For forty years, the primary constraint on software projects was the speed at which a human could produce correct code. Design mattered, architecture mattered, but someone still had to sit down and type it. The typing was slow enough that it forced a certain kind of attention. You couldn't write a function without thinking about it, because writing it took long enough that thinking was unavoidable. The bottleneck created the conditions for craft, and it felt like the craft itself.
AI removes the bottleneck. Code appears in seconds. The thinking isn't forced by the typing anymore; it has to be deliberate. And that shift feels like a loss, because the rhythm of the work has changed. The long, meditative stretches of writing code, where your understanding deepened as your fingers moved, are replaced by short bursts of generation followed by review. The texture is different.
But the craft didn't live in the texture. It lived in the judgment that the texture incidentally supported. The experienced developer who hand-writes a function isn't doing craft because the typing is slow. The typing is slow, and the craft happens during the slowness, but the craft is the decisions: what to name things, what to abstract, what edge cases to handle, when to stop. Those decisions haven't gotten easier. If anything, they've gotten harder, because AI lets you attempt projects that would have been too large to type by hand, which means you hit the judgment bottleneck more often and at higher stakes.
JokelaOS would have taken me months to type by hand. I probably wouldn't have attempted it. With AI handling the code generation, I attempted it in days and spent the entire time making architecture and debugging decisions. The project had more craft in it than most things I've built, precisely because the typing wasn't the bottleneck. The judgment was.
The Biological Ceiling
I wrote in the AI Vampire piece that human judgment is the binding constraint in a Jevons cycle operating on cognitive output. AI makes the labor cheaper; demand expands; the expansion concentrates on the one input that can't scale: human attention and judgment. The three-to-four-hour ceiling on deep work is biological, not cultural, and no amount of productivity tooling changes it.
The task-level split is where this plays out in practice. AI compresses the labor side of every project: the code generation, the trace routing, the prose drafting, the schematic extraction. What remains is denser, harder, and more consequential. Every hour of work has a higher ratio of judgment to labor than it did before AI. That's why Yegge's developers feel burned out, not because they're working more hours, but because every hour is now a judgment hour.
The craft isn't disappearing. It's being compressed into a smaller, denser layer. The typing is gone. The design reviews are shorter. The code appears instantly. What's left is the part that was always the actual craft: deciding what to build, recognizing when it's wrong, knowing what to test, catching the missing tenth level shifter. That layer is entirely human, it's harder than it used to be because the projects are bigger, and it's the only part that matters.
Orchard identified the split correctly. The grief is real, the division is real, and the piece resonated because it named something that software creators recognized immediately. The refinement I'd offer is that the line doesn't separate two kinds of people; it separates two kinds of tasks. The craft was never in the code. It was in the decisions that surrounded the code. Those decisions haven't gone anywhere. They've just lost the slow, meditative typing that used to accompany them. What remains is craft at higher concentration, with no filler.
There was something cathartic about the old way. The hours of typing weren't just production; they were a complete experience. You conceived the idea, worked through the logic, typed every character, fought the compiler, and watched it run. The whole arc from intention to execution passed through your hands. That totality had a satisfaction to it that reviewing AI-generated output doesn't replicate, even when the output is correct.
And there was something else: the syntax was a sacred tongue. Not everyone could read it. Not everyone could write it. The curly braces, the pointer arithmetic, the register mnemonics formed a language that belonged to the people who had invested years learning to speak it. That exclusivity wasn't gatekeeping for its own sake; it was the mark of hard-won fluency, and it meant something to the people who had it. Now anyone can describe what they want in English and get working code back. The priesthood dissolved overnight.
I feel that loss. I still create. I still orchestrate. I still catch the errors that the tools miss. But I no longer speak a language that most people can't. The judgment layer is real, and it's where the work that matters happens. But it doesn't carry the same weight as mastery of a difficult notation. Orchestrating a process is not the same as performing it, even if the orchestration requires more skill.
The grief is real. It's not about the wrong thing. It's about something that actually disappeared.