I was born in the back half of 1980, which means I missed the revolution.
By the time I sat down at an Apple IIe in 1986, with its green phosphor glow and chunky 5.25-inch floppies, the war was already over. The Altair 8800 was a museum piece. CP/M was fading into obscurity. The TRS-80 and Commodore PET were yesterday's news. I arrived just in time for the Apple II's twilight years, blissfully unaware that the machine in front of me represented the victory lap of a decade-long transformation.
I never experienced CP/M. I never loaded WordStar from an 8-inch floppy. I never watched VisiCalc recalculate a spreadsheet and felt the shock of a machine doing in seconds what had taken hours by hand. These were foundational moments in computing history, and I missed them entirely.
Now, decades later, I find myself building Z80 emulators and writing compilers for a processor that had already ceded the PC spotlight by the time I could read — though it quietly lived on in the TI graphing calculators that would later get me through high school math, and still powers them today. It's a form of technological archaeology — reconstructing a world I never lived in, trying to understand the texture of an era I only know through documentation and nostalgia. And from this vantage point, watching the current panic over artificial intelligence, I can't help but notice something: we've been here before.
As 2025 quickly fades, ChatGPT writes code. Midjourney creates art. Claude analyzes documents. The headlines scream that knowledge workers are doomed, that white-collar jobs will evaporate, that "this time it's different."
But it's not different. It's never different. And VisiCalc can prove it.
The World Before the Spreadsheet
To understand why VisiCalc mattered, you need to understand what "spreadsheet" meant in 1978. It wasn't software. It was paper.
Accountants, analysts, and financial planners worked with literal sheets of paper, ruled into rows and columns. They called them spreadsheets because the worksheets spread across multiple pages, sometimes taped together into unwieldy grids that covered entire desks. Every number was written by hand. Every calculation was performed with a mechanical adding machine or, if you were modern, an electronic calculator.
Here's what financial planning looked like: You'd spend hours, maybe days, building a projection. Revenue assumptions in one column, cost structures in another, profit margins calculated cell by cell. Then your boss would ask a simple question: "What if we increase prices by 5%?"
And you'd start over.
Not from the pricing cell — from every cell that pricing touched. The cascade of recalculations could take hours. A complex model might require a full day to revise. And if you made an error somewhere in the middle? Good luck finding it in a forest of pencil marks and eraser smudges.
Word processing was no better. Before WordStar and its competitors, documents were produced on typewriters. The IBM Selectric was the gold standard — a marvel of engineering that let you swap font balls and correct single characters with lift-off tape. But if you found a typo on page 47 of a 60-page contract, you had options: live with it, or retype pages 47 through 60.
Typing was a specialized profession. Companies maintained typing pools — rooms full of secretaries whose primary job was converting handwritten drafts and dictation into finished documents. A skilled typist was a valuable employee precisely because the work was so labor-intensive.
And if you needed computing power for serious analysis, you went to the mainframe. You submitted your job to the MIS department, waited in a queue, and paid by the CPU-minute. Time-sharing systems charged hundreds of dollars per hour. Computing was a scarce resource, rationed by bureaucracy.
This was knowledge work in the mid-1970s: manual, slow, expensive, and error-prone.
The Revolution No One Expected
Dan Bricklin was a Harvard MBA student in 1978 when he had the insight that would change everything. Sitting in a classroom, he watched a professor work through a financial model on a blackboard. The professor would write numbers, perform calculations, and fill in cells. Then he'd change an assumption, and the recalculation cascade would begin — erasing, recomputing, rewriting, sometimes running out of blackboard space.
Bricklin's thought was simple: what if the blackboard could recalculate itself?
Working with programmer Bob Frankston, Bricklin built VisiCalc — the "visible calculator." It ran on the Apple II, which was itself a hobbyist curiosity, a machine that enthusiasts bought to tinker with BASIC programs and play primitive games. VisiCalc transformed it into a business tool.
The software shipped in 1979, priced at \$100. Within a year, it was selling 12,000 copies per month. More importantly, it was selling Apple IIs. The \$2,000 computer became justifiable as a business expense because VisiCalc made it productive.
Consider the economics. A financial analyst in 1980 earned perhaps \$25,000 per year. A secretary earned \$12,000 to \$15,000. The Apple II plus VisiCalc cost roughly \$2,500. If the software saved a few weeks of analyst time, or let one analyst do the work that had previously required two, it paid for itself almost immediately.
But the real magic wasn't cost savings — it was capability. Suddenly you could ask "what if?" as many times as you wanted. Change an assumption, watch the spreadsheet ripple with recalculations, and see the answer in seconds. Financial modeling went from a laborious exercise in arithmetic to an exploratory conversation with your data.
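If you've never thought about what made that possible, the mechanism is small enough to sketch. Here's a toy version in TypeScript (invented names, and nothing to do with how VisiCalc was actually built): cells hold either a number or a formula over other cells, and changing one assumption gives you a fresh answer the next time you ask.

```typescript
// A toy model of what VisiCalc automated: cells are either numbers or
// formulas over other cells, and the sheet gives a fresh answer after
// any input changes. Illustrative only; real spreadsheets track
// dependencies and recalculate incrementally.

type Cell = number | ((get: (name: string) => number) => number);

class Sheet {
  private cells = new Map<string, Cell>();

  set(name: string, cell: Cell): void {
    this.cells.set(name, cell);
  }

  // Evaluate a cell, resolving formula references recursively.
  get = (name: string): number => {
    const cell = this.cells.get(name);
    if (cell === undefined) throw new Error(`empty cell: ${name}`);
    return typeof cell === "number" ? cell : cell(this.get);
  };
}

// A miniature pricing model.
const sheet = new Sheet();
sheet.set("price", 10);
sheet.set("units", 5000);
sheet.set("unitCost", 6);
sheet.set("revenue", (get) => get("price") * get("units"));
sheet.set("costs", (get) => get("unitCost") * get("units"));
sheet.set("profit", (get) => get("revenue") - get("costs"));

console.log(sheet.get("profit")); // 20000

// "What if we raise prices by 5%?" -- the question that used to cost a day.
sheet.set("price", 10.5);
console.log(sheet.get("profit")); // 22500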
WordStar, released a year earlier in 1978, performed the same transformation for documents. Write, edit, revise, move paragraphs, fix typos — all before committing anything to paper. The document existed as a malleable thing, not a fixed artifact produced through irreversible mechanical action.
Together, these applications (and others like dBASE for databases and SuperCalc as a VisiCalc competitor) created the productivity software category. They didn't sell computers to hobbyists; they sold computers to businesses. And they did it by solving mundane problems: arithmetic and typing.
The pundits of the era made predictions. Accountants would become obsolete. Secretaries would be eliminated. The typing pool would vanish. Knowledge work itself was being automated.
What Actually Happened
The predictions were wrong. Or rather, they were right about the transformation but wrong about the outcome.
Typing pools did shrink. The specialized profession of "typist" largely disappeared as word processing became a universal skill. But administrative assistants didn't vanish — their job changed. Instead of spending hours producing documents, they spent hours managing calendars, coordinating logistics, and handling communication. The mechanical work evaporated; the judgment work remained.
Bookkeepers declined as a profession. The person whose job was to maintain ledgers and perform routine calculations found that job automated. But accountants — the people who interpreted the numbers, made recommendations, and exercised judgment — grew in number. The Bureau of Labor Statistics shows steady growth in accounting employment through the 1980s and 1990s, even as the basic arithmetic of accounting was completely automated.
Financial analysts became more valuable, not less. The spreadsheet didn't replace them; it amplified them. An analyst who could build sophisticated models in VisiCalc or Lotus 1-2-3 was worth more than one limited to paper. The ceiling rose.
And here's the crucial point: the total amount of analysis, documentation, and financial modeling exploded. When something becomes cheaper and faster to produce, you produce more of it. Companies that had operated with crude annual budgets started building detailed monthly projections. Reports that had been quarterly became weekly. The volume of knowledge work grew to fill the new capacity.
This pattern — automation making workers more productive, which increases demand for the work, which maintains or increases employment — has a name in economics. It's called the Jevons paradox, originally observed in coal consumption: as steam engines became more efficient, total coal usage increased rather than decreased, because efficiency made steam power economical for more applications.
The same paradox applies to labor. Make an accountant 10x more productive, and you don't need 1/10th as many accountants. You do 10x as much accounting.
The Pattern Repeats
VisiCalc wasn't the first technology to trigger predictions of labor displacement, and it certainly wasn't the last. The pattern repeats with remarkable consistency:
ATMs (1970s-present): Automated Teller Machines were supposed to eliminate bank tellers. The math seemed obvious — why pay a human to dispense cash when a machine could do it? Yet U.S. bank teller employment roughly doubled between 1970 and 2010. The explanation: ATMs made bank branches cheaper to operate, so banks opened more branches, each requiring fewer but still some tellers. And the tellers' jobs shifted from cash handling to sales, complex transactions, and customer relationships.
CAD Software (1980s): Computer-aided design was going to eliminate draftsmen. Instead, it eliminated hand drafting while increasing demand for designers. The ability to iterate quickly, produce more alternatives, and handle more complex designs meant more design work overall.
Desktop Publishing (1980s): PageMaker and QuarkXPress would kill graphic designers by letting anyone create professional documents. Instead, the volume of designed materials exploded, and graphic design became a larger profession. The average quality rose because the floor rose.
Legal Research Databases (1990s): LexisNexis and Westlaw would eliminate paralegals by automating case research. Instead, faster research enabled more litigation, more thorough preparation, and more legal work overall.
Electronic Trading (1990s-2000s): Algorithmic trading would eliminate floor traders and financial professionals. It did eliminate floor traders, but the financial sector's employment grew as new roles emerged: quants, algorithm developers, risk managers, compliance officers.
In every case, the predictions followed the same logic: Technology X automates task Y, therefore workers who do Y are obsolete. And in every case, the predictions missed the second-order effects: automation makes the overall activity more valuable, demand increases, and workers shift to higher-judgment versions of the same work.
The AI Moment
Which brings us to now.
ChatGPT was released in November 2022. Within two months, it had 100 million users. Within a year, AI assistants were embedded in products from Microsoft to Google to Adobe. Large language models could write essays, generate code, summarize documents, answer questions, and produce content that was — on first glance — indistinguishable from human output.
The predictions arrived immediately. Programmers would become obsolete. Writers were doomed. Customer service, legal research, financial analysis, medical diagnosis — all would be automated. Goldman Sachs estimated 300 million jobs would be affected. The World Economic Forum issued reports. Thought leaders proclaimed that "this time it's different."
But is it?
Let's apply the VisiCalc framework. What exactly does AI automate?
First drafts, not final judgment. AI can produce a draft document, a code snippet, an analysis outline. What it cannot do is determine whether that draft serves the actual goal, handles the edge cases that matter, or fits the political context of the organization. The human reviews, revises, and takes responsibility.
Pattern matching, not pattern breaking. Large language models are, at their core, sophisticated pattern matchers trained on existing text. They excel at producing outputs that look like their training data. They struggle with genuine novelty — situations unlike anything in the training corpus, problems that require inventing new approaches rather than recombining old ones.
The middle of the distribution, not the edges. AI handles routine cases well. It struggles with outliers. The customer service bot can resolve common issues; the unusual complaint needs a human. The coding assistant can generate boilerplate; the architectural decision requires judgment.
Production, not accountability. AI can produce outputs, but it cannot be held accountable for them. When the document goes to the client, someone signs it. When the code ships to production, someone owns it. When the decision has consequences, someone faces them. That someone is human, because accountability requires agency, and agency requires humanity.
This is exactly the pattern we saw with spreadsheets. VisiCalc automated arithmetic, not judgment. It automated production, not accountability. It handled the routine middle, not the novel edges. And the humans who learned to use it became more valuable, not less.
The Irreducible Human
Why do humans remain in the loop? Not for sentimental reasons. Not because we want to preserve jobs. But because certain functions cannot be automated, regardless of how sophisticated the technology.
Accountability requires agency. When something goes wrong, someone must be responsible. Legal systems, regulatory frameworks, and social structures all assume a responsible party. AI systems can produce outputs, but they cannot be sued, fired, jailed, or shamed. The human who relies on AI output remains accountable for that output. This isn't a bug; it's a feature of how human society functions.
Context is infinite and local. AI models are trained on general patterns. Your specific situation — your company's politics, your client's unspoken concerns, your industry's unwritten rules — is not in the training data. The model knows what words typically follow other words. It doesn't know that your CFO hates bullet points, that your customer is going through a divorce, or that mentioning the competitor's product is forbidden in this meeting. The human provides context.
Trust requires relationship. Business transactions ultimately rest on trust between humans. You hire the lawyer, not the legal database. You trust your doctor, not the diagnostic algorithm. You buy from salespeople, not recommendation engines. AI can support these relationships, but it cannot replace them, because trust is a human phenomenon.
The feedback loop requires humans. Here's a subtle but critical point: AI systems are trained on human-generated data. If humans stop producing original work, the training data stops improving. The model learns to produce outputs that look like human outputs because it was trained on human outputs. Remove the humans, and you get a system trained on its own outputs — a recursive degradation. We are the curriculum.
Novel situations require genuine understanding. AI excels at interpolation — finding patterns within the space of its training data. It struggles with extrapolation — handling situations outside that space. Genuine novelty, by definition, lies outside the training distribution. The unprecedented situation, the black swan event, the "we've never seen this before" moment — these require human judgment, because no pattern matching can help when there's no pattern to match.
The Reskilling Reality
None of this means AI changes nothing. It changes a lot. The question is what kind of change.
When spreadsheets arrived, certain skills became less valuable. Manual arithmetic, once essential for financial work, became irrelevant. The ability to maintain error-free ledgers through careful penmanship mattered less. Slide rule proficiency joined buggy whip maintenance in the museum of obsolete competencies.
But new skills became essential. Building spreadsheet models, understanding the logic of cell references, knowing how to structure data for analysis — these became core professional competencies. "Computer literacy" emerged as a job requirement. People who learned the new tools thrived; people who refused to adapt struggled.
AI is triggering the same shift. Consider what becomes less valuable:
Writing first drafts from scratch. When AI can produce a competent first draft in seconds, the ability to stare at a blank page and produce prose is less differentiating. The value shifts to editing, directing, and refining.
Routine research and compilation. When AI can summarize documents, extract key points, and synthesize information, the human who only does that work has a problem. The value shifts to evaluating sources, asking the right questions, and interpreting results.
Basic code production. When AI can generate boilerplate, implement standard patterns, and translate requirements into code, the programmer whose main skill is typing syntax is in trouble. The value shifts to architecture, debugging, code review, and understanding what the system should do.
And consider what becomes more valuable:
Judgment and curation. AI produces. Humans evaluate. The ability to look at AI output and quickly determine what's useful, what's wrong, and what's missing becomes essential. This is editing in the broadest sense — not just fixing typos, but directing the creative process.
Domain expertise plus AI fluency. The accountant who understands both accounting and how to leverage AI tools is more valuable than either an accountant who ignores AI or an AI operator who doesn't understand accounting. The combination is the new competency.
Handling exceptions and edge cases. As AI handles the routine middle, humans focus on the exceptions. The unusual customer complaint, the novel legal situation, the unprecedented technical problem — these become the human domain. Expertise in handling weirdness becomes more valuable.
Relationship and trust building. As transactional work becomes automated, relationship work becomes relatively more important. The human who can build trust, navigate politics, and close deals face-to-face has a durable advantage.
This is exactly what happened with spreadsheets. The value shifted from arithmetic to analysis, from production to judgment, from routine to exception. The workers who adapted thrived. The workers who clung to obsolete methods struggled.
The Transition Is Never Painless
I don't want to minimize the disruption. Real people, with real skills, face real challenges when technology shifts beneath them.
The typing pool secretary in 1985 had spent years developing speed and accuracy on the Selectric. She could type 80 words per minute with minimal errors. She knew the quirks of carbon paper, the rhythm of the carriage return, the muscle memory of the key layout. These skills, honed over a decade, became worthless in the span of a few years.
Some of those secretaries learned WordPerfect and became administrative assistants. Some moved into other roles entirely. Some struggled, unable or unwilling to adapt, and found themselves squeezed out of the workforce. The aggregate statistics — employment levels, productivity growth, economic expansion — hide individual stories of dislocation and difficulty.
The same will be true of AI. Some knowledge workers will adapt smoothly, integrating AI tools into their workflow and becoming more productive. Some will resist, clinging to methods that worked in 2020 but feel increasingly obsolete by 2030. Some will find themselves displaced, their particular bundle of skills suddenly less valuable in a market that's moved on.
The historical pattern tells us that the net outcome is positive — that technological transitions create more opportunity than they destroy, that the economy adjusts, that new roles emerge. But history is cold comfort to the individual caught in the transition. The typewriter repairman didn't care that computer technicians were a growing field. He cared that his skills were worthless.
This is why the reskilling conversation matters. Not because AI will eliminate all jobs — it won't — but because the specific jobs, the specific skills, the specific ways of working will change. And navigating that change requires awareness, adaptability, and often institutional support.
The workers who thrived through the spreadsheet revolution weren't necessarily the most skilled at the old methods. They were the ones who recognized the shift and moved with it. The accountant who embraced Lotus 1-2-3, even if she was mediocre at mental arithmetic, outcompeted the brilliant human calculator who refused to touch a keyboard.
The same pattern is emerging now. The programmer who integrates AI assistance, even if she's not the fastest typist, will outcompete the keyboard wizard who insists on writing every character manually. The writer who uses AI for drafts and focuses on editing and judgment will outcompete the prose stylist who spends hours on first drafts. The analyst who lets AI handle data compilation and focuses on interpretation will outcompete the Excel jockey who takes pride in manual formula construction.
Adaptation isn't optional. It wasn't optional in 1980, and it isn't optional now.
The Long View
I spend my weekends building emulators for 50-year-old processors. I write compilers that target the Z80, a chip that was designed when Gerald Ford was president. I run BASIC and FORTH on simulated hardware, watching instructions execute that were first written when disco was young.
From this perspective, the current AI moment looks familiar. Technology extends human capability. It always has. The accountant with VisiCalc wasn't replaced; she was amplified. The writer with WordStar wasn't obsolete; he was leveraged. The analyst with a spreadsheet could do in hours what had taken days, and that made analysis more valuable, not less.
When I run my Z80 emulator — JavaScript hosting a WebAssembly core that interprets 1976 machine code — I'm witnessing layers of abstraction that would have seemed like science fiction to the engineers who designed the original chip. But the fundamental relationship remains: humans using tools to extend their capabilities.
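If you've never looked inside one of these things, the core is smaller than you might expect. Here's a sketch of the fetch-decode-execute loop in TypeScript, with three real Z80 opcodes wired up; it's nothing like the real thing, just the skeleton every emulator shares.

```typescript
// The heart of any CPU emulator: fetch an opcode, decode it, execute it,
// repeat. A sketch with three illustrative Z80 opcodes, not a full emulator.

interface Z80State {
  pc: number;         // program counter
  a: number;          // accumulator
  memory: Uint8Array;
  halted: boolean;
}

function step(cpu: Z80State): void {
  const opcode = cpu.memory[cpu.pc++]; // fetch
  switch (opcode) {                    // decode
    case 0x3e:                         // LD A, n (load immediate into A)
      cpu.a = cpu.memory[cpu.pc++];
      break;
    case 0x3c:                         // INC A
      cpu.a = (cpu.a + 1) & 0xff;
      break;
    case 0x76:                         // HALT
      cpu.halted = true;
      break;
    default:
      throw new Error(`unimplemented opcode 0x${opcode.toString(16)}`);
  }
}

// A three-instruction program: LD A,0x41; INC A; HALT
const cpu: Z80State = {
  pc: 0,
  a: 0,
  memory: Uint8Array.from([0x3e, 0x41, 0x3c, 0x76]),
  halted: false,
};
while (!cpu.halted) step(cpu);
console.log(cpu.a.toString(16)); // 42
```

Everything else is bookkeeping: more opcodes, flags, interrupts, timing. The loop itself is the same one the silicon ran in 1976.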
The nature of work changes. It always changes. The bookkeeper becomes the accountant. The typist becomes the administrative assistant. The draftsman becomes the designer. The job titles shift, the tools evolve, the skills required transform. But the need for human judgment, human accountability, human creativity, and human relationships remains.
This isn't optimism. It's pattern recognition. The 45-year pattern from VisiCalc to ChatGPT is consistent: technology that automates tasks changes the nature of work without eliminating the need for workers. The "this time it's different" predictions have been wrong every time, not because technology isn't powerful, but because the predictions misunderstand the relationship between automation and human labor.
The spreadsheet didn't eliminate the need for human intelligence. It made human intelligence more valuable by freeing it from arithmetic. AI won't eliminate the need for human judgment. It will make human judgment more valuable by freeing it from production.
We've been here before. And we'll be here again, decades from now, when some new technology triggers the same predictions, and historians look back at our AI panic the way we look back at the VisiCalc panic — as an understandable overreaction that missed the larger pattern.
The work changes. The workers adapt. The need for humans persists.
It's not different this time. It never is.
