TinyComputers.io

Featured

Investing in the Jevons Expansion

If AI follows a Jevons trajectory — and five pieces in this series argue that it does — the investment question isn't whether demand will expand, but where the expansion creates bottlenecks. This piece maps the Jevons framework onto concrete investment layers: energy, physical infrastructure, custom silicon, and the application tier. It also addresses the most common objection — that GPU diminishing returns cap the expansion — and explains why multiple overlapping cost curves make the case stronger, not weaker.

Recent Articles

Generating Technical Handbooks with AI: Parallel Agents, Source Code, and 2,400 Pages

I used Claude Code with Opus 4.6 and the Claude Agent SDK to generate three technical handbooks — totaling over 2,400 pages and 232 chapters — from real project source code. The key was a framework that launches 10-12 AI agents in parallel, each reading actual codebases and writing LaTeX chapters simultaneously. This post describes how the system works, what goes right, what goes wrong, and what it means for the future of technical documentation.

The AI Vampire Is Jevons Paradox

Steve Yegge's "The AI Vampire" describes AI-driven burnout as an extraction problem — companies capture the productivity surplus while workers absorb the cognitive toll. But what he's actually describing is Jevons Paradox applied to human attention. AI makes cognitive output dramatically cheaper, demand for it expands exactly as the model predicts, and the expansion concentrates on the one input that can't scale: human judgment.

Moore's Law for Intelligence: What Happens When Thinking Gets Cheap

AI inference costs are declining at a rate that mirrors the early decades of Moore's Law. If the cost per token continues to fall by an order of magnitude every two to three years, the implications extend far beyond making current AI applications cheaper. This piece explores what becomes possible — not what gets displaced — when intelligence-per-dollar follows the same exponential curve that turned computing from a military luxury into a pocket commodity.

The Jevons Counter-Thesis: Why AI Displacement Scenarios Underweight Demand Expansion

A response to Citrini Research's "The 2028 Global Intelligence Crisis" scenario. Using Jevons Paradox and historical precedent, this piece argues that AI displacement models systematically undercount the demand expansion that follows when a critical input — in this case, cognitive labor — becomes dramatically cheaper. The article examines latent demand, competitive price dynamics, the speed of entrepreneurial response, and why the historical base rate favors expansion over permanent contraction.

A Stack-Based Bytecode VM for Lattice: 100 Opcodes, Serialization, and a Self-Hosted Compiler

A deep dive into the Lattice bytecode VM — a stack-based virtual machine with 100 opcodes, computed goto dispatch, pre-compiled concurrency sub-chunks, an ephemeral bump arena for string temporaries, a binary .latc serialization format, and a self-hosted compiler written in Lattice itself. Covers the instruction set architecture, upvalue-based closures, how structured concurrency compiles without AST dependency, and validation across 815 tests under AddressSanitizer.

Rue: Steve Klabnik's AI-Assisted Experiment in Memory Safety Without the Pain

A comprehensive review of Rue, a new systems programming language designed by Steve Klabnik and implemented primarily with Claude AI. Rue explores whether memory safety without garbage collection can be achieved more accessibly than Rust through affine types and mutable value semantics instead of a borrow checker. After a deep dive into Rue's features, type system, borrow-and-inout model, and the AI-assisted development story behind it, we evaluate what the language delivers today, what it's missing, and whether it points toward a real alternative for systems programmers frustrated by Rust's learning curve.

Review of "Crafting Interpreters" by Robert Nystrom

A review of "Crafting Interpreters" by Robert Nystrom, a book that guides readers through building two complete interpreters for the same language — first a tree-walk interpreter in Java, then a bytecode virtual machine in C. Nystrom's conversational writing style, hand-drawn illustrations, and incremental approach to complexity produce one of the best programming books in recent memory, transforming what could be an intimidating academic subject into an engaging, hands-on journey through scanning, parsing, scope resolution, closures, classes, inheritance, and garbage collection.