<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="../assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>TinyComputers.io (Posts about language design)</title><link>https://tinycomputers.io/</link><description></description><atom:link href="https://tinycomputers.io/categories/language-design.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2026 A.C. Jokela 
&lt;!-- div style="width: 100%" --&gt;
&lt;a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"&gt;&lt;img alt="" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/80x15.png" /&gt; Creative Commons Attribution-ShareAlike&lt;/a&gt;&amp;nbsp;|&amp;nbsp;
&lt;!-- /div --&gt;
</copyright><lastBuildDate>Mon, 06 Apr 2026 22:13:01 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Teaching an LLM a Language It Has Never Seen</title><link>https://tinycomputers.io/posts/teaching-llms-languages-theyve-never-seen.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/teaching-llms-languages-theyve-never-seen_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;33 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html"&gt;Lattice&lt;/a&gt; is a programming language I designed. Its central feature is the phase system: every runtime value carries a mutability tag that transitions between states the way matter moves between liquid and solid. You declare a variable with &lt;code&gt;flux&lt;/code&gt; (mutable) or &lt;code&gt;fix&lt;/code&gt; (immutable). You &lt;code&gt;freeze&lt;/code&gt; a value to make it immutable, &lt;code&gt;thaw&lt;/code&gt; it to get a mutable copy, and &lt;code&gt;sublimate&lt;/code&gt; it to make it permanently frozen. &lt;code&gt;forge&lt;/code&gt; blocks let you build something mutably and have the result exit as immutable. None of this exists in any other language.&lt;/p&gt;
&lt;p&gt;Lattice does not appear in Claude's training data. I designed the language after the knowledge cutoff. There is no Lattice source code on GitHub (other than my own repository). There are no Stack Overflow answers. There is no tutorial ecosystem, no community blog posts, no textbook chapters. The only documentation that exists is the code itself, a 38-chapter handbook I wrote, and three blog posts on this site.&lt;/p&gt;
&lt;p&gt;Claude writes Lattice fluently. It writes correct programs using the phase system, the concurrency primitives, the module system, and the trait/impl pattern. It writes struct definitions with per-field phase annotations. It uses &lt;code&gt;forge&lt;/code&gt; blocks and &lt;code&gt;anneal&lt;/code&gt; expressions correctly. And it wrote a 4,955-line self-hosted compiler in Lattice, for Lattice: a complete tokenizer, parser, and bytecode code generator that reads &lt;code&gt;.lat&lt;/code&gt; source files and emits &lt;code&gt;.latc&lt;/code&gt; bytecode binaries.&lt;/p&gt;
&lt;p&gt;The question is how any of this is possible when the model has never seen the language before.&lt;/p&gt;
&lt;h3&gt;The Rust Smell&lt;/h3&gt;
&lt;p&gt;The answer starts with syntax. Here is a Lattice function:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn&lt;span class="w"&gt; &lt;/span&gt;greet(name:&lt;span class="w"&gt; &lt;/span&gt;String)&lt;span class="w"&gt; &lt;/span&gt;-&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;String&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;return&lt;span class="w"&gt; &lt;/span&gt;"Hello,&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;!"
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And here is the Rust equivalent:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nf"&gt;greet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;&amp;amp;&lt;/span&gt;&lt;span class="kt"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="fm"&gt;format!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello, {name}!"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;fn&lt;/code&gt; keyword, the colon-separated type annotations, the &lt;code&gt;-&amp;gt;&lt;/code&gt; return type, the curly braces: Claude has seen these patterns millions of times in Rust code. When it encounters them in Lattice, it doesn't need to learn a new syntax. It needs to recognize a familiar one.&lt;/p&gt;
&lt;p&gt;This extends deep into the language. Lattice structs look like Rust structs:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;struct Point {
    x: Float,
    y: Float
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Lattice enums look like Rust enums:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;enum&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Shape&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;Circle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Float&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;Rectangle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Float&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Float&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Lattice match expressions look like Rust match expressions:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;match shape {
    Shape::Circle(r) =&amp;gt; pi() * r * r,
    Shape::Rectangle(w, h) =&amp;gt; w * h,
    _ =&amp;gt; 0.0
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Lattice traits and impl blocks look like Rust traits and impl blocks:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;trait&lt;span class="w"&gt; &lt;/span&gt;Printable&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;fn&lt;span class="w"&gt; &lt;/span&gt;display(self:&lt;span class="w"&gt; &lt;/span&gt;any)&lt;span class="w"&gt; &lt;/span&gt;-&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;String
}

impl&lt;span class="w"&gt; &lt;/span&gt;Printable&lt;span class="w"&gt; &lt;/span&gt;for&lt;span class="w"&gt; &lt;/span&gt;Point&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;fn&lt;span class="w"&gt; &lt;/span&gt;display(self:&lt;span class="w"&gt; &lt;/span&gt;any)&lt;span class="w"&gt; &lt;/span&gt;-&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;String&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;        &lt;/span&gt;return&lt;span class="w"&gt; &lt;/span&gt;"(&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;,&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;)"
&lt;span class="w"&gt;    &lt;/span&gt;}
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Closures use the same &lt;code&gt;|params| body&lt;/code&gt; syntax. The &lt;code&gt;..&lt;/code&gt; range operator works the same way. The &lt;code&gt;?&lt;/code&gt; postfix operator propagates errors. &lt;code&gt;for item in collection&lt;/code&gt; iterates. &lt;code&gt;let&lt;/code&gt; binds variables. The structural similarity is pervasive enough that a model trained on Rust can parse and generate Lattice code without any Lattice-specific training.&lt;/p&gt;
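&lt;p&gt;As a composite illustration (my own sketch, not taken from the handbook), several of these carried-over constructs in one function:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn sum_doubled(limit: Int) -&amp;gt; Int {
    let double = |x| x * 2      // Rust-style closure syntax
    flux total = 0
    for n in 0..limit {         // `..` range, exclusive as in Rust
        total = total + double(n)
    }
    return total
}
&lt;/pre&gt;&lt;/div&gt;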
&lt;p&gt;I did not design Lattice to be AI-friendly. I designed it because Rust's syntax is good and I wanted to use it for a language with different semantics. But the side effect is that Claude can write Lattice from day one because the syntax activates the same neural pathways that Rust does. The model doesn't know it's writing a different language. It knows it's writing code that looks like Rust, and the structural patterns transfer.&lt;/p&gt;
&lt;h3&gt;The Phase System: Where Familiarity Ends&lt;/h3&gt;
&lt;p&gt;The Rust resemblance carries Claude through basic Lattice programs without difficulty. Where it gets interesting is the phase system, because this is where Lattice has no analog in any language Claude has seen.&lt;/p&gt;
&lt;p&gt;In Rust, mutability is a static property: &lt;code&gt;let mut x = 5;&lt;/code&gt; or &lt;code&gt;let x = 5;&lt;/code&gt;. You decide at declaration time and the compiler enforces it. In Lattice, mutability is a runtime state that values transition through:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux counter = 0          // mutable
counter = counter + 1     // allowed: counter is fluid

freeze(counter)           // transition: fluid → crystal
counter = counter + 1     // runtime error: counter is crystal

flux copy = thaw(counter) // get a mutable copy
copy = copy + 1           // allowed: copy is fluid
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Claude handles this correctly. When I describe the phase system and provide examples, Claude generates code that uses &lt;code&gt;flux&lt;/code&gt; and &lt;code&gt;fix&lt;/code&gt; declarations appropriately, calls &lt;code&gt;freeze()&lt;/code&gt; at the right points, and avoids mutating crystal values. The model maps &lt;code&gt;flux&lt;/code&gt; to "mutable variable" and &lt;code&gt;fix&lt;/code&gt; to "immutable variable" in its internal representation, and the transition functions (&lt;code&gt;freeze&lt;/code&gt;, &lt;code&gt;thaw&lt;/code&gt;) become explicit state changes that it tracks through the program.&lt;/p&gt;
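&lt;p&gt;The third transition, &lt;code&gt;sublimate&lt;/code&gt;, doesn't appear in the example above. Based on its description (permanently frozen), a sketch of the distinction, with my own assumptions flagged in the comments:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux session = {}
freeze(session)        // crystal: thaw(session) can still yield a mutable copy
sublimate(session)     // permanent: presumably even thaw is refused from here on
&lt;/pre&gt;&lt;/div&gt;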
&lt;p&gt;The harder constructs are the ones with no familiar analog.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;forge&lt;/code&gt; blocks are mutable construction zones whose output exits as immutable:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fix config = forge {
    flux c = {}
    c.host = "localhost"
    c.port = 8080
    c.debug = false
    c   // exits the forge block as crystal
}
// config is now crystal; cannot be modified
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Claude gets this right because the pattern (build something mutably, freeze the result) maps to the builder pattern in Rust and other languages. The syntax is novel but the concept isn't.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;anneal&lt;/code&gt; is harder. It temporarily thaws a crystal value into a mutable binding for the duration of a block, then re-freezes it:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fix settings = forge { flux s = {}; s.theme = "dark"; s }

anneal(settings) |s| {
    s.theme = "light"   // temporarily mutable
}
// settings is crystal again, with theme = "light"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Claude produces correct &lt;code&gt;anneal&lt;/code&gt; code when given the semantics, but it occasionally generates patterns that would work in Rust (taking a &lt;code&gt;&amp;amp;mut&lt;/code&gt; reference) but don't apply in Lattice (where &lt;code&gt;anneal&lt;/code&gt; is the only way to modify a crystal value in place). The model's Rust intuitions are strong enough to produce syntactically valid Lattice that is nonetheless sometimes semantically wrong, because it defaults to Rust's mutation model when the Lattice-specific construct is unfamiliar.&lt;/p&gt;
&lt;p&gt;The reactive phase system is where Claude needs the most guidance. &lt;code&gt;react&lt;/code&gt;, &lt;code&gt;bond&lt;/code&gt;, and &lt;code&gt;seed&lt;/code&gt; have no precedent in any mainstream language:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux&lt;span class="w"&gt; &lt;/span&gt;temperature&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;72.0

react("temperature",&lt;span class="w"&gt; &lt;/span&gt;fn(name,&lt;span class="w"&gt; &lt;/span&gt;old_phase,&lt;span class="w"&gt; &lt;/span&gt;new_phase)&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;print("&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;changed&lt;span class="w"&gt; &lt;/span&gt;from&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;old_phase&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;to&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;new_phase&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;")
})

freeze(temperature)&lt;span class="w"&gt;  &lt;/span&gt;//&lt;span class="w"&gt; &lt;/span&gt;triggers&lt;span class="w"&gt; &lt;/span&gt;the&lt;span class="w"&gt; &lt;/span&gt;reaction&lt;span class="w"&gt; &lt;/span&gt;callback
&lt;/pre&gt;&lt;/div&gt;

&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux primary = "active"
flux mirror = "active"

bond("mirror", "primary", "sync")  // when primary changes phase, mirror follows

freeze(primary)  // mirror also freezes
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Claude can produce these patterns when given the API, but it doesn't intuit them. It never suggests &lt;code&gt;react&lt;/code&gt; or &lt;code&gt;bond&lt;/code&gt; unprompted, because there's nothing in its training data that would trigger the association. These constructs must be taught explicitly. The Rust smell gets Claude through 80% of Lattice. The last 20% requires actual specification.&lt;/p&gt;
&lt;h3&gt;The Spectrum of Difficulty&lt;/h3&gt;
&lt;p&gt;Working with Claude on Lattice code over several months has revealed a clear gradient of difficulty:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trivial (Rust transfer):&lt;/strong&gt; Functions, structs, enums, match expressions, closures, for loops, string interpolation, module imports, error propagation with &lt;code&gt;?&lt;/code&gt;. Claude writes these correctly on the first attempt because they're syntactically identical to Rust.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Easy (new vocabulary, familiar concept):&lt;/strong&gt; &lt;code&gt;flux&lt;/code&gt;/&lt;code&gt;fix&lt;/code&gt; declarations, &lt;code&gt;freeze()&lt;/code&gt;/&lt;code&gt;thaw()&lt;/code&gt; calls, basic phase checking. Claude maps these to mutable/immutable patterns it already knows. The vocabulary is new; the concept isn't.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Moderate (new pattern, teachable):&lt;/strong&gt; &lt;code&gt;forge&lt;/code&gt; blocks, &lt;code&gt;anneal&lt;/code&gt; expressions, &lt;code&gt;crystallize&lt;/code&gt; blocks, struct field-level phase annotations (alloy structs). These require explanation, but once Claude sees one or two examples, it generalizes correctly. The builder pattern and block-scoped mutation are close enough to existing patterns that the model bridges the gap.&lt;/p&gt;
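&lt;p&gt;An alloy struct might look like this; the placement of the per-field phase keywords is my guess at the syntax, not a quote from the handbook:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;struct Session {
    fix id: String,        // field that stays immutable
    flux last_seen: Int    // field that remains mutable
}
&lt;/pre&gt;&lt;/div&gt;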
&lt;p&gt;&lt;strong&gt;Hard (no analog, requires specification):&lt;/strong&gt; Reactive phase operations (&lt;code&gt;react&lt;/code&gt;, &lt;code&gt;bond&lt;/code&gt;, &lt;code&gt;seed&lt;/code&gt;), phase pattern matching (&lt;code&gt;fluid val =&amp;gt;&lt;/code&gt;, &lt;code&gt;crystal val =&amp;gt;&lt;/code&gt;), the concurrency constraint that only crystal values can be sent on channels, strict mode's consumption semantics for &lt;code&gt;freeze&lt;/code&gt;. Claude can use these but never invents them. They must be explicitly described.&lt;/p&gt;
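&lt;p&gt;Phase pattern matching, for instance, reads like an ordinary &lt;code&gt;match&lt;/code&gt; but dispatches on the value's current phase. A sketch built from the patterns named above (illustrative, not quoted from the handbook):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;match value {
    fluid v   =&amp;gt; print("still mutable: ${v}"),
    crystal v =&amp;gt; print("frozen: ${v}")
}
&lt;/pre&gt;&lt;/div&gt;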
&lt;p&gt;The concurrency constraint is a good example of the "hard" category. In Lattice, data sent on a channel must be crystal:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="nv"&gt;let&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Channel&lt;/span&gt;::&lt;span class="nv"&gt;new&lt;/span&gt;&lt;span class="ss"&gt;()&lt;/span&gt;
&lt;span class="nv"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mutable"&lt;/span&gt;

&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ch&lt;/span&gt;.&lt;span class="k"&gt;send&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;runtime&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;error&lt;/span&gt;:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;cannot&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;send&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;fluid&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;value&lt;/span&gt;

&lt;span class="nv"&gt;freeze&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;ch&lt;/span&gt;.&lt;span class="k"&gt;send&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;works&lt;/span&gt;:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;now&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;crystal&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This rule exists because crystal values are deeply immutable: they can't be modified by the sender after transmission, which eliminates data races structurally. Claude understands the concept (Rust has &lt;code&gt;Send&lt;/code&gt; and &lt;code&gt;Sync&lt;/code&gt; traits that serve a similar purpose), but it doesn't automatically apply Lattice's specific rule without being told. Left to its own devices, Claude will try to send fluid values on channels, because that's what you'd do in Go or Python. The constraint must be stated.&lt;/p&gt;
&lt;p&gt;Strict mode (&lt;code&gt;#mode strict&lt;/code&gt; at the top of a file) is another case where Claude needs explicit guidance. In strict mode, &lt;code&gt;let&lt;/code&gt; is banned (you must use &lt;code&gt;flux&lt;/code&gt; or &lt;code&gt;fix&lt;/code&gt;), &lt;code&gt;freeze()&lt;/code&gt; consumes the original binding (Rust-like move semantics), and assignment to a crystal binding is rejected outright rather than merely failing at runtime. Claude can write strict-mode Lattice, but it defaults to casual-mode patterns unless reminded. The model's prior is "permissive runtime" because that's what most dynamic languages are.&lt;/p&gt;
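&lt;p&gt;A minimal strict-mode sketch of those rules; whether &lt;code&gt;freeze&lt;/code&gt; returns the frozen value for rebinding is my assumption here, not something taken from the handbook:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;#mode strict

flux counter = 0              // `let` is banned: every binding picks a phase
fix limit = 10

fix frozen = freeze(counter)  // freeze consumes `counter`, move-style
// counter = counter + 1      // error: the binding was consumed by freeze
// frozen = 0                 // error: crystal bindings reject assignment outright
&lt;/pre&gt;&lt;/div&gt;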
&lt;p&gt;The gradient correlates exactly with how much the construct resembles something in Rust or another mainstream language. When the syntax is familiar, Claude's transfer learning handles it. When the concept is familiar but the syntax is new, one or two examples are enough. When both the syntax and the concept are novel, Claude needs the specification.&lt;/p&gt;
&lt;h3&gt;The Self-Hosted Compiler&lt;/h3&gt;
&lt;p&gt;The strongest evidence that Claude can deeply understand a language it was never trained on is &lt;code&gt;latc.lat&lt;/code&gt;: a &lt;a href="https://tinycomputers.io/posts/a-stack-based-bytecode-vm-for-lattice.html"&gt;4,955-line self-hosted compiler&lt;/a&gt; written in Lattice, for Lattice.&lt;/p&gt;
&lt;p&gt;The compiler reads &lt;code&gt;.lat&lt;/code&gt; source files and emits &lt;code&gt;.latc&lt;/code&gt; bytecode binaries. It has twelve sections:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Opcode constant definitions (mapping all 100+ VM opcodes to integers)&lt;/li&gt;
&lt;li&gt;Token stream and cursor helpers (&lt;code&gt;peek&lt;/code&gt;, &lt;code&gt;advance&lt;/code&gt;, &lt;code&gt;expect&lt;/code&gt;, &lt;code&gt;match_tok&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Compiler state management (save/restore for nested compilation)&lt;/li&gt;
&lt;li&gt;Error reporting&lt;/li&gt;
&lt;li&gt;Bytecode emit helpers (&lt;code&gt;emit_byte&lt;/code&gt;, &lt;code&gt;emit_jump&lt;/code&gt;, &lt;code&gt;patch_jump&lt;/code&gt;, &lt;code&gt;emit_loop&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Constant pool management (integers, floats, strings, closures)&lt;/li&gt;
&lt;li&gt;Scope and variable resolution (&lt;code&gt;begin_scope&lt;/code&gt;, &lt;code&gt;end_scope&lt;/code&gt;, &lt;code&gt;resolve_local&lt;/code&gt;, upvalue tracking)&lt;/li&gt;
&lt;li&gt;Expression parsing (precedence climbing, binary/unary ops, calls, field access)&lt;/li&gt;
&lt;li&gt;Statement compilation (let/flux/fix, if/while/for, return, match, try/catch)&lt;/li&gt;
&lt;li&gt;Declaration compilation (functions, structs, enums, traits, impl blocks)&lt;/li&gt;
&lt;li&gt;Binary serialization (writing the LATC file format with magic bytes, version header, chunk data)&lt;/li&gt;
&lt;li&gt;Main entry point&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Claude wrote this. Not "Claude assisted with this" or "Claude generated boilerplate for this." Claude wrote a recursive descent parser for Lattice's grammar, a bytecode compiler that emits correct opcodes for the phase system, and a binary serializer that produces files the C runtime can load and execute. The compiler bootstraps: you run it with the C-based &lt;code&gt;clat&lt;/code&gt; interpreter, and it produces bytecode that the same interpreter executes.&lt;/p&gt;
&lt;p&gt;The compiler itself uses Lattice's phase system for its own internal state. The compiler's mutable working data (the bytecode buffer, the constant pool, the local variable tracking arrays) is declared with &lt;code&gt;flux&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;c_lines&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;constants&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;local_name_arr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;local_depth_arr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;local_captured_arr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;local_count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is the compiler eating its own dogfood. The mutable state that the compiler needs to build bytecode is declared using the same phase system that the compiler is compiling. The phase keywords aren't decorative here; they're structurally necessary because the compiler modifies these arrays on every opcode emission and scope transition.&lt;/p&gt;
&lt;p&gt;The compiler has 118 functions across 12 sections, with 554 opcode references. It handles every construct in the language: &lt;code&gt;flux&lt;/code&gt;/&lt;code&gt;fix&lt;/code&gt; declarations, &lt;code&gt;forge&lt;/code&gt; blocks, &lt;code&gt;freeze&lt;/code&gt;/&lt;code&gt;thaw&lt;/code&gt;/&lt;code&gt;sublimate&lt;/code&gt; calls, &lt;code&gt;anneal&lt;/code&gt; and &lt;code&gt;crystallize&lt;/code&gt; expressions, struct and enum definitions with phase annotations, trait/impl blocks, match expressions with phase-aware pattern matching, structured concurrency with &lt;code&gt;scope&lt;/code&gt;/&lt;code&gt;spawn&lt;/code&gt;, channel operations, &lt;code&gt;try&lt;/code&gt;/&lt;code&gt;catch&lt;/code&gt;, &lt;code&gt;defer&lt;/code&gt;, and the complete expression grammar with correct operator precedence.&lt;/p&gt;
&lt;p&gt;Writing a self-hosted compiler requires understanding the language at every level simultaneously. The tokenizer must know every keyword, operator, and delimiter. The parser must handle every grammatical production, including the phase-specific constructs (&lt;code&gt;forge&lt;/code&gt;, &lt;code&gt;anneal&lt;/code&gt;, &lt;code&gt;crystallize&lt;/code&gt;) that exist nowhere in Claude's training data. The code generator must emit the correct opcodes for phase transitions, reactive bindings, and structured concurrency. And the whole thing must be written in the language being compiled, which means Claude is writing Lattice to compile Lattice, using constructs it learned from examples rather than training data.&lt;/p&gt;
&lt;p&gt;The compiler's serialization section writes the LATC binary format byte by byte:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn serialize_latc(ch: any) {
    ser_buf = Buffer::new(0)

    // Header: "LATC" + version(1) + reserved(0)
    write_u8(76)    // 'L'
    write_u8(65)    // 'A'
    write_u8(84)    // 'T'
    write_u8(67)    // 'C'
    write_u16_le(1) // format version
    write_u16_le(0) // reserved

    serialize_chunk(ch)
    return ser_buf
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is not pattern matching against compiler source code from the training data. No Lattice compiler exists in the training data. Claude wrote a compiler for a language that has no prior art, in a language that has no prior art, producing a binary format that has no prior art. Every decision (the magic bytes, the chunk serialization order, the upvalue encoding) came from understanding the specification I provided and the runtime behavior of the C-based interpreter.&lt;/p&gt;
&lt;h3&gt;What I Actually Gave Claude&lt;/h3&gt;
&lt;p&gt;The teaching process was less structured than you might expect. There was no formal curriculum, no staged introduction of concepts, no carefully sequenced lesson plan. And I should be honest about the recursive nature of what happened: Claude Code was the primary tool for building Lattice itself. The language, the C implementation, the grammar, the runtime, the test suite, the handbook: all of it was built with Claude Code. I designed the language and directed the implementation, but Claude wrote the C, the LaTeX, and the example programs.&lt;/p&gt;
&lt;p&gt;So the situation is: Claude wrote Lattice (the implementation), and then Claude wrote in Lattice (the programs and the self-hosted compiler). The model built the language and then learned the language it built. The "teaching material" that Claude uses to write Lattice code is documentation and examples that Claude itself produced in earlier sessions.&lt;/p&gt;
&lt;p&gt;The artifacts:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The C implementation: ~80 source files, the parser, the VM, the phase system runtime. Built with Claude Code from my architectural direction.&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html"&gt;handbook&lt;/a&gt;: 38 chapters covering every feature, with worked examples. Written in LaTeX with Claude Code. This lives in a repository that Claude can read in subsequent sessions.&lt;/li&gt;
&lt;li&gt;Example programs (&lt;code&gt;examples/phase_demo.lat&lt;/code&gt;, &lt;code&gt;examples/sorting.lat&lt;/code&gt;, &lt;code&gt;examples/state_machine.lat&lt;/code&gt;) that demonstrate idiomatic Lattice. Written by Claude Code.&lt;/li&gt;
&lt;li&gt;815 test files, run under AddressSanitizer, that exercise every construct. Written by Claude Code.&lt;/li&gt;
&lt;li&gt;An EBNF grammar reference as an appendix to the handbook.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;When I work with Claude on Lattice code, I don't paste the entire handbook into the context window. Claude has access to the project directory. It reads files as needed. If I ask it to write a function that uses &lt;code&gt;forge&lt;/code&gt;, it reads &lt;code&gt;examples/phase_demo.lat&lt;/code&gt; or &lt;code&gt;chapters/ch11-phases-explained.tex&lt;/code&gt; to see how &lt;code&gt;forge&lt;/code&gt; works. If I ask it to add an opcode to the compiler, it reads &lt;code&gt;include/stackopcode.h&lt;/code&gt; and &lt;code&gt;src/stackvm.c&lt;/code&gt; to understand the existing instruction set.&lt;/p&gt;
&lt;p&gt;The key insight: Claude doesn't need to be trained on a language to write it. It needs access to the specification and examples at inference time. And in this case, those specifications and examples were produced by Claude itself in prior sessions. The model's understanding is constructed on the fly from documentation in its context, not retrieved from weights. This is why the Rust resemblance matters so much: the syntax gives Claude a structural scaffold, and the specification (which Claude wrote) fills in the semantics.&lt;/p&gt;
&lt;p&gt;This is also why the self-hosted compiler was possible. By the time Claude wrote &lt;code&gt;latc.lat&lt;/code&gt;, it had already written the entire language implementation, the handbook, the test suite, and hundreds of example programs. The language had moved from "novel" to "familiar" through accumulated context, not through training. Each session built on the last. Each example reinforced the phase system's rules. By the time the compiler was attempted, Claude's working understanding of Lattice (constructed from its own prior output) was deep enough to write a 5,000-line program that correctly compiles the language. The model taught itself a language by building the language first.&lt;/p&gt;
&lt;h3&gt;Why Syntax Matters More Than Semantics&lt;/h3&gt;
&lt;p&gt;The Lattice experience suggests something counterintuitive about how LLMs interact with programming languages: syntax transfer is more powerful than semantic understanding.&lt;/p&gt;
&lt;p&gt;Claude can write correct Lattice because Lattice looks like Rust. The semantic differences (phase system vs. ownership, runtime type checking vs. compile-time guarantees, garbage collection vs. RAII) are significant, but they don't prevent Claude from producing working code. The model generates syntactically valid Lattice from Rust patterns and then adjusts the semantics when corrected.&lt;/p&gt;
&lt;p&gt;This has implications for language design. If you want AI tooling to support your language from day one, without waiting for it to appear in training data, design your syntax to rhyme with something popular. Lattice's resemblance to Rust wasn't designed for AI, but it is the reason AI can write it. A language with a radically different syntax (APL, Forth, J) would be much harder for Claude to learn from examples alone, even if the semantics were simpler.&lt;/p&gt;
&lt;p&gt;The reverse is also true: a language with familiar syntax but deeply unfamiliar semantics (like Lattice's reactive phase system) will produce code that looks correct but occasionally behaves wrong. Claude's Rust intuitions are strong enough to generate valid-looking phase code, but the model sometimes falls back to Rust's mutation model when the Lattice-specific behavior is more constrained. The syntax transfers perfectly. The semantics require teaching.&lt;/p&gt;
&lt;h3&gt;Implications for Language Designers&lt;/h3&gt;
&lt;p&gt;If you're designing a new programming language in 2026, the AI tooling question is unavoidable. Your language won't have IDE plugins, autocompleters, or AI coding assistants on day one. The community doesn't exist yet. The training data doesn't include your language. Every other language your users work with has Copilot or Claude support. Yours doesn't.&lt;/p&gt;
&lt;p&gt;Lattice suggests a strategy: make your syntax rhyme with something an LLM already knows.&lt;/p&gt;
&lt;p&gt;This isn't about copying Rust. Lattice has genuinely novel semantics. The phase system, the reactive bindings, the alloy structs with per-field phase annotations: none of these exist in Rust. But they're expressed through syntax (keywords, braces, type annotations, block expressions) that maps directly to Rust's structural patterns. Claude can parse the syntax without help and learn the semantics from examples.&lt;/p&gt;
&lt;p&gt;The alternative is designing a syntax so novel that LLMs can't bootstrap from existing knowledge. This is a legitimate design choice; some ideas genuinely need new notation. But the cost is high: your users won't get AI assistance until your language appears in training data, which requires the language to become popular first, which is harder without AI assistance. It's a chicken-and-egg problem that familiar syntax sidesteps.&lt;/p&gt;
&lt;p&gt;The practical recommendation: novel semantics, familiar syntax. Invent the ideas. Borrow the notation. Let the LLM cross the bridge on syntax and learn the semantics on the other side.&lt;/p&gt;
&lt;h3&gt;What This Means for the "AI Writes Code" Conversation&lt;/h3&gt;
&lt;p&gt;The Lattice case study complicates the popular narrative about AI code generation in both directions.&lt;/p&gt;
&lt;p&gt;For the optimists who say AI can learn anything: Claude cannot invent the reactive phase system. It cannot propose &lt;code&gt;bond&lt;/code&gt; or &lt;code&gt;seed&lt;/code&gt; or &lt;code&gt;anneal&lt;/code&gt; without being told they exist. The novel constructs, the ones that make Lattice a genuinely different language rather than a Rust reskin, are invisible to the model until explicitly specified. AI transfer learning has limits, and those limits are at the boundaries of what the training data contains.&lt;/p&gt;
&lt;p&gt;For the pessimists who say AI can only regurgitate training data: Claude wrote a 5,000-line self-hosted compiler for a language it has never seen. That is not regurgitation. The compiler produces correct bytecode for constructs (phase transitions, reactive bonds, per-field phase annotations) that exist in no other language. The model assembled knowledge from its understanding of compilers generally, Rust syntax specifically, and the Lattice specification I provided, and produced something genuinely new. Antirez called this "assembling knowledge" when he observed the same phenomenon with his &lt;a href="https://baud.rs/KJoorR"&gt;Z80 emulator project&lt;/a&gt;. I think that's the right term.&lt;/p&gt;
&lt;p&gt;The truth is somewhere that neither camp wants to occupy. LLMs can go far beyond their training data when the new territory is structurally adjacent to something they know. They cannot go beyond their training data when the new territory is structurally novel. The boundary between "adjacent" and "novel" is syntax. Familiar syntax is a bridge. Novel syntax is a wall. Novel semantics behind familiar syntax is a trap: the model crosses the bridge confidently and then occasionally falls.&lt;/p&gt;
&lt;p&gt;Lattice exists in all three zones simultaneously. Its Rust-like surface lets Claude cross the bridge. Its phase system is the novel semantics behind familiar syntax. And the self-hosted compiler is proof that the bridge, once crossed, supports weight that no one expected.&lt;/p&gt;
&lt;p&gt;I didn't set out to test the limits of LLM language understanding when I designed Lattice. I set out to build a programming language with a novel approach to mutability. The AI dimension was a side effect: I used Claude Code as my development tool because I use Claude Code for everything, and the language happened to be learnable because it happened to look like Rust. But the result is one of the more complete demonstrations of LLM transfer learning applied to a genuinely novel domain: not just writing programs in an unfamiliar language, but writing a compiler for that language, in that language, from a specification that exists nowhere in the training data.&lt;/p&gt;
&lt;p&gt;The 4,955 lines of &lt;code&gt;latc.lat&lt;/code&gt; are the proof that LLMs can go further than their training data when the conditions are right. The conditions are: familiar syntax, clear specification, accessible examples, and a human who knows when the model is wrong. Remove any one of those and the compiler doesn't get written. But with all four in place, the model produces something that works, that compiles, and that no human typed by hand.&lt;/p&gt;</description><category>ai</category><category>claude</category><category>compilers</category><category>language design</category><category>lattice</category><category>llm</category><category>phase system</category><category>programming languages</category><category>rust</category><category>self-hosting</category><guid>https://tinycomputers.io/posts/teaching-llms-languages-theyve-never-seen.html</guid><pubDate>Thu, 02 Apr 2026 13:00:00 GMT</pubDate></item><item><title>A Stack-Based Bytecode VM for Lattice: 100 Opcodes, Serialization, and a Self-Hosted Compiler</title><link>https://tinycomputers.io/posts/a-stack-based-bytecode-vm-for-lattice.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/a-stack-based-bytecode-vm-for-lattice_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;29 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;When I &lt;a href="https://tinycomputers.io/posts/from-tree-walker-to-bytecode-vm-compiling-lattice.html"&gt;first wrote about&lt;/a&gt; Lattice's move from a tree-walking interpreter to a bytecode VM, the instruction set had 62 opcodes, concurrency primitives still delegated to the tree-walker, and programs couldn't be serialized. The VM was a foundation, correct and complete enough to become the default, but clearly a starting point.&lt;/p&gt;
&lt;p&gt;That was ten versions ago. The bytecode VM now has 100 opcodes, compiles concurrency primitives into standalone sub-chunks with zero AST dependency at runtime, ships a binary serialization format for ahead-of-time compilation, includes an ephemeral bump arena for short-lived string temporaries, and (perhaps most satisfyingly) has a self-hosted compiler written entirely in Lattice that produces the same &lt;code&gt;.latc&lt;/code&gt; bytecode files as the C implementation.&lt;/p&gt;
&lt;p&gt;This post walks through what changed and why. The full technical treatment is available as a &lt;a href="https://tinycomputers.io/papers/lattice_vm.pdf"&gt;research paper&lt;/a&gt;; this is the practitioner's version.&lt;/p&gt;
&lt;h3&gt;Why Keep Going&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://tinycomputers.io/posts/from-tree-walker-to-bytecode-vm-compiling-lattice.html"&gt;original bytecode VM&lt;/a&gt; solved the immediate problems: it eliminated recursive AST dispatch overhead and gave Lattice a single execution path for file execution, the REPL, and the WASM playground. But three issues remained.&lt;/p&gt;
&lt;p&gt;First, &lt;code&gt;OP_SCOPE&lt;/code&gt; and &lt;code&gt;OP_SELECT&lt;/code&gt; (Lattice's structured concurrency opcodes) still stored AST node pointers in the constant pool and dropped into the tree-walking evaluator at runtime. This meant the AST had to stay alive during concurrent execution, which defeated one of the main motivations for having a bytecode VM in the first place.&lt;/p&gt;
&lt;p&gt;Second, the AST dependency made serialization impossible. You can serialize bytecode to a file, but you can't easily serialize an arbitrary C pointer to an AST node. Programs had to be parsed and compiled on every run.&lt;/p&gt;
&lt;p&gt;Third, the dispatch loop used a plain &lt;code&gt;switch&lt;/code&gt; statement. Not a crisis, but computed goto dispatch is a well-known improvement for bytecode interpreters, and leaving it on the table felt like a waste.&lt;/p&gt;
&lt;p&gt;All three problems are solved now. Let me start with the instruction set, since everything else builds on it.&lt;/p&gt;
&lt;h3&gt;100 Opcodes&lt;/h3&gt;
&lt;p&gt;The instruction set grew from 62 to 100 opcodes, organized into 16 functional categories:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Representative opcodes&lt;/th&gt;
&lt;th style="text-align: right;"&gt;Count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Stack manipulation&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CONSTANT&lt;/code&gt;, &lt;code&gt;NIL&lt;/code&gt;, &lt;code&gt;TRUE&lt;/code&gt;, &lt;code&gt;FALSE&lt;/code&gt;, &lt;code&gt;UNIT&lt;/code&gt;, &lt;code&gt;POP&lt;/code&gt;, &lt;code&gt;DUP&lt;/code&gt;, &lt;code&gt;SWAP&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Arithmetic/logical&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ADD&lt;/code&gt;, &lt;code&gt;SUB&lt;/code&gt;, &lt;code&gt;MUL&lt;/code&gt;, &lt;code&gt;DIV&lt;/code&gt;, &lt;code&gt;MOD&lt;/code&gt;, &lt;code&gt;NEG&lt;/code&gt;, &lt;code&gt;NOT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bitwise&lt;/td&gt;
&lt;td&gt;&lt;code&gt;BIT_AND&lt;/code&gt;, &lt;code&gt;BIT_OR&lt;/code&gt;, &lt;code&gt;BIT_XOR&lt;/code&gt;, &lt;code&gt;BIT_NOT&lt;/code&gt;, &lt;code&gt;LSHIFT&lt;/code&gt;, &lt;code&gt;RSHIFT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Comparison&lt;/td&gt;
&lt;td&gt;&lt;code&gt;EQ&lt;/code&gt;, &lt;code&gt;NEQ&lt;/code&gt;, &lt;code&gt;LT&lt;/code&gt;, &lt;code&gt;GT&lt;/code&gt;, &lt;code&gt;LTEQ&lt;/code&gt;, &lt;code&gt;GTEQ&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;String&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CONCAT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Variables&lt;/td&gt;
&lt;td&gt;&lt;code&gt;GET/SET_LOCAL&lt;/code&gt;, &lt;code&gt;GET/SET/DEFINE_GLOBAL&lt;/code&gt;, &lt;code&gt;GET/SET_UPVALUE&lt;/code&gt;, &lt;code&gt;CLOSE_UPVALUE&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Control flow&lt;/td&gt;
&lt;td&gt;&lt;code&gt;JUMP&lt;/code&gt;, &lt;code&gt;JUMP_IF_FALSE&lt;/code&gt;, &lt;code&gt;JUMP_IF_TRUE&lt;/code&gt;, &lt;code&gt;JUMP_IF_NOT_NIL&lt;/code&gt;, &lt;code&gt;LOOP&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Functions&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CALL&lt;/code&gt;, &lt;code&gt;CLOSURE&lt;/code&gt;, &lt;code&gt;RETURN&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Iterators&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ITER_INIT&lt;/code&gt;, &lt;code&gt;ITER_NEXT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data structures&lt;/td&gt;
&lt;td&gt;&lt;code&gt;BUILD_ARRAY&lt;/code&gt;, &lt;code&gt;INDEX&lt;/code&gt;, &lt;code&gt;SET_INDEX&lt;/code&gt;, &lt;code&gt;GET_FIELD&lt;/code&gt;, &lt;code&gt;INVOKE&lt;/code&gt;, etc.&lt;/td&gt;
&lt;td style="text-align: right;"&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Exceptions/defer&lt;/td&gt;
&lt;td&gt;&lt;code&gt;PUSH_EXCEPTION_HANDLER&lt;/code&gt;, &lt;code&gt;THROW&lt;/code&gt;, &lt;code&gt;DEFER_PUSH&lt;/code&gt;, &lt;code&gt;DEFER_RUN&lt;/code&gt;, etc.&lt;/td&gt;
&lt;td style="text-align: right;"&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase system&lt;/td&gt;
&lt;td&gt;&lt;code&gt;FREEZE&lt;/code&gt;, &lt;code&gt;THAW&lt;/code&gt;, &lt;code&gt;CLONE&lt;/code&gt;, &lt;code&gt;MARK_FLUID&lt;/code&gt;, &lt;code&gt;REACT&lt;/code&gt;, &lt;code&gt;BOND&lt;/code&gt;, &lt;code&gt;SEED&lt;/code&gt;, etc.&lt;/td&gt;
&lt;td style="text-align: right;"&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Builtins/modules&lt;/td&gt;
&lt;td&gt;&lt;code&gt;PRINT&lt;/code&gt;, &lt;code&gt;IMPORT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Concurrency&lt;/td&gt;
&lt;td&gt;&lt;code&gt;SCOPE&lt;/code&gt;, &lt;code&gt;SELECT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Integer fast paths&lt;/td&gt;
&lt;td&gt;&lt;code&gt;INC_LOCAL&lt;/code&gt;, &lt;code&gt;DEC_LOCAL&lt;/code&gt;, &lt;code&gt;ADD_INT&lt;/code&gt;, &lt;code&gt;SUB_INT&lt;/code&gt;, &lt;code&gt;LOAD_INT8&lt;/code&gt;, etc.&lt;/td&gt;
&lt;td style="text-align: right;"&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wide variants&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CONSTANT_16&lt;/code&gt;, &lt;code&gt;GET_GLOBAL_16&lt;/code&gt;, &lt;code&gt;SET_GLOBAL_16&lt;/code&gt;, &lt;code&gt;DEFINE_GLOBAL_16&lt;/code&gt;, &lt;code&gt;CLOSURE_16&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Special&lt;/td&gt;
&lt;td&gt;&lt;code&gt;RESET_EPHEMERAL&lt;/code&gt;, &lt;code&gt;HALT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;&lt;strong&gt;100&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The growth came from three directions: the integer fast-path opcodes (8 new), the wide constant variants (5 new), and the concurrency/arena opcodes. Let me explain each.&lt;/p&gt;
&lt;h4&gt;Integer Fast Paths&lt;/h4&gt;
&lt;p&gt;Tight loops like &lt;code&gt;for i in 0..1000&lt;/code&gt; spend most of their time incrementing a counter and comparing it to a bound. The generic &lt;code&gt;OP_ADD&lt;/code&gt; has to check whether its operands are integers, floats, or strings (for concatenation), which adds branching overhead on every iteration.&lt;/p&gt;
&lt;p&gt;The integer fast-path opcodes (&lt;code&gt;OP_ADD_INT&lt;/code&gt;, &lt;code&gt;OP_SUB_INT&lt;/code&gt;, &lt;code&gt;OP_MUL_INT&lt;/code&gt;, &lt;code&gt;OP_LT_INT&lt;/code&gt;, &lt;code&gt;OP_LTEQ_INT&lt;/code&gt;) skip the type check entirely and operate directly on &lt;code&gt;int64_t&lt;/code&gt; values. &lt;code&gt;OP_INC_LOCAL&lt;/code&gt; and &lt;code&gt;OP_DEC_LOCAL&lt;/code&gt; handle the &lt;code&gt;i += 1&lt;/code&gt; and &lt;code&gt;i -= 1&lt;/code&gt; patterns as single-byte instructions that modify the stack slot in place, no push or pop required.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;OP_LOAD_INT8&lt;/code&gt; encodes a signed byte directly in the instruction stream. The integer &lt;code&gt;42&lt;/code&gt; becomes two bytes (&lt;code&gt;OP_LOAD_INT8&lt;/code&gt;, &lt;code&gt;0x2A&lt;/code&gt;) instead of a three-byte &lt;code&gt;OP_CONSTANT&lt;/code&gt; plus an eight-byte constant pool entry. Any integer in [-128, 127] gets this treatment.&lt;/p&gt;
&lt;h4&gt;Wide Constant Variants&lt;/h4&gt;
&lt;p&gt;The original instruction set used a single byte for constant pool indices, limiting each chunk to 256 constants. This is fine for most functions, but the self-hosted compiler (a 2,000-line Lattice program compiled as a single top-level script) blows past that limit easily.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;OP_CONSTANT_16&lt;/code&gt;, &lt;code&gt;OP_GET_GLOBAL_16&lt;/code&gt;, &lt;code&gt;OP_SET_GLOBAL_16&lt;/code&gt;, &lt;code&gt;OP_DEFINE_GLOBAL_16&lt;/code&gt;, and &lt;code&gt;OP_CLOSURE_16&lt;/code&gt; use two-byte big-endian indices, supporting up to 65,536 constants per chunk. The compiler automatically switches to wide variants when an index exceeds 255.&lt;/p&gt;
&lt;h3&gt;The Compiler&lt;/h3&gt;
&lt;p&gt;The bytecode compiler performs a single-pass walk over the AST. It maintains a chain of &lt;code&gt;Compiler&lt;/code&gt; structs linked via &lt;code&gt;enclosing&lt;/code&gt; pointers, one per function being compiled. Variable references resolve through three tiers: local (scan the current compiler's locals array), upvalue (recursively check enclosing compilers), and global (fall through to &lt;code&gt;OP_GET_GLOBAL&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;Three compilation modes handle different use cases. &lt;code&gt;compile()&lt;/code&gt; is the standard file mode: it compiles all declarations and emits an implicit call to &lt;code&gt;main()&lt;/code&gt; if one is defined. &lt;code&gt;compile_module()&lt;/code&gt; is for imports, identical to &lt;code&gt;compile()&lt;/code&gt; but skips the auto-call. &lt;code&gt;compile_repl()&lt;/code&gt; preserves the last expression on the stack as the iteration's return value (displayed with &lt;code&gt;=&amp;gt;&lt;/code&gt; prefix) and keeps the known-enum table alive across REPL iterations so enum declarations persist.&lt;/p&gt;
&lt;p&gt;The compiler implements several optimizations during code generation. Binary operations on literal operands are folded at compile time: &lt;code&gt;3 + 4&lt;/code&gt; emits a single &lt;code&gt;OP_LOAD_INT8 7&lt;/code&gt; rather than two loads and an &lt;code&gt;OP_ADD&lt;/code&gt;. The pattern &lt;code&gt;x += 1&lt;/code&gt; is detected and emitted as the single-byte &lt;code&gt;OP_INC_LOCAL&lt;/code&gt;, which modifies the stack slot in place. And every statement is wrapped by &lt;code&gt;compile_stmt_reset()&lt;/code&gt;, which appends &lt;code&gt;OP_RESET_EPHEMERAL&lt;/code&gt; to trigger the ephemeral arena cleanup.&lt;/p&gt;
&lt;h3&gt;Computed Goto Dispatch&lt;/h3&gt;
&lt;p&gt;The dispatch loop now uses GCC/Clang's labels-as-values extension for computed goto:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="cp"&gt;#ifdef VM_USE_COMPUTED_GOTO&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;static&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;void&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;dispatch_table&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;OP_CONSTANT&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;lbl_OP_CONSTANT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;OP_NIL&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;lbl_OP_NIL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;// ... all 100 entries&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="cp"&gt;#endif&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(;;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;READ_BYTE&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="cp"&gt;#ifdef VM_USE_COMPUTED_GOTO&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;goto&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;dispatch_table&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="cp"&gt;#endif&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;switch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Each opcode handler ends with a &lt;code&gt;goto *dispatch_table[READ_BYTE()]&lt;/code&gt; rather than breaking back to the top of the loop. This eliminates the switch statement's bounds check and branch table indirection, replacing it with a single indirect jump. The CPU's branch predictor sees different jump sites for different opcodes, which improves prediction accuracy compared to a single switch that all opcodes funnel through.&lt;/p&gt;
&lt;p&gt;On platforms without the extension, it falls back to a standard switch. The VM works correctly either way.&lt;/p&gt;
&lt;h3&gt;Pre-Compiled Concurrency&lt;/h3&gt;
&lt;p&gt;This is the change I'm most pleased with, because it solves the problem cleanly.&lt;/p&gt;
&lt;p&gt;Lattice has three concurrency primitives: &lt;code&gt;scope&lt;/code&gt; defines a concurrent region, &lt;code&gt;spawn&lt;/code&gt; launches a task within that region, and &lt;code&gt;select&lt;/code&gt; multiplexes over channels. In the tree-walker, these work by passing AST node pointers to spawned threads, which then evaluate the subtrees independently. The bytecode VM's original implementation did the same thing: &lt;code&gt;OP_SCOPE&lt;/code&gt; stored an &lt;code&gt;Expr*&lt;/code&gt; pointer in the constant pool and called the tree-walking evaluator at runtime.&lt;/p&gt;
&lt;p&gt;The solution is to compile each concurrent body into a standalone &lt;code&gt;Chunk&lt;/code&gt; at compile time. The compiler provides two helpers: &lt;code&gt;compile_sub_body()&lt;/code&gt; for statement blocks and &lt;code&gt;compile_sub_expr()&lt;/code&gt; for expressions. Each creates a fresh &lt;code&gt;Compiler&lt;/code&gt;, compiles the code into a new chunk, emits &lt;code&gt;OP_HALT&lt;/code&gt;, and stores the resulting chunk in the parent's constant pool as a &lt;code&gt;VAL_CLOSURE&lt;/code&gt; constant.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;OP_SCOPE&lt;/code&gt; uses variable-length encoding: a spawn count, a sync body chunk index, and one chunk index per spawn body. At runtime, the VM:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Exports locals&lt;/strong&gt; to the global environment using the &lt;code&gt;local_names&lt;/code&gt; debug table, so sub-chunks can access parent variables via &lt;code&gt;OP_GET_GLOBAL&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Runs the sync body&lt;/strong&gt; (if present) via a recursive &lt;code&gt;vm_run()&lt;/code&gt; call&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Spawns threads&lt;/strong&gt; for each spawn body, each running on a cloned VM&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Joins&lt;/strong&gt; all threads and propagates errors&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;code&gt;OP_SELECT&lt;/code&gt; similarly encodes per-arm metadata: flags, channel expression chunk index, body chunk index, and binding name index. The VM evaluates channel expressions, polls for readiness, and executes the winning arm.&lt;/p&gt;
&lt;p&gt;The key insight is that sub-chunks run as &lt;code&gt;FUNC_SCRIPT&lt;/code&gt; without lexical access to the parent's locals. Since they can't use upvalues to reach into the parent frame, the VM exports the parent's live locals into the global environment before running any sub-chunk, using a pushed scope that gets popped after all sub-chunks complete. This is slightly more expensive than true lexical capture, but it keeps the sub-chunks completely self-contained: no AST, no parent frame dependency, fully serializable.&lt;/p&gt;
&lt;h3&gt;Bytecode Serialization&lt;/h3&gt;
&lt;p&gt;With AST dependency eliminated, serialization becomes straightforward. The &lt;code&gt;.latc&lt;/code&gt; binary format starts with an 8-byte header:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;[4C 41 54 43]  magic: "LATC"
[01 00]        format version: 1
[00 00]        reserved
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The rest is a recursive chunk encoding: code length + bytecode bytes, line numbers for source mapping, typed constants (with a one-byte type tag for each), and local name debug info. Constants use seven type tags:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: right;"&gt;Tag&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Encoding&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;0&lt;/td&gt;
&lt;td&gt;Int&lt;/td&gt;
&lt;td&gt;8-byte signed LE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;1&lt;/td&gt;
&lt;td&gt;Float&lt;/td&gt;
&lt;td&gt;8-byte IEEE 754&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;td&gt;Bool&lt;/td&gt;
&lt;td&gt;1 byte&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;3&lt;/td&gt;
&lt;td&gt;String&lt;/td&gt;
&lt;td&gt;length-prefixed (u32 + bytes)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;4&lt;/td&gt;
&lt;td&gt;Nil&lt;/td&gt;
&lt;td&gt;no payload&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;5&lt;/td&gt;
&lt;td&gt;Unit&lt;/td&gt;
&lt;td&gt;no payload&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;6&lt;/td&gt;
&lt;td&gt;Closure&lt;/td&gt;
&lt;td&gt;param count + variadic flag + recursive sub-chunk&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The &lt;code&gt;Closure&lt;/code&gt; tag is what makes this recursive: a function constant contains its parameter metadata followed by a complete serialized sub-chunk. Nested functions serialize naturally to arbitrary depth.&lt;/p&gt;
&lt;p&gt;The CLI integrates this cleanly:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="c1"&gt;# Compile to .latc&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;compile&lt;span class="w"&gt; &lt;/span&gt;input.lat&lt;span class="w"&gt; &lt;/span&gt;-o&lt;span class="w"&gt; &lt;/span&gt;output.latc

&lt;span class="c1"&gt;# Run pre-compiled bytecode (auto-detects .latc suffix)&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;output.latc

&lt;span class="c1"&gt;# Or compile and run in one step (the default)&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;input.lat
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Loading validates magic bytes, checks the format version, and uses a bounds-checking &lt;code&gt;ByteReader&lt;/code&gt; that produces descriptive error messages for truncated or malformed inputs.&lt;/p&gt;
&lt;h3&gt;The Ephemeral Bump Arena&lt;/h3&gt;
&lt;p&gt;String concatenation is a common source of short-lived allocations. An expression like &lt;code&gt;"hello " + name + "!"&lt;/code&gt; creates intermediate strings that are immediately consumed and discarded. In a language with deep-clone-on-read semantics, these temporaries add up.&lt;/p&gt;
&lt;p&gt;The ephemeral bump arena is a simple optimization: string concatenation in &lt;code&gt;OP_ADD&lt;/code&gt; and &lt;code&gt;OP_CONCAT&lt;/code&gt; allocates into a bump arena (&lt;code&gt;vm-&amp;gt;ephemeral&lt;/code&gt;) instead of the general-purpose heap. These allocations are tagged with &lt;code&gt;REGION_EPHEMERAL&lt;/code&gt;, and &lt;code&gt;OP_RESET_EPHEMERAL&lt;/code&gt; (emitted by the compiler at every statement boundary) resets the arena in O(1), reclaiming all temporary strings at once.&lt;/p&gt;
&lt;p&gt;The tricky part is escape analysis. If a temporary string gets assigned to a global variable, stored in an array, or passed to a compiled closure, it needs to be promoted out of the ephemeral arena before the arena is reset. The VM handles this at specific escape points: &lt;code&gt;OP_DEFINE_GLOBAL&lt;/code&gt;, &lt;code&gt;OP_CALL&lt;/code&gt; (for compiled closures), &lt;code&gt;array.push&lt;/code&gt;, and &lt;code&gt;OP_SET_INDEX_LOCAL&lt;/code&gt;. Each of these calls &lt;code&gt;vm_promote_value()&lt;/code&gt;, which deep-clones the string to the regular heap if its region is ephemeral.&lt;/p&gt;
&lt;p&gt;The arena uses a page-based allocator with 4 KB pages. Resetting doesn't free pages; it just moves the bump pointer back to zero, so subsequent allocations reuse the same memory without any &lt;code&gt;malloc&lt;/code&gt;/&lt;code&gt;free&lt;/code&gt; overhead. The full design and safety proof are covered in a &lt;a href="https://tinycomputers.io/papers/lattice_arena_safety.pdf"&gt;companion paper&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;Closures and the Storage Hack&lt;/h3&gt;
&lt;p&gt;The upvalue system hasn't changed architecturally since the &lt;a href="https://tinycomputers.io/posts/from-tree-walker-to-bytecode-vm-compiling-lattice.html"&gt;first VM post&lt;/a&gt;; it's still the Lua-inspired open/closed model where &lt;code&gt;ObjUpvalue&lt;/code&gt; structs start pointing into the stack and get closed (deep-cloned to the heap) when variables go out of scope. But the encoding grew to accommodate the wider instruction set.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;OP_CLOSURE&lt;/code&gt; uses variable-length encoding: a constant pool index for the function's compiled chunk, an upvalue count, and then &lt;code&gt;[is_local, index]&lt;/code&gt; byte pairs for each captured variable. &lt;code&gt;OP_CLOSURE_16&lt;/code&gt; uses a two-byte big-endian function index for chunks with more than 256 constants.&lt;/p&gt;
&lt;p&gt;The storage hack remains unchanged: &lt;code&gt;closure.body&lt;/code&gt; is set to NULL, &lt;code&gt;closure.native_fn&lt;/code&gt; is repurposed as the Chunk pointer, &lt;code&gt;closure.captured_env&lt;/code&gt; holds an &lt;code&gt;ObjUpvalue**&lt;/code&gt; cast, and &lt;code&gt;region_id&lt;/code&gt; stores the upvalue count. A sentinel value &lt;code&gt;VM_NATIVE_MARKER&lt;/code&gt; distinguishes C-native functions from compiled closures:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="cp"&gt;#define VM_NATIVE_MARKER ((struct Expr **)(uintptr_t)0x1)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;A closure with &lt;code&gt;body == NULL&lt;/code&gt; and &lt;code&gt;native_fn != NULL&lt;/code&gt; is either a C native (if &lt;code&gt;default_values == VM_NATIVE_MARKER&lt;/code&gt;) or a compiled bytecode function (otherwise). This avoids adding VM-specific fields to the &lt;code&gt;LatValue&lt;/code&gt; union, which matters when values are deep-cloned frequently.&lt;/p&gt;
&lt;h3&gt;The Self-Hosted Compiler&lt;/h3&gt;
&lt;p&gt;The file &lt;code&gt;compiler/latc.lat&lt;/code&gt; is a bytecode compiler written entirely in Lattice, approximately 2,060 lines that read &lt;code&gt;.lat&lt;/code&gt; source, produce bytecode, and write &lt;code&gt;.latc&lt;/code&gt; files using the same binary format as the C implementation:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="c1"&gt;# Use the self-hosted compiler&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;compiler/latc.lat&lt;span class="w"&gt; &lt;/span&gt;input.lat&lt;span class="w"&gt; &lt;/span&gt;output.latc

&lt;span class="c1"&gt;# Run the result&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;output.latc
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The architecture mirrors the C compiler: lexing via the built-in &lt;code&gt;tokenize()&lt;/code&gt; function, a recursive-descent parser, single-pass code emission, and scope management with upvalue resolution. But Lattice's value semantics required some creative workarounds.&lt;/p&gt;
&lt;p&gt;The biggest constraint is that structs and maps are pass-by-value. In C, the compiler uses a &lt;code&gt;Compiler&lt;/code&gt; struct with mutable fields: local arrays, scope depth, a chunk pointer. In Lattice, passing a struct to a function creates a copy, so mutations in the callee don't propagate back. The self-hosted compiler works around this with parallel global arrays: &lt;code&gt;code&lt;/code&gt;, &lt;code&gt;constants&lt;/code&gt;, &lt;code&gt;c_lines&lt;/code&gt;, &lt;code&gt;local_names&lt;/code&gt;, &lt;code&gt;local_depths&lt;/code&gt;, &lt;code&gt;local_captured&lt;/code&gt;. Since array mutations via &lt;code&gt;.push()&lt;/code&gt; and index assignment are in-place (via &lt;code&gt;resolve_lvalue&lt;/code&gt;), global arrays work where structs don't.&lt;/p&gt;
&lt;p&gt;Nested function compilation uses explicit &lt;code&gt;save_compiler()&lt;/code&gt; / &lt;code&gt;restore_compiler()&lt;/code&gt; functions that copy all global arrays to local temporaries and back. It's verbose but correct. The Buffer type (used for serialization output) is also pass-by-value, so a global &lt;code&gt;ser_buf&lt;/code&gt; accumulates serialized bytes across function calls.&lt;/p&gt;
&lt;p&gt;Other language constraints: no &lt;code&gt;else if&lt;/code&gt; (requires &lt;code&gt;else { if ... }&lt;/code&gt; or &lt;code&gt;match&lt;/code&gt;), mandatory type annotations on function parameters (&lt;code&gt;fn foo(a: any)&lt;/code&gt;), and &lt;code&gt;test&lt;/code&gt; is a keyword so you can't use it as an identifier.&lt;/p&gt;
&lt;p&gt;The self-hosted compiler currently handles expressions, variables, functions with closures, control flow (if/else, while, loop, for, break, continue, match), structs, enums, exceptions, defer, string interpolation, and imports. Not yet implemented: concurrency primitives and advanced phase operations (react, bond, seed). The bootstrapping chain is:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;latc.lat → [C VM interprets] → output.latc → [C VM executes]
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Full self-hosting (where &lt;code&gt;latc.lat&lt;/code&gt; compiles itself) requires adding concurrency support and closing the remaining feature gaps.&lt;/p&gt;
&lt;h3&gt;The VM Execution Engine&lt;/h3&gt;
&lt;p&gt;The VM maintains a 4,096-slot value stack, a 256-frame call stack, an exception handler stack (64 entries), a defer stack (256 entries), a global environment, the open upvalue linked list, the ephemeral arena, and a module cache. A pre-allocated &lt;code&gt;fast_args[16]&lt;/code&gt; buffer avoids heap allocation for most native function calls.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;OP_CALL&lt;/code&gt; instruction discriminates three callee types. Native C functions (marked with &lt;code&gt;VM_NATIVE_MARKER&lt;/code&gt;) get the fast path: arguments are popped into &lt;code&gt;fast_args&lt;/code&gt;, the C function pointer is invoked, and the return value is pushed. No call frame allocated. Compiled closures get the full treatment: the VM promotes ephemeral values in the current frame (so the callee's &lt;code&gt;OP_RESET_EPHEMERAL&lt;/code&gt; doesn't invalidate the caller's temporaries), then pushes a new &lt;code&gt;CallFrame&lt;/code&gt; with the instruction pointer at byte 0 of the callee's chunk. Callable structs look up a constructor-named field and dispatch accordingly.&lt;/p&gt;
&lt;p&gt;Exception handling uses a handler stack. &lt;code&gt;OP_PUSH_EXCEPTION_HANDLER&lt;/code&gt; records the current IP, chunk, call frame index, and stack top. When &lt;code&gt;OP_THROW&lt;/code&gt; executes, the nearest handler is popped, the call frame and value stacks are unwound, the error value is pushed, and execution resumes at the handler's saved IP. Deferred blocks interact correctly: &lt;code&gt;OP_DEFER_RUN&lt;/code&gt; executes all defer entries registered at or above the current frame before the frame is popped by &lt;code&gt;OP_RETURN&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Iterators avoid closure allocation entirely. &lt;code&gt;OP_ITER_INIT&lt;/code&gt; converts a range or array into an internal iterator occupying two stack slots (collection + cursor index). &lt;code&gt;OP_ITER_NEXT&lt;/code&gt; advances the cursor, pushes the next element, or jumps to a specified offset when exhausted. The tree-walker used closure-based iterators for &lt;code&gt;for&lt;/code&gt; loops; the bytecode version is simpler and avoids the allocation.&lt;/p&gt;
&lt;h3&gt;Ref&amp;lt;T&amp;gt;: The Escape Hatch from Value Semantics&lt;/h3&gt;
&lt;p&gt;Everything described so far operates in a world where values are deep-cloned on every read. Maps are pass-by-value. Structs are pass-by-value. Pass a collection to a function and the function gets its own copy; mutations don't propagate back. This is correct and eliminates aliasing bugs, but it creates a real problem: how do you share mutable state when you actually need to?&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Ref&amp;lt;T&amp;gt;&lt;/code&gt; is the answer. It's a reference-counted shared mutable wrapper, the one type in Lattice that deliberately breaks value semantics:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;LatRef&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="c1"&gt;// the wrapped inner value&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="n"&gt;refcount&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="c1"&gt;// reference count&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;When a &lt;code&gt;Ref&lt;/code&gt; is cloned (which happens on every variable read, like everything else), the VM bumps the refcount and copies the pointer. It does &lt;em&gt;not&lt;/em&gt; deep-clone the inner value. Multiple copies of a &lt;code&gt;Ref&lt;/code&gt; share the same underlying &lt;code&gt;LatRef&lt;/code&gt;, so mutations through one are visible through all others. This is the explicit opt-in to reference semantics that the rest of the language avoids.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;let r = Ref::new([1, 2, 3])
let r2 = r              // shallow copy — same LatRef
r.push(4)
print(r2.get())          // [1, 2, 3, 4] — shared state
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The VM provides transparent proxying: &lt;code&gt;OP_INDEX&lt;/code&gt;, &lt;code&gt;OP_SET_INDEX&lt;/code&gt;, and &lt;code&gt;OP_INVOKE&lt;/code&gt; all check for &lt;code&gt;VAL_REF&lt;/code&gt; and delegate to the inner value. Indexing into a &lt;code&gt;Ref&amp;lt;Array&amp;gt;&lt;/code&gt; indexes the inner array. Calling &lt;code&gt;.push()&lt;/code&gt; on a &lt;code&gt;Ref&amp;lt;Array&amp;gt;&lt;/code&gt; mutates the inner array directly. At the language level, a Ref mostly behaves like the value it wraps; you just get shared mutation instead of isolated copies.&lt;/p&gt;
&lt;p&gt;Ref has its own methods (&lt;code&gt;get()&lt;/code&gt;/&lt;code&gt;deref()&lt;/code&gt; to clone the inner value out, &lt;code&gt;set(v)&lt;/code&gt; to replace it, &lt;code&gt;inner_type()&lt;/code&gt; to inspect the wrapped type) plus proxied methods for whatever the inner value supports (map &lt;code&gt;set&lt;/code&gt;/&lt;code&gt;get&lt;/code&gt;/&lt;code&gt;keys&lt;/code&gt;, array &lt;code&gt;push&lt;/code&gt;/&lt;code&gt;pop&lt;/code&gt;, etc.).&lt;/p&gt;
&lt;p&gt;The phase system applies to Refs too. Freezing a Ref blocks all mutation: &lt;code&gt;set()&lt;/code&gt;, &lt;code&gt;push()&lt;/code&gt;, index assignment all check &lt;code&gt;obj-&amp;gt;phase == VTAG_CRYSTAL&lt;/code&gt; and error with "cannot set on a frozen Ref." This makes frozen Refs safe to share across concurrent boundaries; they're immutable handles to immutable data.&lt;/p&gt;
&lt;p&gt;This introduces a third memory management strategy alongside the dual-heap (mark-and-sweep for fluid values, arenas for crystal values) and the ephemeral bump arena. Refs use reference counting: &lt;code&gt;ref_retain()&lt;/code&gt; on clone, &lt;code&gt;ref_release()&lt;/code&gt; on free, with the inner value freed when the count hits zero. It's a deliberate trade-off: reference counting is simple and deterministic, and since Refs are the uncommon case (most Lattice code uses value semantics), the lack of cycle collection hasn't been an issue in practice.&lt;/p&gt;
&lt;h3&gt;Validation&lt;/h3&gt;
&lt;p&gt;The VM is validated by &lt;strong&gt;815 tests&lt;/strong&gt; covering every feature: arithmetic, closures, upvalues, phase transitions, exception handling, defer, iterators, data structures, concurrency, modules, bytecode serialization, and the self-hosted compiler.&lt;/p&gt;
&lt;p&gt;All 815 tests pass under both normal compilation and AddressSanitizer builds (&lt;code&gt;make asan&lt;/code&gt;), which dynamically checks for heap buffer overflows, use-after-free, stack buffer overflows, and memory leaks. For a VM with manual memory management, upvalue lifetime tracking, and an ephemeral arena that reclaims memory at statement boundaries, ASan validation is essential.&lt;/p&gt;
&lt;p&gt;Both execution modes, the bytecode VM (the default) and the tree-walker (&lt;code&gt;--tree-walk&lt;/code&gt;), share the same test suite and produce identical results:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;make&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="c1"&gt;# bytecode VM: 815 passed&lt;/span&gt;
make&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;TREE_WALK&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# tree-walker: 815 passed&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Feature parity is complete:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th style="text-align: center;"&gt;Tree-walker&lt;/th&gt;
&lt;th style="text-align: center;"&gt;Bytecode VM&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Phase system (freeze/thaw/clone/forge)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Closures with upvalues&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Exception handling (try/catch/throw)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Defer blocks&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pattern matching&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structs with methods&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enums with payloads&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Arrays, maps, tuples, sets, buffers&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Iterators (for-in, ranges)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Module imports&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Concurrency (scope/spawn/select)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Channels&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase reactions/bonds/seeds&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Contracts (require/ensure)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Variable tracking (history)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bytecode serialization (.latc)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;---&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Computed goto dispatch&lt;/td&gt;
&lt;td style="text-align: center;"&gt;---&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ephemeral bump arena&lt;/td&gt;
&lt;td style="text-align: center;"&gt;---&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Specialized integer ops&lt;/td&gt;
&lt;td style="text-align: center;"&gt;---&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The last four rows are VM-only features that have no tree-walker equivalent.&lt;/p&gt;
&lt;h3&gt;What's Next&lt;/h3&gt;
&lt;p&gt;The VM is feature-complete but not performance-optimized. The obvious next steps are register allocation to reduce stack traffic, type-specialized dispatch paths guided by runtime profiling, tail call optimization for recursive patterns, and constant pool deduplication across compilation units. Further out, the bytecode provides a natural intermediate representation for JIT compilation.&lt;/p&gt;
&lt;p&gt;On the self-hosting front, adding concurrency primitives to &lt;code&gt;latc.lat&lt;/code&gt; would close the gap to full self-compilation, where the Lattice compiler compiles itself, producing a &lt;code&gt;.latc&lt;/code&gt; file that can then compile other programs without the C implementation in the loop.&lt;/p&gt;
&lt;p&gt;The full technical details (including encoding diagrams, the complete opcode listing, compilation walkthroughs, and references to related work in Lua, CPython, YARV, and WebAssembly) are in the &lt;a href="https://tinycomputers.io/papers/lattice_vm.pdf"&gt;research paper&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The source code is at &lt;a href="https://baud.rs/fIe3gx"&gt;github.com/ajokela/lattice&lt;/a&gt;, and the project site is at &lt;a href="https://baud.rs/bwvnYT"&gt;lattice-lang.org&lt;/a&gt;.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;git clone https://github.com/ajokela/lattice.git
cd lattice &amp;amp;&amp;amp; make
./clat
&lt;/pre&gt;&lt;/div&gt;</description><category>bytecode</category><category>c</category><category>closures</category><category>compilers</category><category>concurrency</category><category>interpreters</category><category>language design</category><category>lattice</category><category>phase system</category><category>programming languages</category><category>self-hosting</category><category>serialization</category><category>upvalues</category><category>virtual machine</category><guid>https://tinycomputers.io/posts/a-stack-based-bytecode-vm-for-lattice.html</guid><pubDate>Fri, 20 Feb 2026 18:00:00 GMT</pubDate></item><item><title>Rue: Steve Klabnik's AI-Assisted Experiment in Memory Safety Without the Pain</title><link>https://tinycomputers.io/posts/rue-programming-language-review.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/rue-programming-language-review_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;30 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;h3&gt;The Pitch&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/rue-programming-language/rue-lang-dev-homepage.png" alt="The rue-lang.dev homepage showing the tagline 'Exploring memory safety that's easier to use' with feature cards for Early Stage, Familiar Syntax, and Native Compilation" style="float: right; max-width: 50%; margin: 0 0 1em 1.5em; border-radius: 4px;"&gt;&lt;/p&gt;
&lt;p&gt;Every few years, a new programming language appears that promises to fix what its predecessors got wrong. Most quietly vanish. A handful become infrastructure. What makes &lt;a href="https://baud.rs/rue-lang"&gt;Rue&lt;/a&gt; interesting is not that it promises to replace Rust (its creator explicitly says it won't), but that the person making it is one of the most qualified people alive to attempt the experiment.&lt;/p&gt;
&lt;p&gt;Steve Klabnik spent thirteen years in the Rust ecosystem. He co-authored &lt;a href="https://baud.rs/rust-book-nostarch"&gt;&lt;em&gt;The Rust Programming Language&lt;/em&gt;&lt;/a&gt;, the official book that has introduced more people to Rust than any other resource. He served on the Rust core team, led its documentation team, and worked at Mozilla during Rust's formative years. He later joined Oxide Computer Company, where he contributed to a scratch-built operating system written in Rust. Before all of that, he was a prolific contributor to Ruby on Rails and authored &lt;a href="https://baud.rs/rust-ruby-learn"&gt;&lt;em&gt;Rust for Rubyists&lt;/em&gt;&lt;/a&gt;, one of the earliest attempts to bridge the Ruby and Rust communities.&lt;/p&gt;
&lt;p&gt;If anyone has earned the right to look at Rust and say "I think we can do this differently," it's Klabnik. And that's exactly what Rue is: a research project asking whether memory safety without garbage collection can be achieved with less cognitive overhead than Rust demands.&lt;/p&gt;
&lt;p&gt;But Rue is also something else entirely: an experiment in whether a single person, assisted by an AI, can build a programming language from scratch without funding, without a team, and without a multi-year timeline. The compiler is written in Rust, designed by Klabnik, and implemented primarily by Claude, Anthropic's AI assistant. The project went from nothing to a working compiler in roughly two weeks, producing approximately 100,000 lines of Rust code across 700+ commits.&lt;/p&gt;
&lt;h3&gt;The Person Behind It&lt;/h3&gt;
&lt;p&gt;Understanding Rue requires understanding Klabnik's trajectory through programming language communities. He entered professional programming through Ruby, becoming one of the most prolific open-source contributors to the Rails ecosystem in the early 2010s. His final commits to Rails landed in late 2013, around the time he discovered Rust 0.5 during a Christmas visit to his parents' house in rural Pennsylvania.&lt;/p&gt;
&lt;p&gt;What followed was a thirteen-year involvement with Rust that few can match. Beyond the official book (now in its second edition, co-authored with Carol Nichols and published by No Starch Press), Klabnik shaped how the Rust community communicates, documents, and teaches. His work on Rust's documentation established patterns that other language communities later adopted. At Mozilla, he helped shepherd Rust through its 1.0 release. At Oxide, he worked on systems software at the lowest levels the language supports.&lt;/p&gt;
&lt;p&gt;Klabnik describes himself as having been an AI skeptic until 2025, when he found that large language models had crossed a threshold of genuine usefulness for programming. The shift was dramatic enough that he now writes most of his code with AI assistance. Rue is the product of that conversion, not just a language experiment but a methodology experiment, testing what happens when a deeply experienced language designer directs an AI to implement his vision.&lt;/p&gt;
&lt;p&gt;The name follows a deliberate pattern from his career: Ruby, Rust, Rue. He notes three associations: "rue the day" (the negative connotation, a nod to the skepticism any new language faces), the rue plant (paralleling Rust's fungal connotation), and brevity.&lt;/p&gt;
&lt;h3&gt;What Rue Is Today&lt;/h3&gt;
&lt;p&gt;Let's be direct about the current state: Rue is a version 0.1.0 research project. The website prominently warns that it is "not ready for real use" and to "expect bugs, missing features, and breaking changes." Klabnik himself has described the language as "still very janky" and cautions against reading too deeply into current implementation details. The &lt;a href="https://baud.rs/rue-readme"&gt;GitHub README&lt;/a&gt; puts it plainly: "Not everything in here is good, or accurate, or anything: I'm just messing around."&lt;/p&gt;
&lt;p&gt;With that caveat firmly in place, here's what exists.&lt;/p&gt;
&lt;p&gt;Rue compiles to native machine code targeting x86-64 and ARM64. There is no virtual machine, no interpreter, and no garbage collector. The compiler is written in Rust (95.8% of the repository) and produces binaries directly. It builds using &lt;a href="https://baud.rs/buck2-build"&gt;Buck2&lt;/a&gt;, Meta's build system, and the project includes a specification, a ten-chapter tutorial, a blog, and a benchmark dashboard that tracks compilation time, memory usage, and binary size across Linux and macOS.&lt;/p&gt;
&lt;p&gt;The language itself is statically typed with type inference. Variables are declared with &lt;code&gt;let&lt;/code&gt; (immutable by default) or &lt;code&gt;let mut&lt;/code&gt; (mutable), following Rust's convention. The type system includes signed and unsigned integers (&lt;code&gt;i8&lt;/code&gt; through &lt;code&gt;i64&lt;/code&gt;, &lt;code&gt;u8&lt;/code&gt; through &lt;code&gt;u64&lt;/code&gt;), booleans, fixed-size arrays, structs, and enums. There are no strings. There is no standard library. There are no traits, no closures, no iterators, no modules, no error handling beyond panics, and no heap allocation. Generics exist behind a &lt;code&gt;--preview comptime&lt;/code&gt; flag (more on that below), but they are not yet part of the stable language.&lt;/p&gt;
&lt;p&gt;Read that list of absences again. It is long. And it is honest.&lt;/p&gt;
&lt;h3&gt;The Type System and Memory Model&lt;/h3&gt;
&lt;p&gt;Rue's central technical bet is that affine types with mutable value semantics can deliver memory safety more intuitively than Rust's borrow checker and lifetime annotations.&lt;/p&gt;
&lt;p&gt;In practice, this means values in Rue are moved by default. When you assign a struct to a new variable or pass it to a function, the original becomes invalid. The compiler enforces single ownership at the type level. This is Rust's move semantics without the escape hatches that references and borrowing provide. If you want a type to be copied instead of moved, you annotate the struct definition with &lt;code&gt;@copy&lt;/code&gt;, which enables value duplication.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/rue-programming-language/rue-borrow-inout.png" alt="Rue's Borrow and Inout tutorial page showing the inout keyword at the call site with comparisons to Go and Python where mutation is invisible" style="float: right; max-width: 50%; margin: 0 0 1em 1.5em; border-radius: 4px;"&gt;&lt;/p&gt;
&lt;p&gt;Where Rust provides shared references (&lt;code&gt;&amp;amp;T&lt;/code&gt;) and mutable references (&lt;code&gt;&amp;amp;mut T&lt;/code&gt;) governed by the borrow checker's aliasing rules, Rue provides two simpler mechanisms: &lt;code&gt;borrow&lt;/code&gt; and &lt;code&gt;inout&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;A &lt;code&gt;borrow&lt;/code&gt; parameter grants read-only access to a value without copying it. An &lt;code&gt;inout&lt;/code&gt; parameter grants temporary mutable access; the function can modify the value, and changes persist after the function returns. Critically, both are marked at the call site, not just in the function signature. When reading Rue code, you can immediately see which arguments a function will modify:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;sort(inout values, borrow config);
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The tutorial makes the design motivation explicit: "When you read code, you want to understand what it does without tracing through every function." In Go or Python, a function receiving a slice or object might mutate it invisibly. In Rue, mutation is always syntactically visible where the function is called.&lt;/p&gt;
&lt;p&gt;This is genuinely appealing. One of Rust's persistent pain points is that understanding what a function does to its arguments requires reading the signature carefully, and even then, interior mutability patterns like &lt;code&gt;RefCell&lt;/code&gt; can make the signature misleading. Rue's approach trades expressiveness for legibility.&lt;/p&gt;
&lt;p&gt;The trade-off, however, is severe. Without general references, you cannot build self-referential data structures. Linked lists, trees, and graphs (the bread and butter of systems programming data structures) have no obvious implementation path in current Rue. The Hacker News discussion around the language's announcement surfaced this concern immediately: without references that can be stored in data structures, you cannot implement iterators that borrow from containers. This is not a missing feature that will be added later; it is a fundamental consequence of the design choice.&lt;/p&gt;
&lt;p&gt;Klabnik has acknowledged this directly: "There is going to inherently be some expressiveness loss. There is no silver bullet." The question Rue poses is whether the expressiveness that remains is sufficient for a useful class of programs. The answer today is: we don't know yet.&lt;/p&gt;
&lt;h3&gt;What the Tutorial Reveals&lt;/h3&gt;
&lt;p&gt;The ten-chapter tutorial walks from installation through a capstone quicksort implementation. It covers variables, types, functions, control flow, arrays, structs, enums, and the borrow/inout system. The final chapter implements partitioning and recursive sorting, demonstrating how the language's features compose.&lt;/p&gt;
&lt;p&gt;Working through it reveals a language that feels like a simplified Rust with the hard parts surgically removed. Pattern matching on enums is exhaustive, as in Rust. Structs have named fields and move semantics. Arrays are fixed-size with runtime bounds checking that panics on out-of-bounds access. Integer arithmetic is overflow-checked by default.&lt;/p&gt;
&lt;p&gt;The syntax is deliberately familiar to anyone who has written Rust, Go, or C. Function signatures look like Rust without lifetimes:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="nt"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;partition&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nt"&gt;arr&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;inout&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i32&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="cp"&gt;]&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;low&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;u64&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;high&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;u64&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;u64&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Control flow uses &lt;code&gt;if&lt;/code&gt;/&lt;code&gt;else&lt;/code&gt; and &lt;code&gt;while&lt;/code&gt; without parentheses around conditions. There is no &lt;code&gt;for&lt;/code&gt; loop; iteration requires manual index management with &lt;code&gt;while&lt;/code&gt;, which feels like a notable omission even for an early-stage language.&lt;/p&gt;
&lt;p&gt;What's conspicuously absent from the tutorial is any program that does something useful beyond computation. There is no file I/O, no string handling, no network access, no memory allocation. The only output mechanism is &lt;code&gt;@dbg()&lt;/code&gt;, a built-in debug print. The actual &lt;code&gt;hello.rue&lt;/code&gt; in the examples directory is revealing: it is &lt;code&gt;fn main() -&amp;gt; i32 { 42 }&lt;/code&gt;. There are no strings, so there is no "Hello, World." The fizzbuzz example is equally telling; it returns integers (1 for Fizz, 2 for Buzz, 3 for FizzBuzz) because it cannot print the words. The specification explicitly excludes coverage of a standard library "when one exists."&lt;/p&gt;
&lt;h3&gt;The AI Development Story&lt;/h3&gt;
&lt;p&gt;The most widely discussed aspect of Rue is not its type system but how it was built. Klabnik's blog posts describe a process where he provided architectural direction and design decisions while Claude wrote the vast majority of the implementation code. The project's blog posts are co-credited to both Klabnik and Claude, with some posts authored solely by the AI.&lt;/p&gt;
&lt;p&gt;The timeline is striking. The first commit landed on December 15, 2025. By December 22 (one week later), the compiler could handle basic types, structs, control flow, and had accumulated 130 commits. By early January 2026, the project had 777 specification tests across two platforms and the compiler had grown to handle enums, pattern matching, arrays, and the borrow/inout system.&lt;/p&gt;
&lt;p&gt;The repository now contains over 700 commits from four contributors, with 1,100+ GitHub stars. For a personal hobby project by a well-known developer, this represents meaningful interest, though not the kind of momentum that suggests organic community adoption.&lt;/p&gt;
&lt;p&gt;This development model raises legitimate questions. When Claude writes the compiler and Claude writes the blog posts, what does it mean for a human to have "designed" the language? Klabnik's answer is that he makes all architectural and design decisions (what features to include, how the type system works, what trade-offs to accept) while the AI handles implementation. He compares it to an architect and a construction crew: the architect doesn't lay bricks, but the building reflects the architect's vision.&lt;/p&gt;
&lt;p&gt;The analogy is imperfect. A construction crew doesn't suggest design changes mid-build based on patterns learned from every other building ever constructed. But the broader point (that directing implementation is itself a form of authorship) is reasonable, and Klabnik's thirteen years of language implementation experience give him credibility that a less experienced designer directing an AI would lack.&lt;/p&gt;
&lt;h3&gt;What the Source Code Reveals&lt;/h3&gt;
&lt;p&gt;The documentation and tutorial tell one story. The &lt;a href="https://baud.rs/eWuAO3"&gt;GitHub repository&lt;/a&gt; tells a more nuanced one. Digging into the examples directory and compiler crates reveals features in various stages of development that the official tutorial has not yet caught up with.&lt;/p&gt;
&lt;p&gt;Most notably, generics exist as a preview feature. The &lt;code&gt;examples/generics.rue&lt;/code&gt; file demonstrates Zig-style comptime type parameters:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="kd"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;comptime&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bigger&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;i32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;comptime T: type&lt;/code&gt; syntax tells the compiler to monomorphize, creating specialized versions (&lt;code&gt;max__i32&lt;/code&gt;, &lt;code&gt;max__bool&lt;/code&gt;, etc.) for each concrete type used at call sites. This is a meaningful step beyond what the tutorial covers, and it follows the Zig model rather than Rust's trait-bounded generics. Whether this approach will scale to real-world code, and whether it can support the kind of type-level abstraction that traits and interfaces provide, remains to be seen. You must compile with &lt;code&gt;rue --preview comptime&lt;/code&gt; to access this feature, signaling that it is experimental even by Rue's standards.&lt;/p&gt;
&lt;p&gt;The compiler architecture itself is surprisingly mature for a project of this age. The repository contains 18 separate crates: a lexer, parser, intermediate representation (&lt;code&gt;rue-rir&lt;/code&gt;), an abstract intermediate representation (&lt;code&gt;rue-air&lt;/code&gt;), control flow graph analysis (&lt;code&gt;rue-cfg&lt;/code&gt;), code generation, linking, a fuzzer, a spec test runner, and a VS Code extension. This is not a toy single-file compiler; it is a modular pipeline that reflects real compiler engineering, even if the language it compiles remains minimal.&lt;/p&gt;
&lt;p&gt;Beyond generics, the gaps are still significant. There are no traits or interfaces, so there is no polymorphism beyond what enums and comptime monomorphization provide. There are no closures or first-class functions. There is no error handling mechanism; functions can panic, but there is no &lt;code&gt;Result&lt;/code&gt; type or equivalent. There are no modules or visibility controls. There are no methods on types. There is no string type.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;unchecked&lt;/code&gt; keyword provides an escape hatch similar to Rust's &lt;code&gt;unsafe&lt;/code&gt;, allowing low-level memory operations while keeping such code "visibly separate from normal safe code." The specification includes chapters on unchecked code syntax and unchecked intrinsics, suggesting that Klabnik recognizes the eventual need for the kind of low-level access that Rust provides through its unsafe system.&lt;/p&gt;
&lt;p&gt;The performance page tracks compilation metrics (time, memory, binary size) across x86-64 and ARM64 on both Linux and macOS, but provides no runtime benchmarks comparing Rue's generated code to C, Rust, or Go. This is understandable for a research project, but it means we cannot evaluate whether Rue's native compilation delivers competitive performance.&lt;/p&gt;
&lt;h3&gt;The Fundamental Question&lt;/h3&gt;
&lt;p&gt;Every new systems language must answer the same question: what programs can I write in this language that I couldn't write (or couldn't write as well) in an existing one?&lt;/p&gt;
&lt;p&gt;For Rust, the answer was clear from early on: memory-safe systems programs without garbage collection pauses. For Go, it was concurrent network services with fast compilation. For Zig, it was a C replacement with better defaults and comptime.&lt;/p&gt;
&lt;p&gt;Rue's answer, as it stands today, is: nothing. Not yet. You cannot write a program in Rue that you couldn't write more easily in any mainstream language, because Rue cannot perform I/O, allocate memory, manipulate strings, or interact with the operating system.&lt;/p&gt;
&lt;p&gt;This is not a criticism so much as a statement of development stage. The interesting question is what Rue's answer &lt;em&gt;could&lt;/em&gt; become if development continues. The borrow/inout model is genuinely simpler than Rust's borrow checker for the cases it handles. Generics are already emerging through the comptime preview. If Rue can stabilize them, grow a heap allocator, a string type, and enough standard library to write real programs (without reintroducing the complexity it was designed to avoid), it could serve programmers who find Rust's learning curve prohibitive but need more safety than C or Go provide.&lt;/p&gt;
&lt;p&gt;That is a big "if." History is littered with languages that simplified an existing language's hard parts and then discovered that the hard parts existed for good reasons. Rust's borrow checker is complex because the problems it solves are complex. Removing it and replacing it with a simpler system necessarily means either accepting less expressiveness or finding a genuinely novel solution that the Rust team somehow missed in over a decade of research.&lt;/p&gt;
&lt;p&gt;Klabnik is explicit that he doesn't claim to have found such a solution. Rue is a research project exploring a design space, not a product claiming to have mapped it.&lt;/p&gt;
&lt;h3&gt;Who Should Pay Attention&lt;/h3&gt;
&lt;p&gt;If you're looking for a language to write software in today, Rue is not it. The project's own documentation says so clearly and repeatedly.&lt;/p&gt;
&lt;p&gt;If you're interested in programming language design, Rue is worth following for two reasons. First, the borrow/inout model is a clean articulation of an alternative to Rust's reference system, and watching it encounter real-world requirements will be educational regardless of whether it succeeds. Second, the AI-assisted development methodology is itself a data point in the ongoing question of what role AI can play in building complex software systems.&lt;/p&gt;
&lt;p&gt;If you're a Rust programmer who has ever thought "there must be a simpler way to express this," Rue represents one concrete exploration of what "simpler" might look like, and what it costs. The language makes the trade-offs visible in a way that abstract arguments about borrow checker complexity do not.&lt;/p&gt;
&lt;p&gt;And if you're Steve Klabnik, messing around with a hobby project after thirteen years of working on someone else's language, Rue looks like exactly the kind of thing a deeply experienced language person should be doing with their evenings and weekends.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Rue is best understood as three things simultaneously: a technical experiment in memory safety ergonomics, a methodological experiment in AI-assisted language development, and a personal project by someone with unusually deep expertise in the problem space. As a technical experiment, it has articulated a clean alternative to Rust's borrow checker that is worth studying even if it proves too restrictive for general use. As a methodological experiment, the speed of development (700+ commits and a working multi-platform compiler in weeks) is genuinely remarkable, though the long-term maintainability of AI-generated compiler code remains unproven. As a personal project, it is honest about its limitations in a way that many language announcements are not.&lt;/p&gt;
&lt;p&gt;The language cannot do anything useful yet. It may never be able to. But the questions it asks (can memory safety be simpler? can one person with an AI build a compiler? what happens when you remove the borrow checker and try something else?) are worth asking. And the person asking them has spent over a decade earning the credibility to make the attempt interesting rather than naive.&lt;/p&gt;
&lt;p&gt;Watch the &lt;a href="https://baud.rs/eWuAO3"&gt;GitHub repository&lt;/a&gt;. Read the &lt;a href="https://baud.rs/8joWb8"&gt;specification&lt;/a&gt;. Try the &lt;a href="https://baud.rs/bUJS2D"&gt;tutorial&lt;/a&gt; if you're curious about what a post-Rust systems language might feel like. Just don't write anything important in it yet.&lt;/p&gt;</description><category>ai-assisted development</category><category>claude</category><category>compilers</category><category>language design</category><category>memory safety</category><category>programming languages</category><category>rue</category><category>rust</category><category>steve klabnik</category><category>systems programming</category><guid>https://tinycomputers.io/posts/rue-programming-language-review.html</guid><pubDate>Fri, 20 Feb 2026 12:00:00 GMT</pubDate></item><item><title>Review of "Crafting Interpreters" by Robert Nystrom</title><link>https://tinycomputers.io/posts/review-of-crafting-interpreters-by-robert-nystrom.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/review-of-crafting-interpreters-by-robert-nystrom_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;25 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;h3&gt;Introduction&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://tinycomputers.io/images/crafting-interpreters/cover-0001.png" alt="Crafting Interpreters by Robert Nystrom, cover art depicting a hand-drawn mountain with paths representing compilation stages from source code to machine code" class="book-cover-image" style="float: right; max-width: 300px; margin: 0 0 1em 1.5em;"&gt;&lt;/p&gt;
&lt;p&gt;There is a particular category of programming book that transcends its subject matter, becoming not just a reference but an experience. &lt;a href="https://baud.rs/crafting-interp"&gt;"Crafting Interpreters" by Robert Nystrom&lt;/a&gt; belongs firmly in this category. Originally published in 2021 after six years of development, the book tackles what many programmers consider one of the most intimidating topics in computer science (building a programming language from scratch) and makes it not just accessible but genuinely enjoyable.&lt;/p&gt;
&lt;p&gt;Nystrom is no stranger to technical writing that connects with practitioners. His earlier book &lt;a href="https://baud.rs/game-dev-patterns"&gt;"Game Programming Patterns"&lt;/a&gt; demonstrated a talent for explaining complex software concepts through clear prose and practical examples. With "Crafting Interpreters," he applies that same skill to language implementation, a domain traditionally guarded by &lt;a href="https://baud.rs/compilers-dragon"&gt;dense academic texts&lt;/a&gt; and &lt;a href="https://baud.rs/backus-naur-form"&gt;formal notation&lt;/a&gt; that sends most working programmers running.&lt;/p&gt;
&lt;p&gt;The book's central premise is both ambitious and elegant: build the same programming language twice. First as a tree-walk interpreter in Java (called &lt;code&gt;jlox&lt;/code&gt;), then as a bytecode virtual machine in C (called &lt;code&gt;clox&lt;/code&gt;). This dual implementation strategy isn't just a structural gimmick. It serves a deep pedagogical purpose, allowing readers to first grasp the conceptual architecture of language processing in a high-level language before rebuilding everything from raw memory and pointer arithmetic. The result is a book that manages to teach compiler theory, language design, software architecture, and low-level systems programming simultaneously.&lt;/p&gt;
&lt;p&gt;The entire text is freely available at &lt;a href="https://baud.rs/crafting-interpreters-site"&gt;craftinginterpreters.com&lt;/a&gt;, which speaks to Nystrom's commitment to making this knowledge widely accessible. The physical edition, published with care and featuring hand-drawn illustrations throughout, is worth owning for anyone who works through the material.&lt;/p&gt;
&lt;h3&gt;The Language: Lox&lt;/h3&gt;
&lt;p&gt;Before diving into implementation, Nystrom introduces Lox, the language both interpreters will execute. Lox is a dynamically typed, garbage-collected language with C-family syntax that supports first-class functions, closures, and class-based object orientation with single inheritance. It is deliberately modest in scope (no arrays, no module system, no standard library to speak of), but this restraint is precisely the point.&lt;/p&gt;
&lt;p&gt;Every feature in Lox exists because it teaches something important about language implementation. Dynamic typing means building a runtime type system. Garbage collection means understanding memory management at the deepest level. Closures require wrestling with variable capture and lifetime semantics. Classes and inheritance demand method resolution and the vtable-like dispatch mechanisms that underpin most object-oriented languages. Lox is small enough to implement in a book but complex enough that implementing it forces the reader to confront every major challenge in language design.&lt;/p&gt;
&lt;p&gt;The choice of a custom language rather than a subset of an existing one is significant. It frees Nystrom from having to explain why certain features are omitted or work differently than readers might expect. Lox is exactly what it needs to be, nothing more.&lt;/p&gt;
&lt;h3&gt;Part II: The Tree-Walk Interpreter (&lt;code&gt;jlox&lt;/code&gt;)&lt;/h3&gt;
&lt;p&gt;The first implementation spans ten chapters and builds a complete interpreter in Java. Nystrom begins where every language implementation must: with scanning. The scanner chapter walks through converting raw source text into tokens, handling string literals, numbers, keywords, and the inevitable edge cases that make lexical analysis more interesting than it first appears.&lt;/p&gt;
&lt;p&gt;From there, the book moves into parsing, where Nystrom introduces recursive descent parsing with a clarity that makes the technique feel almost obvious in hindsight. Rather than reaching for parser generators like &lt;a href="https://baud.rs/yacc-parser"&gt;YACC&lt;/a&gt; or &lt;a href="https://baud.rs/antlr-parser"&gt;ANTLR&lt;/a&gt;, every line of the parser is written by hand. This decision is characteristic of the book's philosophy: no black boxes, no magic, no dependencies. The reader understands every piece because the reader built every piece.&lt;/p&gt;
&lt;p&gt;The chapters on expression evaluation and statement execution establish the runtime model, but the book truly hits its stride in the chapters on scope and environments. Nystrom's explanation of lexical scoping (using a chain of environment objects that form what he calls a "cactus stack") is one of the clearest treatments of this topic in any programming text. The hand-drawn illustration of nested environments, with their parent pointers threading back through enclosing scopes, communicates in a single image what paragraphs of formal specification struggle to convey.&lt;/p&gt;
&lt;p&gt;Functions and closures represent the first major conceptual challenge, and Nystrom handles them with characteristic patience. The problem of captured variables (where a closure must hold onto variables from an enclosing scope that may have already returned) is presented as a puzzle to be solved rather than a rule to be memorized. The resolver pass that performs static analysis to determine variable binding is introduced as a natural response to a concrete bug, not as an abstract compiler phase.&lt;/p&gt;
&lt;p&gt;The object-oriented chapters add classes, methods, constructors, inheritance, and super expressions. By the time &lt;code&gt;jlox&lt;/code&gt; is complete, the reader has built a language implementation capable of running recursive algorithms, managing object hierarchies, and handling the scoping rules that trip up even experienced programmers in production languages.&lt;/p&gt;
&lt;p&gt;What makes this section exceptional is Nystrom's willingness to show the design process, not just the final design. When a naive approach creates a bug or performance problem, the reader sees it happen and participates in fixing it. This iterative development style mirrors how real software is built and teaches debugging intuition alongside language implementation.&lt;/p&gt;
&lt;h3&gt;Part III: The Bytecode Virtual Machine (&lt;code&gt;clox&lt;/code&gt;)&lt;/h3&gt;
&lt;p&gt;If Part II is the approachable on-ramp, Part III is where the book reveals its true ambition. Across seventeen chapters, Nystrom rebuilds everything in C, this time compiling Lox to bytecode and executing it on a stack-based virtual machine. The motivation is made concrete early: &lt;code&gt;jlox&lt;/code&gt; takes 72 seconds to compute the 40th Fibonacci number recursively, while C can do it in half a second. The bytecode VM will close that gap dramatically.&lt;/p&gt;
&lt;p&gt;The transition from Java to C is itself educational. Readers who have grown comfortable with Java's automatic memory management, dynamic arrays, and hash maps must now implement all of these from scratch. Nystrom builds a dynamic array type, a hash table, and ultimately a mark-sweep garbage collector, all in service of the language implementation. These data structures are not taught in isolation; they emerge because the VM needs them.&lt;/p&gt;
&lt;p&gt;The chunk and instruction design chapters teach the reader to think about data representation at the byte level. Each bytecode instruction is a single byte, followed by operands that encode constants, variable slots, or jump offsets. The disassembler that Nystrom builds alongside the VM is a thoughtful touch, providing a debugging tool that makes the otherwise invisible bytecode tangible.&lt;/p&gt;
&lt;p&gt;The single-pass compiler that replaces &lt;code&gt;jlox&lt;/code&gt;'s separate parsing and resolution phases is a masterclass in practical compiler construction. Nystrom uses &lt;a href="https://baud.rs/parser-techniques"&gt;Pratt parsing&lt;/a&gt; for expressions, a technique he explains with such clarity that this chapter alone has become a widely referenced resource for anyone implementing expression parsers. The Pratt parser's elegant handling of precedence and associativity through a simple table of parsing functions is one of those ideas that, once understood, feels like it should have been obvious all along.&lt;/p&gt;
&lt;p&gt;The chapters on closures in &lt;code&gt;clox&lt;/code&gt; deserve special mention. Where &lt;code&gt;jlox&lt;/code&gt; could lean on Java's garbage collector and object references to capture variables, &lt;code&gt;clox&lt;/code&gt; must solve the "upvalue" problem explicitly. Nystrom introduces the concept of upvalues (runtime objects that represent captured variables) and walks through the mechanism by which stack-allocated locals are "closed over" and moved to the heap when their enclosing function returns. The complexity of this implementation, managed through careful incremental development, demonstrates why closures are considered one of the hardest features to implement correctly in a bytecode VM.&lt;/p&gt;
&lt;p&gt;The garbage collection chapter is the book's peak of systems programming depth. Nystrom implements a mark-sweep collector, explaining reachability, root sets, and the tricolor abstraction. The treatment is practical rather than theoretical; the reader sees exactly when collection triggers, how objects are traced, and why the collector must handle the subtle case of the VM itself allocating memory during collection (which could invalidate pointers being traced). The self-adjusting heap threshold that balances collection frequency against memory usage is a detail that separates a textbook GC from one that works in practice.&lt;/p&gt;
&lt;h3&gt;Writing Style and Presentation&lt;/h3&gt;
&lt;p&gt;Nystrom's prose is the book's secret weapon. Technical writing about compilers tends toward one of two failure modes: impenetrable formalism or hand-waving oversimplification. Nystrom avoids both. His writing is conversational without being sloppy, precise without being dry. Footnotes contain genuine wit. Asides acknowledge the reader's likely confusion at exactly the moments when confusion is most natural.&lt;/p&gt;
&lt;p&gt;The hand-drawn illustrations scattered throughout the book serve a purpose beyond aesthetics. They signal that this is a personal, crafted work rather than a mass-produced textbook. The diagrams of memory layouts, parse trees, and stack states during execution are clearer than their machine-generated equivalents in most compiler texts, partly because they include exactly the detail needed and nothing more.&lt;/p&gt;
&lt;p&gt;The "Design Note" sections that appear between chapters are mini-essays on language design philosophy: why dynamic typing exists, what makes a feature "elegant," how language designers balance expressiveness against implementation complexity. These sections transform the book from a pure implementation guide into something closer to a meditation on programming language design as a creative discipline.&lt;/p&gt;
&lt;h3&gt;Strengths&lt;/h3&gt;
&lt;p&gt;The book's greatest achievement is making compiler construction feel like a natural extension of everyday programming rather than a specialized academic pursuit. By avoiding formal grammars, &lt;a href="https://baud.rs/automata-theory-book"&gt;automata theory&lt;/a&gt;, and the mathematical notation that dominates traditional compiler texts, Nystrom demonstrates that you don't need a PhD to build a working language implementation.&lt;/p&gt;
&lt;p&gt;The dual-implementation approach pays dividends throughout. Concepts that are murky in one implementation become clear in the other. The tree-walk interpreter makes the abstract concepts tangible; the bytecode VM reveals the performance and engineering considerations that production language implementations face. Together, they provide a stereoscopic view of language implementation that neither could achieve alone.&lt;/p&gt;
&lt;p&gt;The no-dependency philosophy deserves praise. There is no lexer generator, no parser generator, no framework, no library. Every line of code in both implementations is written in the book and understood by the reader. This means that upon completion, the reader owns their understanding completely; there is no mysterious tool doing critical work behind the scenes.&lt;/p&gt;
&lt;p&gt;The incremental development style produces a book that is remarkably difficult to get lost in. Each chapter begins with working code and ends with working code. The reader is never more than a few pages from being able to compile and run something. For a topic as complex as language implementation, this steady cadence of progress is essential for maintaining motivation.&lt;/p&gt;
&lt;h3&gt;Limitations&lt;/h3&gt;
&lt;p&gt;The book is not without its shortcomings. The choice of dynamic typing for Lox means that static type systems (one of the most active and important areas of modern language design) receive no coverage. Type inference, generics, algebraic data types, and pattern matching are absent. A reader completing both implementations still would not know how to add a type checker, which is arguably the most practically relevant compiler phase for working programmers today.&lt;/p&gt;
&lt;p&gt;Optimization is largely unexplored. The &lt;code&gt;clox&lt;/code&gt; VM is faster than &lt;code&gt;jlox&lt;/code&gt; by virtue of being a bytecode interpreter written in C, but Nystrom does not cover constant folding, dead code elimination, register allocation, or any of the optimization passes that distinguish a teaching compiler from a production one. JIT compilation, increasingly the standard for high-performance language runtimes, is mentioned only in passing.&lt;/p&gt;
&lt;p&gt;Error handling and recovery are minimal throughout both implementations. Production parsers need sophisticated error recovery to provide useful diagnostics. Nystrom acknowledges this gap but does not address it, leaving readers who want to build user-facing tools with significant work ahead of them.&lt;/p&gt;
&lt;p&gt;Lox's deliberate simplicity means that several common language features (arrays, iterators, modules, pattern matching, exception handling) are left as exercises. While this keeps the book focused, it means that readers must figure out on their own how to implement the features that most real languages require. The gap between Lox and a practical language is significant.&lt;/p&gt;
&lt;h3&gt;Who Should Read This Book&lt;/h3&gt;
&lt;p&gt;"Crafting Interpreters" is ideal for working programmers who have always been curious about how languages work but have been intimidated by the traditional compiler literature. Comfortable familiarity with Java and C is assumed; this is not a book for learning either language. But the reader need not have any prior knowledge of compilers, formal languages, or automata theory.&lt;/p&gt;
&lt;p&gt;Computer science students will find it an excellent companion to a formal compilers course, providing the practical intuition that textbooks like Aho's "Dragon Book" deliberately omit. Conversely, self-taught programmers who never took a compilers course will find this book fills a significant gap in their education.&lt;/p&gt;
&lt;p&gt;Language enthusiasts who have tinkered with toy interpreters but never built anything with closures, classes, or garbage collection will find exactly the guidance they need to level up. And anyone who simply enjoys beautifully crafted technical writing will find the book rewarding even as a pure reading experience.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;"Crafting Interpreters" is one of the best programming books published in recent years. It takes a subject that most programmers consider forbiddingly complex and renders it not just comprehensible but engaging. Nystrom's combination of clear writing, thoughtful pedagogy, practical focus, and genuine craft produces a book that teaches far more than its nominal subject. Beyond scanning, parsing, and code generation, the reader learns how to approach complex software design, how to build systems incrementally, and how to think about the tools they use every day at a deeper level.&lt;/p&gt;
&lt;p&gt;The book will not make you a compiler engineer. It will not teach you how to build a production language runtime, optimize generated code, or implement a sophisticated type system. What it will do is demystify the machinery that powers every programming language you have ever used, and give you the confidence and foundation to explore further. For most programmers, that is more than enough. It is, in fact, exactly what was needed.&lt;/p&gt;</description><category>bytecode</category><category>c</category><category>compilers</category><category>garbage collection</category><category>interpreters</category><category>java</category><category>language design</category><category>parsing</category><category>programming languages</category><category>robert nystrom</category><category>virtual machines</category><guid>https://tinycomputers.io/posts/review-of-crafting-interpreters-by-robert-nystrom.html</guid><pubDate>Thu, 19 Feb 2026 16:30:00 GMT</pubDate></item><item><title>From Tree-Walker to Bytecode VM: Compiling Lattice</title><link>https://tinycomputers.io/posts/from-tree-walker-to-bytecode-vm-compiling-lattice.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/from-tree-walker-to-bytecode-vm-compiling-lattice_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;16 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;Lattice is a programming language built around a &lt;a href="https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html"&gt;crystallization-based phase system&lt;/a&gt;: values start as mutable "flux" and can be frozen into immutable "fix," with the runtime enforcing the transition and providing &lt;a href="https://tinycomputers.io/posts/mutability-as-a-first-class-concept-the-lattice-phase-system.html"&gt;reactions, bonds, contracts, and temporal tracking&lt;/a&gt; around it. It's implemented in C with no external dependencies.&lt;/p&gt;
&lt;p&gt;When I started building &lt;a href="https://baud.rs/q5yFwI"&gt;Lattice&lt;/a&gt;, a tree-walking interpreter was the obvious first move. You parse source into an AST, walk the nodes recursively, and evaluate as you go. It's straightforward, easy to debug, and lets you iterate on language semantics quickly without worrying about a second representation. &lt;a href="https://baud.rs/crafting-interpreters"&gt;&lt;em&gt;Crafting Interpreters&lt;/em&gt;&lt;/a&gt; calls this approach "the simplest way to build an interpreter," and it's right.&lt;/p&gt;
&lt;p&gt;But tree-walkers have well-known limitations. Every expression evaluation descends through function calls: &lt;code&gt;eval_expr&lt;/code&gt; calling &lt;code&gt;eval_binary&lt;/code&gt; calling &lt;code&gt;eval_expr&lt;/code&gt; twice more. The overhead compounds. You're chasing pointers through heap-allocated AST nodes with poor cache locality. And the call stack of the host language (C, in Lattice's case) becomes tangled with the call stack of the guest language, making it harder to implement features like error recovery and coroutines cleanly.&lt;/p&gt;
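&lt;p&gt;The shape of that overhead is easy to see in miniature. The following C sketch (the &lt;code&gt;Ex&lt;/code&gt; type, &lt;code&gt;eval&lt;/code&gt;, and constructor names are hypothetical, not Lattice's actual AST) shows how every node costs a function call plus pointer chases through heap-allocated nodes:&lt;/p&gt;

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal expression tree: literals and binary addition. */
typedef enum { EX_LIT, EX_ADD } ExKind;

typedef struct Ex {
    ExKind kind;
    long lit;                 /* EX_LIT payload */
    struct Ex *lhs, *rhs;     /* EX_ADD children */
} Ex;

/* Every node evaluated costs a function call and two pointer
   chases; the host recursion mirrors the guest program's shape. */
static long eval(const Ex *e) {
    switch (e->kind) {
    case EX_LIT: return e->lit;
    case EX_ADD: return eval(e->lhs) + eval(e->rhs);
    }
    return 0;
}

static Ex *lit(long v) {
    Ex *e = calloc(1, sizeof *e);
    e->kind = EX_LIT; e->lit = v;
    return e;
}

static Ex *add(Ex *l, Ex *r) {
    Ex *e = calloc(1, sizeof *e);
    e->kind = EX_ADD; e->lhs = l; e->rhs = r;
    return e;
}
```

&lt;p&gt;Evaluating &lt;code&gt;2 + (3 + 4)&lt;/code&gt; takes five recursive calls for a tree of five nodes; the C call stack grows in lockstep with the guest expression tree.&lt;/p&gt;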
&lt;p&gt;Lattice v0.3.0 shipped a bytecode compiler and stack-based virtual machine alongside the tree-walker. In v0.3.1, the bytecode VM became the default for file execution, the interactive REPL, and the browser-based playground. The tree-walker is still available via &lt;code&gt;--tree-walk&lt;/code&gt;, but the VM now handles everything. This post walks through the architecture of that VM, some design decisions that turned out to matter, and a mutation bug that only surfaces when you combine deep-clone-on-read semantics with in-place method dispatch.&lt;/p&gt;
&lt;h3&gt;Architecture Overview&lt;/h3&gt;
&lt;p&gt;The bytecode pipeline has three stages: lexing and parsing (shared with the tree-walker), compilation from AST to bytecode chunks, and execution on a stack-based VM. The compiler and VM together add about 8,200 lines of C to the codebase, bringing the total to around 33,000 lines.&lt;/p&gt;
&lt;p&gt;A &lt;code&gt;Chunk&lt;/code&gt; is the compilation unit: a dynamic array of bytecode instructions, a constant pool, and debug metadata mapping instructions back to source line numbers. The compiler walks the AST and emits bytes into a chunk. The VM reads bytes from the chunk and executes them against a value stack.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;typedef&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="c1"&gt;// bytecode array&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;len&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cap&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;constants&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="c1"&gt;// constant pool&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;const_len&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;const_cap&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;lines&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="c1"&gt;// source line per instruction&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;char&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;local_names&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="c1"&gt;// slot → variable name (debug)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;local_name_cap&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Chunk&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The VM itself is a &lt;code&gt;for(;;)&lt;/code&gt; loop with a &lt;code&gt;switch&lt;/code&gt; on the current opcode byte, the textbook approach. No computed gotos, no threaded dispatch, no JIT. Just a switch. On modern hardware with branch prediction, a well-organized switch over 62 opcodes is fast enough that the overhead is negligible compared to the cost of actual operations (string allocation, hash table lookups, deep cloning).&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(;;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;READ_BYTE&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;switch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;OP_CONSTANT&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;OP_ADD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;OP_CALL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// 59 more cases&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The value stack holds 4,096 slots. The call frame stack holds 256 frames. Each &lt;code&gt;CallFrame&lt;/code&gt; tracks its own instruction pointer, a base pointer into the value stack for its local variables, and an array of captured upvalues for closures. When you call a function, the VM pushes a new frame pointing at the callee's chunk. When the function returns, the frame pops and execution resumes in the caller.&lt;/p&gt;
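&lt;p&gt;As a sketch of what that layout might look like (field names and helper here are illustrative, not Lattice's actual definitions):&lt;/p&gt;

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { int dummy; } Chunk;      /* stand-in for the real Chunk */
typedef struct { double num; } LatValue;  /* stand-in for the real value type */
typedef struct ObjUpvalue ObjUpvalue;

/* One frame per active call, matching the description in the post
   (field names are illustrative). */
typedef struct {
    Chunk      *chunk;      /* callee's bytecode */
    uint8_t    *ip;         /* per-frame instruction pointer */
    LatValue   *slots;      /* base pointer into the shared value stack */
    ObjUpvalue **upvalues;  /* captured variables for closures */
} CallFrame;

enum { STACK_MAX = 4096, FRAMES_MAX = 256 };

typedef struct {
    LatValue  stack[STACK_MAX];
    LatValue *stack_top;
    CallFrame frames[FRAMES_MAX];
    int       frame_count;
} VM;

/* Calling a function pushes a frame whose slots base points at the
   arguments already sitting on the value stack. */
static CallFrame *push_frame(VM *vm, Chunk *chunk, int arg_count) {
    CallFrame *f = &vm->frames[vm->frame_count++];
    f->chunk    = chunk;
    f->ip       = NULL;   /* would point at chunk->code in the real VM */
    f->slots    = vm->stack_top - arg_count;
    f->upvalues = NULL;
    return f;
}
```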
&lt;h3&gt;The Instruction Set&lt;/h3&gt;
&lt;p&gt;Lattice's instruction set has 62 opcodes. Some are standard (&lt;code&gt;OP_ADD&lt;/code&gt;, &lt;code&gt;OP_JUMP_IF_FALSE&lt;/code&gt;, &lt;code&gt;OP_RETURN&lt;/code&gt;). Others exist because of Lattice-specific semantics.&lt;/p&gt;
&lt;p&gt;The phase system needs dedicated opcodes. &lt;code&gt;OP_FREEZE&lt;/code&gt; pops a value, deep-clones it into a crystal region with &lt;code&gt;VTAG_CRYSTAL&lt;/code&gt; tags, and pushes the frozen result. &lt;code&gt;OP_THAW&lt;/code&gt; does the reverse. &lt;code&gt;OP_MARK_FLUID&lt;/code&gt; sets the phase tag to &lt;code&gt;VTAG_FLUID&lt;/code&gt;; this is what &lt;code&gt;flux&lt;/code&gt; bindings emit after their initializer. &lt;code&gt;OP_FREEZE_VAR&lt;/code&gt; and &lt;code&gt;OP_THAW_VAR&lt;/code&gt; handle the case where &lt;code&gt;freeze(x)&lt;/code&gt; targets a named variable and needs to write back the result, carrying extra operands to identify the variable's location (local slot, upvalue, or global name).&lt;/p&gt;
&lt;p&gt;Phase reactions and bonds each have their own opcodes: &lt;code&gt;OP_REACT&lt;/code&gt;, &lt;code&gt;OP_UNREACT&lt;/code&gt;, &lt;code&gt;OP_BOND&lt;/code&gt;, &lt;code&gt;OP_UNBOND&lt;/code&gt;, &lt;code&gt;OP_SEED&lt;/code&gt;, &lt;code&gt;OP_UNSEED&lt;/code&gt;. These could in principle be implemented as native function calls, but making them opcodes lets the compiler emit the variable name as a constant operand. The VM needs that name to look up the correct reaction or bond registration in its tracking tables, and encoding it in the bytecode avoids a runtime string lookup.&lt;/p&gt;
&lt;p&gt;Structured concurrency uses an interesting hybrid. &lt;code&gt;OP_SCOPE&lt;/code&gt; and &lt;code&gt;OP_SELECT&lt;/code&gt; each carry a constant-pool index whose entry stores a pointer to the original AST &lt;code&gt;Expr*&lt;/code&gt; node. When the VM hits one of these opcodes, it invokes the tree-walking evaluator on that subtree. This is a deliberate design choice; the concurrency primitives involve spawning threads and managing channels, which requires the evaluator's full environment machinery. Rather than reimplement all of that in the VM, the bytecode compiler punts to the tree-walker for these specific constructs. The rest of the program runs on the VM; only &lt;code&gt;scope&lt;/code&gt; and &lt;code&gt;select&lt;/code&gt; blocks briefly drop into interpretation.&lt;/p&gt;
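&lt;p&gt;The punt itself can be sketched in a few lines; the &lt;code&gt;Expr&lt;/code&gt; stub and the &lt;code&gt;eval_expr&lt;/code&gt; placeholder below are hypothetical stand-ins for the real AST node and tree-walking evaluator:&lt;/p&gt;

```c
#include <assert.h>

/* Stand-ins for the real AST node and evaluation result. */
typedef struct { int id; } Expr;
typedef struct { long num; } LatValue;

/* Hypothetical evaluator entry point; the real one carries the
   full environment machinery for threads and channels. */
static LatValue eval_expr(const Expr *e) {
    return (LatValue){ e->id * 2 };   /* placeholder behavior */
}

/* How an OP_SCOPE handler might punt: the constant-pool slot holds
   a raw pointer back to the AST subtree, and the VM hands that
   subtree to the tree-walking evaluator (names are illustrative). */
static LatValue run_op_scope(const Expr **constants, int idx) {
    const Expr *node = constants[idx];
    return eval_expr(node);
}
```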
&lt;h3&gt;Closures and Upvalues&lt;/h3&gt;
&lt;p&gt;Closures are where bytecode VMs get interesting, and Lattice follows the upvalue model that Lua pioneered and &lt;em&gt;Crafting Interpreters&lt;/em&gt; popularized.&lt;/p&gt;
&lt;p&gt;When a function is defined inside another function and references variables from the enclosing scope, those variables need to outlive their original stack frame. The solution is upvalues, indirection objects that start pointing into the stack and get "closed over" when the variable goes out of scope.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;typedef&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;ObjUpvalue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;location&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// points to stack slot or &amp;amp;closed&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;closed&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;          &lt;/span&gt;&lt;span class="c1"&gt;// holds value after scope exit&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;ObjUpvalue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;next&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="c1"&gt;// linked list for open upvalues&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ObjUpvalue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;While the enclosing function is still executing, &lt;code&gt;location&lt;/code&gt; points directly at the stack slot. When the enclosing function returns, &lt;code&gt;OP_CLOSE_UPVALUE&lt;/code&gt; copies the stack value into the &lt;code&gt;closed&lt;/code&gt; field and repoints &lt;code&gt;location&lt;/code&gt; to &lt;code&gt;&amp;amp;closed&lt;/code&gt;. The closure doesn't know or care that the switch happened; it always dereferences &lt;code&gt;location&lt;/code&gt;. This is why upvalues work: they're a level of indirection that transparently survives stack frame destruction.&lt;/p&gt;
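&lt;p&gt;The close-over step can be sketched as a small function over the VM's list of open upvalues. This is the standard Lua-style algorithm, not Lattice's exact code:&lt;/p&gt;

```c
#include <assert.h>
#include <stddef.h>

typedef struct { long num; } LatValue;

typedef struct ObjUpvalue {
    LatValue *location;           /* points to stack slot or &closed */
    LatValue  closed;             /* holds value after scope exit */
    struct ObjUpvalue *next;      /* open list, topmost slot first */
} ObjUpvalue;

/* Close every open upvalue at or above `last` on the value stack:
   copy the value in, repoint location at it, unlink from the list. */
static void close_upvalues(ObjUpvalue **open_list, LatValue *last) {
    while (*open_list != NULL && (*open_list)->location >= last) {
        ObjUpvalue *up = *open_list;
        up->closed   = *up->location;
        up->location = &up->closed;
        *open_list   = up->next;
    }
}
```

&lt;p&gt;Because open upvalues are kept sorted with the topmost stack slot first, closing everything at or above a given slot is a single walk down the front of the list.&lt;/p&gt;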
&lt;p&gt;The compiler resolves variable references in three stages: first it checks local scope (&lt;code&gt;resolve_local&lt;/code&gt;), then upvalues (&lt;code&gt;resolve_upvalue&lt;/code&gt;, which walks the compiler chain recursively), then falls back to globals via &lt;code&gt;OP_GET_GLOBAL&lt;/code&gt;. The &lt;code&gt;OP_CLOSURE&lt;/code&gt; instruction is followed by a series of &lt;code&gt;(is_local, index)&lt;/code&gt; byte pairs, one per upvalue, telling the VM whether to capture from the current frame's stack or from the parent frame's upvalue array.&lt;/p&gt;
&lt;p&gt;A concrete example makes this clearer. Consider a counter factory:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn make_counter() {
    flux count = 0
    return |n| { count += n; count }
}

let c = make_counter()
print(c(5))   // 5
print(c(3))   // 8
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;When &lt;code&gt;make_counter&lt;/code&gt; returns, its stack frame is destroyed, but &lt;code&gt;count&lt;/code&gt; needs to survive, because the returned closure references it. During compilation, the compiler sees that the closure's body references &lt;code&gt;count&lt;/code&gt;, which is local to the enclosing &lt;code&gt;make_counter&lt;/code&gt;. It emits an &lt;code&gt;(is_local=true, index=1)&lt;/code&gt; upvalue descriptor. At runtime, &lt;code&gt;OP_CLOSURE&lt;/code&gt; calls &lt;code&gt;capture_upvalue()&lt;/code&gt;, which either reuses an existing &lt;code&gt;ObjUpvalue&lt;/code&gt; pointing at that stack slot or creates a new one. When &lt;code&gt;make_counter&lt;/code&gt; returns, &lt;code&gt;OP_CLOSE_UPVALUE&lt;/code&gt; copies the stack value of &lt;code&gt;count&lt;/code&gt; into the upvalue's &lt;code&gt;closed&lt;/code&gt; field and repoints &lt;code&gt;location&lt;/code&gt;. The closure keeps working, oblivious to the frame being gone.&lt;/p&gt;
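&lt;p&gt;The reuse-or-create logic in &lt;code&gt;capture_upvalue()&lt;/code&gt; might look like this sketch of the standard Lua-style algorithm (illustrative, not Lattice's exact code):&lt;/p&gt;

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { long num; } LatValue;

typedef struct ObjUpvalue {
    LatValue *location;
    LatValue  closed;
    struct ObjUpvalue *next;      /* open list, topmost slot first */
} ObjUpvalue;

/* Reuse an open upvalue for `slot` if one exists, otherwise allocate
   a new one and splice it into the address-sorted open list. */
static ObjUpvalue *capture_upvalue(ObjUpvalue **open_list, LatValue *slot) {
    ObjUpvalue *prev = NULL, *up = *open_list;
    while (up != NULL && up->location > slot) {
        prev = up;
        up = up->next;
    }
    if (up != NULL && up->location == slot)
        return up;                       /* closures share one upvalue */
    ObjUpvalue *created = calloc(1, sizeof *created);
    created->location = slot;
    created->next = up;
    if (prev) prev->next = created; else *open_list = created;
    return created;
}
```

&lt;p&gt;The reuse matters: if two closures capture the same variable, they must share a single upvalue so that a write through either closure is visible to both.&lt;/p&gt;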
&lt;p&gt;One implementation detail worth noting: Lattice stores the upvalue array by repurposing the closure's &lt;code&gt;captured_env&lt;/code&gt; field (normally an &lt;code&gt;Env*&lt;/code&gt; in the tree-walker) and the upvalue count in the &lt;code&gt;region_id&lt;/code&gt; field. This avoids adding new fields to the &lt;code&gt;LatValue&lt;/code&gt; union, which matters when values are deep-cloned frequently, since every field adds to the clone cost.&lt;/p&gt;
&lt;h3&gt;Compiling for the REPL&lt;/h3&gt;
&lt;p&gt;A REPL that runs on a bytecode VM needs different compilation from file execution. The difference is small but important.&lt;/p&gt;
&lt;p&gt;In file mode, &lt;code&gt;compile_module()&lt;/code&gt; compiles a complete program and terminates with &lt;code&gt;OP_UNIT; OP_RETURN&lt;/code&gt;; the module returns unit, and any expression results along the way are discarded with &lt;code&gt;OP_POP&lt;/code&gt;. This is the right behavior for scripts: you don't want every intermediate expression to accumulate on the stack.&lt;/p&gt;
&lt;p&gt;In REPL mode, &lt;code&gt;compile_repl()&lt;/code&gt; needs the opposite behavior for the last expression. When you type &lt;code&gt;42&lt;/code&gt; at the REPL prompt, you want to see &lt;code&gt;=&amp;gt; 42&lt;/code&gt;. So if the last item in the compiled chunk is a bare expression statement, &lt;code&gt;compile_repl()&lt;/code&gt; compiles the expression but &lt;em&gt;skips the &lt;code&gt;OP_POP&lt;/code&gt;&lt;/em&gt;, leaving the value on the stack. Then it emits &lt;code&gt;OP_RETURN&lt;/code&gt;, and the VM receives the value as the chunk's return value.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="kt"&gt;bool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;last_is_expr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prog&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;item_count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;prog&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;prog&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;item_count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;ITEM_STMT&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;prog&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;prog&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;item_count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stmt&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;STMT_EXPR&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;last_is_expr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;emit_byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;OP_RETURN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;       &lt;/span&gt;&lt;span class="c1"&gt;// value already on stack&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;emit_byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;OP_UNIT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;         &lt;/span&gt;&lt;span class="c1"&gt;// no expression — return unit&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;emit_byte&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;OP_RETURN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;For function definitions, struct declarations, and enum definitions, the result is unit, and the REPL silently suppresses the &lt;code&gt;=&amp;gt;&lt;/code&gt; output. This matches user expectations: defining a function shouldn't print anything. The effect in practice:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="n"&gt;lattice&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;
&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;
&lt;span class="n"&gt;lattice&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;"hello"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;" world"&lt;/span&gt;
&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;"hello world"&lt;/span&gt;
&lt;span class="n"&gt;lattice&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;
&lt;span class="n"&gt;lattice&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;
&lt;span class="n"&gt;lattice&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;square&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Int&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;lattice&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;square&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;49&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Each line is independently compiled and executed on the persistent VM. Globals defined in one line (&lt;code&gt;flux x = 10&lt;/code&gt;) are visible in subsequent lines because they're stored in the VM's environment, which persists across iterations. The &lt;code&gt;Chunk&lt;/code&gt; for each line is freed after execution; constants that matter (like global variable values) have already been deep-cloned into the environment.&lt;/p&gt;
&lt;p&gt;The other critical difference is enum persistence. &lt;code&gt;compile_module()&lt;/code&gt; frees its known-enum registry after compilation, because the compiler is done. &lt;code&gt;compile_repl()&lt;/code&gt; must not, because enums defined in REPL iteration N need to be visible in iteration N+1. The REPL calls &lt;code&gt;compiler_free_known_enums()&lt;/code&gt; only on exit. The same lifetime concern applies to parsed programs; struct and function declarations store &lt;code&gt;Expr*&lt;/code&gt; pointers that compiled chunks reference at runtime. The REPL accumulates all parsed programs in a dynamic array and frees them only when the session ends.&lt;/p&gt;
&lt;h3&gt;The Global Mutation Bug&lt;/h3&gt;
&lt;p&gt;This is the story I find most instructive, because it reveals a subtle interaction between two independently reasonable design decisions.&lt;/p&gt;
&lt;p&gt;Lattice has &lt;strong&gt;deep-clone-on-read&lt;/strong&gt; semantics. When you access a variable, the environment doesn't hand you a reference to the stored value; it hands you a fresh deep clone. This eliminates aliasing entirely: two variables never share underlying memory, passing a map to a function gives the function its own copy, and there's no way to create spooky action at a distance through shared mutable state.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="kt"&gt;bool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nf"&gt;env_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Env&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;char&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;lat_map_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;scopes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="mi"&gt;-1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;value_deep_clone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;// always a fresh copy&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is expensive but correct. It gives Lattice pure value semantics without needing a borrow checker or persistent data structures.&lt;/p&gt;
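&lt;p&gt;The guarantee is easy to demonstrate with a toy deep clone (the &lt;code&gt;ArrVal&lt;/code&gt; type here is a hypothetical stand-in for Lattice's real value representation):&lt;/p&gt;

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy value: a heap-allocated array, enough to show aliasing. */
typedef struct { long *items; size_t len; } ArrVal;

/* Deep clone: fresh buffer, copied contents. A reader can never
   mutate the stored original through the returned value. */
static ArrVal deep_clone(const ArrVal *v) {
    ArrVal out = { malloc(v->len * sizeof *v->items), v->len };
    memcpy(out.items, v->items, v->len * sizeof *v->items);
    return out;
}
```

&lt;p&gt;Mutating the clone leaves the stored value untouched, which is exactly the no-aliasing property the phase system relies on.&lt;/p&gt;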
&lt;p&gt;The tree-walking evaluator handles in-place mutation (like &lt;code&gt;array.push()&lt;/code&gt;) with a separate &lt;code&gt;resolve_lvalue()&lt;/code&gt; mechanism that obtains a direct mutable pointer into the environment's storage, bypassing the deep clone. Push, pop, index assignment: these all go through &lt;code&gt;resolve_lvalue&lt;/code&gt; and mutate the stored value directly.&lt;/p&gt;
&lt;p&gt;The bytecode VM needed the same distinction. For local variables, this is straightforward: locals live on the value stack, and the VM has a direct pointer to them via &lt;code&gt;frame-&amp;gt;slots[slot]&lt;/code&gt;. I added &lt;code&gt;OP_INVOKE_LOCAL&lt;/code&gt;, which takes a stack slot index as an operand and passes a pointer to &lt;code&gt;vm_invoke_builtin()&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;OP_INVOKE_LOCAL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;slot&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;READ_BYTE&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;method_idx&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;READ_BYTE&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;arg_count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;READ_BYTE&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;char&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;method_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;constants&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;method_idx&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;str_val&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;obj&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;slots&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;slot&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;// direct pointer&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vm_invoke_builtin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;method_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;arg_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;local_var_name&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// builtin mutated obj in-place — mutation persists&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;// ... fall through to closure/method dispatch&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;When &lt;code&gt;.push()&lt;/code&gt; grows the array by reallocating &lt;code&gt;obj-&amp;gt;as.array.elems&lt;/code&gt; and incrementing &lt;code&gt;obj-&amp;gt;as.array.len&lt;/code&gt;, it's directly modifying the stack slot. The mutation persists because &lt;code&gt;obj&lt;/code&gt; &lt;em&gt;is&lt;/em&gt; the variable.&lt;/p&gt;
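&lt;p&gt;Stripped of the VM context, the mechanism is ordinary C: grow a buffer through a pointer that aliases the variable's storage. The &lt;code&gt;Array&lt;/code&gt; type here is a simplified stand-in for the array payload of a &lt;code&gt;LatValue&lt;/code&gt;, not Lattice's actual layout.&lt;/p&gt;

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for the array payload of a LatValue. */
typedef struct { int *elems; size_t len; size_t cap; } Array;

/* Because `arr` points directly at the slot's payload, reallocating
 * elems and bumping len here mutates the variable itself -- no
 * separate write-back step is needed. */
void array_push(Array *arr, int v) {
    if (arr->len == arr->cap) {
        arr->cap = arr->cap ? arr->cap * 2 : 4;
        arr->elems = realloc(arr->elems, arr->cap * sizeof(int));
    }
    arr->elems[arr->len++] = v;
}
```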
&lt;p&gt;For globals, the situation is different. Globals live in the environment (a scope-chain of hash maps), and &lt;code&gt;env_get()&lt;/code&gt; deep-clones. The generic &lt;code&gt;OP_INVOKE&lt;/code&gt; opcode works by evaluating the receiver expression onto the stack (which, for a global variable, means emitting &lt;code&gt;OP_GET_GLOBAL&lt;/code&gt;, which calls &lt;code&gt;env_get()&lt;/code&gt;, which deep-clones) and then dispatching the method on the cloned value. After the builtin mutates the clone, &lt;code&gt;OP_INVOKE&lt;/code&gt; pops and &lt;em&gt;frees&lt;/em&gt; it. The mutation vanishes.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux nums = [1, 2, 3]
nums.push(4)
print(nums)  // still [1, 2, 3] — the push mutated a clone
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is the kind of bug that's obvious in retrospect but invisible when you're implementing things one piece at a time. &lt;code&gt;env_get()&lt;/code&gt; deep-cloning is correct. &lt;code&gt;OP_INVOKE&lt;/code&gt; popping the receiver after dispatch is correct. Each piece behaves correctly in isolation. The bug emerges from their composition.&lt;/p&gt;
&lt;p&gt;The fix is &lt;code&gt;OP_INVOKE_GLOBAL&lt;/code&gt;, a new opcode that knows the receiver is a global variable and writes back after mutation:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;OP_INVOKE_GLOBAL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;name_idx&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;READ_BYTE&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;method_idx&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;READ_BYTE&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;arg_count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;READ_BYTE&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;char&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;global_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;constants&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;name_idx&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;str_val&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;char&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;method_name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;frame&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;constants&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;method_idx&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;str_val&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;obj_val&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;env_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vm&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;global_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;obj_val&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="n"&gt;VM_ERROR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"undefined variable '%s'"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;global_name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vm_invoke_builtin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;obj_val&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;method_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;arg_count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;global_name&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vm&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cm"&gt;/* handle error */&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// Write back the mutated clone to the environment&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="n"&gt;env_set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vm&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;global_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;obj_val&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;// ... fall through for non-builtin methods&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The compiler emits &lt;code&gt;OP_INVOKE_GLOBAL&lt;/code&gt; when it sees a method call on an identifier that isn't a local variable or an upvalue:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;case&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="no"&gt;EXPR_METHOD_CALL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method_call&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;tag&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;EXPR_IDENT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;slot&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;resolve_local&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;current&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method_call&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;str_val&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;slot&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="c1"&gt;// ... emit OP_INVOKE_LOCAL&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;upvalue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;resolve_upvalue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;current&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method_call&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;object&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;str_val&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;upvalue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="c1"&gt;// Not local, not upvalue — must be global&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="c1"&gt;// ... emit OP_INVOKE_GLOBAL&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;// ... fall through to generic OP_INVOKE&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This gives us three tiers of method dispatch: &lt;code&gt;OP_INVOKE_LOCAL&lt;/code&gt; for locals (direct pointer, no clone), &lt;code&gt;OP_INVOKE_GLOBAL&lt;/code&gt; for globals (clone + write-back), and &lt;code&gt;OP_INVOKE&lt;/code&gt; for everything else (computed receivers like &lt;code&gt;get_array().push(x)&lt;/code&gt;, where there's nothing to write back to). With the fix:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux nums = [1, 2, 3]
nums.push(4)
nums.push(5)
print(nums)  // [1, 2, 3, 4, 5] — mutations persist
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;All mutating builtins (&lt;code&gt;push&lt;/code&gt;, &lt;code&gt;pop&lt;/code&gt;, &lt;code&gt;set&lt;/code&gt;, &lt;code&gt;remove&lt;/code&gt;, &lt;code&gt;insert&lt;/code&gt;, &lt;code&gt;remove_at&lt;/code&gt;) now work correctly on global variables. The same pattern applies to maps, sets, and any other type with in-place methods.&lt;/p&gt;
&lt;p&gt;The broader lesson is that deep-clone-on-read semantics create an impedance mismatch with in-place mutation. In a reference-based language, &lt;code&gt;obj.push(x)&lt;/code&gt; just works; &lt;code&gt;obj&lt;/code&gt; is a reference, and the mutation happens wherever the reference points. In a value-based language, you need to explicitly handle the write-back for every level of variable storage. The tree-walker's &lt;code&gt;resolve_lvalue&lt;/code&gt; is one solution. The VM's tiered invoke opcodes are another. Both exist because of the same underlying tension.&lt;/p&gt;
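&lt;p&gt;The write-back tier reduces to a small pattern, sketched here with toy types (none of these names are Lattice's actual API): read a copy, mutate the copy, store the copy back. Drop the final store and you reproduce the vanishing-mutation bug.&lt;/p&gt;

```c
#include <assert.h>
#include <stddef.h>

/* Toy single-variable "environment" with value semantics:
 * reads hand out copies, so mutations must be written back. */
typedef struct { int data[4]; size_t len; } Value;
static Value store = { {0}, 0 };

Value env_get_copy(void) { return store; }  /* deep-clone-on-read */
void  env_set(Value v)   { store = v; }     /* write-back */

/* The OP_INVOKE_GLOBAL pattern: clone, mutate, write back. */
void global_push(int x) {
    Value v = env_get_copy();
    v.data[v.len++] = x;  /* mutates only the clone... */
    env_set(v);           /* ...so the mutation must be persisted explicitly */
}
```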
&lt;h3&gt;The WASM Playground&lt;/h3&gt;
&lt;p&gt;Lattice's browser-based &lt;a href="https://baud.rs/odS816"&gt;playground&lt;/a&gt; compiles the entire VM to WebAssembly via Emscripten. The WASM API exposes four functions: &lt;code&gt;lat_init()&lt;/code&gt;, &lt;code&gt;lat_run_line()&lt;/code&gt;, &lt;code&gt;lat_is_complete()&lt;/code&gt;, and &lt;code&gt;lat_destroy()&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The playground runs the same bytecode VM as the native binary. Each line of input goes through the same pipeline: lex, parse, &lt;code&gt;compile_repl()&lt;/code&gt;, &lt;code&gt;vm_run()&lt;/code&gt;. The &lt;code&gt;lat_is_complete()&lt;/code&gt; function checks bracket depth to determine whether the user is mid-expression, enabling multi-line input by waiting for balanced braces before compiling.&lt;/p&gt;
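&lt;p&gt;A minimal version of that bracket-depth test might look like the following sketch (it ignores string literals and comments, which a real implementation has to account for):&lt;/p&gt;

```c
#include <assert.h>
#include <stdbool.h>

/* Returns true when every ( [ { in `src` has been closed, i.e. the
 * input is not mid-expression and can be compiled. Over-closed
 * input also counts as complete, so the parser reports the error. */
bool input_is_complete(const char *src) {
    int depth = 0;
    for (; *src; src++) {
        if (*src == '(' || *src == '[' || *src == '{') depth++;
        else if (*src == ')' || *src == ']' || *src == '}') depth--;
    }
    return depth <= 0;
}
```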
&lt;p&gt;Previously the playground used the tree-walking evaluator, which meant code could behave differently in the browser than on the command line. Switching the WASM build to the bytecode VM eliminates that inconsistency; the playground, the REPL, and file execution all use the same compilation and execution path.&lt;/p&gt;
&lt;h3&gt;What Didn't Change&lt;/h3&gt;
&lt;p&gt;It's worth noting what the bytecode VM &lt;em&gt;doesn't&lt;/em&gt; change about Lattice.&lt;/p&gt;
&lt;p&gt;The value representation is identical. A &lt;code&gt;LatValue&lt;/code&gt; is still a tagged union with a type tag, phase tag, and payload. Phase transitions still deep-clone data across heap regions. The dual-heap architecture (mark-and-sweep for fluid data, arena-based regions for crystal data) is unchanged. Global variables still live in a scope-chain environment.&lt;/p&gt;
&lt;p&gt;The parser and AST are completely shared. The compiler reads the same &lt;code&gt;Program&lt;/code&gt; structure that the tree-walker reads. A single set of test programs validates both execution paths, and all 771 tests pass on both.&lt;/p&gt;
&lt;p&gt;The phase system compiles one-to-one. &lt;code&gt;freeze()&lt;/code&gt; becomes &lt;code&gt;OP_FREEZE&lt;/code&gt;. &lt;code&gt;thaw()&lt;/code&gt; becomes &lt;code&gt;OP_THAW&lt;/code&gt;. Bonds, reactions, seeds, pressure constraints: each has a corresponding opcode that does exactly what the tree-walker's evaluator function did, just driven by bytecode dispatch instead of recursive AST traversal.&lt;/p&gt;
&lt;h3&gt;Performance&lt;/h3&gt;
&lt;p&gt;I haven't done rigorous benchmarking, and I'm deliberately not making performance claims. The motivation for the bytecode VM wasn't speed; it was consistency (one execution path everywhere) and architectural cleanliness (the VM is easier to extend than the tree-walker's deeply nested switch statements).&lt;/p&gt;
&lt;p&gt;That said, bytecode VMs are generally faster than tree-walkers for the structural reasons mentioned earlier: better cache locality (sequential byte array vs. pointer-chasing through AST nodes), less call overhead (one switch dispatch vs. recursive function calls), and a compact representation that fits more of the program in cache. Whether this matters for Lattice programs depends on the workload. For a language whose core runtime cost is dominated by deep cloning, the dispatch overhead is rarely the bottleneck.&lt;/p&gt;
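&lt;p&gt;The structural difference is easy to see in a toy dispatch loop (the opcodes below are invented for illustration, not Lattice's): execution is a linear walk over a byte array with one switch per instruction, rather than a recursive descent through heap-allocated AST nodes.&lt;/p&gt;

```c
#include <assert.h>
#include <stdint.h>

/* A minimal stack-machine dispatch loop: fetch a byte, switch,
 * repeat. The entire program is one flat array the CPU can
 * prefetch sequentially. */
enum { OP_CONST, OP_ADD, OP_HALT };

int run(const uint8_t *code) {
    int stack[16];
    int *sp = stack;
    for (;;) {
        switch (*code++) {
        case OP_CONST: *sp++ = *code++; break;        /* push operand */
        case OP_ADD:   sp--; sp[-1] += sp[0]; break;  /* pop two, push sum */
        case OP_HALT:  return sp[-1];                 /* result on top */
        }
    }
}
```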
&lt;h3&gt;Looking Forward&lt;/h3&gt;
&lt;p&gt;The VM is feature-complete but not optimized. There's no constant folding, no dead code elimination, no register allocation (it's a pure stack machine). The &lt;code&gt;OP_SCOPE&lt;/code&gt; and &lt;code&gt;OP_SELECT&lt;/code&gt; concurrency opcodes still delegate to the tree-walker. The dispatch loop is a plain switch rather than computed gotos.&lt;/p&gt;
&lt;p&gt;These are all well-understood optimizations with clear implementation paths. The point of v0.3.1 is that the bytecode VM is now the default, passes all tests, and handles the full language surface including the phase system. Optimization is a separate project.&lt;/p&gt;
&lt;p&gt;The source code is at &lt;a href="https://baud.rs/fIe3gx"&gt;github.com/ajokela/lattice&lt;/a&gt;, and you can try it in the browser at &lt;a href="https://baud.rs/bwvnYT"&gt;lattice-lang.web.app&lt;/a&gt;. The bytecode VM, compiler, REPL, and all 62 opcodes are in four files: &lt;code&gt;compiler.c&lt;/code&gt;, &lt;code&gt;vm.c&lt;/code&gt;, &lt;code&gt;chunk.c&lt;/code&gt;, and &lt;code&gt;opcode.c&lt;/code&gt;.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;git clone https://github.com/ajokela/lattice.git
cd lattice &amp;amp;&amp;amp; make
./clat
&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;Recommended Resources&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://baud.rs/crafting-interpreters"&gt;&lt;em&gt;Crafting Interpreters&lt;/em&gt;&lt;/a&gt; by Robert Nystrom - The definitive guide to building interpreters and bytecode VMs, and a major influence on Lattice's upvalue implementation&lt;/li&gt;
&lt;li&gt;&lt;a href="https://baud.rs/P6ofTE"&gt;&lt;em&gt;Writing A Compiler In Go&lt;/em&gt;&lt;/a&gt; by Thorsten Ball - Practical companion covering bytecode compilation and stack-based VMs&lt;/li&gt;
&lt;li&gt;&lt;a href="https://baud.rs/BSTqlt"&gt;&lt;em&gt;Engineering a Compiler&lt;/em&gt;&lt;/a&gt; by Cooper &amp;amp; Torczon - Comprehensive treatment of compiler internals from front-end to optimization&lt;/li&gt;
&lt;li&gt;&lt;a href="https://baud.rs/JhMFPU"&gt;&lt;em&gt;Compilers: Principles, Techniques, and Tools&lt;/em&gt;&lt;/a&gt; by Aho, Lam, Sethi, Ullman - The classic &lt;em&gt;Dragon Book&lt;/em&gt; covering parsing, code generation, and optimization theory&lt;/li&gt;
&lt;/ul&gt;</description><category>bytecode</category><category>c</category><category>compilers</category><category>interpreters</category><category>language design</category><category>lattice</category><category>programming languages</category><category>virtual machine</category><guid>https://tinycomputers.io/posts/from-tree-walker-to-bytecode-vm-compiling-lattice.html</guid><pubDate>Tue, 17 Feb 2026 18:00:00 GMT</pubDate></item><item><title>Mutability as a First-Class Concept: The Lattice Phase System</title><link>https://tinycomputers.io/posts/mutability-as-a-first-class-concept-the-lattice-phase-system.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/mutability-as-a-first-class-concept-the-lattice-phase-system_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;11 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;h2&gt;Mutability as a First-Class Concept: The Lattice Phase System&lt;/h2&gt;
&lt;p&gt;Most programming languages treat mutability as a binary annotation. You write &lt;code&gt;const&lt;/code&gt; or &lt;code&gt;let&lt;/code&gt;, &lt;code&gt;final&lt;/code&gt; or &lt;code&gt;var&lt;/code&gt;, and the compiler enforces it statically. Rust goes further with its borrow checker, enforcing exclusive mutable access at compile time. JavaScript offers &lt;code&gt;Object.freeze()&lt;/code&gt;, a runtime operation that's shallow (nested objects stay mutable) and provides no mechanism for observation or validation. These are all useful tools, but they share a common limitation: mutability is something you &lt;em&gt;declare&lt;/em&gt;, not something you &lt;em&gt;work with&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In &lt;a href="https://baud.rs/bwvnYT"&gt;Lattice&lt;/a&gt;, I've been building something different. Mutability (what Lattice calls &lt;em&gt;phase&lt;/em&gt;) is a first-class runtime property that can be queried, constrained, validated, coordinated across variables, observed reactively, and even tracked historically. Over the last several releases (v0.2.3 through v0.2.6), this system has grown from simple freeze/thaw semantics into a full lifecycle framework. This post walks through that progression and the design decisions behind it.&lt;/p&gt;
&lt;h3&gt;The Metaphor: Crystallization&lt;/h3&gt;
&lt;p&gt;Lattice is built around the metaphor of crystallization. Values begin in a &lt;strong&gt;fluid&lt;/strong&gt; state (mutable) and can be &lt;strong&gt;frozen&lt;/strong&gt; into a &lt;strong&gt;crystal&lt;/strong&gt; state (immutable). The &lt;code&gt;thaw()&lt;/code&gt; operation creates a mutable copy of a crystal value, and &lt;code&gt;clone()&lt;/code&gt; performs a deep copy regardless of phase. This vocabulary isn't just cosmetic; it shapes how you think about data lifecycle in a Lattice program.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux temperature = 72.5       // fluid: mutable
temperature = 68.0             // allowed

freeze(temperature)            // now crystal: immutable
// temperature = 70.0          // ERROR: cannot mutate crystal value

flux copy = thaw(temperature)  // new fluid copy
copy = 70.0                    // allowed
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;flux&lt;/code&gt; keyword declares a fluid (mutable) binding. The &lt;code&gt;fix&lt;/code&gt; keyword declares a crystal (immutable) binding. And &lt;code&gt;let&lt;/code&gt; infers phase from context: fluid if the value is fluid, crystal if crystal. This alone isn't novel. What makes Lattice's approach interesting is everything that builds on top of it.&lt;/p&gt;
&lt;h3&gt;Phase Constraints: Mutability in Your Type Signatures&lt;/h3&gt;
&lt;p&gt;The first major addition (v0.2.3) was phase constraints on function parameters. In most languages, a function that receives data has no way to express whether it expects mutable or immutable input. You might document it, or rely on convention, but the language doesn't help. In Lattice, you can annotate parameters with their expected phase:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn mutate(data: flux Map) {
    data.set("modified", true)
}

fn inspect(data: fix Map) {
    print(data.get("name"))
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The runtime checks phase at call time. Pass a crystal value to &lt;code&gt;mutate()&lt;/code&gt; and you get an error. Pass a fluid value to &lt;code&gt;inspect()&lt;/code&gt; and it works fine. Fluid is compatible with fix because it &lt;em&gt;can&lt;/em&gt; be read. The constraint is about what the function &lt;em&gt;needs&lt;/em&gt;, not what the caller &lt;em&gt;has&lt;/em&gt;.&lt;/p&gt;
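&lt;p&gt;The call-time check boils down to a one-directional compatibility rule, sketched here with invented enum names rather than Lattice's real internals:&lt;/p&gt;

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative phase tags and constraint check. */
typedef enum { PHASE_FLUID, PHASE_CRYSTAL } Phase;
typedef enum { NEEDS_FLUX, NEEDS_FIX } Constraint;

/* A fix parameter accepts any argument, because it only reads.
 * A flux parameter requires a fluid argument, because it may write. */
bool phase_compatible(Constraint c, Phase arg) {
    if (c == NEEDS_FIX) return true;
    return arg == PHASE_FLUID;
}
```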
&lt;p&gt;The shorthand syntax uses &lt;code&gt;~&lt;/code&gt; for flux and &lt;code&gt;*&lt;/code&gt; for fix:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn process(data: ~Map) { ... }  // needs mutable
fn display(data: *Map) { ... }  // needs immutable
&lt;/pre&gt;&lt;/div&gt;

&lt;h4&gt;Phase-Dependent Dispatch&lt;/h4&gt;
&lt;p&gt;Phase constraints enable something more powerful: dispatch based on runtime phase. You can define multiple implementations of the same function with different phase signatures, and the runtime selects the best match:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="n"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="n"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mutable&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;can&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;before&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;serializing&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"serialized_at"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;time_now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json_stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;immutable&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;directly&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;no&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;side&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;effects&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json_stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"host"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;
&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;overload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;adds&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;

&lt;span class="n"&gt;freeze&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;fix&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;overload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;no&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mutation&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The overload resolution uses a scoring system. An exact phase match (fluid argument to flux parameter) scores highest. A compatible match (fluid to unphased) scores lower. An incompatible match (crystal to flux) is rejected entirely. When multiple overloads exist, the best-scoring one wins.&lt;/p&gt;
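&lt;p&gt;To make the scoring concrete, here is a sketch (treating an unannotated parameter as an unphased overload is an assumption on my part, not documented syntax):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn describe(x: ~Map) { print("mutable map") }
fn describe(x) { print("anything else") }   // unphased fallback (syntax assumed)

flux m = Map::new()
describe(m)      // exact match: fluid argument to flux parameter

freeze(m)
describe(m)      // crystal cannot bind to ~Map; the unphased
                 // overload is the best remaining match
&lt;/pre&gt;&lt;/div&gt;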
&lt;p&gt;This is genuinely useful in practice. A caching layer might have one implementation that updates a cache (requires mutable data) and another that reads through (works with immutable data). A serialization function might add metadata to mutable structures but serialize immutable ones directly. The caller doesn't need to choose a code path; the runtime dispatches based on what the data actually &lt;em&gt;is&lt;/em&gt;.&lt;/p&gt;
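&lt;p&gt;The caching case might be sketched like this (&lt;code&gt;expensive_compute&lt;/code&gt; and the two-parameter signature are illustrative, not part of Lattice's library, and the miss behavior on the immutable path is simplified):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn lookup(store: ~Map, key) {
    // mutable path: compute on a miss and cache the result
    if !store.has(key) {
        store.set(key, expensive_compute(key))
    }
    print(store[key])
}

fn lookup(store: *Map, key) {
    // immutable path: read only, never writes to the store
    print(store[key])
}
&lt;/pre&gt;&lt;/div&gt;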
&lt;h3&gt;Crystallization Contracts: Validation at the Phase Boundary&lt;/h3&gt;
&lt;p&gt;The next question was: when data freezes, how do you ensure it's in a valid state? In real systems, immutable data often represents finalized configuration, committed transactions, or published records. You want to validate before that transition happens.&lt;/p&gt;
&lt;p&gt;Version 0.2.5 introduced crystallization contracts, which attach a validation closure to &lt;code&gt;freeze()&lt;/code&gt; via the &lt;code&gt;where&lt;/code&gt; keyword:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="nv"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Map&lt;/span&gt;::&lt;span class="nv"&gt;new&lt;/span&gt;&lt;span class="ss"&gt;()&lt;/span&gt;
&lt;span class="nv"&gt;config&lt;/span&gt;[&lt;span class="s2"&gt;"host"&lt;/span&gt;]&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;
&lt;span class="nv"&gt;config&lt;/span&gt;[&lt;span class="s2"&gt;"port"&lt;/span&gt;]&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;
&lt;span class="nv"&gt;config&lt;/span&gt;[&lt;span class="s2"&gt;"workers"&lt;/span&gt;]&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;

&lt;span class="nv"&gt;freeze&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;config&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nv"&gt;v&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nv"&gt;v&lt;/span&gt;.&lt;span class="nv"&gt;has&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"host"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;throw&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"config missing 'host'"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;}
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nv"&gt;v&lt;/span&gt;.&lt;span class="nv"&gt;has&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"port"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;throw&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"config missing 'port'"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;}
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;v&lt;/span&gt;[&lt;span class="s2"&gt;"workers"&lt;/span&gt;]&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;throw&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"need at least 1 worker"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;}
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The contract receives a deep clone of the value (so the validation can't accidentally mutate the original), runs the closure, and if the closure throws, the freeze is aborted and the value remains fluid. If validation passes, the value transitions to crystal.&lt;/p&gt;
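&lt;p&gt;A failing contract, sketched (how the caller handles the error is left aside here):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux config = Map::new()
config["host"] = "localhost"
// note: no "port" key was ever set

freeze(config) where |v| {
    if !v.has("port") { throw("config missing 'port'") }
}
// the closure throws, so the freeze is aborted:
// config keeps its fluid phase and stays mutable
&lt;/pre&gt;&lt;/div&gt;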
&lt;p&gt;This maps cleanly to real-world patterns. Database ORMs validate before persisting. Configuration systems validate before applying. Form submissions validate before accepting. The difference is that in Lattice, this validation is attached to the &lt;em&gt;phase transition itself&lt;/em&gt;, not to a separate method you have to remember to call.&lt;/p&gt;
&lt;p&gt;Contracts compose naturally with the rest of the phase system. You can use them with phase bonds (discussed next) or with phase-dependent dispatch. A function that accepts &lt;code&gt;fix Map&lt;/code&gt; knows its argument passed whatever contract was attached at freeze time.&lt;/p&gt;
&lt;h3&gt;Phase Bonds: Coordinated Freezing&lt;/h3&gt;
&lt;p&gt;Individual freeze/thaw operations work well for isolated values, but real programs have related data that should transition together. A web request's headers, body, and metadata should probably all be immutable before you send it. A transaction's debit and credit entries should freeze atomically.&lt;/p&gt;
&lt;p&gt;Phase bonds (also v0.2.5) let you declare these relationships:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux&lt;span class="w"&gt; &lt;/span&gt;header&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;Map::new()
flux&lt;span class="w"&gt; &lt;/span&gt;body&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;Map::new()
flux&lt;span class="w"&gt; &lt;/span&gt;footer&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;Map::new()

header["content-type"]&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;"text/html"
body["content"]&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;"&lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Hello&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;"
footer["timestamp"]&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;time_now()

bond(header,&lt;span class="w"&gt; &lt;/span&gt;body,&lt;span class="w"&gt; &lt;/span&gt;footer)

freeze(header)&lt;span class="w"&gt;              &lt;/span&gt;//&lt;span class="w"&gt; &lt;/span&gt;cascades&lt;span class="w"&gt; &lt;/span&gt;to&lt;span class="w"&gt; &lt;/span&gt;body&lt;span class="w"&gt; &lt;/span&gt;AND&lt;span class="w"&gt; &lt;/span&gt;footer
print(phase_of(body))&lt;span class="w"&gt;       &lt;/span&gt;//&lt;span class="w"&gt; &lt;/span&gt;"crystal"
print(phase_of(footer))&lt;span class="w"&gt;     &lt;/span&gt;//&lt;span class="w"&gt; &lt;/span&gt;"crystal"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;bond(target, ...deps)&lt;/code&gt; call links dependencies to a target. When the target freezes, all its dependencies freeze too. Bonds are also transitive: if A is bonded to B and B is bonded to C, freezing A cascades through B to C.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux a = 1
flux b = 2
flux c = 3

bond(a, b)    // b depends on a
bond(b, c)    // c depends on b

freeze(a)     // freezes a → b → c
print(phase_of(c))  // "crystal"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can remove bonds with &lt;code&gt;unbond()&lt;/code&gt; before the freeze happens:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;bond(header, body, footer)
unbond(header, footer)    // footer no longer cascades

freeze(header)            // freezes header and body, NOT footer
print(phase_of(footer))   // "fluid"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Bonds solve a coordination problem that most languages leave to discipline. In a typical codebase, you'd need to remember to freeze all related values, or wrap them in a container and freeze that. Bonds make the relationship explicit and enforced.&lt;/p&gt;
&lt;h3&gt;Phase Reactions: Observing State Transitions&lt;/h3&gt;
&lt;p&gt;With constraints, contracts, and bonds, you can control &lt;em&gt;how&lt;/em&gt; and &lt;em&gt;when&lt;/em&gt; phase transitions happen. But sometimes you also need to know &lt;em&gt;that&lt;/em&gt; they happened. Logging, cache invalidation, UI updates, audit trails: these are all responses to state changes.&lt;/p&gt;
&lt;p&gt;Version 0.2.6 adds phase reactions: callbacks that fire automatically when a variable's phase changes.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux data = [1, 2, 3]

react(data, |phase, val| {
    print("data is now " + phase + ": " + to_string(val))
})

freeze(data)   // prints: "data is now crystal: [1, 2, 3]"
thaw(data)     // prints: "data is now fluid: [1, 2, 3]"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The callback receives two arguments: the new phase name (as a string, "crystal" or "fluid") and a deep clone of the current value. Multiple callbacks can be registered on the same variable, and they fire in registration order:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux counter = 0

react(counter, |phase, val| {
    print("logger: counter is now " + phase)
})

react(counter, |phase, val| {
    if phase == "crystal" {
        print("audit: counter finalized at " + to_string(val))
    }
})

counter = 42
freeze(counter)
// prints:
//   logger: counter is now crystal
//   audit: counter finalized at 42
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Reactions also fire during bond cascades. If variable B is bonded to A and has a reaction registered, freezing A will cascade to B and trigger B's reaction:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux primary = Map::new()
flux replica = Map::new()

bond(primary, replica)

react(replica, |phase, val| {
    print("replica transitioned to " + phase)
})

freeze(primary)
// prints: "replica transitioned to crystal"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is a powerful combination. Bonds handle the &lt;em&gt;coordination&lt;/em&gt; of transitions, and reactions handle the &lt;em&gt;observation&lt;/em&gt;. Together they let you build systems where phase changes propagate and trigger side effects in a predictable, declarative way.&lt;/p&gt;
&lt;p&gt;Use &lt;code&gt;unreact()&lt;/code&gt; to remove all reactions from a variable:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;react(data, |phase, val| { print("fired") })
unreact(data)
freeze(data)  // no output — reaction was removed
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If a reaction callback throws an error, it propagates as a reaction error, giving you a clean way to handle failures in the observation chain.&lt;/p&gt;
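&lt;p&gt;For example (again leaving aside how the caller recovers from the error):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux data = Map::new()

react(data, |phase, val| {
    if phase == "crystal" {
        throw("observer rejected the freeze")
    }
})

freeze(data)   // the callback throws; the failure surfaces at
               // this call site as a reaction error
&lt;/pre&gt;&lt;/div&gt;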
&lt;h3&gt;Temporal Values: Phase History and Time Travel&lt;/h3&gt;
&lt;p&gt;The last piece of the phase system (also v0.2.5) is temporal values: the ability to track a variable's phase transitions and value changes over time.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux counter = 0
track("counter")

counter = 10
counter = 20
freeze(counter)

let history = phases("counter")
// [{phase: "fluid", value: 0},
//  {phase: "fluid", value: 10},
//  {phase: "fluid", value: 20},
//  {phase: "crystal", value: 20}]
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;track()&lt;/code&gt; function enables recording for a named variable. Every assignment and phase transition creates a snapshot. The &lt;code&gt;phases()&lt;/code&gt; function returns the full history as an array of maps, and &lt;code&gt;rewind()&lt;/code&gt; lets you retrieve past values by offset:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux x = 100
track("x")
x = 200
x = 300

print(rewind("x", 0))  // 300 (current)
print(rewind("x", 1))  // 200 (one step back)
print(rewind("x", 2))  // 100 (two steps back)
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Temporal values serve primarily as a debugging and auditing tool. When something goes wrong with a frozen value, you can inspect its history to see what mutations happened before the freeze. When testing phase-dependent dispatch, you can verify that the right transitions occurred. In production systems, you can use temporal tracking for audit logs or undo functionality.&lt;/p&gt;
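&lt;p&gt;A minimal undo sketch built on &lt;code&gt;rewind()&lt;/code&gt; (assuming plain reassignment is enough to restore a snapshot):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux draft = "hello"
track("draft")

draft = "hello world"
draft = "hello wrld"           // a typo slips in

draft = rewind("draft", 1)     // undo: fetch the previous snapshot
print(draft)                   // "hello world"
&lt;/pre&gt;&lt;/div&gt;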
&lt;h3&gt;The Bigger Picture: Why This Matters&lt;/h3&gt;
&lt;p&gt;Most programming languages treat mutability as a compiler concern, something to check at build time and forget about. Lattice treats it as a runtime property with the same richness as types or values. This opens up patterns that are difficult or impossible in other languages:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gradual freezing.&lt;/strong&gt; Data starts fluid, accumulates state through a pipeline, and freezes when it's complete. Contracts validate at the boundary. Bonds ensure related data transitions together. This maps naturally to request processing, form building, transaction assembly, and configuration loading.&lt;/p&gt;
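&lt;p&gt;Sketched end to end, the request-assembly case combines these pieces (assuming contracts and bonds compose as described above):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux headers = Map::new()
flux body = Map::new()
bond(headers, body)             // the two halves transition together

headers["content-type"] = "application/json"
body["payload"] = "{}"

freeze(headers) where |v| {
    if !v.has("content-type") { throw("missing content type") }
}
// headers validated and frozen; body frozen via the bond
print(phase_of(body))           // "crystal"
&lt;/pre&gt;&lt;/div&gt;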
&lt;p&gt;&lt;strong&gt;Observable state transitions.&lt;/strong&gt; Reactions let you attach behavior to phase changes without coupling the code that freezes with the code that responds. A module can register a reaction on shared data without knowing who or when the freeze will happen.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Phase-aware APIs.&lt;/strong&gt; Functions can express their mutability requirements in their signatures and dispatch based on the caller's data. Libraries can offer mutable and immutable code paths transparently.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Auditability.&lt;/strong&gt; Temporal tracking provides a built-in mechanism for understanding how data evolved, without external logging infrastructure.&lt;/p&gt;
&lt;p&gt;None of these features require abandoning the simple mental model. At its core, Lattice still has fluid and crystal, mutable and immutable. Everything else is opt-in machinery for programs that need more control.&lt;/p&gt;
&lt;h3&gt;Comparison with Other Approaches&lt;/h3&gt;
&lt;p&gt;It's worth comparing this to how other languages handle mutability:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Rust&lt;/th&gt;
&lt;th&gt;JavaScript&lt;/th&gt;
&lt;th&gt;Lattice&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Mutability declaration&lt;/td&gt;
&lt;td&gt;&lt;code&gt;let&lt;/code&gt; / &lt;code&gt;let mut&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;const&lt;/code&gt; / &lt;code&gt;let&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;fix&lt;/code&gt; / &lt;code&gt;flux&lt;/code&gt; / &lt;code&gt;let&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enforcement&lt;/td&gt;
&lt;td&gt;Compile-time&lt;/td&gt;
&lt;td&gt;Runtime (shallow)&lt;/td&gt;
&lt;td&gt;Runtime (deep)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase transitions&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Object.freeze()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;freeze()&lt;/code&gt; / &lt;code&gt;thaw()&lt;/code&gt; / &lt;code&gt;clone()&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Validation on freeze&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Crystallization contracts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coordinated freezing&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Phase bonds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Transition observation&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Phase reactions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase-dependent dispatch&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Overload resolution by phase&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;History tracking&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Temporal values&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Rust's borrow checker is more powerful for preventing data races at compile time; Lattice doesn't attempt that. JavaScript's &lt;code&gt;Object.freeze()&lt;/code&gt; is more pragmatic but also more limited: it's shallow, provides no observation, and offers no coordination. Lattice occupies a different point in the design space: mutability as a &lt;em&gt;domain concept&lt;/em&gt; rather than a &lt;em&gt;compiler constraint&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;Implementation Notes&lt;/h3&gt;
&lt;p&gt;The phase system is implemented in C as part of Lattice's tree-walking interpreter. Phase tags are stored directly on values (&lt;code&gt;VTAG_FLUID&lt;/code&gt;, &lt;code&gt;VTAG_CRYSTAL&lt;/code&gt;, &lt;code&gt;VTAG_UNPHASED&lt;/code&gt;), so phase checks are single comparisons. Bonds are stored as a dynamic array of &lt;code&gt;BondEntry&lt;/code&gt; structs on the evaluator, each mapping a target variable name to its dependencies. Reactions use a similar structure; &lt;code&gt;ReactionEntry&lt;/code&gt; maps a variable name to an array of callback closures. Temporal tracking stores &lt;code&gt;HistorySnapshot&lt;/code&gt; arrays containing phase names and deep-cloned values.&lt;/p&gt;
&lt;p&gt;The deep cloning is important throughout. Contract validation receives a clone so it can't mutate the original. Reaction callbacks receive clones so observers can't interfere with each other. Temporal snapshots are clones so history is independent of current state. This means the phase system has allocation costs proportional to value size, but it also means the invariants are strong, with no spooky action at a distance.&lt;/p&gt;
&lt;p&gt;Freeze cascading through bonds is recursive, and reactions fire during cascading, so a single &lt;code&gt;freeze()&lt;/code&gt; call can trigger an arbitrary chain of transitions and callbacks. Error propagation is straightforward: if any reaction throws, the error surfaces immediately with context about which reaction failed.&lt;/p&gt;
&lt;h3&gt;What's Next&lt;/h3&gt;
&lt;p&gt;The phase system's core feature set is reaching a natural plateau. There are a few directions I'm considering:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Partial freezing&lt;/strong&gt; already exists in a basic form: you can freeze individual struct fields or map keys while leaving the container mutable. Expanding this to support more granular control (freeze all fields matching a pattern, freeze a subtree) could be useful for large data structures.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Phase-aware pattern matching&lt;/strong&gt; lets you match on phase in &lt;code&gt;match&lt;/code&gt; expressions using &lt;code&gt;~&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt; qualifiers. This is already implemented but could be extended with more complex phase patterns.&lt;/p&gt;
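&lt;p&gt;A sketch of what phase-aware matching might look like (the arm syntax here is illustrative; only the &lt;code&gt;~&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt; qualifiers themselves are the documented part):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;match value {
    ~Map =&amp;gt; print("still fluid: safe to keep building"),
    *Map =&amp;gt; print("crystal: finalized"),
}
&lt;/pre&gt;&lt;/div&gt;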
&lt;p&gt;&lt;strong&gt;Compile-time phase inference&lt;/strong&gt; is a longer-term goal. If the interpreter can prove that a value is always crystal by a certain point, it could skip runtime checks. This would bring some of Rust's static guarantees to Lattice without requiring explicit lifetime annotations.&lt;/p&gt;
&lt;p&gt;For now, the phase system provides a cohesive set of tools for working with mutability as a first-class concept. Whether you're building a configuration loader that validates before committing, a pipeline that coordinates related state transitions, or a reactive system that responds to phase changes, Lattice gives you the vocabulary and the enforcement to do it declaratively.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Lattice is open source and available at &lt;a href="https://baud.rs/bwvnYT"&gt;lattice-lang.org&lt;/a&gt;. The language compiles and runs on macOS and Linux with no dependencies beyond a C11 compiler. You can try it in your browser via the &lt;a href="https://baud.rs/odS816"&gt;playground&lt;/a&gt;, or clone the repo and run &lt;code&gt;make &amp;amp;&amp;amp; ./clat&lt;/code&gt; to start the REPL.&lt;/p&gt;</description><category>language design</category><category>lattice</category><category>mutability</category><category>phase system</category><category>programming languages</category><category>type systems</category><guid>https://tinycomputers.io/posts/mutability-as-a-first-class-concept-the-lattice-phase-system.html</guid><pubDate>Sat, 14 Feb 2026 23:00:00 GMT</pubDate></item><item><title>Introducing Lattice: A Crystallization-Based Programming Language</title><link>https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;p&gt;Most programming languages treat mutability as a binary property. A variable is either mutable or it's not. You declare it one way, and that's the end of the story. Rust adds nuance with its ownership and borrowing model, and functional languages sidestep the question by making everything immutable by default, but the fundamental framing remains the same: mutability is a static attribute decided at declaration time.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://baud.rs/4ysPkF"&gt;Lattice&lt;/a&gt; takes a different approach. In Lattice, mutability is a &lt;em&gt;phase&lt;/em&gt;, a state that a value passes through over its lifetime, like matter transitioning between liquid and solid. A value starts as mutable &lt;strong&gt;flux&lt;/strong&gt;, and when you're done shaping it, you &lt;strong&gt;freeze&lt;/strong&gt; it into immutable &lt;strong&gt;fix&lt;/strong&gt;. Need to modify it again? &lt;strong&gt;Thaw&lt;/strong&gt; it back to flux. Want to build something complex and immutable in one shot? Use a &lt;strong&gt;forge&lt;/strong&gt; block, a controlled mutation zone whose output automatically crystallizes.&lt;/p&gt;
&lt;p&gt;This isn't just a metaphor. The phase system is woven through Lattice's entire runtime, from its type representation to its memory management architecture. This post is a deep dive into what that means, how it works at the implementation level, and why it represents a genuinely different way of thinking about the relationship between mutability and memory.&lt;/p&gt;
&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/introducing-lattice-a-crystallization-based-programming-language_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;36 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;h3&gt;The Problem Lattice Solves&lt;/h3&gt;
&lt;p&gt;Every language designer eventually confronts the same tension: programmers need mutability to build things, but mutability is the source of most bugs. Shared mutable state causes race conditions. Unexpected mutation causes aliasing bugs. Mutable references that outlive their owners cause use-after-free errors.&lt;/p&gt;
&lt;p&gt;Different languages resolve this tension in different ways, and each approach carries trade-offs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Garbage-collected languages&lt;/strong&gt; (Java, Python, Go, JavaScript) let you mutate freely and use a garbage collector to clean up. This is convenient but pushes the cost to runtime: GC pauses, unpredictable memory usage, and no compile-time guarantees about who can modify what. You gain ease of use but lose control.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://baud.rs/gSnSwR"&gt;Rust's ownership model&lt;/a&gt;&lt;/strong&gt; provides compile-time guarantees through a sophisticated borrow checker. You can have either one mutable reference or many immutable references, but not both. This eliminates data races at compile time, but the cost is complexity: the borrow checker is notoriously difficult for newcomers, lifetime annotations add syntactic weight, and certain patterns (like self-referential structs or graph structures) require unsafe escape hatches.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Functional languages&lt;/strong&gt; (Haskell, Erlang, Clojure) default to immutability and model mutation through controlled mechanisms like monads, processes, or atoms. This produces correct programs but can feel unnatural for inherently stateful problems, and persistent data structures carry performance overhead.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;C and C++&lt;/strong&gt; give you full manual control and zero overhead, at the cost of memory safety. &lt;code&gt;const&lt;/code&gt; in C is advisory at best; you can cast it away, and the compiler won't stop you from freeing memory that someone else is still using.&lt;/p&gt;
&lt;p&gt;Lattice's phase system is an attempt to find a different point in this design space. The core insight is that in most programs, values have a natural lifecycle: they're constructed (requiring mutation), then used (requiring stability), and occasionally reconstructed (requiring mutation again). The phase system makes this lifecycle explicit and enforceable.&lt;/p&gt;
&lt;h3&gt;The Phase Model&lt;/h3&gt;
&lt;p&gt;Lattice has three binding keywords that correspond to mutability phases:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;flux&lt;/code&gt;&lt;/strong&gt; declares a mutable binding. A flux variable can be reassigned, and its contents can be modified in place. This is where you do your work: building arrays, populating maps, incrementing counters.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux counter = 0
counter += 1
counter += 1
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;fix&lt;/code&gt;&lt;/strong&gt; declares an immutable binding. A fix variable cannot be reassigned, and its contents cannot be modified. Attempting to mutate a fix binding is an error.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="n"&gt;fix&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;freeze&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;3.14159&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;// pi = 2.0  -- error: cannot assign to crystal binding&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;let&lt;/code&gt;&lt;/strong&gt; is the inferred form (available in casual mode). It doesn't enforce a phase; the value keeps whatever phase tag it already has.&lt;/p&gt;
&lt;p&gt;The transitions between phases are explicit function calls:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;freeze(value)&lt;/code&gt;&lt;/strong&gt; transitions a value from fluid to crystal. In strict mode, this is a &lt;em&gt;consuming&lt;/em&gt; operation: the original binding is removed from the environment. You can't accidentally keep a mutable reference to something you've declared immutable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;thaw(value)&lt;/code&gt;&lt;/strong&gt; creates a mutable deep clone of a crystal value. The original remains frozen; you get a completely independent mutable copy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;clone(value)&lt;/code&gt;&lt;/strong&gt; creates a deep copy without changing phase.&lt;/li&gt;
&lt;/ul&gt;
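&lt;p&gt;A miniature model in C makes the contract concrete. This is an illustration with invented names (&lt;code&gt;Val&lt;/code&gt;, &lt;code&gt;val_thaw&lt;/code&gt;), not the interpreter's code: the point is only that thawing deep-copies the payload, so mutating the copy can never touch the frozen original.&lt;/p&gt;

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical miniature of the phase model (names and layout are
 * invented for illustration): a value carries a phase tag, freeze()
 * flips it to CRYSTAL, and thaw() hands back an independent mutable
 * deep copy. */
typedef enum { FLUID, CRYSTAL } Phase;

typedef struct {
    Phase  phase;
    int   *data;   /* heap payload, standing in for an array */
    size_t len;
} Val;

/* freeze: flip the tag (the real runtime also migrates the payload
 * into an arena; this sketch models only the semantics). */
void val_freeze(Val *v) { v->phase = CRYSTAL; }

/* thaw: deep-clone the payload; the original stays frozen. */
Val val_thaw(const Val *v) {
    Val out = { FLUID, malloc(v->len * sizeof(int)), v->len };
    memcpy(out.data, v->data, v->len * sizeof(int));
    return out;
}

/* Freeze a value, thaw a copy, mutate the copy; returns 1 iff the
 * frozen original is unchanged and the copy is fluid. */
int demo_thaw_is_independent(void) {
    Val a = { FLUID, malloc(sizeof(int)), 1 };
    a.data[0] = 42;
    val_freeze(&a);
    Val b = val_thaw(&a);
    b.data[0] = 99;   /* mutate the thawed copy only */
    int ok = (a.data[0] == 42) && (a.phase == CRYSTAL) && (b.phase == FLUID);
    free(a.data); free(b.data);
    return ok;
}
```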
&lt;p&gt;And then there's the &lt;strong&gt;&lt;code&gt;forge&lt;/code&gt;&lt;/strong&gt; block, which is perhaps the most interesting construct:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fix config = forge {
    flux temp = Map::new()
    temp.set("host", "localhost")
    temp.set("port", "8080")
    temp.set("debug", "true")
    freeze(temp)
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;A forge block is a scoped computation whose result is automatically frozen. Inside the forge, you can use flux variables and mutate freely. But whatever value the block produces comes out crystallized. The temporary mutable state is gone; only the finished, immutable result survives.&lt;/p&gt;
&lt;p&gt;This addresses a real pain point. In functional languages, building a complex immutable data structure often requires awkward chains of constructor calls or builder patterns. In Lattice, you just... build it, mutably, in a forge block, and it comes out frozen. The forge acknowledges that construction is inherently a mutable process, while insisting that the &lt;em&gt;result&lt;/em&gt; of construction should be stable.&lt;/p&gt;
&lt;h3&gt;Under the Hood: How the Phase System Maps to Memory&lt;/h3&gt;
&lt;p&gt;Lattice is implemented as a tree-walking interpreter in C, roughly 6,000 lines across the lexer, parser, phase checker, and evaluator. The implementation reveals some interesting design decisions about how phase semantics interact with memory management.&lt;/p&gt;
&lt;h4&gt;Value Representation&lt;/h4&gt;
&lt;p&gt;Every runtime value in Lattice is a &lt;code&gt;LatValue&lt;/code&gt; struct, a tagged union carrying a type tag, a phase tag, and the value payload:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;ValueType&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="c1"&gt;// VAL_INT, VAL_STR, VAL_ARRAY, VAL_MAP, ...&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;PhaseTag&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;phase&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;// VTAG_FLUID, VTAG_CRYSTAL, VTAG_UNPHASED&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;union&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Primitive values (integers, floats, booleans) live inline in the union, with no heap allocation. Compound values (strings, arrays, structs, maps, closures) own heap-allocated payloads. A string holds a heap-allocated character buffer. An array holds a &lt;code&gt;malloc&lt;/code&gt;'d element buffer. A map holds a pointer to an open-addressing hash table.&lt;/p&gt;
&lt;h4&gt;Deep-Clone-on-Read: Value Semantics Without a Compiler&lt;/h4&gt;
&lt;p&gt;The most consequential design decision in Lattice's runtime is that &lt;strong&gt;every variable read produces a deep clone&lt;/strong&gt;. When you access a variable, the environment doesn't hand you a reference to the stored value. It hands you a complete, independent copy.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="kt"&gt;bool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nf"&gt;env_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Env&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;char&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;lat_map_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;scopes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="mi"&gt;-1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;value_deep_clone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;// always a fresh copy&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is expensive. Every array access clones the entire array. Every map read clones every key-value pair. But it eliminates an entire class of bugs. There is no aliasing in Lattice. Two variables never point to the same underlying memory. When you pass a map to a function, the function gets its own copy, and mutations inside the function don't leak back to the caller. When you assign an array to a new variable, you get two independent arrays.&lt;/p&gt;
&lt;p&gt;This is the implementation strategy that makes Lattice's maps value types. In most languages, objects and collections are reference types, and assigning them to a new variable creates a new reference to the same data. In Lattice, assignment means duplication. This is closer to how values work in mathematics than how they work in most programming languages.&lt;/p&gt;
&lt;p&gt;For in-place mutation within a scope (like &lt;code&gt;array.push()&lt;/code&gt; or &lt;code&gt;map.set()&lt;/code&gt;), Lattice uses a separate &lt;code&gt;resolve_lvalue()&lt;/code&gt; mechanism that obtains a direct mutable pointer into the environment's storage, bypassing the deep clone. This means local mutations are efficient; it's only cross-scope communication that pays the cloning cost.&lt;/p&gt;
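&lt;p&gt;In miniature, the two access paths look something like this. The shapes below are assumptions for illustration (a flat slot table, &lt;code&gt;int&lt;/code&gt; payloads standing in for compound values), not the actual interpreter structures:&lt;/p&gt;

```c
#include <string.h>
#include <stddef.h>

/* Sketch of the two access paths: reads return an independent copy,
 * while in-place mutation resolves a direct pointer into storage. */
typedef struct { char name[16]; int value; } Slot;
typedef struct { Slot slots[8]; size_t count; } MiniEnv;

/* Clone-on-read: the caller gets a copy of the stored value
 * (for an int the "deep clone" is a plain copy). */
int mini_env_get(const MiniEnv *env, const char *name, int *out) {
    for (size_t i = 0; i < env->count; i++)
        if (strcmp(env->slots[i].name, name) == 0) {
            *out = env->slots[i].value;
            return 1;
        }
    return 0;
}

/* resolve_lvalue-style access: the caller gets a pointer into storage,
 * so local mutation skips the cloning cost entirely. */
int *mini_resolve_lvalue(MiniEnv *env, const char *name) {
    for (size_t i = 0; i < env->count; i++)
        if (strcmp(env->slots[i].name, name) == 0)
            return &env->slots[i].value;
    return NULL;
}

/* Mutating through the lvalue changes storage; mutating the read-out
 * copy does not. Returns 1 iff both behaviors hold. */
int demo_lvalue_vs_clone(void) {
    MiniEnv env = { { { "counter", 0 } }, 1 };
    int copy;
    mini_env_get(&env, "counter", &copy);
    copy = 100;                                /* no effect on env */
    (void)copy;
    *mini_resolve_lvalue(&env, "counter") = 7; /* direct write */
    int stored;
    mini_env_get(&env, "counter", &stored);
    return stored == 7;
}
```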
&lt;h4&gt;The Dual Heap Architecture&lt;/h4&gt;
&lt;p&gt;Lattice's memory subsystem uses what the implementation calls a &lt;code&gt;DualHeap&lt;/code&gt;: two separate allocation regions with different management strategies:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The FluidHeap&lt;/strong&gt; manages mutable data using a mark-and-sweep garbage collector. It maintains a linked list of all heap allocations, with a mark bit on each. When memory pressure crosses a threshold (1 MB by default), the GC walks all reachable values from the environment and a shadow root stack, marks what's alive, and sweeps everything else.&lt;/p&gt;
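&lt;p&gt;A stripped-down version of that collector fits in a few dozen lines. This sketch is mine, not the FluidHeap source, and its objects are leaves (real marking also walks child references), but the allocation-list, mark-bit, and sweep structure is the same shape:&lt;/p&gt;

```c
#include <stdlib.h>

/* Minimal mark-and-sweep illustration: every allocation carries a
 * header linked into one list with a mark bit; collection marks from
 * the roots and sweeps everything else. */
typedef struct Obj {
    struct Obj *next;   /* all-allocations list */
    int marked;
    int payload;
} Obj;

static Obj *heap_head = NULL;

Obj *gc_alloc(int payload) {
    Obj *o = malloc(sizeof(Obj));
    o->next = heap_head; o->marked = 0; o->payload = payload;
    heap_head = o;
    return o;
}

/* Mark phase: objects reachable from the root array stay alive.
 * (A real collector would recurse through child references here.) */
void gc_mark(Obj **roots, size_t nroots) {
    for (size_t i = 0; i < nroots; i++)
        if (roots[i]) roots[i]->marked = 1;
}

/* Sweep phase: unlink and free unmarked objects, clear surviving
 * marks, and return the number of live objects left on the heap. */
size_t gc_sweep(void) {
    size_t live = 0;
    Obj **p = &heap_head;
    while (*p) {
        if ((*p)->marked) { (*p)->marked = 0; live++; p = &(*p)->next; }
        else { Obj *dead = *p; *p = dead->next; free(dead); }
    }
    return live;
}

/* Allocate three objects, root one, collect: only the root survives. */
size_t demo_collect(void) {
    gc_alloc(1);
    Obj *keep = gc_alloc(2);
    gc_alloc(3);
    Obj *roots[1] = { keep };
    gc_mark(roots, 1);
    return gc_sweep();
}
```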
&lt;p&gt;&lt;strong&gt;The RegionManager&lt;/strong&gt; manages immutable data using arena-based regions. Each freeze creates a new region backed by a page-based arena, a linked list of 4 KB pages with bump allocation. When a value is frozen, it is deep-cloned entirely into the region's arena, giving crystal data cache locality and enabling O(1) bulk deallocation when the region becomes unreachable. Regions are collected during GC cycles based on reachability analysis.&lt;/p&gt;
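&lt;p&gt;The arena side is even simpler. Here is a hypothetical page-based bump allocator in the style described (page size and alignment are my choices, not the implementation's): allocation bumps an offset, a full page links in a fresh one, and tearing down the region frees whole pages with no per-object bookkeeping:&lt;/p&gt;

```c
#include <stdlib.h>
#include <stddef.h>

/* Sketch of a page-based bump arena: pages form a linked list, each
 * allocation advances an offset, destroying the region frees whole
 * pages at once. */
#define PAGE_SIZE 4096

typedef struct Page {
    struct Page *next;
    size_t used;
    unsigned char bytes[PAGE_SIZE];
} Page;

typedef struct { Page *pages; } Region;

void *region_alloc(Region *r, size_t n) {
    n = (n + 7) & ~(size_t)7;   /* round up to 8-byte alignment */
    if (n > PAGE_SIZE) return NULL;
    if (!r->pages || r->pages->used + n > PAGE_SIZE) {
        Page *p = malloc(sizeof(Page));   /* link a fresh page */
        p->next = r->pages; p->used = 0;
        r->pages = p;
    }
    void *out = r->pages->bytes + r->pages->used;
    r->pages->used += n;
    return out;
}

/* Bulk deallocation: one walk over the page list, no per-object work. */
void region_destroy(Region *r) {
    while (r->pages) { Page *p = r->pages; r->pages = p->next; free(p); }
}

/* 100 allocations of 60 bytes round to 64 each; 64 fit in the first
 * 4 KB page, so the region ends up with exactly two pages. */
size_t demo_page_count(void) {
    Region r = { NULL };
    for (int i = 0; i < 100; i++) region_alloc(&r, 60);
    size_t pages = 0;
    for (Page *p = r.pages; p; p = p->next) pages++;
    region_destroy(&r);
    return pages;
}
```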
&lt;p&gt;The key insight here is that &lt;strong&gt;immutable and mutable data have different lifecycle characteristics&lt;/strong&gt; and benefit from different management strategies. Mutable data changes frequently and has unpredictable lifetimes, and mark-and-sweep handles this well. Immutable data, once created, never changes and tends to be long-lived, so arena-based region allocation is more efficient for this pattern, as it enables bulk deallocation and better cache locality.&lt;/p&gt;
&lt;p&gt;This is conceptually similar to generational garbage collection (where young objects are collected differently from old objects), but the split is based on &lt;em&gt;mutability&lt;/em&gt; rather than &lt;em&gt;age&lt;/em&gt;. Lattice's phase tags provide the runtime with information that generational GCs have to infer statistically.&lt;/p&gt;
&lt;p&gt;The following chart shows how this plays out in practice across several benchmark programs. Fluid peak memory represents the high-water mark of the GC-managed heap, while crystal arena data shows how much data has been frozen into arena-backed regions:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Fluid Peak vs Crystal Arena Data" src="https://tinycomputers.io/images/lattice_fluid_vs_crystal.png"&gt;&lt;/p&gt;
&lt;h4&gt;Freeze and Thaw at the Memory Level&lt;/h4&gt;
&lt;p&gt;When you call &lt;code&gt;freeze()&lt;/code&gt; on a value, the runtime creates a new crystal region with a fresh arena, deep-clones the entire value tree into it, sets the &lt;code&gt;phase&lt;/code&gt; field to &lt;code&gt;VTAG_CRYSTAL&lt;/code&gt; on every node, and frees the original fluid heap pointers. The data physically migrates from the fluid heap into arena pages. Freeze is a move operation, not just a metadata flip. This gives frozen data cache locality within contiguous arena pages and completely separates it from the garbage-collected fluid heap.&lt;/p&gt;
&lt;p&gt;But in strict mode, &lt;code&gt;freeze()&lt;/code&gt; is also a &lt;em&gt;consuming&lt;/em&gt; operation. It removes the original binding from the environment and returns the frozen value. This is effectively a move: after &lt;code&gt;freeze(x)&lt;/code&gt;, there is no &lt;code&gt;x&lt;/code&gt; anymore. You can bind the result to a new name (&lt;code&gt;fix y = freeze(x)&lt;/code&gt;), but the mutable original is gone. This prevents a common bug pattern where you freeze a value but accidentally keep mutating the original through a still-live reference.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;thaw()&lt;/code&gt; is more expensive: it performs a complete deep clone of the crystal value and then recursively sets all phase tags to &lt;code&gt;VTAG_FLUID&lt;/code&gt;. The original crystal value is untouched; you get a completely independent mutable copy. This is consistent with the principle that crystal values are permanent. Thawing doesn't melt the original; it creates a new fluid copy.&lt;/p&gt;
&lt;p&gt;In practice, both operations are fast. Across the benchmark suite, freeze and thaw costs stay well under a millisecond even for complex data structures:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Freeze/Thaw Cost by Benchmark" src="https://tinycomputers.io/images/lattice_freeze_thaw_timing.png"&gt;&lt;/p&gt;
&lt;p&gt;The number and type of phase transitions vary by workload. Some benchmarks are freeze-heavy (building immutable snapshots), others are thaw-heavy (repeatedly modifying frozen state), and some use deep clones for full value duplication:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Phase Transitions by Benchmark" src="https://tinycomputers.io/images/lattice_phase_transitions.png"&gt;&lt;/p&gt;
&lt;h3&gt;How This Compares to Existing Systems&lt;/h3&gt;
&lt;h4&gt;vs. Rust's Ownership and Borrowing&lt;/h4&gt;
&lt;p&gt;Rust solves the mutability problem at compile time through static analysis. The borrow checker ensures that mutable references are unique and that immutable references don't coexist with mutable ones. This gives Rust zero-runtime-cost safety guarantees that Lattice can't match.&lt;/p&gt;
&lt;p&gt;But Rust's approach operates at the reference level. It tracks who has access to data, not the data's intrinsic state. You can have an &lt;code&gt;&amp;amp;mut&lt;/code&gt; to data that is conceptually "done being built," or an &lt;code&gt;&amp;amp;&lt;/code&gt; to data that you wish you could modify. The permission model and the data lifecycle are orthogonal.&lt;/p&gt;
&lt;p&gt;Lattice's phase system operates on the data itself. A frozen value &lt;em&gt;is&lt;/em&gt; immutable, not because the type system prevents you from obtaining a mutable reference, but because the value has transitioned to a state where mutation doesn't apply. This is a simpler mental model at the cost of runtime enforcement rather than compile-time proof.&lt;/p&gt;
&lt;p&gt;The consuming &lt;code&gt;freeze()&lt;/code&gt; in strict mode is reminiscent of Rust's move semantics, where using a value after moving it is a compile error. Lattice achieves a similar effect at runtime: freeze consumes the binding, preventing further mutable access. It's not as strong a guarantee (runtime vs. compile time), but it's the same intuition: once you've declared something immutable, the mutable version shouldn't exist anymore.&lt;/p&gt;
&lt;h4&gt;vs. Garbage Collection&lt;/h4&gt;
&lt;p&gt;Traditional garbage collectors (Java, Go, Python) are phase-agnostic. They track reachability, not mutability. A &lt;code&gt;final&lt;/code&gt; field in Java prevents reassignment but doesn't inform the GC. An immutable object in Python is collected the same way as a mutable one.&lt;/p&gt;
&lt;p&gt;Lattice's dual-heap architecture uses phase information to make better allocation decisions. Crystal values go into arena-managed memory with reachability-based collection. Fluid values go into a mark-and-sweep heap. The GC can reason about immutable data more efficiently because it &lt;em&gt;knows&lt;/em&gt; the data won't change, so it doesn't need to re-scan crystal regions for updated references.&lt;/p&gt;
&lt;p&gt;This is a form of phase-informed memory management that, to my knowledge, doesn't have a direct precedent in mainstream languages. The closest analogy might be Clojure's persistent data structures, which are structurally shared and immutable, but Clojure doesn't use this information to drive its garbage collection strategy differently.&lt;/p&gt;
&lt;h4&gt;vs. Functional Immutability&lt;/h4&gt;
&lt;p&gt;Haskell and other pure functional languages are immutable by default, with mutation confined to monads (&lt;code&gt;IORef&lt;/code&gt;, &lt;code&gt;STRef&lt;/code&gt;) or similar controlled mechanisms. This is elegant but can be awkward for imperative algorithms where you need to build something up step by step.&lt;/p&gt;
&lt;p&gt;Lattice's forge blocks address this directly. Instead of threading a builder through a chain of pure function calls, you write imperative mutation inside a forge and get an immutable result. This acknowledges that construction and consumption are different activities that benefit from different mutability guarantees.&lt;/p&gt;
&lt;p&gt;The philosophical difference is that functional languages treat immutability as the default and mutation as the exception. Lattice treats mutability as a &lt;em&gt;phase&lt;/em&gt; that values pass through: both flux and fix are natural, expected states, and the language provides explicit tools for transitioning between them.&lt;/p&gt;
&lt;h4&gt;vs. C/C++ Manual Memory Management&lt;/h4&gt;
&lt;p&gt;C gives you &lt;code&gt;malloc&lt;/code&gt; and &lt;code&gt;free&lt;/code&gt; and wishes you the best. C++ adds RAII, smart pointers, and &lt;code&gt;const&lt;/code&gt; correctness, but &lt;code&gt;const&lt;/code&gt; in both languages is fundamentally a compiler hint. It can be cast away, and the runtime has no awareness of it. A &lt;code&gt;const&lt;/code&gt; pointer in C doesn't prevent someone else from modifying the data through a non-const pointer to the same memory. The &lt;code&gt;const&lt;/code&gt; is a property of the &lt;em&gt;reference&lt;/em&gt;, not the &lt;em&gt;data&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Lattice's phase tags live on the data itself. When a value is crystal, it's crystal regardless of how you access it. There's no way to "cast away" a freeze; the only path back to mutability is &lt;code&gt;thaw()&lt;/code&gt;, which creates a new independent copy. This is a stronger guarantee than &lt;code&gt;const&lt;/code&gt; provides, because it operates on values rather than references.&lt;/p&gt;
&lt;p&gt;C++ move semantics share DNA with Lattice's consuming &lt;code&gt;freeze()&lt;/code&gt; in strict mode. A &lt;code&gt;std::move&lt;/code&gt; in C++ transfers ownership of resources, leaving the source in a valid-but-unspecified state. Lattice's strict freeze does something similar: it removes the binding entirely, ensuring the mutable version ceases to exist. But where C++ moves are primarily about avoiding copies for performance, Lattice's consuming freeze is about semantic correctness, ensuring that the transition from mutable to immutable is clean and total. Scott Meyers' &lt;a href="https://baud.rs/OK4IwA"&gt;Effective Modern C++&lt;/a&gt; remains the best guide to understanding these move semantics and other modern C++ patterns that Lattice's design draws from.&lt;/p&gt;
&lt;h4&gt;The Static Phase Checker&lt;/h4&gt;
&lt;p&gt;It's worth noting that Lattice doesn't rely solely on runtime enforcement. Before any code executes, a static phase checker walks the AST and catches phase violations at analysis time. This checker maintains its own scope stack mapping variable names to their declared phases and rejects programs that attempt to reassign crystal bindings, freeze already-frozen values, thaw already-fluid values, or use &lt;code&gt;let&lt;/code&gt; in strict mode where an explicit phase declaration is required.&lt;/p&gt;
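&lt;p&gt;The core of such a checker is small. The following toy version (invented names, a flat binding table instead of a real AST walk) shows the essential move: record each declaration's phase in a scope stack, then reject assignment to anything declared &lt;code&gt;fix&lt;/code&gt; before evaluation starts:&lt;/p&gt;

```c
#include <string.h>

/* Toy static phase check: a stack of bindings maps names to declared
 * phases; assignment to a crystal (fix) binding is rejected before
 * anything runs. Shapes here are assumptions, not the real checker. */
typedef enum { P_FLUX, P_FIX } DeclPhase;

typedef struct { char name[16]; DeclPhase phase; } Binding;
typedef struct { Binding binds[16]; int count; } Checker;

void check_declare(Checker *c, const char *name, DeclPhase p) {
    strcpy(c->binds[c->count].name, name);
    c->binds[c->count].phase = p;
    c->count++;
}

/* Returns 1 if the assignment passes the phase check, 0 if it is a
 * "cannot assign to crystal binding" error. The innermost (most
 * recently declared) binding wins, like a scope stack. */
int check_assign(const Checker *c, const char *name) {
    for (int i = c->count - 1; i >= 0; i--)
        if (strcmp(c->binds[i].name, name) == 0)
            return c->binds[i].phase == P_FLUX;
    return 0; /* unknown name: also an error */
}

/* A flux counter is assignable; a fix pi is not. 1 iff both hold. */
int demo_phase_check(void) {
    Checker c;
    c.count = 0;
    check_declare(&c, "counter", P_FLUX);
    check_declare(&c, "pi", P_FIX);
    return check_assign(&c, "counter") == 1 && check_assign(&c, "pi") == 0;
}
```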
&lt;p&gt;The static checker also enforces spawn boundaries. If Lattice's concurrency model (&lt;code&gt;spawn&lt;/code&gt;) is used, fluid bindings from the enclosing scope cannot be captured across the spawn point. Only crystal values can be shared into spawned computations. This is checked &lt;em&gt;before&lt;/em&gt; evaluation begins, catching potential data races at parse time rather than at runtime.&lt;/p&gt;
&lt;p&gt;This two-layer approach (static checking before evaluation, runtime enforcement during) provides confidence without requiring a full type system or borrow checker. It catches the obvious mistakes early and enforces the subtle invariants at runtime. For the theoretical foundations behind this kind of phase-based type analysis, Benjamin Pierce's &lt;a href="https://baud.rs/oMfDwe"&gt;Types and Programming Languages&lt;/a&gt; is the standard reference.&lt;/p&gt;
&lt;h3&gt;The Language Beyond Phases&lt;/h3&gt;
&lt;p&gt;While the phase system is Lattice's defining feature, the language has other characteristics worth noting.&lt;/p&gt;
&lt;p&gt;Structs in Lattice can hold closures as fields, enabling object-like patterns without a class system. A struct with function fields and a &lt;code&gt;self&lt;/code&gt; parameter in each closure behaves much like an object with methods, but the data flow is explicit, and there's no hidden &lt;code&gt;this&lt;/code&gt; pointer or vtable dispatch. When a closure captures &lt;code&gt;self&lt;/code&gt;, it receives a deep clone, ensuring that method calls don't produce spooky action at a distance.&lt;/p&gt;
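&lt;p&gt;A rough C analogue shows why pass-by-value &lt;code&gt;self&lt;/code&gt; rules out that spooky action. C function pointers can't capture, so &lt;code&gt;self&lt;/code&gt; is passed explicitly, and passing it by value copies it, which gives the same isolation Lattice gets from its deep clones (names here are illustrative, not from Lattice's standard library):&lt;/p&gt;

```c
/* Struct-with-"method" pattern: a function pointer field plus an
 * explicit self parameter. self is taken BY VALUE, so the call
 * receives a copy, just as a Lattice closure captures a deep clone. */
typedef struct Counter Counter;
struct Counter {
    int count;
    int (*increment)(Counter self);
};

int counter_increment(Counter self) {
    self.count++;        /* mutates only the copy */
    return self.count;
}

/* Because self is copied, calling the "method" cannot mutate the
 * original struct. Returns 1 iff the original is untouched. */
int demo_no_spooky_action(void) {
    Counter c = { 10, counter_increment };
    int result = c.increment(c);
    return result == 11 && c.count == 10;
}
```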
&lt;p&gt;Control flow is expression-based: &lt;code&gt;if&lt;/code&gt;/&lt;code&gt;else&lt;/code&gt; blocks, &lt;code&gt;match&lt;/code&gt; expressions, and bare blocks all return values. This reduces the need for temporary variables and makes code more compositional. Error handling uses &lt;code&gt;try&lt;/code&gt;/&lt;code&gt;catch&lt;/code&gt; blocks with explicit error values rather than exception hierarchies.&lt;/p&gt;
&lt;p&gt;The self-hosted REPL is particularly notable. Written entirely in Lattice, it demonstrates that the language is expressive enough to implement its own interactive environment, parsing multi-line input, evaluating expressions, and managing session state. Running &lt;code&gt;./clat&lt;/code&gt; without arguments drops into this REPL, while &lt;code&gt;./clat file.lat&lt;/code&gt; executes a program directly.&lt;/p&gt;
&lt;p&gt;Lattice is implemented in C with no external dependencies. The entire codebase (roughly 6,000 lines across the lexer, parser, phase checker, evaluator, and data structures) compiles with a single &lt;code&gt;make&lt;/code&gt; invocation. This is a deliberate choice. The language is meant to be small, understandable, and self-contained. You can read the entire implementation in an afternoon. If you're interested in this kind of work, Robert Nystrom's &lt;a href="https://baud.rs/uTpA6y"&gt;&lt;em&gt;Crafting Interpreters&lt;/em&gt;&lt;/a&gt; is the best practical guide to building language implementations from scratch. It covers both tree-walking interpreters and bytecode VMs, and Lattice's architecture shares several design decisions with Nystrom's Lox language. For the C implementation side, Kernighan and Ritchie's &lt;a href="https://baud.rs/71h6l3"&gt;&lt;em&gt;The C Programming Language&lt;/em&gt;&lt;/a&gt; remains the definitive reference for writing the kind of clean, minimal C that Lattice targets.&lt;/p&gt;
&lt;h3&gt;Runtime Characteristics&lt;/h3&gt;
&lt;p&gt;To understand how the dual-heap architecture behaves in practice, Lattice includes a benchmark suite that exercises different memory patterns: allocation churn, closure-heavy computation, event sourcing, freeze/thaw cycles, game state rollback, long-lived crystal data, persistent tree construction, and undo/redo stacks.&lt;/p&gt;
&lt;p&gt;The overview below shows peak RSS (resident set size) alongside the number of live crystal regions at program exit. Benchmarks that use the phase system heavily (like freeze/thaw cycles and persistent trees) maintain more live regions, while purely fluid workloads like allocation churn and closure-heavy computation have none:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Peak RSS and Crystal Regions Overview" src="https://tinycomputers.io/images/lattice_overview.png"&gt;&lt;/p&gt;
&lt;p&gt;The memory churn ratio (total bytes allocated divided by peak live bytes) reveals how aggressively each benchmark recycles memory. A high ratio means the program allocates and discards data rapidly, relying on the GC to keep the working set small. Benchmarks using crystal regions (shown in purple) tend to have lower churn because frozen data is long-lived by design:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Memory Churn Ratio" src="https://tinycomputers.io/images/lattice_churn_ratio.png"&gt;&lt;/p&gt;
&lt;h3&gt;Research Papers&lt;/h3&gt;
&lt;p&gt;For readers interested in the formal foundations and empirical analysis, two companion papers are available:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://tinycomputers.io/papers/lattice_paper.pdf"&gt;The Lattice Phase System: First-Class Immutability with Dual-Heap Memory Management&lt;/a&gt;&lt;/strong&gt;: The full research paper covering the language design, formal operational semantics, six proved safety properties (phase monotonicity, value isolation, consuming freeze, forge soundness, heap separation, and thaw independence), implementation details of the dual-heap architecture, and empirical evaluation across eight benchmarks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://tinycomputers.io/papers/lattice_formal_semantics.pdf"&gt;Formal Semantics of the Lattice Phase System&lt;/a&gt;&lt;/strong&gt;: A standalone formal treatment containing the complete semantic domains, static phase-checking rules, big-step operational semantics, memory model, and full proofs of all six safety theorems.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Looking Forward&lt;/h3&gt;
&lt;p&gt;Lattice is at version 0.1.3, which means it's early. The dual-heap architecture is fully wired into the evaluator. Freeze operations physically migrate data into arena-backed crystal regions, providing cache locality and O(1) bulk deallocation for immutable data. The mark-and-sweep GC handles fluid values, while crystal regions are collected through reachability analysis during GC cycles.&lt;/p&gt;
&lt;p&gt;The deep-clone-on-read strategy is correct but expensive. Future versions may introduce structural sharing for crystal values (since they can't be modified, sharing is safe) or copy-on-write semantics for fluid values that haven't actually been mutated. The phase tags provide the runtime with exactly the information needed to make these optimizations: which values can be shared safely, and which might change.&lt;/p&gt;
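&lt;p&gt;As a sketch of what copy-on-write for not-yet-mutated fluid values could look like (this is speculative; nothing like it exists in Lattice today): share a refcounted buffer on read, and copy only when a write hits a buffer that still has other holders.&lt;/p&gt;

```c
#include <stdlib.h>
#include <string.h>

/* Speculative copy-on-write sketch (not part of Lattice): reads bump
 * a refcount instead of deep-cloning; the first write to a shared
 * buffer copies it, so sharers diverge only when they must. */
typedef struct { int refs; size_t len; int *data; } Shared;
typedef struct { Shared *s; } CowArr;

CowArr cow_new(size_t len) {
    Shared *s = malloc(sizeof(Shared));
    s->refs = 1; s->len = len; s->data = calloc(len, sizeof(int));
    CowArr a = { s };
    return a;
}

/* "Read": O(1) -- bump the refcount instead of deep-cloning. */
CowArr cow_share(CowArr a) { a.s->refs++; return a; }

/* Write: copy first if anyone else still holds the buffer. */
void cow_set(CowArr *a, size_t i, int v) {
    if (a->s->refs > 1) {
        Shared *s = malloc(sizeof(Shared));
        s->refs = 1; s->len = a->s->len;
        s->data = malloc(s->len * sizeof(int));
        memcpy(s->data, a->s->data, s->len * sizeof(int));
        a->s->refs--;
        a->s = s;
    }
    a->s->data[i] = v;
}

/* Sharing is free until a write; the writer then diverges.
 * (Cleanup omitted for brevity.) Returns 1 iff both arrays see
 * their own value and no longer alias. */
int demo_cow(void) {
    CowArr a = cow_new(4);
    cow_set(&a, 0, 1);        /* refs == 1: writes in place */
    CowArr b = cow_share(a);  /* no copy yet */
    cow_set(&b, 0, 2);        /* copy happens here */
    return a.s->data[0] == 1 && b.s->data[0] == 2 && a.s != b.s;
}
```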
&lt;p&gt;There's also the question of concurrency. The phase system provides a natural foundation for safe concurrent programming: crystal values can be freely shared across threads (they're immutable), while fluid values are confined to their owning scope. The &lt;code&gt;spawn&lt;/code&gt; keyword exists in the parser and phase checker, with static analysis already preventing fluid bindings from crossing spawn boundaries, though concurrent execution isn't yet implemented.&lt;/p&gt;
&lt;p&gt;The source code is available on &lt;a href="https://baud.rs/fIe3gx"&gt;GitHub&lt;/a&gt; under the BSD 3-Clause license, and the project site is at &lt;a href="https://baud.rs/4ysPkF"&gt;lattice-lang.org&lt;/a&gt;. If you're interested in language design, memory management, or just want to play with a language that treats mutability as a physical process rather than a type annotation, it's worth a look.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;git clone https://github.com/ajokela/lattice.git
cd lattice &amp;amp;&amp;amp; make
./clat
&lt;/pre&gt;&lt;/div&gt;</description><category>c</category><category>immutability</category><category>interpreter</category><category>language design</category><category>lattice</category><category>memory management</category><category>mutability</category><category>phase system</category><category>programming languages</category><category>value semantics</category><guid>https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html</guid><pubDate>Tue, 10 Feb 2026 18:00:00 GMT</pubDate></item></channel></rss>