<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="../assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>TinyComputers.io (Posts about phase system)</title><link>https://tinycomputers.io/</link><description></description><atom:link href="https://tinycomputers.io/categories/phase-system.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2026 A.C. Jokela 
&lt;!-- div style="width: 100%" --&gt;
&lt;a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"&gt;&lt;img alt="" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/80x15.png" /&gt; Creative Commons Attribution-ShareAlike&lt;/a&gt;&amp;nbsp;|&amp;nbsp;
&lt;!-- /div --&gt;
</copyright><lastBuildDate>Mon, 06 Apr 2026 22:13:01 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Teaching an LLM a Language It Has Never Seen</title><link>https://tinycomputers.io/posts/teaching-llms-languages-theyve-never-seen.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/teaching-llms-languages-theyve-never-seen_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;33 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html"&gt;Lattice&lt;/a&gt; is a programming language I designed. Its central feature is the phase system: every runtime value carries a mutability tag that transitions between states the way matter moves between liquid and solid. You declare a variable with &lt;code&gt;flux&lt;/code&gt; (mutable) or &lt;code&gt;fix&lt;/code&gt; (immutable). You &lt;code&gt;freeze&lt;/code&gt; a value to make it immutable, &lt;code&gt;thaw&lt;/code&gt; it to get a mutable copy, and &lt;code&gt;sublimate&lt;/code&gt; it to make it permanently frozen. &lt;code&gt;forge&lt;/code&gt; blocks let you build something mutably and have the result exit as immutable. None of this exists in any other language.&lt;/p&gt;
&lt;p&gt;Lattice does not appear in Claude's training data. I designed the language after the knowledge cutoff. There is no Lattice source code on GitHub (other than my own repository). There are no Stack Overflow answers. There is no tutorial ecosystem, no community blog posts, no textbook chapters. The only documentation that exists is the code itself, a 38-chapter handbook I wrote, and three blog posts on this site.&lt;/p&gt;
&lt;p&gt;Claude writes Lattice fluently. It writes correct programs using the phase system, the concurrency primitives, the module system, and the trait/impl pattern. It writes struct definitions with per-field phase annotations. It uses &lt;code&gt;forge&lt;/code&gt; blocks and &lt;code&gt;anneal&lt;/code&gt; expressions correctly. And it wrote a 4,955-line self-hosted compiler in Lattice, for Lattice: a complete tokenizer, parser, and bytecode generator that reads &lt;code&gt;.lat&lt;/code&gt; source files and emits &lt;code&gt;.latc&lt;/code&gt; bytecode binaries.&lt;/p&gt;
&lt;p&gt;The question is how any of this is possible when the model has never seen the language before.&lt;/p&gt;
&lt;h3&gt;The Rust Smell&lt;/h3&gt;
&lt;p&gt;The answer starts with syntax. Here is a Lattice function:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn&lt;span class="w"&gt; &lt;/span&gt;greet(name:&lt;span class="w"&gt; &lt;/span&gt;String)&lt;span class="w"&gt; &lt;/span&gt;-&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;String&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;return&lt;span class="w"&gt; &lt;/span&gt;"Hello,&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;!"
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And here is the Rust equivalent:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nf"&gt;greet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kp"&gt;&amp;amp;&lt;/span&gt;&lt;span class="kt"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="fm"&gt;format!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Hello, {name}!"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;fn&lt;/code&gt; keyword, the colon-separated type annotations, the &lt;code&gt;-&amp;gt;&lt;/code&gt; return type, the curly braces: Claude has seen these patterns millions of times in Rust code. When it encounters them in Lattice, it doesn't need to learn a new syntax. It needs to recognize a familiar one.&lt;/p&gt;
&lt;p&gt;This extends deep into the language. Lattice structs look like Rust structs:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;struct Point {
    x: Float,
    y: Float
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Lattice enums look like Rust enums:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;enum&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Shape&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;Circle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Float&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;Rectangle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Float&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Float&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Lattice match expressions look like Rust match expressions:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;match shape {
    Shape::Circle(r) =&amp;gt; pi() * r * r,
    Shape::Rectangle(w, h) =&amp;gt; w * h,
    _ =&amp;gt; 0.0
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Lattice traits and impl blocks look like Rust traits and impl blocks:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;trait&lt;span class="w"&gt; &lt;/span&gt;Printable&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;fn&lt;span class="w"&gt; &lt;/span&gt;display(self:&lt;span class="w"&gt; &lt;/span&gt;any)&lt;span class="w"&gt; &lt;/span&gt;-&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;String
}

impl&lt;span class="w"&gt; &lt;/span&gt;Printable&lt;span class="w"&gt; &lt;/span&gt;for&lt;span class="w"&gt; &lt;/span&gt;Point&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;fn&lt;span class="w"&gt; &lt;/span&gt;display(self:&lt;span class="w"&gt; &lt;/span&gt;any)&lt;span class="w"&gt; &lt;/span&gt;-&amp;gt;&lt;span class="w"&gt; &lt;/span&gt;String&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;        &lt;/span&gt;return&lt;span class="w"&gt; &lt;/span&gt;"(&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;,&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;y&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;)"
&lt;span class="w"&gt;    &lt;/span&gt;}
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Closures use the same &lt;code&gt;|params| body&lt;/code&gt; syntax. The &lt;code&gt;..&lt;/code&gt; range operator works the same way. The &lt;code&gt;?&lt;/code&gt; postfix operator propagates errors. &lt;code&gt;for item in collection&lt;/code&gt; iterates. &lt;code&gt;let&lt;/code&gt; binds variables. The structural similarity is pervasive enough that a model trained on Rust can parse and generate Lattice code without any Lattice-specific training.&lt;/p&gt;
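&lt;p&gt;To see how little is actually new, here is a small function of my own devising (illustrative, not a quote from the Lattice codebase) that uses nothing but the carried-over constructs: a closure, a range, a &lt;code&gt;for&lt;/code&gt; loop, and a &lt;code&gt;let&lt;/code&gt; binding:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn sum_squares(limit: Int) -&gt; Int {
    let square = |n| n * n       // Rust-style closure
    flux total = 0
    for i in 0..limit {          // Rust-style range iteration
        total = total + square(i)
    }
    return total
}
&lt;/pre&gt;&lt;/div&gt;
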
&lt;p&gt;I did not design Lattice to be AI-friendly. I designed it because Rust's syntax is good and I wanted to use it for a language with different semantics. But the side effect is that Claude can write Lattice from day one because the syntax activates the same neural pathways that Rust does. The model doesn't know it's writing a different language. It knows it's writing code that looks like Rust, and the structural patterns transfer.&lt;/p&gt;
&lt;h3&gt;The Phase System: Where Familiarity Ends&lt;/h3&gt;
&lt;p&gt;The Rust resemblance carries Claude through basic Lattice programs without difficulty. Where it gets interesting is the phase system, because this is where Lattice has no analog in any language Claude has seen.&lt;/p&gt;
&lt;p&gt;In Rust, mutability is a static property: &lt;code&gt;let mut x = 5;&lt;/code&gt; or &lt;code&gt;let x = 5;&lt;/code&gt;. You decide at declaration time and the compiler enforces it. In Lattice, mutability is a runtime state that values transition through:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux counter = 0          // mutable
counter = counter + 1     // allowed: counter is fluid

freeze(counter)           // transition: fluid → crystal
counter = counter + 1     // runtime error: counter is crystal

flux copy = thaw(counter) // get a mutable copy
copy = copy + 1           // allowed: copy is fluid
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Claude handles this correctly. When I describe the phase system and provide examples, Claude generates code that uses &lt;code&gt;flux&lt;/code&gt; and &lt;code&gt;fix&lt;/code&gt; declarations appropriately, calls &lt;code&gt;freeze()&lt;/code&gt; at the right points, and avoids mutating crystal values. The model maps &lt;code&gt;flux&lt;/code&gt; to "mutable variable" and &lt;code&gt;fix&lt;/code&gt; to "immutable variable" in its internal representation, and the transition functions (&lt;code&gt;freeze&lt;/code&gt;, &lt;code&gt;thaw&lt;/code&gt;) become explicit state changes that it tracks through the program.&lt;/p&gt;
&lt;p&gt;The harder constructs are the ones with no familiar analog.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;forge&lt;/code&gt; blocks are mutable construction zones whose output exits as immutable:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fix config = forge {
    flux c = {}
    c.host = "localhost"
    c.port = 8080
    c.debug = false
    c   // exits the forge block as crystal
}
// config is now crystal; cannot be modified
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Claude gets this right because the pattern (build something mutably, freeze the result) maps to the builder pattern in Rust and other languages. The syntax is novel but the concept isn't.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;anneal&lt;/code&gt; is harder. It temporarily thaws a crystal value into a mutable binding for the duration of a block, then re-freezes it:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fix settings = forge { flux s = {}; s.theme = "dark"; s }

anneal(settings) |s| {
    s.theme = "light"   // temporarily mutable
}
// settings is crystal again, with theme = "light"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Claude produces correct &lt;code&gt;anneal&lt;/code&gt; code when given the semantics, but it occasionally generates patterns that would work in Rust (taking a &lt;code&gt;&amp;amp;mut&lt;/code&gt; reference) but don't apply in Lattice (where &lt;code&gt;anneal&lt;/code&gt; is the only way to modify a crystal value in place). The model's Rust intuitions are strong enough to produce syntactically valid Lattice but sometimes semantically incorrect programs, because it defaults to Rust's mutation model when the Lattice-specific construct is unfamiliar.&lt;/p&gt;
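&lt;p&gt;The failure mode is easy to sketch. The following is hypothetical code, not output from an actual session, contrasting the Rust-flavored instinct with the construct Lattice actually requires:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;// the Rust-flavored instinct: mutate directly
settings.theme = "light"    // runtime error: settings is crystal

// what Lattice requires: thaw-in-place via anneal
anneal(settings) |s| {
    s.theme = "light"       // mutable only inside this block
}
&lt;/pre&gt;&lt;/div&gt;
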
&lt;p&gt;The reactive phase system is where Claude needs the most guidance. &lt;code&gt;react&lt;/code&gt;, &lt;code&gt;bond&lt;/code&gt;, and &lt;code&gt;seed&lt;/code&gt; have no precedent in any mainstream language:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux&lt;span class="w"&gt; &lt;/span&gt;temperature&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;72.0

react("temperature",&lt;span class="w"&gt; &lt;/span&gt;fn(name,&lt;span class="w"&gt; &lt;/span&gt;old_phase,&lt;span class="w"&gt; &lt;/span&gt;new_phase)&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;print("&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;changed&lt;span class="w"&gt; &lt;/span&gt;from&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;old_phase&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;to&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="cp"&gt;${&lt;/span&gt;&lt;span class="n"&gt;new_phase&lt;/span&gt;&lt;span class="cp"&gt;}&lt;/span&gt;")
})

freeze(temperature)&lt;span class="w"&gt;  &lt;/span&gt;//&lt;span class="w"&gt; &lt;/span&gt;triggers&lt;span class="w"&gt; &lt;/span&gt;the&lt;span class="w"&gt; &lt;/span&gt;reaction&lt;span class="w"&gt; &lt;/span&gt;callback
&lt;/pre&gt;&lt;/div&gt;

&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux primary = "active"
flux mirror = "active"

bond("mirror", "primary", "sync")  // when primary changes phase, mirror follows

freeze(primary)  // mirror also freezes
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Claude can produce these patterns when given the API, but it doesn't intuit them. It never suggests &lt;code&gt;react&lt;/code&gt; or &lt;code&gt;bond&lt;/code&gt; unprompted, because there's nothing in its training data that would trigger the association. These constructs must be taught explicitly. The Rust smell gets Claude through 80% of Lattice. The last 20% requires actual specification.&lt;/p&gt;
&lt;h3&gt;The Spectrum of Difficulty&lt;/h3&gt;
&lt;p&gt;Working with Claude on Lattice code over several months has revealed a clear gradient of difficulty:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trivial (Rust transfer):&lt;/strong&gt; Functions, structs, enums, match expressions, closures, for loops, string interpolation, module imports, error propagation with &lt;code&gt;?&lt;/code&gt;. Claude writes these correctly on the first attempt because they're syntactically identical to Rust.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Easy (new vocabulary, familiar concept):&lt;/strong&gt; &lt;code&gt;flux&lt;/code&gt;/&lt;code&gt;fix&lt;/code&gt; declarations, &lt;code&gt;freeze()&lt;/code&gt;/&lt;code&gt;thaw()&lt;/code&gt; calls, basic phase checking. Claude maps these to mutable/immutable patterns it already knows. The vocabulary is new; the concept isn't.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Moderate (new pattern, teachable):&lt;/strong&gt; &lt;code&gt;forge&lt;/code&gt; blocks, &lt;code&gt;anneal&lt;/code&gt; expressions, &lt;code&gt;crystallize&lt;/code&gt; blocks, struct field-level phase annotations (alloy structs). These require explanation, but once Claude sees one or two examples, it generalizes correctly. The builder pattern and block-scoped mutation are close enough to existing patterns that the model bridges the gap.&lt;/p&gt;
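&lt;p&gt;For the field-level annotations, a definition takes roughly this shape (a hypothetical sketch; the handbook's grammar appendix is the authoritative reference):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;struct Session {
    fix id: String,        // crystal from construction onward
    flux last_seen: Int    // stays mutable, field by field
}
&lt;/pre&gt;&lt;/div&gt;
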
&lt;p&gt;&lt;strong&gt;Hard (no analog, requires specification):&lt;/strong&gt; Reactive phase operations (&lt;code&gt;react&lt;/code&gt;, &lt;code&gt;bond&lt;/code&gt;, &lt;code&gt;seed&lt;/code&gt;), phase pattern matching (&lt;code&gt;fluid val =&amp;gt;&lt;/code&gt;, &lt;code&gt;crystal val =&amp;gt;&lt;/code&gt;), the concurrency constraint that only crystal values can be sent on channels, strict mode's consumption semantics for &lt;code&gt;freeze&lt;/code&gt;. Claude can use these but never invents them. They must be explicitly described.&lt;/p&gt;
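&lt;p&gt;Phase pattern matching is a good illustration of the category: it dispatches on a value's current phase, something no mainstream &lt;code&gt;match&lt;/code&gt; does. A hypothetical sketch assembled from the fragments quoted above:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;match config {
    fluid c =&gt; print("still under construction: ${c}"),
    crystal c =&gt; print("safe to share: ${c}")
}
&lt;/pre&gt;&lt;/div&gt;
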
&lt;p&gt;The concurrency constraint is a good example of the "hard" category. In Lattice, data sent on a channel must be crystal:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="nv"&gt;let&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Channel&lt;/span&gt;::&lt;span class="nv"&gt;new&lt;/span&gt;&lt;span class="ss"&gt;()&lt;/span&gt;
&lt;span class="nv"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mutable"&lt;/span&gt;

&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;ch&lt;/span&gt;.&lt;span class="k"&gt;send&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;runtime&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;error&lt;/span&gt;:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;cannot&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;send&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;fluid&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;value&lt;/span&gt;

&lt;span class="nv"&gt;freeze&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;ch&lt;/span&gt;.&lt;span class="k"&gt;send&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt;     &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;works&lt;/span&gt;:&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;data&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;is&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;now&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;crystal&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This rule exists because crystal values are deeply immutable: they can't be modified by the sender after transmission, which eliminates data races structurally. Claude understands the concept (Rust has &lt;code&gt;Send&lt;/code&gt; and &lt;code&gt;Sync&lt;/code&gt; traits that serve a similar purpose), but it doesn't automatically apply Lattice's specific rule without being told. Left to its own devices, Claude will try to send fluid values on channels, because that's what you'd do in Go or Python. The constraint must be stated.&lt;/p&gt;
&lt;p&gt;Strict mode (&lt;code&gt;#mode strict&lt;/code&gt; at the top of a file) is another case where Claude needs explicit guidance. In strict mode, &lt;code&gt;let&lt;/code&gt; is banned (you must use &lt;code&gt;flux&lt;/code&gt; or &lt;code&gt;fix&lt;/code&gt;), &lt;code&gt;freeze()&lt;/code&gt; consumes the original binding (Rust-like move semantics), and assignment to a crystal binding is rejected outright rather than surfacing as a runtime error. Claude can write strict-mode Lattice, but it defaults to casual-mode patterns unless reminded. The model's prior is "permissive runtime" because that's what most dynamic languages are.&lt;/p&gt;
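&lt;p&gt;A strict-mode file reads roughly like this (an illustrative sketch of the rules above, not a quote from the test suite):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;#mode strict

fix greeting = "hello"   // let is banned; every binding declares a phase
flux draft = "wip"

freeze(draft)            // consumes the binding, move-style
// draft = "more"        // rejected outright, not merely a runtime error
&lt;/pre&gt;&lt;/div&gt;
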
&lt;p&gt;The gradient correlates exactly with how much the construct resembles something in Rust or another mainstream language. When the syntax is familiar, Claude's transfer learning handles it. When the concept is familiar but the syntax is new, one or two examples are enough. When both the syntax and the concept are novel, Claude needs the specification.&lt;/p&gt;
&lt;h3&gt;The Self-Hosted Compiler&lt;/h3&gt;
&lt;p&gt;The strongest evidence that Claude can deeply understand a language it was never trained on is &lt;code&gt;latc.lat&lt;/code&gt;: a &lt;a href="https://tinycomputers.io/posts/a-stack-based-bytecode-vm-for-lattice.html"&gt;4,955-line self-hosted compiler&lt;/a&gt; written in Lattice, for Lattice.&lt;/p&gt;
&lt;p&gt;The compiler reads &lt;code&gt;.lat&lt;/code&gt; source files and emits &lt;code&gt;.latc&lt;/code&gt; bytecode binaries. It has twelve sections:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Opcode constant definitions (mapping all 100+ VM opcodes to integers)&lt;/li&gt;
&lt;li&gt;Token stream and cursor helpers (&lt;code&gt;peek&lt;/code&gt;, &lt;code&gt;advance&lt;/code&gt;, &lt;code&gt;expect&lt;/code&gt;, &lt;code&gt;match_tok&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Compiler state management (save/restore for nested compilation)&lt;/li&gt;
&lt;li&gt;Error reporting&lt;/li&gt;
&lt;li&gt;Bytecode emit helpers (&lt;code&gt;emit_byte&lt;/code&gt;, &lt;code&gt;emit_jump&lt;/code&gt;, &lt;code&gt;patch_jump&lt;/code&gt;, &lt;code&gt;emit_loop&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Constant pool management (integers, floats, strings, closures)&lt;/li&gt;
&lt;li&gt;Scope and variable resolution (&lt;code&gt;begin_scope&lt;/code&gt;, &lt;code&gt;end_scope&lt;/code&gt;, &lt;code&gt;resolve_local&lt;/code&gt;, upvalue tracking)&lt;/li&gt;
&lt;li&gt;Expression parsing (precedence climbing, binary/unary ops, calls, field access)&lt;/li&gt;
&lt;li&gt;Statement compilation (let/flux/fix, if/while/for, return, match, try/catch)&lt;/li&gt;
&lt;li&gt;Declaration compilation (functions, structs, enums, traits, impl blocks)&lt;/li&gt;
&lt;li&gt;Binary serialization (writing the LATC file format with magic bytes, version header, chunk data)&lt;/li&gt;
&lt;li&gt;Main entry point&lt;/li&gt;
&lt;/ol&gt;
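&lt;p&gt;Section 5's jump helpers follow the classic backpatching idiom: emit a jump with placeholder operands, then write the real distance once it is known. A hypothetical sketch of the shape (the helper names beyond &lt;code&gt;emit_jump&lt;/code&gt;/&lt;code&gt;patch_jump&lt;/code&gt;, and the array operations, are my assumptions, not quotes from &lt;code&gt;latc.lat&lt;/code&gt;):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn emit_jump(op: Int) -&gt; Int {
    emit_byte(op)
    emit_byte(255)               // two placeholder operand bytes
    emit_byte(255)
    return len(code) - 2         // where the placeholder starts
}

fn patch_jump(at: Int) {
    fix jump = len(code) - at - 2    // distance is now known
    code[at] = jump % 256            // low byte
    code[at + 1] = jump / 256        // high byte (assumes integer division)
}
&lt;/pre&gt;&lt;/div&gt;
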
&lt;p&gt;Claude wrote this. Not "Claude assisted with this" or "Claude generated boilerplate for this." Claude wrote a recursive descent parser for Lattice's grammar, a bytecode compiler that emits correct opcodes for the phase system, and a binary serializer that produces files the C runtime can load and execute. The compiler bootstraps: you run it with the C-based &lt;code&gt;clat&lt;/code&gt; interpreter, and it produces bytecode that the same interpreter executes.&lt;/p&gt;
&lt;p&gt;The compiler itself uses Lattice's phase system for its own internal state. The compiler's mutable working data (the bytecode buffer, the constant pool, the local variable tracking arrays) is declared with &lt;code&gt;flux&lt;/code&gt;:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;code&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;c_lines&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;constants&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;local_name_arr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;local_depth_arr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;local_captured_arr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;local_count&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is the compiler eating its own dogfood. The mutable state that the compiler needs to build bytecode is declared using the same phase system that the compiler is compiling. The phase keywords aren't decorative here; they're structurally necessary because the compiler modifies these arrays on every opcode emission and scope transition.&lt;/p&gt;
&lt;p&gt;The compiler has 118 functions across 12 sections, with 554 opcode references. It handles every construct in the language: &lt;code&gt;flux&lt;/code&gt;/&lt;code&gt;fix&lt;/code&gt; declarations, &lt;code&gt;forge&lt;/code&gt; blocks, &lt;code&gt;freeze&lt;/code&gt;/&lt;code&gt;thaw&lt;/code&gt;/&lt;code&gt;sublimate&lt;/code&gt; calls, &lt;code&gt;anneal&lt;/code&gt; and &lt;code&gt;crystallize&lt;/code&gt; expressions, struct and enum definitions with phase annotations, trait/impl blocks, match expressions with phase-aware pattern matching, structured concurrency with &lt;code&gt;scope&lt;/code&gt;/&lt;code&gt;spawn&lt;/code&gt;, channel operations, &lt;code&gt;try&lt;/code&gt;/&lt;code&gt;catch&lt;/code&gt;, &lt;code&gt;defer&lt;/code&gt;, and the complete expression grammar with correct operator precedence.&lt;/p&gt;
&lt;p&gt;Writing a self-hosted compiler requires understanding the language at every level simultaneously. The tokenizer must know every keyword, operator, and delimiter. The parser must handle every grammatical production, including the phase-specific constructs (&lt;code&gt;forge&lt;/code&gt;, &lt;code&gt;anneal&lt;/code&gt;, &lt;code&gt;crystallize&lt;/code&gt;) that exist nowhere in Claude's training data. The code generator must emit the correct opcodes for phase transitions, reactive bindings, and structured concurrency. And the whole thing must be written in the language being compiled, which means Claude is writing Lattice to compile Lattice, using constructs it learned from examples rather than training data.&lt;/p&gt;
&lt;p&gt;The compiler's serialization section writes the LATC binary format byte by byte:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn serialize_latc(ch: any) {
    ser_buf = Buffer::new(0)

    // Header: "LATC" + version(1) + reserved(0)
    write_u8(76)    // 'L'
    write_u8(65)    // 'A'
    write_u8(84)    // 'T'
    write_u8(67)    // 'C'
    write_u16_le(1) // format version
    write_u16_le(0) // reserved

    serialize_chunk(ch)
    return ser_buf
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is not pattern matching against compiler source code from the training data. No Lattice compiler exists in the training data. Claude wrote a compiler for a language that has no prior art, in a language that has no prior art, producing a binary format that has no prior art. Every decision (the magic bytes, the chunk serialization order, the upvalue encoding) came from understanding the specification I provided and the runtime behavior of the C-based interpreter.&lt;/p&gt;
&lt;h3&gt;What I Actually Gave Claude&lt;/h3&gt;
&lt;p&gt;The teaching process was less structured than you might expect. There was no formal curriculum, no staged introduction of concepts, no carefully sequenced lesson plan. And I should be honest about the recursive nature of what happened: Claude Code was the primary tool for building Lattice itself. The language, the C implementation, the grammar, the runtime, the test suite, the handbook: all of it was built with Claude Code. I designed the language and directed the implementation, but Claude wrote the C, the LaTeX, and the example programs.&lt;/p&gt;
&lt;p&gt;So the situation is: Claude wrote Lattice (the implementation), and then Claude wrote in Lattice (the programs and the self-hosted compiler). The model built the language and then learned the language it built. The "teaching material" that Claude uses to write Lattice code is documentation and examples that Claude itself produced in earlier sessions.&lt;/p&gt;
&lt;p&gt;The artifacts:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The C implementation: ~80 source files, the parser, the VM, the phase system runtime. Built with Claude Code from my architectural direction.&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html"&gt;handbook&lt;/a&gt;: 38 chapters covering every feature, with worked examples. Written in LaTeX with Claude Code. This lives in a repository that Claude can read in subsequent sessions.&lt;/li&gt;
&lt;li&gt;Example programs (&lt;code&gt;examples/phase_demo.lat&lt;/code&gt;, &lt;code&gt;examples/sorting.lat&lt;/code&gt;, &lt;code&gt;examples/state_machine.lat&lt;/code&gt;) that demonstrate idiomatic Lattice. Written by Claude Code.&lt;/li&gt;
&lt;li&gt;815 test files, run under AddressSanitizer, that exercise every construct. Written by Claude Code.&lt;/li&gt;
&lt;li&gt;An EBNF grammar reference as an appendix to the handbook.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;When I work with Claude on Lattice code, I don't paste the entire handbook into the context window. Claude has access to the project directory. It reads files as needed. If I ask it to write a function that uses &lt;code&gt;forge&lt;/code&gt;, it reads &lt;code&gt;examples/phase_demo.lat&lt;/code&gt; or &lt;code&gt;chapters/ch11-phases-explained.tex&lt;/code&gt; to see how &lt;code&gt;forge&lt;/code&gt; works. If I ask it to add an opcode to the compiler, it reads &lt;code&gt;include/stackopcode.h&lt;/code&gt; and &lt;code&gt;src/stackvm.c&lt;/code&gt; to understand the existing instruction set.&lt;/p&gt;
&lt;p&gt;The key insight: Claude doesn't need to be trained on a language to write it. It needs access to the specification and examples at inference time. And in this case, those specifications and examples were produced by Claude itself in prior sessions. The model's understanding is constructed on the fly from documentation in its context, not retrieved from weights. This is why the Rust resemblance matters so much: the syntax gives Claude a structural scaffold, and the specification (which Claude wrote) fills in the semantics.&lt;/p&gt;
&lt;p&gt;This is also why the self-hosted compiler was possible. By the time Claude wrote &lt;code&gt;latc.lat&lt;/code&gt;, it had already written the entire language implementation, the handbook, the test suite, and hundreds of example programs. The language had moved from "novel" to "familiar" through accumulated context, not through training. Each session built on the last. Each example reinforced the phase system's rules. By the time the compiler was attempted, Claude's working understanding of Lattice (constructed from its own prior output) was deep enough to write a 5,000-line program that correctly compiles the language. The model taught itself a language by building the language first.&lt;/p&gt;
&lt;h3&gt;Why Syntax Matters More Than Semantics&lt;/h3&gt;
&lt;p&gt;The Lattice experience suggests something counterintuitive about how LLMs interact with programming languages: syntax transfer is more powerful than semantic understanding.&lt;/p&gt;
&lt;p&gt;Claude can write correct Lattice because Lattice looks like Rust. The semantic differences (phase system vs. ownership, runtime type checking vs. compile-time guarantees, garbage collection vs. RAII) are significant, but they don't prevent Claude from producing working code. The model generates syntactically valid Lattice from Rust patterns and then adjusts the semantics when corrected.&lt;/p&gt;
&lt;p&gt;This has implications for language design. If you want AI tooling to support your language from day one, without waiting for it to appear in training data, design your syntax to rhyme with something popular. Lattice's resemblance to Rust wasn't designed for AI, but it is the reason AI can write it. A language with a radically different syntax (APL, Forth, J) would be much harder for Claude to learn from examples alone, even if the semantics were simpler.&lt;/p&gt;
&lt;p&gt;The reverse is also true: a language with familiar syntax but deeply unfamiliar semantics (like Lattice's reactive phase system) will produce code that looks correct but occasionally behaves wrong. Claude's Rust intuitions are strong enough to generate valid-looking phase code, but the model sometimes falls back to Rust's mutation model when the Lattice-specific behavior is more constrained. The syntax transfers perfectly. The semantics require teaching.&lt;/p&gt;
&lt;h3&gt;Implications for Language Designers&lt;/h3&gt;
&lt;p&gt;If you're designing a new programming language in 2026, the AI tooling question is unavoidable. Your language won't have IDE plugins, autocompleters, or AI coding assistants on day one. The community doesn't exist yet. The training data doesn't include your language. Every other language your users work with has Copilot or Claude support. Yours doesn't.&lt;/p&gt;
&lt;p&gt;Lattice suggests a strategy: make your syntax rhyme with something an LLM already knows.&lt;/p&gt;
&lt;p&gt;This isn't about copying Rust. Lattice has genuinely novel semantics. The phase system, the reactive bindings, the alloy structs with per-field phase annotations: none of these exist in Rust. But they're expressed through syntax (keywords, braces, type annotations, block expressions) that maps directly to Rust's structural patterns. Claude can parse the syntax without help and learn the semantics from examples.&lt;/p&gt;
&lt;p&gt;The alternative is designing a syntax so novel that LLMs can't bootstrap from existing knowledge. This is a legitimate design choice; some ideas genuinely need new notation. But the cost is high: your users won't get AI assistance until your language appears in training data, which requires the language to become popular first, which is harder without AI assistance. It's a chicken-and-egg problem that familiar syntax sidesteps.&lt;/p&gt;
&lt;p&gt;The practical recommendation: novel semantics, familiar syntax. Invent the ideas. Borrow the notation. Let the LLM cross the bridge on syntax and learn the semantics on the other side.&lt;/p&gt;
&lt;h3&gt;What This Means for the "AI Writes Code" Conversation&lt;/h3&gt;
&lt;p&gt;The Lattice case study complicates the popular narrative about AI code generation in both directions.&lt;/p&gt;
&lt;p&gt;For the optimists who say AI can learn anything: Claude cannot invent the reactive phase system. It cannot propose &lt;code&gt;bond&lt;/code&gt; or &lt;code&gt;seed&lt;/code&gt; or &lt;code&gt;anneal&lt;/code&gt; without being told they exist. The novel constructs, the ones that make Lattice a genuinely different language rather than a Rust reskin, are invisible to the model until explicitly specified. AI transfer learning has limits, and those limits are at the boundaries of what the training data contains.&lt;/p&gt;
&lt;p&gt;For the pessimists who say AI can only regurgitate training data: Claude wrote a 5,000-line self-hosted compiler for a language it has never seen. That is not regurgitation. The compiler produces correct bytecode for constructs (phase transitions, reactive bonds, per-field phase annotations) that exist in no other language. The model assembled knowledge from its understanding of compilers generally, Rust syntax specifically, and the Lattice specification I provided, and produced something genuinely new. Antirez called this "assembling knowledge" when he observed the same phenomenon with his &lt;a href="https://baud.rs/KJoorR"&gt;Z80 emulator project&lt;/a&gt;. I think that's the right term.&lt;/p&gt;
&lt;p&gt;The truth is somewhere that neither camp wants to occupy. LLMs can go far beyond their training data when the new territory is structurally adjacent to something they know. They cannot go beyond their training data when the new territory is structurally novel. The boundary between "adjacent" and "novel" is syntax. Familiar syntax is a bridge. Novel syntax is a wall. Novel semantics behind familiar syntax is a trap: the model crosses the bridge confidently and then occasionally falls.&lt;/p&gt;
&lt;p&gt;Lattice exists in all three zones simultaneously. Its Rust-like surface lets Claude cross the bridge. Its phase system is the novel semantics behind familiar syntax. And the self-hosted compiler is proof that the bridge, once crossed, supports weight that no one expected.&lt;/p&gt;
&lt;p&gt;I didn't set out to test the limits of LLM language understanding when I designed Lattice. I set out to build a programming language with a novel approach to mutability. The AI dimension was a side effect: I used Claude Code as my development tool because I use Claude Code for everything, and the language happened to be learnable because it happened to look like Rust. But the result is one of the more complete demonstrations of LLM transfer learning applied to a genuinely novel domain: not just writing programs in an unfamiliar language, but writing a compiler for that language, in that language, from a specification that exists nowhere in the training data.&lt;/p&gt;
&lt;p&gt;The 4,955 lines of &lt;code&gt;latc.lat&lt;/code&gt; are the proof that LLMs can go further than their training data when the conditions are right. The conditions are: familiar syntax, clear specification, accessible examples, and a human who knows when the model is wrong. Remove any one of those and the compiler doesn't get written. But with all four in place, the model produces something that works, that compiles, and that no human typed by hand.&lt;/p&gt;</description><category>ai</category><category>claude</category><category>compilers</category><category>language design</category><category>lattice</category><category>llm</category><category>phase system</category><category>programming languages</category><category>rust</category><category>self-hosting</category><guid>https://tinycomputers.io/posts/teaching-llms-languages-theyve-never-seen.html</guid><pubDate>Thu, 02 Apr 2026 13:00:00 GMT</pubDate></item><item><title>A Stack-Based Bytecode VM for Lattice: 100 Opcodes, Serialization, and a Self-Hosted Compiler</title><link>https://tinycomputers.io/posts/a-stack-based-bytecode-vm-for-lattice.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/a-stack-based-bytecode-vm-for-lattice_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;29 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;p&gt;When I &lt;a href="https://tinycomputers.io/posts/from-tree-walker-to-bytecode-vm-compiling-lattice.html"&gt;first wrote about&lt;/a&gt; Lattice's move from a tree-walking interpreter to a bytecode VM, the instruction set had 62 opcodes, concurrency primitives still delegated to the tree-walker, and programs couldn't be serialized. The VM was a foundation, correct and complete enough to become the default, but clearly a starting point.&lt;/p&gt;
&lt;p&gt;That was ten versions ago. The bytecode VM now has 100 opcodes, compiles concurrency primitives into standalone sub-chunks with zero AST dependency at runtime, ships a binary serialization format for ahead-of-time compilation, includes an ephemeral bump arena for short-lived string temporaries, and (perhaps most satisfyingly) has a self-hosted compiler written entirely in Lattice that produces the same &lt;code&gt;.latc&lt;/code&gt; bytecode files as the C implementation.&lt;/p&gt;
&lt;p&gt;This post walks through what changed and why. The full technical treatment is available as a &lt;a href="https://tinycomputers.io/papers/lattice_vm.pdf"&gt;research paper&lt;/a&gt;; this is the practitioner's version.&lt;/p&gt;
&lt;h3&gt;Why Keep Going&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://tinycomputers.io/posts/from-tree-walker-to-bytecode-vm-compiling-lattice.html"&gt;original bytecode VM&lt;/a&gt; solved the immediate problems: it eliminated recursive AST dispatch overhead and gave Lattice a single execution path for file execution, the REPL, and the WASM playground. But three issues remained.&lt;/p&gt;
&lt;p&gt;First, &lt;code&gt;OP_SCOPE&lt;/code&gt; and &lt;code&gt;OP_SELECT&lt;/code&gt; (Lattice's structured concurrency opcodes) still stored AST node pointers in the constant pool and dropped into the tree-walking evaluator at runtime. This meant the AST had to stay alive during concurrent execution, which defeated one of the main motivations for having a bytecode VM in the first place.&lt;/p&gt;
&lt;p&gt;Second, the AST dependency made serialization impossible. You can serialize bytecode to a file, but you can't easily serialize an arbitrary C pointer to an AST node. Programs had to be parsed and compiled on every run.&lt;/p&gt;
&lt;p&gt;Third, the dispatch loop used a plain &lt;code&gt;switch&lt;/code&gt; statement. Not a crisis, but computed goto dispatch is a well-known improvement for bytecode interpreters, and leaving it on the table felt unnecessary.&lt;/p&gt;
&lt;p&gt;All three problems are solved now. Let me start with the instruction set, since everything else builds on it.&lt;/p&gt;
&lt;h3&gt;100 Opcodes&lt;/h3&gt;
&lt;p&gt;The instruction set grew from 62 to 100 opcodes, organized into 17 functional categories:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Representative opcodes&lt;/th&gt;
&lt;th style="text-align: right;"&gt;Count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Stack manipulation&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CONSTANT&lt;/code&gt;, &lt;code&gt;NIL&lt;/code&gt;, &lt;code&gt;TRUE&lt;/code&gt;, &lt;code&gt;FALSE&lt;/code&gt;, &lt;code&gt;UNIT&lt;/code&gt;, &lt;code&gt;POP&lt;/code&gt;, &lt;code&gt;DUP&lt;/code&gt;, &lt;code&gt;SWAP&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Arithmetic/logical&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ADD&lt;/code&gt;, &lt;code&gt;SUB&lt;/code&gt;, &lt;code&gt;MUL&lt;/code&gt;, &lt;code&gt;DIV&lt;/code&gt;, &lt;code&gt;MOD&lt;/code&gt;, &lt;code&gt;NEG&lt;/code&gt;, &lt;code&gt;NOT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bitwise&lt;/td&gt;
&lt;td&gt;&lt;code&gt;BIT_AND&lt;/code&gt;, &lt;code&gt;BIT_OR&lt;/code&gt;, &lt;code&gt;BIT_XOR&lt;/code&gt;, &lt;code&gt;BIT_NOT&lt;/code&gt;, &lt;code&gt;LSHIFT&lt;/code&gt;, &lt;code&gt;RSHIFT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Comparison&lt;/td&gt;
&lt;td&gt;&lt;code&gt;EQ&lt;/code&gt;, &lt;code&gt;NEQ&lt;/code&gt;, &lt;code&gt;LT&lt;/code&gt;, &lt;code&gt;GT&lt;/code&gt;, &lt;code&gt;LTEQ&lt;/code&gt;, &lt;code&gt;GTEQ&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;String&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CONCAT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Variables&lt;/td&gt;
&lt;td&gt;&lt;code&gt;GET/SET_LOCAL&lt;/code&gt;, &lt;code&gt;GET/SET/DEFINE_GLOBAL&lt;/code&gt;, &lt;code&gt;GET/SET_UPVALUE&lt;/code&gt;, &lt;code&gt;CLOSE_UPVALUE&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Control flow&lt;/td&gt;
&lt;td&gt;&lt;code&gt;JUMP&lt;/code&gt;, &lt;code&gt;JUMP_IF_FALSE&lt;/code&gt;, &lt;code&gt;JUMP_IF_TRUE&lt;/code&gt;, &lt;code&gt;JUMP_IF_NOT_NIL&lt;/code&gt;, &lt;code&gt;LOOP&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Functions&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CALL&lt;/code&gt;, &lt;code&gt;CLOSURE&lt;/code&gt;, &lt;code&gt;RETURN&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Iterators&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ITER_INIT&lt;/code&gt;, &lt;code&gt;ITER_NEXT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data structures&lt;/td&gt;
&lt;td&gt;&lt;code&gt;BUILD_ARRAY&lt;/code&gt;, &lt;code&gt;INDEX&lt;/code&gt;, &lt;code&gt;SET_INDEX&lt;/code&gt;, &lt;code&gt;GET_FIELD&lt;/code&gt;, &lt;code&gt;INVOKE&lt;/code&gt;, etc.&lt;/td&gt;
&lt;td style="text-align: right;"&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Exceptions/defer&lt;/td&gt;
&lt;td&gt;&lt;code&gt;PUSH_EXCEPTION_HANDLER&lt;/code&gt;, &lt;code&gt;THROW&lt;/code&gt;, &lt;code&gt;DEFER_PUSH&lt;/code&gt;, &lt;code&gt;DEFER_RUN&lt;/code&gt;, etc.&lt;/td&gt;
&lt;td style="text-align: right;"&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase system&lt;/td&gt;
&lt;td&gt;&lt;code&gt;FREEZE&lt;/code&gt;, &lt;code&gt;THAW&lt;/code&gt;, &lt;code&gt;CLONE&lt;/code&gt;, &lt;code&gt;MARK_FLUID&lt;/code&gt;, &lt;code&gt;REACT&lt;/code&gt;, &lt;code&gt;BOND&lt;/code&gt;, &lt;code&gt;SEED&lt;/code&gt;, etc.&lt;/td&gt;
&lt;td style="text-align: right;"&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Builtins/modules&lt;/td&gt;
&lt;td&gt;&lt;code&gt;PRINT&lt;/code&gt;, &lt;code&gt;IMPORT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Concurrency&lt;/td&gt;
&lt;td&gt;&lt;code&gt;SCOPE&lt;/code&gt;, &lt;code&gt;SELECT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Integer fast paths&lt;/td&gt;
&lt;td&gt;&lt;code&gt;INC_LOCAL&lt;/code&gt;, &lt;code&gt;DEC_LOCAL&lt;/code&gt;, &lt;code&gt;ADD_INT&lt;/code&gt;, &lt;code&gt;SUB_INT&lt;/code&gt;, &lt;code&gt;LOAD_INT8&lt;/code&gt;, etc.&lt;/td&gt;
&lt;td style="text-align: right;"&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Wide variants&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CONSTANT_16&lt;/code&gt;, &lt;code&gt;GET_GLOBAL_16&lt;/code&gt;, &lt;code&gt;SET_GLOBAL_16&lt;/code&gt;, &lt;code&gt;DEFINE_GLOBAL_16&lt;/code&gt;, &lt;code&gt;CLOSURE_16&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Special&lt;/td&gt;
&lt;td&gt;&lt;code&gt;RESET_EPHEMERAL&lt;/code&gt;, &lt;code&gt;HALT&lt;/code&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td style="text-align: right;"&gt;&lt;strong&gt;100&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The growth came from three directions: the integer fast-path opcodes (8 new), the wide constant variants (5 new), and the concurrency/arena opcodes. Let me explain each.&lt;/p&gt;
&lt;h4&gt;Integer Fast Paths&lt;/h4&gt;
&lt;p&gt;Tight loops like &lt;code&gt;for i in 0..1000&lt;/code&gt; spend most of their time incrementing a counter and comparing it to a bound. The generic &lt;code&gt;OP_ADD&lt;/code&gt; has to check whether its operands are integers, floats, or strings (for concatenation), which adds branching overhead on every iteration.&lt;/p&gt;
&lt;p&gt;The integer fast-path opcodes (&lt;code&gt;OP_ADD_INT&lt;/code&gt;, &lt;code&gt;OP_SUB_INT&lt;/code&gt;, &lt;code&gt;OP_MUL_INT&lt;/code&gt;, &lt;code&gt;OP_LT_INT&lt;/code&gt;, &lt;code&gt;OP_LTEQ_INT&lt;/code&gt;) skip the type check entirely and operate directly on &lt;code&gt;int64_t&lt;/code&gt; values. &lt;code&gt;OP_INC_LOCAL&lt;/code&gt; and &lt;code&gt;OP_DEC_LOCAL&lt;/code&gt; handle the &lt;code&gt;i += 1&lt;/code&gt; and &lt;code&gt;i -= 1&lt;/code&gt; patterns as single-byte instructions that modify the stack slot in place, no push or pop required.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;OP_LOAD_INT8&lt;/code&gt; encodes a signed byte directly in the instruction stream. The integer &lt;code&gt;42&lt;/code&gt; becomes two bytes (&lt;code&gt;OP_LOAD_INT8&lt;/code&gt;, &lt;code&gt;0x2A&lt;/code&gt;) instead of a two-byte &lt;code&gt;OP_CONSTANT&lt;/code&gt; sequence plus an eight-byte constant pool entry. Any integer in [-128, 127] gets this treatment.&lt;/p&gt;
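&lt;p&gt;A minimal sketch of how an emitter might choose between the compact and pooled encodings; the opcode values, the simplified &lt;code&gt;Chunk&lt;/code&gt;, and &lt;code&gt;emit_int&lt;/code&gt; itself are illustrative, not the real Lattice definitions:&lt;/p&gt;

```c
#include <stdint.h>

/* Hypothetical emitter sketch: use the compact OP_LOAD_INT8 encoding
 * for integers in [-128, 127], otherwise fall back to a constant-pool
 * load. const_index is assumed to already point at the pooled value. */
enum { OP_CONSTANT = 0, OP_LOAD_INT8 = 1 };

typedef struct {
    uint8_t code[64];
    int count;
} Chunk;

static void emit_byte(Chunk *c, uint8_t b) { c->code[c->count++] = b; }

/* Returns the number of instruction-stream bytes emitted. Both paths
 * emit two bytes; the real saving is skipping the 8-byte pool entry. */
static int emit_int(Chunk *c, int64_t v, uint8_t const_index) {
    if (v >= -128 && v <= 127) {
        emit_byte(c, OP_LOAD_INT8);
        emit_byte(c, (uint8_t)(int8_t)v);  /* sign-preserving byte */
        return 2;
    }
    emit_byte(c, OP_CONSTANT);             /* one-byte pool index */
    emit_byte(c, const_index);
    return 2;
}
```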
&lt;h4&gt;Wide Constant Variants&lt;/h4&gt;
&lt;p&gt;The original instruction set used a single byte for constant pool indices, limiting each chunk to 256 constants. This is fine for most functions, but the self-hosted compiler (a 2,000-line Lattice program compiled as a single top-level script) blows past that limit easily.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;OP_CONSTANT_16&lt;/code&gt;, &lt;code&gt;OP_GET_GLOBAL_16&lt;/code&gt;, &lt;code&gt;OP_SET_GLOBAL_16&lt;/code&gt;, &lt;code&gt;OP_DEFINE_GLOBAL_16&lt;/code&gt;, and &lt;code&gt;OP_CLOSURE_16&lt;/code&gt; use two-byte big-endian indices, supporting up to 65,536 constants per chunk. The compiler automatically switches to wide variants when an index exceeds 255.&lt;/p&gt;
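&lt;p&gt;Decoding the wide form is a two-byte big-endian read. A sketch with illustrative names, not the actual VM helpers:&lt;/p&gt;

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch: the two-byte big-endian index used by the wide *_16
 * opcodes, plus the compiler-side check for when the wide form
 * is required. */
static uint16_t read_u16_be(const uint8_t *ip) {
    return (uint16_t)(((uint16_t)ip[0] << 8) | ip[1]);
}

static bool needs_wide(int const_index) {
    return const_index > 255;  /* narrow form covers indices 0..255 */
}
```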
&lt;h3&gt;The Compiler&lt;/h3&gt;
&lt;p&gt;The bytecode compiler performs a single-pass walk over the AST. It maintains a chain of &lt;code&gt;Compiler&lt;/code&gt; structs linked via &lt;code&gt;enclosing&lt;/code&gt; pointers, one per function being compiled. Variable references resolve through three tiers: local (scan the current compiler's locals array), upvalue (recursively check enclosing compilers), and global (fall through to &lt;code&gt;OP_GET_GLOBAL&lt;/code&gt;).&lt;/p&gt;
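&lt;p&gt;The three-tier resolution can be sketched as follows; this is a simplified model with illustrative types, not the actual &lt;code&gt;Compiler&lt;/code&gt; struct:&lt;/p&gt;

```c
#include <string.h>

/* Sketch of local -> upvalue -> global name resolution across a
 * chain of compilers linked via 'enclosing' pointers. */
enum { RES_LOCAL, RES_UPVALUE, RES_GLOBAL };

typedef struct Compiler {
    struct Compiler *enclosing;
    const char *locals[8];
    int local_count;
} Compiler;

static int find_local(const Compiler *c, const char *name) {
    for (int i = c->local_count - 1; i >= 0; i--)
        if (strcmp(c->locals[i], name) == 0) return i;
    return -1;
}

/* Returns which tier resolves the name: the current compiler's
 * locals, an enclosing compiler (upvalue), or the global fallthrough. */
static int resolve(const Compiler *c, const char *name) {
    if (find_local(c, name) >= 0) return RES_LOCAL;
    for (const Compiler *e = c->enclosing; e; e = e->enclosing)
        if (find_local(e, name) >= 0) return RES_UPVALUE;
    return RES_GLOBAL;
}
```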
&lt;p&gt;Three compilation modes handle different use cases. &lt;code&gt;compile()&lt;/code&gt; is the standard file mode: it compiles all declarations and emits an implicit call to &lt;code&gt;main()&lt;/code&gt; if one is defined. &lt;code&gt;compile_module()&lt;/code&gt; is for imports, identical to &lt;code&gt;compile()&lt;/code&gt; but skips the auto-call. &lt;code&gt;compile_repl()&lt;/code&gt; preserves the last expression on the stack as the iteration's return value (displayed with &lt;code&gt;=&amp;gt;&lt;/code&gt; prefix) and keeps the known-enum table alive across REPL iterations so enum declarations persist.&lt;/p&gt;
&lt;p&gt;The compiler implements several optimizations during code generation. Binary operations on literal operands are folded at compile time: &lt;code&gt;3 + 4&lt;/code&gt; emits a single &lt;code&gt;OP_LOAD_INT8 7&lt;/code&gt; rather than two loads and an &lt;code&gt;OP_ADD&lt;/code&gt;. The pattern &lt;code&gt;x += 1&lt;/code&gt; is detected and emitted as the single-byte &lt;code&gt;OP_INC_LOCAL&lt;/code&gt;, which modifies the stack slot in place. And every statement is wrapped by &lt;code&gt;compile_stmt_reset()&lt;/code&gt;, which appends &lt;code&gt;OP_RESET_EPHEMERAL&lt;/code&gt; to trigger the ephemeral arena cleanup.&lt;/p&gt;
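&lt;p&gt;The literal-folding check is conceptually simple. A sketch, with an illustrative &lt;code&gt;ExprInfo&lt;/code&gt; standing in for the real AST types:&lt;/p&gt;

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch of compile-time folding: if both operands of '+' are known
 * integer literals, emit the folded result instead of two loads and
 * an OP_ADD. */
typedef struct {
    bool is_int_literal;
    int64_t value;
} ExprInfo;

/* Returns true and stores the folded value when both sides are
 * integer literals; otherwise the caller emits loads + OP_ADD. */
static bool try_fold_add(ExprInfo lhs, ExprInfo rhs, int64_t *out) {
    if (lhs.is_int_literal && rhs.is_int_literal) {
        *out = lhs.value + rhs.value;  /* folded at compile time */
        return true;
    }
    return false;
}
```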
&lt;h3&gt;Computed Goto Dispatch&lt;/h3&gt;
&lt;p&gt;The dispatch loop now uses GCC/Clang's labels-as-values extension for computed goto:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="cp"&gt;#ifdef VM_USE_COMPUTED_GOTO&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="k"&gt;static&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;void&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;dispatch_table&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;OP_CONSTANT&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;lbl_OP_CONSTANT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;OP_NIL&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;lbl_OP_NIL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;// ... all 100 entries&lt;/span&gt;
&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="cp"&gt;#endif&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(;;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;uint8_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;READ_BYTE&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="cp"&gt;#ifdef VM_USE_COMPUTED_GOTO&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;goto&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;dispatch_table&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="cp"&gt;#endif&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;switch&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;op&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Each opcode handler ends with a &lt;code&gt;goto *dispatch_table[READ_BYTE()]&lt;/code&gt; rather than breaking back to the top of the loop. This eliminates the switch statement's bounds check and branch table indirection, replacing it with a single indirect jump. The CPU's branch predictor sees different jump sites for different opcodes, which improves prediction accuracy compared to a single switch that all opcodes funnel through.&lt;/p&gt;
&lt;p&gt;On platforms without the extension, it falls back to a standard switch. The VM works correctly either way.&lt;/p&gt;
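&lt;p&gt;The technique is easy to demonstrate in miniature. This sketch (made-up opcodes, not the Lattice VM's actual loop) shows handlers tail-jumping directly to the next handler; it requires the GCC/Clang labels-as-values extension:&lt;/p&gt;

```c
#include <stdint.h>

/* Tiny self-contained computed-goto interpreter: each handler ends
 * in DISPATCH(), an indirect jump through the label table, so control
 * never returns to a central switch. */
enum { OP_INC = 0, OP_DOUBLE = 1, OP_HALT = 2 };

static int run(const uint8_t *ip) {
    static void *table[] = { &&lbl_inc, &&lbl_double, &&lbl_halt };
    int acc = 0;
#define DISPATCH() goto *table[*ip++]
    DISPATCH();
lbl_inc:    acc += 1; DISPATCH();
lbl_double: acc *= 2; DISPATCH();
lbl_halt:   return acc;
#undef DISPATCH
}
```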
&lt;h3&gt;Pre-Compiled Concurrency&lt;/h3&gt;
&lt;p&gt;This is the change I'm most pleased with, because it solves the problem cleanly.&lt;/p&gt;
&lt;p&gt;Lattice has three concurrency primitives: &lt;code&gt;scope&lt;/code&gt; defines a concurrent region, &lt;code&gt;spawn&lt;/code&gt; launches a task within that region, and &lt;code&gt;select&lt;/code&gt; multiplexes over channels. In the tree-walker, these work by passing AST node pointers to spawned threads, which then evaluate the subtrees independently. The bytecode VM's original implementation did the same thing: &lt;code&gt;OP_SCOPE&lt;/code&gt; stored an &lt;code&gt;Expr*&lt;/code&gt; pointer in the constant pool and called the tree-walking evaluator at runtime.&lt;/p&gt;
&lt;p&gt;The solution is to compile each concurrent body into a standalone &lt;code&gt;Chunk&lt;/code&gt; at compile time. The compiler provides two helpers: &lt;code&gt;compile_sub_body()&lt;/code&gt; for statement blocks and &lt;code&gt;compile_sub_expr()&lt;/code&gt; for expressions. Each creates a fresh &lt;code&gt;Compiler&lt;/code&gt;, compiles the code into a new chunk, emits &lt;code&gt;OP_HALT&lt;/code&gt;, and stores the resulting chunk in the parent's constant pool as a &lt;code&gt;VAL_CLOSURE&lt;/code&gt; constant.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;OP_SCOPE&lt;/code&gt; uses variable-length encoding: a spawn count, a sync body chunk index, and one chunk index per spawn body. At runtime, the VM:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Exports locals&lt;/strong&gt; to the global environment using the &lt;code&gt;local_names&lt;/code&gt; debug table, so sub-chunks can access parent variables via &lt;code&gt;OP_GET_GLOBAL&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Runs the sync body&lt;/strong&gt; (if present) via a recursive &lt;code&gt;vm_run()&lt;/code&gt; call&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Spawns threads&lt;/strong&gt; for each spawn body, each running on a cloned VM&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Joins&lt;/strong&gt; all threads and propagates errors&lt;/li&gt;
&lt;/ol&gt;
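&lt;p&gt;The &lt;code&gt;OP_SCOPE&lt;/code&gt; operand layout described above (spawn count, sync body index, then one index per spawn body) can be sketched as a small decoder; the one-byte operand widths here are an assumption for illustration:&lt;/p&gt;

```c
#include <stdint.h>

/* Sketch: decode OP_SCOPE's variable-length operands from the
 * instruction stream. */
typedef struct {
    uint8_t spawn_count;
    uint8_t sync_index;      /* constant-pool index of the sync body chunk */
    uint8_t spawn_index[8];  /* constant-pool indices of spawn body chunks */
} ScopeOperands;

/* Returns the number of operand bytes consumed. */
static int decode_scope(const uint8_t *ip, ScopeOperands *out) {
    int n = 0;
    out->spawn_count = ip[n++];
    out->sync_index = ip[n++];
    for (int i = 0; i < out->spawn_count; i++)
        out->spawn_index[i] = ip[n++];
    return n;
}
```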
&lt;p&gt;&lt;code&gt;OP_SELECT&lt;/code&gt; similarly encodes per-arm metadata: flags, channel expression chunk index, body chunk index, and binding name index. The VM evaluates channel expressions, polls for readiness, and executes the winning arm.&lt;/p&gt;
&lt;p&gt;The key insight is that sub-chunks run as &lt;code&gt;FUNC_SCRIPT&lt;/code&gt; without lexical access to the parent's locals. Since they can't use upvalues to reach into the parent frame, the VM exports the parent's live locals into the global environment before running any sub-chunk, using a pushed scope that gets popped after all sub-chunks complete. This is slightly more expensive than true lexical capture, but it keeps the sub-chunks completely self-contained: no AST, no parent frame dependency, fully serializable.&lt;/p&gt;
&lt;h3&gt;Bytecode Serialization&lt;/h3&gt;
&lt;p&gt;With AST dependency eliminated, serialization becomes straightforward. The &lt;code&gt;.latc&lt;/code&gt; binary format starts with an 8-byte header:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;[4C 41 54 43]  magic: "LATC"
[01 00]        format version: 1
[00 00]        reserved
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The rest is a recursive chunk encoding: code length + bytecode bytes, line numbers for source mapping, typed constants (with a one-byte type tag for each), and local name debug info. Constants use seven type tags:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: right;"&gt;Tag&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Encoding&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;0&lt;/td&gt;
&lt;td&gt;Int&lt;/td&gt;
&lt;td&gt;8-byte signed LE&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;1&lt;/td&gt;
&lt;td&gt;Float&lt;/td&gt;
&lt;td&gt;8-byte IEEE 754&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;2&lt;/td&gt;
&lt;td&gt;Bool&lt;/td&gt;
&lt;td&gt;1 byte&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;3&lt;/td&gt;
&lt;td&gt;String&lt;/td&gt;
&lt;td&gt;length-prefixed (u32 + bytes)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;4&lt;/td&gt;
&lt;td&gt;Nil&lt;/td&gt;
&lt;td&gt;no payload&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;5&lt;/td&gt;
&lt;td&gt;Unit&lt;/td&gt;
&lt;td&gt;no payload&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: right;"&gt;6&lt;/td&gt;
&lt;td&gt;Closure&lt;/td&gt;
&lt;td&gt;param count + variadic flag + recursive sub-chunk&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The &lt;code&gt;Closure&lt;/code&gt; tag is what makes this recursive: a function constant contains its parameter metadata followed by a complete serialized sub-chunk. Nested functions serialize naturally to arbitrary depth.&lt;/p&gt;
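&lt;p&gt;As a sketch of the tagged encoding, here is a writer/reader pair for the &lt;code&gt;Int&lt;/code&gt; tag (one tag byte plus an 8-byte signed little-endian payload). The tag values follow the table; the buffer handling is simplified for illustration:&lt;/p&gt;

```c
#include <stdint.h>

/* Type tags from the .latc constant encoding. */
enum { TAG_INT = 0, TAG_FLOAT = 1, TAG_BOOL = 2, TAG_STRING = 3,
       TAG_NIL = 4, TAG_UNIT = 5, TAG_CLOSURE = 6 };

/* Writes an Int constant: tag byte + 8-byte signed little-endian
 * payload. Returns bytes written (always 9 for this tag). */
static int write_int_const(uint8_t *buf, int64_t v) {
    buf[0] = TAG_INT;
    for (int i = 0; i < 8; i++)
        buf[1 + i] = (uint8_t)((uint64_t)v >> (8 * i));
    return 9;
}

static int64_t read_int_const(const uint8_t *buf) {
    uint64_t u = 0;
    for (int i = 0; i < 8; i++)
        u |= (uint64_t)buf[1 + i] << (8 * i);
    return (int64_t)u;
}
```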
&lt;p&gt;The CLI integrates this cleanly:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="c1"&gt;# Compile to .latc&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;compile&lt;span class="w"&gt; &lt;/span&gt;input.lat&lt;span class="w"&gt; &lt;/span&gt;-o&lt;span class="w"&gt; &lt;/span&gt;output.latc

&lt;span class="c1"&gt;# Run pre-compiled bytecode (auto-detects .latc suffix)&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;output.latc

&lt;span class="c1"&gt;# Or compile and run in one step (the default)&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;input.lat
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Loading validates magic bytes, checks the format version, and uses a bounds-checking &lt;code&gt;ByteReader&lt;/code&gt; that produces descriptive error messages for truncated or malformed inputs.&lt;/p&gt;
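&lt;p&gt;A bounds-checking reader in the same spirit might look like this; the field and function names are illustrative, not the actual &lt;code&gt;ByteReader&lt;/code&gt; API:&lt;/p&gt;

```c
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

/* Sketch: every read validates the remaining length first, so a
 * truncated .latc file fails with an error flag instead of reading
 * past the end of the buffer. */
typedef struct {
    const uint8_t *data;
    size_t len, pos;
    bool error;
} ByteReader;

static bool reader_u8(ByteReader *r, uint8_t *out) {
    if (r->error || r->pos + 1 > r->len) { r->error = true; return false; }
    *out = r->data[r->pos++];
    return true;
}

static bool reader_u32_le(ByteReader *r, uint32_t *out) {
    if (r->error || r->pos + 4 > r->len) { r->error = true; return false; }
    *out = (uint32_t)r->data[r->pos]
         | ((uint32_t)r->data[r->pos + 1] << 8)
         | ((uint32_t)r->data[r->pos + 2] << 16)
         | ((uint32_t)r->data[r->pos + 3] << 24);
    r->pos += 4;
    return true;
}
```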
&lt;h3&gt;The Ephemeral Bump Arena&lt;/h3&gt;
&lt;p&gt;String concatenation is a common source of short-lived allocations. An expression like &lt;code&gt;"hello " + name + "!"&lt;/code&gt; creates intermediate strings that are immediately consumed and discarded. In a language with deep-clone-on-read semantics, these temporaries add up.&lt;/p&gt;
&lt;p&gt;The ephemeral bump arena is a simple optimization: string concatenation in &lt;code&gt;OP_ADD&lt;/code&gt; and &lt;code&gt;OP_CONCAT&lt;/code&gt; allocates into a bump arena (&lt;code&gt;vm-&amp;gt;ephemeral&lt;/code&gt;) instead of the general-purpose heap. These allocations are tagged with &lt;code&gt;REGION_EPHEMERAL&lt;/code&gt;, and &lt;code&gt;OP_RESET_EPHEMERAL&lt;/code&gt; (emitted by the compiler at every statement boundary) resets the arena in O(1), reclaiming all temporary strings at once.&lt;/p&gt;
&lt;p&gt;The tricky part is escape analysis. If a temporary string gets assigned to a global variable, stored in an array, or passed to a compiled closure, it needs to be promoted out of the ephemeral arena before the arena is reset. The VM handles this at specific escape points: &lt;code&gt;OP_DEFINE_GLOBAL&lt;/code&gt;, &lt;code&gt;OP_CALL&lt;/code&gt; (for compiled closures), &lt;code&gt;array.push&lt;/code&gt;, and &lt;code&gt;OP_SET_INDEX_LOCAL&lt;/code&gt;. Each of these calls &lt;code&gt;vm_promote_value()&lt;/code&gt;, which deep-clones the string to the regular heap if its region is ephemeral.&lt;/p&gt;
&lt;p&gt;The arena uses a page-based allocator with 4 KB pages. Resetting doesn't free pages; it just moves the bump pointer back to zero, so subsequent allocations reuse the same memory without any &lt;code&gt;malloc&lt;/code&gt;/&lt;code&gt;free&lt;/code&gt; overhead. The full design and safety proof are covered in a &lt;a href="https://tinycomputers.io/papers/lattice_arena_safety.pdf"&gt;companion paper&lt;/a&gt;.&lt;/p&gt;
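&lt;p&gt;The core mechanism fits in a few lines. This sketch uses a single fixed page for brevity, where the real arena chains 4 KB pages:&lt;/p&gt;

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal bump-arena sketch: allocation is a pointer bump, reset is
 * O(1) and keeps the page for reuse. */
#define PAGE_SIZE 4096

typedef struct {
    uint8_t page[PAGE_SIZE];
    size_t used;
} Arena;

static void *arena_alloc(Arena *a, size_t n) {
    n = (n + 7) & ~(size_t)7;                  /* 8-byte alignment */
    if (a->used + n > PAGE_SIZE) return NULL;  /* real arena chains a page */
    void *p = a->page + a->used;
    a->used += n;
    return p;
}

static void arena_reset(Arena *a) { a->used = 0; }  /* O(1): no free() */
```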
&lt;h3&gt;Closures and the Storage Hack&lt;/h3&gt;
&lt;p&gt;The upvalue system hasn't changed architecturally since the &lt;a href="https://tinycomputers.io/posts/from-tree-walker-to-bytecode-vm-compiling-lattice.html"&gt;first VM post&lt;/a&gt;; it's still the Lua-inspired open/closed model where &lt;code&gt;ObjUpvalue&lt;/code&gt; structs start pointing into the stack and get closed (deep-cloned to the heap) when variables go out of scope. But the encoding grew to accommodate the wider instruction set.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;OP_CLOSURE&lt;/code&gt; uses variable-length encoding: a constant pool index for the function's compiled chunk, an upvalue count, and then &lt;code&gt;[is_local, index]&lt;/code&gt; byte pairs for each captured variable. &lt;code&gt;OP_CLOSURE_16&lt;/code&gt; uses a two-byte big-endian function index for chunks with more than 256 constants.&lt;/p&gt;
&lt;p&gt;The storage hack remains unchanged: &lt;code&gt;closure.body&lt;/code&gt; holds NULL, &lt;code&gt;closure.native_fn&lt;/code&gt; holds the Chunk pointer, &lt;code&gt;closure.captured_env&lt;/code&gt; is cast to &lt;code&gt;ObjUpvalue**&lt;/code&gt;, and &lt;code&gt;region_id&lt;/code&gt; stores the upvalue count. A sentinel value &lt;code&gt;VM_NATIVE_MARKER&lt;/code&gt; distinguishes C-native functions from compiled closures:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="cp"&gt;#define VM_NATIVE_MARKER ((struct Expr **)(uintptr_t)0x1)&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;A closure with &lt;code&gt;body == NULL&lt;/code&gt; and &lt;code&gt;native_fn != NULL&lt;/code&gt; is either a C native (if &lt;code&gt;default_values == VM_NATIVE_MARKER&lt;/code&gt;) or a compiled bytecode function (otherwise). This avoids adding VM-specific fields to the &lt;code&gt;LatValue&lt;/code&gt; union, which matters when values are deep-cloned frequently.&lt;/p&gt;
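&lt;p&gt;The discrimination logic can be sketched as follows; the surrounding types are simplified stand-ins for the real structs:&lt;/p&gt;

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Simplified stand-ins for the real structs; field names follow the post. */
struct Expr;
#define VM_NATIVE_MARKER ((struct Expr **)(uintptr_t)0x1)

typedef struct {
    struct Expr *body;            /* NULL for both natives and compiled closures */
    void *native_fn;              /* C function pointer OR Chunk pointer */
    struct Expr **default_values; /* VM_NATIVE_MARKER for C natives */
} Closure;

typedef enum { CALLEE_TREE_WALK, CALLEE_C_NATIVE, CALLEE_BYTECODE } CalleeKind;

CalleeKind classify(const Closure *c) {
    if (c->body != NULL) return CALLEE_TREE_WALK;          /* AST function */
    if (c->default_values == VM_NATIVE_MARKER) return CALLEE_C_NATIVE;
    return CALLEE_BYTECODE;  /* native_fn actually holds a Chunk pointer */
}
```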
&lt;h3&gt;The Self-Hosted Compiler&lt;/h3&gt;
&lt;p&gt;The file &lt;code&gt;compiler/latc.lat&lt;/code&gt; is a bytecode compiler written entirely in Lattice, approximately 2,060 lines that read &lt;code&gt;.lat&lt;/code&gt; source, produce bytecode, and write &lt;code&gt;.latc&lt;/code&gt; files using the same binary format as the C implementation:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="c1"&gt;# Use the self-hosted compiler&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;compiler/latc.lat&lt;span class="w"&gt; &lt;/span&gt;input.lat&lt;span class="w"&gt; &lt;/span&gt;output.latc

&lt;span class="c1"&gt;# Run the result&lt;/span&gt;
clat&lt;span class="w"&gt; &lt;/span&gt;output.latc
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The architecture mirrors the C compiler: lexing via the built-in &lt;code&gt;tokenize()&lt;/code&gt; function, a recursive-descent parser, single-pass code emission, and scope management with upvalue resolution. But Lattice's value semantics required some creative workarounds.&lt;/p&gt;
&lt;p&gt;The biggest constraint is that structs and maps are pass-by-value. In C, the compiler uses a &lt;code&gt;Compiler&lt;/code&gt; struct with mutable fields: local arrays, scope depth, a chunk pointer. In Lattice, passing a struct to a function creates a copy, so mutations in the callee don't propagate back. The self-hosted compiler works around this with parallel global arrays: &lt;code&gt;code&lt;/code&gt;, &lt;code&gt;constants&lt;/code&gt;, &lt;code&gt;c_lines&lt;/code&gt;, &lt;code&gt;local_names&lt;/code&gt;, &lt;code&gt;local_depths&lt;/code&gt;, &lt;code&gt;local_captured&lt;/code&gt;. Since array mutations via &lt;code&gt;.push()&lt;/code&gt; and index assignment are in-place (via &lt;code&gt;resolve_lvalue&lt;/code&gt;), global arrays work where structs don't.&lt;/p&gt;
&lt;p&gt;Nested function compilation uses explicit &lt;code&gt;save_compiler()&lt;/code&gt; / &lt;code&gt;restore_compiler()&lt;/code&gt; functions that copy all global arrays to local temporaries and back. It's verbose but correct. The Buffer type (used for serialization output) is also pass-by-value, so a global &lt;code&gt;ser_buf&lt;/code&gt; accumulates serialized bytes across function calls.&lt;/p&gt;
&lt;p&gt;Other language constraints: no &lt;code&gt;else if&lt;/code&gt; (requires &lt;code&gt;else { if ... }&lt;/code&gt; or &lt;code&gt;match&lt;/code&gt;), mandatory type annotations on function parameters (&lt;code&gt;fn foo(a: any)&lt;/code&gt;), and &lt;code&gt;test&lt;/code&gt; is a keyword so you can't use it as an identifier.&lt;/p&gt;
&lt;p&gt;The self-hosted compiler currently handles expressions, variables, functions with closures, control flow (if/else, while, loop, for, break, continue, match), structs, enums, exceptions, defer, string interpolation, and imports. Not yet implemented: concurrency primitives and advanced phase operations (react, bond, seed). The bootstrapping chain is:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;latc.lat → [C VM interprets] → output.latc → [C VM executes]
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Full self-hosting (where &lt;code&gt;latc.lat&lt;/code&gt; compiles itself) requires adding concurrency support and closing the remaining feature gaps.&lt;/p&gt;
&lt;h3&gt;The VM Execution Engine&lt;/h3&gt;
&lt;p&gt;The VM maintains a 4,096-slot value stack, a 256-frame call stack, an exception handler stack (64 entries), a defer stack (256 entries), a global environment, the open upvalue linked list, the ephemeral arena, and a module cache. A pre-allocated &lt;code&gt;fast_args[16]&lt;/code&gt; buffer avoids heap allocation for most native function calls.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;OP_CALL&lt;/code&gt; instruction discriminates three callee types. Native C functions (marked with &lt;code&gt;VM_NATIVE_MARKER&lt;/code&gt;) get the fast path: arguments are popped into &lt;code&gt;fast_args&lt;/code&gt;, the C function pointer is invoked, and the return value is pushed. No call frame allocated. Compiled closures get the full treatment: the VM promotes ephemeral values in the current frame (so the callee's &lt;code&gt;OP_RESET_EPHEMERAL&lt;/code&gt; doesn't invalidate the caller's temporaries), then pushes a new &lt;code&gt;CallFrame&lt;/code&gt; with the instruction pointer at byte 0 of the callee's chunk. Callable structs look up a constructor-named field and dispatch accordingly.&lt;/p&gt;
&lt;p&gt;Exception handling uses a handler stack. &lt;code&gt;OP_PUSH_EXCEPTION_HANDLER&lt;/code&gt; records the current IP, chunk, call frame index, and stack top. When &lt;code&gt;OP_THROW&lt;/code&gt; executes, the nearest handler is popped, the call frame and value stacks are unwound, the error value is pushed, and execution resumes at the handler's saved IP. Deferred blocks interact correctly: &lt;code&gt;OP_DEFER_RUN&lt;/code&gt; executes all defer entries registered at or above the current frame before the frame is popped by &lt;code&gt;OP_RETURN&lt;/code&gt;.&lt;/p&gt;
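&lt;p&gt;The unwinding mechanics can be sketched like this, with illustrative names and the pushed error value reduced to a stack-height bump:&lt;/p&gt;

```c
#include <stddef.h>
#include <assert.h>

/* Sketch of the handler stack: each entry records where to resume and how
 * far to unwind. Names are illustrative, not the VM's actual identifiers. */
#define MAX_HANDLERS 64

typedef struct {
    size_t saved_ip;     /* where the catch block begins */
    size_t frame_index;  /* call-frame depth to unwind to */
    size_t stack_top;    /* value-stack height to restore */
} Handler;

typedef struct {
    Handler handlers[MAX_HANDLERS];
    size_t handler_count;
    size_t frame_count;  /* call-frame stack depth */
    size_t stack_count;  /* value-stack height */
    size_t ip;
} VM;

void push_handler(VM *vm, size_t catch_ip) {
    Handler *h = &vm->handlers[vm->handler_count++];
    h->saved_ip = catch_ip;
    h->frame_index = vm->frame_count;
    h->stack_top = vm->stack_count;
}

/* Returns 0 if a handler caught the throw, -1 if it reaches top level. */
int vm_throw(VM *vm) {
    if (vm->handler_count == 0) return -1;
    Handler *h = &vm->handlers[--vm->handler_count];
    vm->frame_count = h->frame_index;  /* unwind call frames */
    vm->stack_count = h->stack_top;    /* unwind value stack */
    vm->stack_count++;                 /* push the error value (elided) */
    vm->ip = h->saved_ip;              /* resume at the catch block */
    return 0;
}
```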
&lt;p&gt;Iterators avoid closure allocation entirely. &lt;code&gt;OP_ITER_INIT&lt;/code&gt; converts a range or array into an internal iterator occupying two stack slots (collection + cursor index). &lt;code&gt;OP_ITER_NEXT&lt;/code&gt; advances the cursor and pushes the next element, or jumps to a specified offset when the iterator is exhausted. The tree-walker used closure-based iterators for &lt;code&gt;for&lt;/code&gt; loops; the bytecode version is simpler and avoids the allocation.&lt;/p&gt;
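&lt;p&gt;A sketch of the two-slot protocol, reduced to plain C parameters (the real opcodes operate on adjacent stack slots rather than function arguments):&lt;/p&gt;

```c
#include <stddef.h>
#include <assert.h>

/* Collection + cursor, the two pieces of state the VM keeps on the stack.
 * Illustrative names; element type reduced to int. */
typedef struct { const int *items; size_t len; } Array;

/* Returns 1 and writes the next element, or 0 when exhausted (where the
 * real OP_ITER_NEXT would jump to the loop-exit offset instead). */
int iter_next(const Array *arr, size_t *cursor, int *out) {
    if (*cursor >= arr->len) return 0;
    *out = arr->items[(*cursor)++];
    return 1;
}
```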
&lt;h3&gt;Ref&amp;lt;T&amp;gt;: The Escape Hatch from Value Semantics&lt;/h3&gt;
&lt;p&gt;Everything described so far operates in a world where values are deep-cloned on every read. Maps are pass-by-value. Structs are pass-by-value. Pass a collection to a function and the function gets its own copy; mutations don't propagate back. This is correct and eliminates aliasing bugs, but it creates a real problem: how do you share mutable state when you actually need to?&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Ref&amp;lt;T&amp;gt;&lt;/code&gt; is the answer. It's a reference-counted shared mutable wrapper, the one type in Lattice that deliberately breaks value semantics:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;LatRef&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="c1"&gt;// the wrapped inner value&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="n"&gt;refcount&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="c1"&gt;// reference count&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;When a &lt;code&gt;Ref&lt;/code&gt; is cloned (which happens on every variable read, like everything else), the VM bumps the refcount and copies the pointer. It does &lt;em&gt;not&lt;/em&gt; deep-clone the inner value. Multiple copies of a &lt;code&gt;Ref&lt;/code&gt; share the same underlying &lt;code&gt;LatRef&lt;/code&gt;, so mutations through one are visible through all others. This is the explicit opt-in to reference semantics that the rest of the language avoids.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;let r = Ref::new([1, 2, 3])
let r2 = r              // shallow copy — same LatRef
r.push(4)
print(r2.get())          // [1, 2, 3, 4] — shared state
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The VM provides transparent proxying: &lt;code&gt;OP_INDEX&lt;/code&gt;, &lt;code&gt;OP_SET_INDEX&lt;/code&gt;, and &lt;code&gt;OP_INVOKE&lt;/code&gt; all check for &lt;code&gt;VAL_REF&lt;/code&gt; and delegate to the inner value. Indexing into a &lt;code&gt;Ref&amp;lt;Array&amp;gt;&lt;/code&gt; indexes the inner array. Calling &lt;code&gt;.push()&lt;/code&gt; on a &lt;code&gt;Ref&amp;lt;Array&amp;gt;&lt;/code&gt; mutates the inner array directly. At the language level, a Ref mostly behaves like the value it wraps; you just get shared mutation instead of isolated copies.&lt;/p&gt;
&lt;p&gt;Ref has its own methods (&lt;code&gt;get()&lt;/code&gt;/&lt;code&gt;deref()&lt;/code&gt; to clone the inner value out, &lt;code&gt;set(v)&lt;/code&gt; to replace it, &lt;code&gt;inner_type()&lt;/code&gt; to inspect the wrapped type) plus proxied methods for whatever the inner value supports (map &lt;code&gt;set&lt;/code&gt;/&lt;code&gt;get&lt;/code&gt;/&lt;code&gt;keys&lt;/code&gt;, array &lt;code&gt;push&lt;/code&gt;/&lt;code&gt;pop&lt;/code&gt;, etc.).&lt;/p&gt;
&lt;p&gt;The phase system applies to Refs too. Freezing a Ref blocks all mutation: &lt;code&gt;set()&lt;/code&gt;, &lt;code&gt;push()&lt;/code&gt;, index assignment all check &lt;code&gt;obj-&amp;gt;phase == VTAG_CRYSTAL&lt;/code&gt; and error with "cannot set on a frozen Ref." This makes frozen Refs safe to share across concurrent boundaries; they're immutable handles to immutable data.&lt;/p&gt;
&lt;p&gt;This introduces a third memory management strategy alongside the dual-heap (mark-and-sweep for fluid values, arenas for crystal values) and the ephemeral bump arena. Refs use reference counting: &lt;code&gt;ref_retain()&lt;/code&gt; on clone, &lt;code&gt;ref_release()&lt;/code&gt; on free, with the inner value freed when the count hits zero. It's a deliberate trade-off: reference counting is simple and deterministic, and since Refs are the uncommon case (most Lattice code uses value semantics), the lack of cycle collection hasn't been an issue in practice.&lt;/p&gt;
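&lt;p&gt;The retain/release discipline can be sketched like this, with &lt;code&gt;LatValue&lt;/code&gt; reduced to a plain integer payload:&lt;/p&gt;

```c
#include <stdlib.h>
#include <assert.h>

/* Sketch of Ref's reference counting; field names follow the post, but
 * the wrapped LatValue is reduced to an int for illustration. */
typedef struct {
    int value;       /* stand-in for the wrapped inner value */
    size_t refcount;
} LatRef;

LatRef *ref_new(int v) {
    LatRef *r = malloc(sizeof(LatRef));
    r->value = v;
    r->refcount = 1;
    return r;
}

/* Cloning a Ref copies the pointer and bumps the count -- no deep clone. */
LatRef *ref_retain(LatRef *r) { r->refcount++; return r; }

/* Frees the cell (and, in the real VM, the inner value) when the count
 * reaches zero. */
void ref_release(LatRef *r) {
    if (--r->refcount == 0) free(r);
}
```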
&lt;h3&gt;Validation&lt;/h3&gt;
&lt;p&gt;The VM is validated by &lt;strong&gt;815 tests&lt;/strong&gt; covering every feature: arithmetic, closures, upvalues, phase transitions, exception handling, defer, iterators, data structures, concurrency, modules, bytecode serialization, and the self-hosted compiler.&lt;/p&gt;
&lt;p&gt;All 815 tests pass under both normal compilation and AddressSanitizer builds (&lt;code&gt;make asan&lt;/code&gt;), which dynamically checks for heap buffer overflows, use-after-free, stack buffer overflows, and memory leaks. For a VM with manual memory management, upvalue lifetime tracking, and an ephemeral arena that reclaims memory at statement boundaries, ASan validation is essential.&lt;/p&gt;
&lt;p&gt;Both execution modes, the bytecode VM (default) and the tree-walker (&lt;code&gt;--tree-walk&lt;/code&gt;), share the same test suite and produce identical results:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;make&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="w"&gt;                &lt;/span&gt;&lt;span class="c1"&gt;# bytecode VM: 815 passed&lt;/span&gt;
make&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;TREE_WALK&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="c1"&gt;# tree-walker: 815 passed&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Feature parity is complete:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th style="text-align: center;"&gt;Tree-walker&lt;/th&gt;
&lt;th style="text-align: center;"&gt;Bytecode VM&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Phase system (freeze/thaw/clone/forge)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Closures with upvalues&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Exception handling (try/catch/throw)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Defer blocks&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pattern matching&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structs with methods&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enums with payloads&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Arrays, maps, tuples, sets, buffers&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Iterators (for-in, ranges)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Module imports&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Concurrency (scope/spawn/select)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Channels&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase reactions/bonds/seeds&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Contracts (require/ensure)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Variable tracking (history)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bytecode serialization (.latc)&lt;/td&gt;
&lt;td style="text-align: center;"&gt;---&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Computed goto dispatch&lt;/td&gt;
&lt;td style="text-align: center;"&gt;---&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ephemeral bump arena&lt;/td&gt;
&lt;td style="text-align: center;"&gt;---&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Specialized integer ops&lt;/td&gt;
&lt;td style="text-align: center;"&gt;---&lt;/td&gt;
&lt;td style="text-align: center;"&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The last four rows are VM-only features that have no tree-walker equivalent.&lt;/p&gt;
&lt;h3&gt;What's Next&lt;/h3&gt;
&lt;p&gt;The VM is feature-complete but not performance-optimized. The obvious next steps are register allocation to reduce stack traffic, type-specialized dispatch paths guided by runtime profiling, tail call optimization for recursive patterns, and constant pool deduplication across compilation units. Further out, the bytecode provides a natural intermediate representation for JIT compilation.&lt;/p&gt;
&lt;p&gt;On the self-hosting front, adding concurrency primitives to &lt;code&gt;latc.lat&lt;/code&gt; would close the gap to full self-compilation, where the Lattice compiler compiles itself, producing a &lt;code&gt;.latc&lt;/code&gt; file that can then compile other programs without the C implementation in the loop.&lt;/p&gt;
&lt;p&gt;The full technical details (including encoding diagrams, the complete opcode listing, compilation walkthroughs, and references to related work in Lua, CPython, YARV, and WebAssembly) are in the &lt;a href="https://tinycomputers.io/papers/lattice_vm.pdf"&gt;research paper&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The source code is at &lt;a href="https://baud.rs/fIe3gx"&gt;github.com/ajokela/lattice&lt;/a&gt;, and the project site is at &lt;a href="https://baud.rs/bwvnYT"&gt;lattice-lang.org&lt;/a&gt;.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;git clone https://github.com/ajokela/lattice.git
cd lattice &amp;amp;&amp;amp; make
./clat
&lt;/pre&gt;&lt;/div&gt;</description><category>bytecode</category><category>c</category><category>closures</category><category>compilers</category><category>concurrency</category><category>interpreters</category><category>language design</category><category>lattice</category><category>phase system</category><category>programming languages</category><category>self-hosting</category><category>serialization</category><category>upvalues</category><category>virtual machine</category><guid>https://tinycomputers.io/posts/a-stack-based-bytecode-vm-for-lattice.html</guid><pubDate>Fri, 20 Feb 2026 18:00:00 GMT</pubDate></item><item><title>Mutability as a First-Class Concept: The Lattice Phase System</title><link>https://tinycomputers.io/posts/mutability-as-a-first-class-concept-the-lattice-phase-system.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/mutability-as-a-first-class-concept-the-lattice-phase-system_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;11 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;h2&gt;Mutability as a First-Class Concept: The Lattice Phase System&lt;/h2&gt;
&lt;p&gt;Most programming languages treat mutability as a binary annotation. You write &lt;code&gt;const&lt;/code&gt; or &lt;code&gt;let&lt;/code&gt;, &lt;code&gt;final&lt;/code&gt; or &lt;code&gt;var&lt;/code&gt;, and the compiler enforces it statically. Rust goes further with its borrow checker, enforcing exclusive mutable access at compile time. JavaScript offers &lt;code&gt;Object.freeze()&lt;/code&gt;, a runtime operation that's shallow by default and provides no mechanism for observation or validation. These are all useful tools, but they share a common limitation: mutability is something you &lt;em&gt;declare&lt;/em&gt;, not something you &lt;em&gt;work with&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;In &lt;a href="https://baud.rs/bwvnYT"&gt;Lattice&lt;/a&gt;, I've been building something different. Mutability (what Lattice calls &lt;em&gt;phase&lt;/em&gt;) is a first-class runtime property that can be queried, constrained, validated, coordinated across variables, observed reactively, and even tracked historically. Over the last several releases (v0.2.3 through v0.2.6), this system has grown from simple freeze/thaw semantics into a full lifecycle framework. This post walks through that progression and the design decisions behind it.&lt;/p&gt;
&lt;h3&gt;The Metaphor: Crystallization&lt;/h3&gt;
&lt;p&gt;Lattice is built around the metaphor of crystallization. Values begin in a &lt;strong&gt;fluid&lt;/strong&gt; state (mutable) and can be &lt;strong&gt;frozen&lt;/strong&gt; into a &lt;strong&gt;crystal&lt;/strong&gt; state (immutable). The &lt;code&gt;thaw()&lt;/code&gt; operation creates a mutable copy of a crystal value, and &lt;code&gt;clone()&lt;/code&gt; performs a deep copy regardless of phase. This vocabulary isn't just cosmetic; it shapes how you think about data lifecycle in a Lattice program.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux temperature = 72.5       // fluid: mutable
temperature = 68.0             // allowed

freeze(temperature)            // now crystal: immutable
// temperature = 70.0          // ERROR: cannot mutate crystal value

flux copy = thaw(temperature)  // new fluid copy
copy = 70.0                    // allowed
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;flux&lt;/code&gt; keyword declares a fluid (mutable) binding. The &lt;code&gt;fix&lt;/code&gt; keyword declares a crystal (immutable) binding. And &lt;code&gt;let&lt;/code&gt; infers phase from context: fluid if the value is fluid, crystal if crystal. This alone isn't novel. What makes Lattice's approach interesting is everything that builds on top of it.&lt;/p&gt;
&lt;h3&gt;Phase Constraints: Mutability in Your Type Signatures&lt;/h3&gt;
&lt;p&gt;The first major addition (v0.2.3) was phase constraints on function parameters. In most languages, a function that receives data has no way to express whether it expects mutable or immutable input. You might document it, or rely on convention, but the language doesn't help. In Lattice, you can annotate parameters with their expected phase:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn mutate(data: flux Map) {
    data.set("modified", true)
}

fn inspect(data: fix Map) {
    print(data.get("name"))
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The runtime checks phase at call time. Pass a crystal value to &lt;code&gt;mutate()&lt;/code&gt; and you get an error. Pass a fluid value to &lt;code&gt;inspect()&lt;/code&gt; and it works fine. A fluid value is compatible with a &lt;code&gt;fix&lt;/code&gt; parameter because it &lt;em&gt;can&lt;/em&gt; be read. The constraint is about what the function &lt;em&gt;needs&lt;/em&gt;, not what the caller &lt;em&gt;has&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;The shorthand syntax uses &lt;code&gt;~&lt;/code&gt; for flux and &lt;code&gt;*&lt;/code&gt; for fix:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn process(data: ~Map) { ... }  // needs mutable
fn display(data: *Map) { ... }  // needs immutable
&lt;/pre&gt;&lt;/div&gt;

&lt;h4&gt;Phase-Dependent Dispatch&lt;/h4&gt;
&lt;p&gt;Phase constraints enable something more powerful: dispatch based on runtime phase. You can define multiple implementations of the same function with different phase signatures, and the runtime selects the best match:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="n"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;~&lt;/span&gt;&lt;span class="n"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mutable&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;can&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;before&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;serializing&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"serialized_at"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;time_now&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json_stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;fn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;immutable&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;directly&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;no&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;side&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;effects&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="nb"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json_stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;new&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"host"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;
&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;overload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;adds&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;timestamp&lt;/span&gt;

&lt;span class="n"&gt;freeze&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;      &lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;calls&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;fix&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;overload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;no&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;mutation&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The overload resolution uses a scoring system. An exact phase match (fluid argument to flux parameter) scores highest. A compatible match (fluid to unphased) scores lower. An incompatible match (crystal to flux) is rejected entirely. When multiple overloads exist, the best-scoring one wins.&lt;/p&gt;
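&lt;p&gt;The scoring idea can be sketched as a small C function; the numeric scores are illustrative, not the runtime's actual values:&lt;/p&gt;

```c
#include <assert.h>

/* Phase-based overload scoring: exact match beats compatible match,
 * incompatible candidates are rejected. Names and scores are illustrative. */
typedef enum { PHASE_FLUID, PHASE_CRYSTAL } Phase;
typedef enum { CONS_NONE, CONS_FLUX, CONS_FIX } Constraint;

#define SCORE_REJECT -1

int score(Phase arg, Constraint param) {
    switch (param) {
    case CONS_FLUX:  /* needs mutable: crystal arguments are rejected */
        return arg == PHASE_FLUID ? 2 : SCORE_REJECT;
    case CONS_FIX:   /* needs readable: both qualify, crystal is exact */
        return arg == PHASE_CRYSTAL ? 2 : 1;
    case CONS_NONE:  /* unphased parameter: compatible with anything */
        return 1;
    }
    return SCORE_REJECT;
}
```

&lt;p&gt;Resolution then picks the candidate with the highest score, which is how &lt;code&gt;serialize(config)&lt;/code&gt; above lands on a different overload before and after the freeze.&lt;/p&gt;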
&lt;p&gt;This is genuinely useful in practice. A caching layer might have one implementation that updates a cache (requires mutable data) and another that reads through (works with immutable data). A serialization function might add metadata to mutable structures but serialize immutable ones directly. The caller doesn't need to know; the runtime dispatches based on what the data actually &lt;em&gt;is&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;Crystallization Contracts: Validation at the Phase Boundary&lt;/h3&gt;
&lt;p&gt;The next question was: when data freezes, how do you ensure it's in a valid state? In real systems, immutable data often represents finalized configuration, committed transactions, or published records. You want to validate before that transition happens.&lt;/p&gt;
&lt;p&gt;Version 0.2.5 introduced crystallization contracts, validation closures attached to &lt;code&gt;freeze()&lt;/code&gt; with the &lt;code&gt;where&lt;/code&gt; keyword:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="nv"&gt;flux&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;config&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;Map&lt;/span&gt;::&lt;span class="nv"&gt;new&lt;/span&gt;&lt;span class="ss"&gt;()&lt;/span&gt;
&lt;span class="nv"&gt;config&lt;/span&gt;[&lt;span class="s2"&gt;"host"&lt;/span&gt;]&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"localhost"&lt;/span&gt;
&lt;span class="nv"&gt;config&lt;/span&gt;[&lt;span class="s2"&gt;"port"&lt;/span&gt;]&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;
&lt;span class="nv"&gt;config&lt;/span&gt;[&lt;span class="s2"&gt;"workers"&lt;/span&gt;]&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;

&lt;span class="nv"&gt;freeze&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;config&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;where&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="nv"&gt;v&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nv"&gt;v&lt;/span&gt;.&lt;span class="nv"&gt;has&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"host"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;throw&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"config missing 'host'"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;}
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nv"&gt;v&lt;/span&gt;.&lt;span class="nv"&gt;has&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"port"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;throw&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"config missing 'port'"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;}
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;v&lt;/span&gt;[&lt;span class="s2"&gt;"workers"&lt;/span&gt;]&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;{&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;throw&lt;/span&gt;&lt;span class="ss"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"need at least 1 worker"&lt;/span&gt;&lt;span class="ss"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;}
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The contract receives a deep clone of the value (so the validation can't accidentally mutate the original), runs the closure, and if the closure throws, the freeze is aborted and the value remains fluid. If validation passes, the value transitions to crystal.&lt;/p&gt;
&lt;p&gt;This maps cleanly to real-world patterns. Database ORMs validate before persisting. Configuration systems validate before applying. Form submissions validate before accepting. The difference is that in Lattice, this validation is attached to the &lt;em&gt;phase transition itself&lt;/em&gt;, not to a separate method you have to remember to call.&lt;/p&gt;
&lt;p&gt;Contracts compose naturally with the rest of the phase system. You can use them with phase bonds (discussed next) or with phase-dependent dispatch. A function that accepts &lt;code&gt;fix Map&lt;/code&gt; knows its argument passed whatever contract was attached at freeze time.&lt;/p&gt;
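&lt;p&gt;To see the failure path concretely, here is a minimal sketch of a contract rejecting a freeze (the error message is invented for illustration):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux settings = Map::new()
settings["port"] = 0

freeze(settings) where |v| {
    if v["port"] &amp;lt; 1 { throw("port must be positive") }
}
// the contract throws, so the freeze is aborted
// and settings remains fluid and mutable
&lt;/pre&gt;&lt;/div&gt;
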
&lt;h3&gt;Phase Bonds: Coordinated Freezing&lt;/h3&gt;
&lt;p&gt;Individual freeze/thaw operations work well for isolated values, but real programs have related data that should transition together. A web request's headers, body, and metadata should probably all be immutable before you send it. A transaction's debit and credit entries should freeze atomically.&lt;/p&gt;
&lt;p&gt;Phase bonds (also v0.2.5) let you declare these relationships:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux&lt;span class="w"&gt; &lt;/span&gt;header&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;Map::new()
flux&lt;span class="w"&gt; &lt;/span&gt;body&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;Map::new()
flux&lt;span class="w"&gt; &lt;/span&gt;footer&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;Map::new()

header["content-type"]&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;"text/html"
body["content"]&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;"&lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Hello&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;"
footer["timestamp"]&lt;span class="w"&gt; &lt;/span&gt;=&lt;span class="w"&gt; &lt;/span&gt;time_now()

bond(header,&lt;span class="w"&gt; &lt;/span&gt;body,&lt;span class="w"&gt; &lt;/span&gt;footer)

freeze(header)&lt;span class="w"&gt;              &lt;/span&gt;//&lt;span class="w"&gt; &lt;/span&gt;cascades&lt;span class="w"&gt; &lt;/span&gt;to&lt;span class="w"&gt; &lt;/span&gt;body&lt;span class="w"&gt; &lt;/span&gt;AND&lt;span class="w"&gt; &lt;/span&gt;footer
print(phase_of(body))&lt;span class="w"&gt;       &lt;/span&gt;//&lt;span class="w"&gt; &lt;/span&gt;"crystal"
print(phase_of(footer))&lt;span class="w"&gt;     &lt;/span&gt;//&lt;span class="w"&gt; &lt;/span&gt;"crystal"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;bond(target, ...deps)&lt;/code&gt; call links dependencies to a target. When the target freezes, all its dependencies freeze too. Bonds are also transitive: if A is bonded to B and B is bonded to C, freezing A cascades through B to C.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux a = 1
flux b = 2
flux c = 3

bond(a, b)    // b depends on a
bond(b, c)    // c depends on b

freeze(a)     // freezes a → b → c
print(phase_of(c))  // "crystal"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;You can remove bonds with &lt;code&gt;unbond()&lt;/code&gt; before the freeze happens:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;bond(header, body, footer)
unbond(header, footer)    // footer no longer cascades

freeze(header)            // freezes header and body, NOT footer
print(phase_of(footer))   // "fluid"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Bonds solve a coordination problem that most languages leave to discipline. In a typical codebase, you'd need to remember to freeze all related values, or wrap them in a container and freeze that. Bonds make the relationship explicit and enforced.&lt;/p&gt;
&lt;h3&gt;Phase Reactions: Observing State Transitions&lt;/h3&gt;
&lt;p&gt;With constraints, contracts, and bonds, you can control &lt;em&gt;how&lt;/em&gt; and &lt;em&gt;when&lt;/em&gt; phase transitions happen. But sometimes you also need to know &lt;em&gt;that&lt;/em&gt; they happened. Logging, cache invalidation, UI updates, audit trails: these are all responses to state changes.&lt;/p&gt;
&lt;p&gt;Version 0.2.6 adds phase reactions: callbacks that fire automatically when a variable's phase changes.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux data = [1, 2, 3]

react(data, |phase, val| {
    print("data is now " + phase + ": " + to_string(val))
})

freeze(data)   // prints: "data is now crystal: [1, 2, 3]"
thaw(data)     // prints: "data is now fluid: [1, 2, 3]"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The callback receives two arguments: the new phase name (as a string, "crystal" or "fluid") and a deep clone of the current value. Multiple callbacks can be registered on the same variable, and they fire in registration order:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux counter = 0

react(counter, |phase, val| {
    print("logger: counter is now " + phase)
})

react(counter, |phase, val| {
    if phase == "crystal" {
        print("audit: counter finalized at " + to_string(val))
    }
})

counter = 42
freeze(counter)
// prints:
//   logger: counter is now crystal
//   audit: counter finalized at 42
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Reactions also fire during bond cascades. If variable B is bonded to A and has a reaction registered, freezing A will cascade to B and trigger B's reaction:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux primary = Map::new()
flux replica = Map::new()

bond(primary, replica)

react(replica, |phase, val| {
    print("replica transitioned to " + phase)
})

freeze(primary)
// prints: "replica transitioned to crystal"
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is a powerful combination. Bonds handle the &lt;em&gt;coordination&lt;/em&gt; of transitions, and reactions handle the &lt;em&gt;observation&lt;/em&gt;. Together they let you build systems where phase changes propagate and trigger side effects in a predictable, declarative way.&lt;/p&gt;
&lt;p&gt;Use &lt;code&gt;unreact()&lt;/code&gt; to remove all reactions from a variable:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;react(data, |phase, val| { print("fired") })
unreact(data)
freeze(data)  // no output — reaction was removed
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If a reaction callback throws an error, it propagates as a reaction error, giving you a clean way to handle failures in the observation chain.&lt;/p&gt;
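&lt;p&gt;A sketch of that failure mode (the error wording here is illustrative, not the interpreter's actual output):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux data = [1, 2, 3]

react(data, |phase, val| {
    throw("observer rejected the transition")
})

freeze(data)
// the callback throws mid-transition, so freeze(data)
// surfaces a reaction error identifying the failed callback
&lt;/pre&gt;&lt;/div&gt;
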
&lt;h3&gt;Temporal Values: Phase History and Time Travel&lt;/h3&gt;
&lt;p&gt;The last piece of the phase system (also v0.2.5) is temporal values: the ability to track a variable's phase transitions and value changes over time.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux counter = 0
track("counter")

counter = 10
counter = 20
freeze(counter)

let history = phases("counter")
// [{phase: "fluid", value: 0},
//  {phase: "fluid", value: 10},
//  {phase: "fluid", value: 20},
//  {phase: "crystal", value: 20}]
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;track()&lt;/code&gt; function enables recording for a named variable. Every assignment and phase transition creates a snapshot. The &lt;code&gt;phases()&lt;/code&gt; function returns the full history as an array of maps, and &lt;code&gt;rewind()&lt;/code&gt; lets you retrieve past values by offset:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux x = 100
track("x")
x = 200
x = 300

print(rewind("x", 0))  // 300 (current)
print(rewind("x", 1))  // 200 (one step back)
print(rewind("x", 2))  // 100 (two steps back)
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Temporal values serve primarily as a debugging and auditing tool. When something goes wrong with a frozen value, you can inspect its history to see what mutations happened before the freeze. When testing phase-dependent dispatch, you can verify that the right transitions occurred. In production systems, you can use temporal tracking for audit logs or undo functionality.&lt;/p&gt;
&lt;h3&gt;The Bigger Picture: Why This Matters&lt;/h3&gt;
&lt;p&gt;Most programming languages treat mutability as a compiler concern, something to check at build time and forget about. Lattice treats it as a runtime property with the same richness as types or values. This opens up patterns that are difficult or impossible in other languages:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Gradual freezing.&lt;/strong&gt; Data starts fluid, accumulates state through a pipeline, and freezes when it's complete. Contracts validate at the boundary. Bonds ensure related data transitions together. This maps naturally to request processing, form building, transaction assembly, and configuration loading.&lt;/p&gt;
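&lt;p&gt;As a sketch of that pattern, combining a contract with a bond (the request fields are invented for illustration):&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux request = Map::new()
flux headers = Map::new()

request["path"] = "/orders"
headers["accept"] = "application/json"

bond(request, headers)    // headers freeze with the request

freeze(request) where |v| {
    if !v.has("path") { throw("request missing 'path'") }
}
// both request and headers are now crystal
&lt;/pre&gt;&lt;/div&gt;
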
&lt;p&gt;&lt;strong&gt;Observable state transitions.&lt;/strong&gt; Reactions let you attach behavior to phase changes without coupling the code that performs the freeze to the code that responds to it. A module can register a reaction on shared data without knowing who will trigger the freeze or when.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Phase-aware APIs.&lt;/strong&gt; Functions can express their mutability requirements in their signatures and dispatch based on the caller's data. Libraries can offer mutable and immutable code paths transparently.&lt;/p&gt;
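&lt;p&gt;Function definitions haven't appeared in this post, so treat the &lt;code&gt;fn&lt;/code&gt; keyword and the overload syntax below as assumptions; the point is the phase-based dispatch, not the spelling:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fn render(data: fix Map) {
    // crystal path: safe to cache, data cannot change
}

fn render(data: flux Map) {
    // fluid path: treat data as volatile
}

flux live = Map::new()
render(live)            // resolves to the flux overload
render(freeze(live))    // resolves to the fix overload
&lt;/pre&gt;&lt;/div&gt;
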
&lt;p&gt;&lt;strong&gt;Auditability.&lt;/strong&gt; Temporal tracking provides a built-in mechanism for understanding how data evolved, without external logging infrastructure.&lt;/p&gt;
&lt;p&gt;None of these features require abandoning the simple mental model. At its core, Lattice still has fluid and crystal, mutable and immutable. Everything else is opt-in machinery for programs that need more control.&lt;/p&gt;
&lt;h3&gt;Comparison with Other Approaches&lt;/h3&gt;
&lt;p&gt;It's worth comparing this to how other languages handle mutability:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Rust&lt;/th&gt;
&lt;th&gt;JavaScript&lt;/th&gt;
&lt;th&gt;Lattice&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Mutability declaration&lt;/td&gt;
&lt;td&gt;&lt;code&gt;let&lt;/code&gt; / &lt;code&gt;let mut&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;const&lt;/code&gt; / &lt;code&gt;let&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;fix&lt;/code&gt; / &lt;code&gt;flux&lt;/code&gt; / &lt;code&gt;let&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Enforcement&lt;/td&gt;
&lt;td&gt;Compile-time&lt;/td&gt;
&lt;td&gt;Runtime (shallow)&lt;/td&gt;
&lt;td&gt;Runtime (deep)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase transitions&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;&lt;code&gt;Object.freeze()&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;freeze()&lt;/code&gt; / &lt;code&gt;thaw()&lt;/code&gt; / &lt;code&gt;clone()&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Validation on freeze&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Crystallization contracts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coordinated freezing&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Phase bonds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Transition observation&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Phase reactions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase-dependent dispatch&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Overload resolution by phase&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;History tracking&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Temporal values&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Rust's borrow checker is more powerful for preventing data races at compile time; Lattice doesn't attempt that. JavaScript's &lt;code&gt;Object.freeze()&lt;/code&gt; is more pragmatic but also more limited: it's shallow, provides no observation, and offers no coordination. Lattice occupies a different point in the design space: mutability as a &lt;em&gt;domain concept&lt;/em&gt; rather than a &lt;em&gt;compiler constraint&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;Implementation Notes&lt;/h3&gt;
&lt;p&gt;The phase system is implemented in C as part of Lattice's tree-walking interpreter. Phase tags are stored directly on values (&lt;code&gt;VTAG_FLUID&lt;/code&gt;, &lt;code&gt;VTAG_CRYSTAL&lt;/code&gt;, &lt;code&gt;VTAG_UNPHASED&lt;/code&gt;), so phase checks are single comparisons. Bonds are stored as a dynamic array of &lt;code&gt;BondEntry&lt;/code&gt; structs on the evaluator, each mapping a target variable name to its dependencies. Reactions use a similar structure; &lt;code&gt;ReactionEntry&lt;/code&gt; maps a variable name to an array of callback closures. Temporal tracking stores &lt;code&gt;HistorySnapshot&lt;/code&gt; arrays containing phase names and deep-cloned values.&lt;/p&gt;
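&lt;p&gt;In C terms, the bond and reaction tables might look roughly like the sketch below; the field names are guesses based on the description above, not the actual source:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;/* Sketch only: field names are assumptions. */
typedef struct BondEntry {
    char   *target;      /* variable that triggers the cascade */
    char  **deps;        /* names of bonded dependencies */
    size_t  dep_count;
} BondEntry;

typedef struct ReactionEntry {
    char     *name;           /* observed variable */
    LatValue *callbacks;      /* closures, fired in registration order */
    size_t    callback_count;
} ReactionEntry;
&lt;/pre&gt;&lt;/div&gt;
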
&lt;p&gt;The deep cloning is important throughout. Contract validation receives a clone so it can't mutate the original. Reaction callbacks receive clones so observers can't interfere with each other. Temporal snapshots are clones so history is independent of current state. This means the phase system has allocation costs proportional to value size, but it also means the invariants are strong, with no spooky action at a distance.&lt;/p&gt;
&lt;p&gt;Freeze cascading through bonds is recursive, and reactions fire during cascading, so a single &lt;code&gt;freeze()&lt;/code&gt; call can trigger an arbitrary chain of transitions and callbacks. Error propagation is straightforward: if any reaction throws, the error surfaces immediately with context about which reaction failed.&lt;/p&gt;
&lt;h3&gt;What's Next&lt;/h3&gt;
&lt;p&gt;The phase system's core feature set is reaching a natural plateau. There are a few directions I'm considering:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Partial freezing&lt;/strong&gt; already exists in a basic form: you can freeze individual struct fields or map keys while leaving the container mutable. Expanding this to support more granular control (freeze all fields matching a pattern, freeze a subtree) could be useful for large data structures.&lt;/p&gt;
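&lt;p&gt;In its current form that might look like the sketch below; whether &lt;code&gt;freeze&lt;/code&gt; accepts an index expression in exactly this way is an assumption:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux config = Map::new()
config["host"] = "localhost"
config["port"] = 8080

freeze(config["host"])    // only this key crystallizes
config["port"] = 9090     // still allowed: the map itself is fluid
// config["host"] = "remote" would be a phase error
&lt;/pre&gt;&lt;/div&gt;
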
&lt;p&gt;&lt;strong&gt;Phase-aware pattern matching&lt;/strong&gt; lets you match on phase in &lt;code&gt;match&lt;/code&gt; expressions using &lt;code&gt;~&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt; qualifiers. This is already implemented but could be extended with more complex phase patterns.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Compile-time phase inference&lt;/strong&gt; is a longer-term goal. If the interpreter can prove that a value is always crystal by a certain point, it could skip runtime checks. This would bring some of Rust's static guarantees to Lattice without requiring explicit lifetime annotations.&lt;/p&gt;
&lt;p&gt;For now, the phase system provides a cohesive set of tools for working with mutability as a first-class concept. Whether you're building a configuration loader that validates before committing, a pipeline that coordinates related state transitions, or a reactive system that responds to phase changes, Lattice gives you the vocabulary and the enforcement to do it declaratively.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;Lattice is open source and available at &lt;a href="https://baud.rs/bwvnYT"&gt;lattice-lang.org&lt;/a&gt;. The language compiles and runs on macOS and Linux with no dependencies beyond a C11 compiler. You can try it in your browser via the &lt;a href="https://baud.rs/odS816"&gt;playground&lt;/a&gt;, or clone the repo and run &lt;code&gt;make &amp;amp;&amp;amp; ./clat&lt;/code&gt; to start the REPL.&lt;/p&gt;</description><category>language design</category><category>lattice</category><category>mutability</category><category>phase system</category><category>programming languages</category><category>type systems</category><guid>https://tinycomputers.io/posts/mutability-as-a-first-class-concept-the-lattice-phase-system.html</guid><pubDate>Sat, 14 Feb 2026 23:00:00 GMT</pubDate></item><item><title>Introducing Lattice: A Crystallization-Based Programming Language</title><link>https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html?utm_source=feed&amp;utm_medium=rss&amp;utm_campaign=rss</link><dc:creator>A.C. Jokela</dc:creator><description>&lt;p&gt;Most programming languages treat mutability as a binary property. A variable is either mutable or it's not. You declare it one way, and that's the end of the story. Rust adds nuance with its ownership and borrowing model, and functional languages sidestep the question by making everything immutable by default, but the fundamental framing remains the same: mutability is a static attribute decided at declaration time.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://baud.rs/4ysPkF"&gt;Lattice&lt;/a&gt; takes a different approach. In Lattice, mutability is a &lt;em&gt;phase&lt;/em&gt;, a state that a value passes through over its lifetime, like matter transitioning between liquid and solid. A value starts as mutable &lt;strong&gt;flux&lt;/strong&gt;, and when you're done shaping it, you &lt;strong&gt;freeze&lt;/strong&gt; it into immutable &lt;strong&gt;fix&lt;/strong&gt;. Need to modify it again? &lt;strong&gt;Thaw&lt;/strong&gt; it back to flux. Want to build something complex and immutable in one shot? Use a &lt;strong&gt;forge&lt;/strong&gt; block, a controlled mutation zone whose output automatically crystallizes.&lt;/p&gt;
&lt;p&gt;This isn't just a metaphor. The phase system is woven through Lattice's entire runtime, from its type representation to its memory management architecture. This post is a deep dive into what that means, how it works at the implementation level, and why it represents a genuinely different way of thinking about the relationship between mutability and memory.&lt;/p&gt;
&lt;div class="audio-widget"&gt;
&lt;div class="audio-widget-header"&gt;
&lt;span class="audio-widget-icon"&gt;🎧&lt;/span&gt;
&lt;span class="audio-widget-label"&gt;Listen to this article&lt;/span&gt;
&lt;/div&gt;
&lt;audio controls preload="metadata"&gt;
&lt;source src="https://tinycomputers.io/introducing-lattice-a-crystallization-based-programming-language_tts.mp3" type="audio/mpeg"&gt;
&lt;/source&gt;&lt;/audio&gt;
&lt;div class="audio-widget-footer"&gt;36 min · AI-generated narration&lt;/div&gt;
&lt;/div&gt;

&lt;h3&gt;The Problem Lattice Solves&lt;/h3&gt;
&lt;p&gt;Every language designer eventually confronts the same tension: programmers need mutability to build things, but mutability is a leading source of bugs. Shared mutable state causes race conditions. Unexpected mutation causes aliasing bugs. Mutable references that outlive their owners cause use-after-free errors.&lt;/p&gt;
&lt;p&gt;Different languages resolve this tension in different ways, and each approach carries trade-offs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Garbage-collected languages&lt;/strong&gt; (Java, Python, Go, JavaScript) let you mutate freely and use a garbage collector to clean up. This is convenient but pushes the cost to runtime: GC pauses, unpredictable memory usage, and no compile-time guarantees about who can modify what. You gain ease of use but lose control.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://baud.rs/gSnSwR"&gt;Rust's ownership model&lt;/a&gt;&lt;/strong&gt; provides compile-time guarantees through a sophisticated borrow checker. You can have either one mutable reference or many immutable references, but not both. This eliminates data races at compile time, but the cost is complexity: the borrow checker is notoriously difficult for newcomers, lifetime annotations add syntactic weight, and certain patterns (like self-referential structs or graph structures) require unsafe escape hatches.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Functional languages&lt;/strong&gt; (Haskell, Erlang, Clojure) default to immutability and model mutation through controlled mechanisms like monads, processes, or atoms. This rules out whole classes of bugs but can feel unnatural for inherently stateful problems, and persistent data structures carry performance overhead.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;C and C++&lt;/strong&gt; give you full manual control and zero overhead, at the cost of memory safety. &lt;code&gt;const&lt;/code&gt; in C is advisory at best; you can cast it away, and the compiler won't stop you from freeing memory that someone else is still using.&lt;/p&gt;
&lt;p&gt;Lattice's phase system is an attempt to find a different point in this design space. The core insight is that in most programs, values have a natural lifecycle: they're constructed (requiring mutation), then used (requiring stability), and occasionally reconstructed (requiring mutation again). The phase system makes this lifecycle explicit and enforceable.&lt;/p&gt;
&lt;h3&gt;The Phase Model&lt;/h3&gt;
&lt;p&gt;Lattice has three binding keywords that correspond to mutability phases:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;flux&lt;/code&gt;&lt;/strong&gt; declares a mutable binding. A flux variable can be reassigned, and its contents can be modified in place. This is where you do your work: building arrays, populating maps, incrementing counters.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;flux counter = 0
counter += 1
counter += 1
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;fix&lt;/code&gt;&lt;/strong&gt; declares an immutable binding. A fix variable cannot be reassigned, and its contents cannot be modified. Attempting to mutate a fix binding is an error.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="n"&gt;fix&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;freeze&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;3.14159&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;// pi = 2.0  -- error: cannot assign to crystal binding&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;let&lt;/code&gt;&lt;/strong&gt; is the inferred form (available in casual mode). It doesn't enforce a phase; the value keeps whatever phase tag it already has.&lt;/p&gt;
&lt;p&gt;The transitions between phases are explicit function calls:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;freeze(value)&lt;/code&gt;&lt;/strong&gt; transitions a value from fluid to crystal. In strict mode, this is a &lt;em&gt;consuming&lt;/em&gt; operation: the original binding is removed from the environment. You can't accidentally keep a mutable reference to something you've declared immutable.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;thaw(value)&lt;/code&gt;&lt;/strong&gt; creates a mutable deep clone of a crystal value. The original remains frozen; you get a completely independent mutable copy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&lt;code&gt;clone(value)&lt;/code&gt;&lt;/strong&gt; creates a deep copy without changing phase.&lt;/li&gt;
&lt;/ul&gt;
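&lt;p&gt;The &lt;code&gt;thaw&lt;/code&gt; path in particular is worth seeing in code; the original stays crystal while the clone diverges:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fix frozen = freeze([1, 2, 3])

flux copy = thaw(frozen)    // independent mutable deep clone
copy.push(4)

// frozen is still [1, 2, 3] and still crystal
// copy is [1, 2, 3, 4] and fluid
&lt;/pre&gt;&lt;/div&gt;
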
&lt;p&gt;And then there's the &lt;strong&gt;&lt;code&gt;forge&lt;/code&gt;&lt;/strong&gt; block, which is perhaps the most interesting construct:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;fix config = forge {
    flux temp = Map::new()
    temp.set("host", "localhost")
    temp.set("port", "8080")
    temp.set("debug", "true")
    freeze(temp)
}
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;A forge block is a scoped computation whose result is automatically frozen. Inside the forge, you can use flux variables and mutate freely. But whatever value the block produces comes out crystallized. The temporary mutable state is gone; only the finished, immutable result survives.&lt;/p&gt;
&lt;p&gt;This addresses a real pain point. In functional languages, building a complex immutable data structure often requires awkward chains of constructor calls or builder patterns. In Lattice, you just... build it, mutably, in a forge block, and it comes out frozen. The forge acknowledges that construction is inherently a mutable process, while insisting that the &lt;em&gt;result&lt;/em&gt; of construction should be stable.&lt;/p&gt;
&lt;h3&gt;Under the Hood: How the Phase System Maps to Memory&lt;/h3&gt;
&lt;p&gt;Lattice is implemented as a tree-walking interpreter in C, roughly 6,000 lines across the lexer, parser, phase checker, and evaluator. The implementation reveals some interesting design decisions about how phase semantics interact with memory management.&lt;/p&gt;
&lt;h4&gt;Value Representation&lt;/h4&gt;
&lt;p&gt;Every runtime value in Lattice is a &lt;code&gt;LatValue&lt;/code&gt; struct, a tagged union carrying a type tag, a phase tag, and the value payload:&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nc"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;ValueType&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;type&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="c1"&gt;// VAL_INT, VAL_STR, VAL_ARRAY, VAL_MAP, ...&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="n"&gt;PhaseTag&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="n"&gt;phase&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;// VTAG_FLUID, VTAG_CRYSTAL, VTAG_UNPHASED&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;union&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;as&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Primitive values (integers, floats, booleans) live inline in the union, with no heap allocation. Compound values (strings, arrays, structs, maps, closures) own heap-allocated payloads. A string holds a heap-allocated character buffer. An array holds a &lt;code&gt;malloc&lt;/code&gt;'d element buffer. A map holds a pointer to an open-addressing hash table.&lt;/p&gt;
&lt;h4&gt;Deep-Clone-on-Read: Value Semantics Without a Compiler&lt;/h4&gt;
&lt;p&gt;The most consequential design decision in Lattice's runtime is that &lt;strong&gt;every variable read produces a deep clone&lt;/strong&gt;. When you access a variable, the environment doesn't hand you a reference to the stored value. It hands you a complete, independent copy.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;&lt;span class="kt"&gt;bool&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nf"&gt;env_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Env&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;const&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kt"&gt;char&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;size_t&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="n"&gt;LatValue&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;lat_map_get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;env&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;scopes&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="mi"&gt;-1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;out&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;value_deep_clone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;v&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="c1"&gt;// always a fresh copy&lt;/span&gt;
&lt;span class="w"&gt;            &lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="w"&gt;        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="w"&gt;    &lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is expensive. Every array access clones the entire array. Every map read clones every key-value pair. But it eliminates an entire class of bugs. There is no aliasing in Lattice. Two variables never point to the same underlying memory. When you pass a map to a function, the function gets its own copy, and mutations inside the function don't leak back to the caller. When you assign an array to a new variable, you get two independent arrays.&lt;/p&gt;
&lt;p&gt;This is the implementation strategy that makes Lattice's maps value types. In most languages, objects and collections are reference types, and assigning them to a new variable creates a new reference to the same data. In Lattice, assignment means duplication. This is closer to how values work in mathematics than how they work in most programming languages.&lt;/p&gt;
&lt;p&gt;For in-place mutation within a scope (like &lt;code&gt;array.push()&lt;/code&gt; or &lt;code&gt;map.set()&lt;/code&gt;), Lattice uses a separate &lt;code&gt;resolve_lvalue()&lt;/code&gt; mechanism that obtains a direct mutable pointer into the environment's storage, bypassing the deep clone. This means local mutations are efficient; it's only cross-scope communication that pays the cloning cost.&lt;/p&gt;
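&lt;p&gt;A minimal sketch of what copy-on-read value semantics look like in C, using hypothetical names rather than the interpreter's real API: every read hands back an independent deep clone, so mutating the copy can never affect the original.&lt;/p&gt;

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Hypothetical miniature of copy-on-read value semantics: reading a
 * value always produces an independent deep clone, so two variables
 * never alias the same backing memory. */
typedef struct {
    long  *items;
    size_t count;
} MiniArray;

/* Deep clone: fresh allocation, contents copied outright. */
MiniArray mini_array_clone(const MiniArray *src) {
    MiniArray dst;
    dst.count = src->count;
    dst.items = malloc(dst.count * sizeof(long));
    memcpy(dst.items, src->items, dst.count * sizeof(long));
    return dst;
}
```

&lt;p&gt;Mutating the clone leaves the source untouched, which is exactly the no-aliasing guarantee described above, bought at the cost of a full copy per read.&lt;/p&gt;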
&lt;h4&gt;The Dual Heap Architecture&lt;/h4&gt;
&lt;p&gt;Lattice's memory subsystem uses what the implementation calls a &lt;code&gt;DualHeap&lt;/code&gt;: two separate allocation regions with different management strategies:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The FluidHeap&lt;/strong&gt; manages mutable data using a mark-and-sweep garbage collector. It maintains a linked list of all heap allocations, with a mark bit on each. When memory pressure crosses a threshold (1 MB by default), the GC walks all reachable values from the environment and a shadow root stack, marks what's alive, and sweeps everything else.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The RegionManager&lt;/strong&gt; manages immutable data using arena-based regions. Each freeze creates a new region backed by a page-based arena, a linked list of 4 KB pages with bump allocation. When a value is frozen, it is deep-cloned entirely into the region's arena, giving crystal data cache locality and enabling O(1) bulk deallocation when the region becomes unreachable. Regions are collected during GC cycles based on reachability analysis.&lt;/p&gt;
&lt;p&gt;The key insight is that &lt;strong&gt;immutable and mutable data have different lifecycle characteristics&lt;/strong&gt; and benefit from different management strategies. Mutable data changes frequently and has unpredictable lifetimes; mark-and-sweep handles this well. Immutable data, once created, never changes and tends to be long-lived; arena-based region allocation suits this pattern because it enables bulk deallocation and better cache locality.&lt;/p&gt;
&lt;p&gt;This is conceptually similar to generational garbage collection (where young objects are collected differently from old objects), but the split is based on &lt;em&gt;mutability&lt;/em&gt; rather than &lt;em&gt;age&lt;/em&gt;. Lattice's phase tags provide the runtime with information that generational GCs have to infer statistically.&lt;/p&gt;
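&lt;p&gt;To make the arena side concrete, here is a minimal bump allocator over a linked list of fixed-size pages, in the spirit of the &lt;code&gt;RegionManager&lt;/code&gt; described above. Names and sizes are illustrative, not Lattice's actual internals, and alignment is ignored for brevity:&lt;/p&gt;

```c
#include <stdlib.h>
#include <assert.h>

/* Illustrative page-based arena: a linked list of fixed-size pages
 * with bump allocation, freed in bulk rather than object by object. */
#define PAGE_SIZE 4096

typedef struct Page {
    struct Page  *next;
    size_t        used;
    unsigned char data[PAGE_SIZE];
} Page;

typedef struct {
    Page *head;   /* most recently added page */
} Arena;

/* Bump-allocate; start a new page when the current one is full.
 * (Alignment is ignored for brevity.) */
void *arena_alloc(Arena *a, size_t n) {
    if (n > PAGE_SIZE) return NULL;     /* oversize requests not handled */
    if (!a->head || a->head->used + n > PAGE_SIZE) {
        Page *p = malloc(sizeof(Page));
        p->next = a->head;
        p->used = 0;
        a->head = p;
    }
    void *out = a->head->data + a->head->used;
    a->head->used += n;
    return out;
}

/* Bulk deallocation: free the page list, not each individual object. */
void arena_free_all(Arena *a) {
    while (a->head) {
        Page *next = a->head->next;
        free(a->head);
        a->head = next;
    }
}
```

&lt;p&gt;Teardown walks pages rather than objects, which is what gives crystal regions their cheap bulk deallocation when a region becomes unreachable.&lt;/p&gt;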
&lt;p&gt;The following chart shows how this plays out in practice across several benchmark programs. Fluid peak memory represents the high-water mark of the GC-managed heap, while crystal arena data shows how much data has been frozen into arena-backed regions:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Fluid Peak vs Crystal Arena Data" src="https://tinycomputers.io/images/lattice_fluid_vs_crystal.png"&gt;&lt;/p&gt;
&lt;h4&gt;Freeze and Thaw at the Memory Level&lt;/h4&gt;
&lt;p&gt;When you call &lt;code&gt;freeze()&lt;/code&gt; on a value, the runtime creates a new crystal region with a fresh arena, deep-clones the entire value tree into it, sets the &lt;code&gt;phase&lt;/code&gt; field to &lt;code&gt;VTAG_CRYSTAL&lt;/code&gt; on every node, and frees the original fluid heap pointers. The data physically migrates from the fluid heap into arena pages. Freeze is a move operation, not just a metadata flip. This gives frozen data cache locality within contiguous arena pages and completely separates it from the garbage-collected fluid heap.&lt;/p&gt;
&lt;p&gt;But in strict mode, &lt;code&gt;freeze()&lt;/code&gt; is also a &lt;em&gt;consuming&lt;/em&gt; operation. It removes the original binding from the environment and returns the frozen value. This is effectively a move: after &lt;code&gt;freeze(x)&lt;/code&gt;, there is no &lt;code&gt;x&lt;/code&gt; anymore. You can bind the result to a new name (&lt;code&gt;fix y = freeze(x)&lt;/code&gt;), but the mutable original is gone. This prevents a common bug pattern where you freeze a value but accidentally keep mutating the original through a still-live reference.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;thaw()&lt;/code&gt; is more expensive: it performs a complete deep clone of the crystal value and then recursively sets all phase tags to &lt;code&gt;VTAG_FLUID&lt;/code&gt;. The original crystal value is untouched; you get a completely independent mutable copy. This is consistent with the principle that crystal values are permanent. Thawing doesn't melt the original; it creates a new fluid copy.&lt;/p&gt;
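&lt;p&gt;The shape of a consuming, migrating freeze can be sketched in a few lines of C. This is an illustration under assumed names (&lt;code&gt;Val&lt;/code&gt;, &lt;code&gt;freeze_move&lt;/code&gt;, a static stand-in arena), not the actual interpreter code:&lt;/p&gt;

```c
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Illustrative freeze-as-move: the payload is copied into arena
 * storage, the tag flips to CRYSTAL, and the fluid original is
 * released.  Hypothetical names, not the real interpreter API. */
typedef enum { VTAG_FLUID, VTAG_CRYSTAL } Phase;

typedef struct {
    Phase phase;
    char *data;   /* payload: fluid -> malloc'd, crystal -> arena-owned */
} Val;

/* Stand-in for a crystal region's arena: a static buffer and offset. */
static char   arena[4096];
static size_t arena_used = 0;

Val freeze_move(Val *fluid) {
    size_t n = strlen(fluid->data) + 1;
    Val out;
    out.phase = VTAG_CRYSTAL;
    out.data  = arena + arena_used;   /* migrate into arena storage */
    memcpy(out.data, fluid->data, n);
    arena_used += n;
    free(fluid->data);                /* the fluid original is freed */
    fluid->data = NULL;               /* the binding is consumed */
    return out;
}
```

&lt;p&gt;After the call, the crystal copy lives in arena storage and the fluid binding is gone, mirroring what strict mode's consuming &lt;code&gt;freeze(x)&lt;/code&gt; does at the language level.&lt;/p&gt;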
&lt;p&gt;In practice, both operations are fast. Across the benchmark suite, freeze and thaw costs stay well under a millisecond even for complex data structures:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Freeze/Thaw Cost by Benchmark" src="https://tinycomputers.io/images/lattice_freeze_thaw_timing.png"&gt;&lt;/p&gt;
&lt;p&gt;The number and type of phase transitions vary by workload. Some benchmarks are freeze-heavy (building immutable snapshots), others are thaw-heavy (repeatedly modifying frozen state), and some use deep clones for full value duplication:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Phase Transitions by Benchmark" src="https://tinycomputers.io/images/lattice_phase_transitions.png"&gt;&lt;/p&gt;
&lt;h3&gt;How This Compares to Existing Systems&lt;/h3&gt;
&lt;h4&gt;vs. Rust's Ownership and Borrowing&lt;/h4&gt;
&lt;p&gt;Rust solves the mutability problem at compile time through static analysis. The borrow checker ensures that mutable references are unique and that immutable references don't coexist with mutable ones. This gives Rust zero-runtime-cost safety guarantees that Lattice can't match.&lt;/p&gt;
&lt;p&gt;But Rust's approach operates at the reference level. It tracks who has access to data, not the data's intrinsic state. You can have an &lt;code&gt;&amp;amp;mut&lt;/code&gt; to data that is conceptually "done being built," or an &lt;code&gt;&amp;amp;&lt;/code&gt; to data that you wish you could modify. The permission model and the data lifecycle are orthogonal.&lt;/p&gt;
&lt;p&gt;Lattice's phase system operates on the data itself. A frozen value &lt;em&gt;is&lt;/em&gt; immutable, not because the type system prevents you from obtaining a mutable reference, but because the value has transitioned to a state where mutation doesn't apply. This is a simpler mental model at the cost of runtime enforcement rather than compile-time proof.&lt;/p&gt;
&lt;p&gt;The consuming &lt;code&gt;freeze()&lt;/code&gt; in strict mode is reminiscent of Rust's move semantics, where using a value after moving it is a compile error. Lattice achieves a similar effect at runtime: freeze consumes the binding, preventing further mutable access. It's not as strong a guarantee (runtime vs. compile time), but it's the same intuition: once you've declared something immutable, the mutable version shouldn't exist anymore.&lt;/p&gt;
&lt;h4&gt;vs. Garbage Collection&lt;/h4&gt;
&lt;p&gt;Traditional garbage collectors (Java, Go, Python) are phase-agnostic. They track reachability, not mutability. A &lt;code&gt;final&lt;/code&gt; field in Java prevents reassignment but doesn't inform the GC. An immutable object in Python is collected the same way as a mutable one.&lt;/p&gt;
&lt;p&gt;Lattice's dual-heap architecture uses phase information to make better allocation decisions. Crystal values go into arena-managed memory with reachability-based collection. Fluid values go into a mark-and-sweep heap. The GC can reason about immutable data more efficiently because it &lt;em&gt;knows&lt;/em&gt; the data won't change, so it doesn't need to re-scan crystal regions for updated references.&lt;/p&gt;
&lt;p&gt;This is a form of phase-informed memory management that, to my knowledge, doesn't have a direct precedent in mainstream languages. The closest analogy might be Clojure's persistent data structures, which are structurally shared and immutable, but Clojure doesn't use this information to drive its garbage collection strategy differently.&lt;/p&gt;
&lt;h4&gt;vs. Functional Immutability&lt;/h4&gt;
&lt;p&gt;Haskell and other pure functional languages are immutable by default, with mutation confined to monads (&lt;code&gt;IORef&lt;/code&gt;, &lt;code&gt;STRef&lt;/code&gt;) or similar controlled mechanisms. This is elegant but can be awkward for imperative algorithms where you need to build something up step by step.&lt;/p&gt;
&lt;p&gt;Lattice's forge blocks address this directly. Instead of threading a builder through a chain of pure function calls, you write imperative mutation inside a forge and get an immutable result. This acknowledges that construction and consumption are different activities that benefit from different mutability guarantees.&lt;/p&gt;
&lt;p&gt;The philosophical difference is that functional languages treat immutability as the default and mutation as the exception. Lattice treats mutability as a &lt;em&gt;phase&lt;/em&gt; that values pass through: both flux and fix are natural, expected states, and the language provides explicit tools for transitioning between them.&lt;/p&gt;
&lt;h4&gt;vs. C/C++ Manual Memory Management&lt;/h4&gt;
&lt;p&gt;C gives you &lt;code&gt;malloc&lt;/code&gt; and &lt;code&gt;free&lt;/code&gt; and wishes you the best. C++ adds RAII, smart pointers, and &lt;code&gt;const&lt;/code&gt; correctness, but &lt;code&gt;const&lt;/code&gt; in both languages is fundamentally a compiler hint. It can be cast away, and the runtime has no awareness of it. A &lt;code&gt;const&lt;/code&gt; pointer in C doesn't prevent someone else from modifying the data through a non-const pointer to the same memory. The &lt;code&gt;const&lt;/code&gt; is a property of the &lt;em&gt;reference&lt;/em&gt;, not the &lt;em&gt;data&lt;/em&gt;.&lt;/p&gt;
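&lt;p&gt;The aliasing point is easy to demonstrate. In the C fragment below, &lt;code&gt;const&lt;/code&gt; restricts one pointer, not the object, so a write through a second, non-const alias is visible through the "const" view:&lt;/p&gt;

```c
#include <assert.h>

/* const qualifies the reference, not the data: ro promises that *we*
 * won't write through it, but says nothing about other aliases. */
int        shared = 1;
const int *ro = &shared;   /* read-only view of shared */
int       *rw = &shared;   /* writable alias of the same object */
```

&lt;p&gt;Writing &lt;code&gt;*rw = 99&lt;/code&gt; and then reading &lt;code&gt;*ro&lt;/code&gt; yields 99: the immutability was never a property of the data. Lattice's crystal tag, by contrast, travels with the value itself.&lt;/p&gt;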
&lt;p&gt;Lattice's phase tags live on the data itself. When a value is crystal, it's crystal regardless of how you access it. There's no way to "cast away" a freeze; the only path back to mutability is &lt;code&gt;thaw()&lt;/code&gt;, which creates a new independent copy. This is a stronger guarantee than &lt;code&gt;const&lt;/code&gt; provides, because it operates on values rather than references.&lt;/p&gt;
&lt;p&gt;C++ move semantics share DNA with Lattice's consuming &lt;code&gt;freeze()&lt;/code&gt; in strict mode. A &lt;code&gt;std::move&lt;/code&gt; in C++ transfers ownership of resources, leaving the source in a valid-but-unspecified state. Lattice's strict freeze does something similar: it removes the binding entirely, ensuring the mutable version ceases to exist. But where C++ moves are primarily about avoiding copies for performance, Lattice's consuming freeze is about semantic correctness, ensuring that the transition from mutable to immutable is clean and total. Scott Meyers' &lt;a href="https://baud.rs/OK4IwA"&gt;Effective Modern C++&lt;/a&gt; remains the best guide to understanding these move semantics and other modern C++ patterns that Lattice's design draws from.&lt;/p&gt;
&lt;h4&gt;The Static Phase Checker&lt;/h4&gt;
&lt;p&gt;It's worth noting that Lattice doesn't rely solely on runtime enforcement. Before any code executes, a static phase checker walks the AST and catches phase violations at analysis time. This checker maintains its own scope stack mapping variable names to their declared phases and rejects programs that attempt to reassign crystal bindings, freeze already-frozen values, thaw already-fluid values, or use &lt;code&gt;let&lt;/code&gt; in strict mode where an explicit phase declaration is required.&lt;/p&gt;
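&lt;p&gt;As a rough illustration of the scope-stack idea (not the checker's real data structures), a few dozen lines of C suffice to reject reassignment of a crystal binding before anything runs:&lt;/p&gt;

```c
#include <string.h>
#include <assert.h>

/* Toy scope stack for static phase checking: each scope maps names to
 * declared phases, and assignment to a crystal (fix) binding is
 * rejected at analysis time.  Structure is illustrative only. */
typedef enum { P_FLUX, P_FIX } DeclPhase;

typedef struct { const char *name; DeclPhase phase; } Binding;

typedef struct {
    Binding items[64];   /* bindings, innermost scope last */
    int     counts[8];   /* bindings per open scope */
    int     depth;       /* number of open scopes */
    int     total;       /* total bindings across all scopes */
} PhaseEnv;

void scope_push(PhaseEnv *e) { e->counts[e->depth++] = 0; }
void scope_pop(PhaseEnv *e)  { e->total -= e->counts[--e->depth]; }

void declare(PhaseEnv *e, const char *name, DeclPhase p) {
    e->items[e->total].name  = name;
    e->items[e->total].phase = p;
    e->total++;
    e->counts[e->depth - 1]++;
}

/* Walk bindings innermost-out; assignment is legal only to flux. */
int assign_ok(const PhaseEnv *e, const char *name) {
    for (int i = e->total - 1; i >= 0; i--)
        if (strcmp(e->items[i].name, name) == 0)
            return e->items[i].phase == P_FLUX;
    return 0;   /* undeclared */
}
```

&lt;p&gt;The real checker also rejects freezing already-frozen values, thawing already-fluid ones, and bare &lt;code&gt;let&lt;/code&gt; in strict mode, but the mechanism is the same: a name-to-phase map per scope, consulted before evaluation.&lt;/p&gt;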
&lt;p&gt;The static checker also enforces spawn boundaries. If Lattice's concurrency model (&lt;code&gt;spawn&lt;/code&gt;) is used, fluid bindings from the enclosing scope cannot be captured across the spawn point. Only crystal values can be shared into spawned computations. This is checked &lt;em&gt;before&lt;/em&gt; evaluation begins, catching potential data races at analysis time rather than at runtime.&lt;/p&gt;
&lt;p&gt;This two-layer approach (static checking before evaluation, runtime enforcement during) provides confidence without requiring a full type system or borrow checker. It catches the obvious mistakes early and enforces the subtle invariants at runtime. For the theoretical foundations behind this kind of phase-based type analysis, Benjamin Pierce's &lt;a href="https://baud.rs/oMfDwe"&gt;Types and Programming Languages&lt;/a&gt; is the standard reference.&lt;/p&gt;
&lt;h3&gt;The Language Beyond Phases&lt;/h3&gt;
&lt;p&gt;While the phase system is Lattice's defining feature, the language has other characteristics worth noting.&lt;/p&gt;
&lt;p&gt;Structs in Lattice can hold closures as fields, enabling object-like patterns without a class system. A struct with function fields and a &lt;code&gt;self&lt;/code&gt; parameter in each closure behaves much like an object with methods, but the data flow is explicit, and there's no hidden &lt;code&gt;this&lt;/code&gt; pointer or vtable dispatch. When a closure captures &lt;code&gt;self&lt;/code&gt;, it receives a deep clone, ensuring that method calls don't produce spooky action at a distance.&lt;/p&gt;
&lt;p&gt;Control flow is expression-based: &lt;code&gt;if&lt;/code&gt;/&lt;code&gt;else&lt;/code&gt; blocks, &lt;code&gt;match&lt;/code&gt; expressions, and bare blocks all return values. This reduces the need for temporary variables and makes code more compositional. Error handling uses &lt;code&gt;try&lt;/code&gt;/&lt;code&gt;catch&lt;/code&gt; blocks with explicit error values rather than exception hierarchies.&lt;/p&gt;
&lt;p&gt;The self-hosted REPL is particularly notable. Written entirely in Lattice, it demonstrates that the language is expressive enough to implement its own interactive environment, parsing multi-line input, evaluating expressions, and managing session state. Running &lt;code&gt;./clat&lt;/code&gt; without arguments drops into this REPL, while &lt;code&gt;./clat file.lat&lt;/code&gt; executes a program directly.&lt;/p&gt;
&lt;p&gt;Lattice is implemented in C with no external dependencies. The entire codebase (roughly 6,000 lines across the lexer, parser, phase checker, evaluator, and data structures) compiles with a single &lt;code&gt;make&lt;/code&gt; invocation. This is a deliberate choice. The language is meant to be small, understandable, and self-contained. You can read the entire implementation in an afternoon. If you're interested in this kind of work, Robert Nystrom's &lt;a href="https://baud.rs/uTpA6y"&gt;&lt;em&gt;Crafting Interpreters&lt;/em&gt;&lt;/a&gt; is the best practical guide to building language implementations from scratch. It covers both tree-walking interpreters and bytecode VMs, and Lattice's architecture shares several design decisions with Nystrom's Lox language. For the C implementation side, Kernighan and Ritchie's &lt;a href="https://baud.rs/71h6l3"&gt;&lt;em&gt;The C Programming Language&lt;/em&gt;&lt;/a&gt; remains the definitive reference for writing the kind of clean, minimal C that Lattice targets.&lt;/p&gt;
&lt;h3&gt;Runtime Characteristics&lt;/h3&gt;
&lt;p&gt;To understand how the dual-heap architecture behaves in practice, Lattice includes a benchmark suite that exercises different memory patterns: allocation churn, closure-heavy computation, event sourcing, freeze/thaw cycles, game state rollback, long-lived crystal data, persistent tree construction, and undo/redo stacks.&lt;/p&gt;
&lt;p&gt;The overview below shows peak RSS (resident set size) alongside the number of live crystal regions at program exit. Benchmarks that use the phase system heavily (like freeze/thaw cycles and persistent trees) maintain more live regions, while purely fluid workloads like allocation churn and closure-heavy computation have none:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Peak RSS and Crystal Regions Overview" src="https://tinycomputers.io/images/lattice_overview.png"&gt;&lt;/p&gt;
&lt;p&gt;The memory churn ratio (total bytes allocated divided by peak live bytes) reveals how aggressively each benchmark recycles memory. A high ratio means the program allocates and discards data rapidly, relying on the GC to keep the working set small. Benchmarks using crystal regions (shown in purple) tend to have lower churn because frozen data is long-lived by design:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Memory Churn Ratio" src="https://tinycomputers.io/images/lattice_churn_ratio.png"&gt;&lt;/p&gt;
&lt;h3&gt;Research Papers&lt;/h3&gt;
&lt;p&gt;For readers interested in the formal foundations and empirical analysis, two companion papers are available:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://tinycomputers.io/papers/lattice_paper.pdf"&gt;The Lattice Phase System: First-Class Immutability with Dual-Heap Memory Management&lt;/a&gt;&lt;/strong&gt;: The full research paper covering the language design, formal operational semantics, six proved safety properties (phase monotonicity, value isolation, consuming freeze, forge soundness, heap separation, and thaw independence), implementation details of the dual-heap architecture, and empirical evaluation across eight benchmarks.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://tinycomputers.io/papers/lattice_formal_semantics.pdf"&gt;Formal Semantics of the Lattice Phase System&lt;/a&gt;&lt;/strong&gt;: A standalone formal treatment containing the complete semantic domains, static phase-checking rules, big-step operational semantics, memory model, and full proofs of all six safety theorems.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Looking Forward&lt;/h3&gt;
&lt;p&gt;Lattice is at version 0.1.3, which means it's early. The dual-heap architecture is fully wired into the evaluator. Freeze operations physically migrate data into arena-backed crystal regions, providing cache locality and O(1) bulk deallocation for immutable data. The mark-and-sweep GC handles fluid values, while crystal regions are collected through reachability analysis during GC cycles.&lt;/p&gt;
&lt;p&gt;The deep-clone-on-read strategy is correct but expensive. Future versions may introduce structural sharing for crystal values (since they can't be modified, sharing is safe) or copy-on-write semantics for fluid values that haven't actually been mutated. The phase tags provide the runtime with exactly the information needed to make these optimizations: which values can be shared safely, and which might change.&lt;/p&gt;
&lt;p&gt;There's also the question of concurrency. The phase system provides a natural foundation for safe concurrent programming: crystal values can be freely shared across threads (they're immutable), while fluid values are confined to their owning scope. The &lt;code&gt;spawn&lt;/code&gt; keyword exists in the parser and phase checker, with static analysis already preventing fluid bindings from crossing spawn boundaries, though concurrent execution isn't yet implemented.&lt;/p&gt;
&lt;p&gt;The source code is available on &lt;a href="https://baud.rs/fIe3gx"&gt;GitHub&lt;/a&gt; under the BSD 3-Clause license, and the project site is at &lt;a href="https://baud.rs/4ysPkF"&gt;lattice-lang.org&lt;/a&gt;. If you're interested in language design, memory management, or just want to play with a language that treats mutability as a physical process rather than a type annotation, it's worth a look.&lt;/p&gt;
&lt;div class="code"&gt;&lt;pre class="code literal-block"&gt;git clone https://github.com/ajokela/lattice.git
cd lattice &amp;amp;&amp;amp; make
./clat
&lt;/pre&gt;&lt;/div&gt;</description><category>c</category><category>immutability</category><category>interpreter</category><category>language design</category><category>lattice</category><category>memory management</category><category>mutability</category><category>phase system</category><category>programming languages</category><category>value semantics</category><guid>https://tinycomputers.io/posts/introducing-lattice-a-crystallization-based-programming-language.html</guid><pubDate>Tue, 10 Feb 2026 18:00:00 GMT</pubDate></item></channel></rss>