If you want to run Rust on hardware that Rust was never designed for—a Z80 from 1976, a custom 16-bit RISC CPU, a Game Boy—you have a problem. The Rust compiler targets LLVM, and LLVM doesn't know your CPU exists.
I've spent some time solving this problem in different ways. I built LLVM backends for both the Z80 and my own Sampo 16-bit RISC architecture. That's the "correct" solution—and it works—but it also means countless hours wrestling with TableGen definitions and GlobalISel pipelines, though agentic coding tools help immensely.
There's a recent project that offers a different path entirely: Eurydice, a Rust-to-C transpiler developed by researchers at Inria and Microsoft. The premise is simple. If your target already has a C compiler, you can skip LLVM entirely:
Rust → Eurydice → C → existing C compiler → your target
For the Z80, that existing C compiler is SDCC, the Small Device C Compiler. It's mature, well-tested, and has supported the Z80 for decades.
This article explores three distinct paths to getting Rust on custom hardware, and includes a hands-on walkthrough of the Eurydice approach—transpiling Rust to readable C, then compiling that C for the Z80 with SDCC.
Path 1: The Full LLVM Backend
This is what I did for both the Z80 and Sampo. You fork LLVM, implement a complete backend—register descriptions, instruction selection, calling conventions, type legalization, assembly printing—and teach the Rust compiler about your new target triple.
The pipeline looks like this:
Rust source → rustc frontend → LLVM IR → Your Backend → Assembly → Binary
What you get:
- Full Rust language support (within hardware constraints)
- Access to LLVM's optimization passes—constant folding, dead code elimination, register allocation
- A single backend that works for Rust, C (via Clang), and any other LLVM frontend
- Native code quality that improves as LLVM improves
What it costs:
- LLVM is roughly 30 million lines of C++. The learning curve is vertical.
- A minimal backend requires 25-30 files of TableGen, C++, and CMake configuration
- Type legalization—teaching LLVM that your 8-bit CPU can't natively handle 64-bit integers—is where 60% of the effort lives
- Keeping your fork synchronized with upstream LLVM is ongoing maintenance
For the Z80, the register poverty problem alone was the bane of the effort. The Z80 has seven 8-bit registers, some of which can pair into 16-bit values. LLVM's register allocator expects 16 or 32 general-purpose registers. Every function call, every 16-bit addition, every pointer dereference requires careful choreography of a register file designed when RAM was measured in kilobytes. If you follow this blog and have read about my efforts to get LLVM and Rust working for the Z80, you will recall that I needed hundreds of gigabytes of RAM on the build server just to handle the full legalization of Rust's 64-bit and 128-bit types down to 8-bit registers.
For Sampo, the experience was smoother—a 16-bit RISC with 16 registers is closer to what LLVM expects. But "smoother" is relative. The Sampo LLVM backend still involved implementing GlobalISel pipelines, debugging opaque errors like "SmallVector capacity overflow," and building Rust's libcore for a target that had never existed.
The full LLVM approach gives you the best results. It's also the hardest path by a wide margin.
Path 2: Rust → C via Eurydice → Existing C Compiler
This is the path that caught my attention. Eurydice takes a fundamentally different approach: instead of teaching LLVM about your hardware, you transpile Rust to readable C and let an existing C compiler handle the target. This is the same path other niche languages, like Nim, use to produce portable code.
What Is Eurydice?
Eurydice grew out of the Aeneas formal verification project. Its predecessor, KaRaMeL, compiled F* (a dependently typed functional language used for cryptographic proofs) to C. Eurydice adapts this infrastructure for Rust.
The pipeline has two stages:
1. **Charon** extracts rustc's Medium-level Intermediate Representation (MIR) and dumps it as a JSON `.llbc` file
2. **Eurydice** reads the `.llbc`, applies roughly 30 optimization passes to lower Rust semantics to C, and emits `.c` and `.h` files
The generated C is genuinely readable—not the kind of machine-generated nightmare you'd expect. Rust structs become C structs. Functions keep their names (with module prefixes). Control flow is preserved. The goal is C code a human could maintain, not just C code that compiles.
Why This Matters for Retro/Custom Hardware
Here's the insight that matters for this audience: many obscure targets already have a C compiler but will never get an LLVM backend. The Z80 has SDCC. The 6502 has cc65. The 68000 has multiple mature C compilers. The Game Boy has GBDK.
If Eurydice can produce C that these compilers accept, you get Rust on all of these platforms without touching LLVM at all.
The Real-World Use Case
This isn't just theoretical. Eurydice's flagship use case is post-quantum cryptography. The ML-KEM (Kyber) key encapsulation algorithm was written and verified in Rust via the libcrux library, then transpiled to C via Eurydice for integration into:
- Mozilla's NSS (Network Security Services)
- Microsoft's SymCrypt
- Google's BoringSSL
These organizations need verified cryptographic implementations but can't take a dependency on the Rust toolchain in their C/C++ codebases. Eurydice bridges that gap.
Limitations
Eurydice is honest about what it can and can't do:
- **No `dyn` traits** — dynamic dispatch isn't yet supported (vtable generation is planned)
- **Const generics** can cause Charon's MIR extraction to fail
- **Iterators** get compiled to while loops with runtime state management—functional but potentially less efficient than hand-written C loops
- **Monomorphization** is required for generics, producing separate C functions for each type instantiation
- **Strict aliasing** — the generated code's handling of dynamically sized types violates C's strict-aliasing rules, requiring `-fno-strict-aliasing`
- **Panic-free code only** — Eurydice doesn't replicate Rust's panic semantics for integer overflow or bounds checking
For retro targets, some of these limitations are actually advantages. no_std embedded Rust code tends to avoid dyn traits and complex iterators. The code that runs well on a Z80—small functions, fixed-size arrays, simple control flow—is exactly the subset Eurydice handles best.
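To make that concrete, here's a hedged sketch of what the "Eurydice-friendly" subset looks like in practice (the function names are mine, not from the Eurydice repository). Overflow is made explicit with wrapping arithmetic and the bounds check is an ordinary branch, so there are no panic paths for a transpiler to replicate:

```rust
/// Checksum over a fixed-size buffer, written in the panic-free,
/// monomorphic style that transpiles cleanly: no iterators, no
/// generics, no implicit overflow or bounds panics.
pub fn checksum(buf: &[u8; 8]) -> u16 {
    let mut acc: u16 = 0;
    let mut i: usize = 0;
    while i < 8 {
        // wrapping_add: overflow is defined behavior, so there is no
        // overflow-panic path for the generated C to emulate
        acc = acc.wrapping_add(buf[i] as u16);
        i += 1;
    }
    acc
}

/// Bounds check made explicit, mirroring fib_lookup's pattern later
/// in this article: the else branch replaces what would otherwise
/// be an indexing panic.
pub fn get_or_zero(buf: &[u8; 8], n: usize) -> u8 {
    if n < 8 { buf[n] } else { 0 }
}
```

Nothing here requires the standard library, an allocator, or trait dispatch—which is also why this style maps so directly onto small C functions.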
Path 3: Manual no_std with FFI to C
The minimal approach. You write your core logic in Rust targeting a supported architecture, then manually bridge to C via FFI for anything target-specific.
```rust
#![no_std]
#![no_main]

extern "C" {
    fn z80_out(port: u8, value: u8);
}

#[no_mangle]
pub extern "C" fn compute_trajectory() -> u16 {
    // Pure Rust computation here
    let result = some_math(42);
    unsafe {
        z80_out(0x03, result as u8);
    }
    result
}
```
You compile the Rust portion for a supported target (like thumbv6m-none-eabi for ARM Cortex-M0, the smallest Rust target), extract the algorithm logic, and rewrite the hardware interface in C or assembly.
What you get:
- Rust's type safety and ownership model for algorithm development
- No toolchain modifications required
- Works today with stable Rust
What it costs:
- You're not actually running Rust on your target—you're using Rust as a development language and manually porting
- No automated pipeline; changes to the Rust code require manual re-porting
- You lose Rust's guarantees at the FFI boundary
- Testing requires maintaining parallel implementations
This is really a development methodology, not a compilation strategy. It's useful for prototyping algorithms in Rust before implementing them in C for a constrained target, but it doesn't give you "Rust on Z80" in any meaningful sense.
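A minimal sketch of that methodology (the function and test vectors are illustrative, not from the article's projects): keep the algorithm pure and hardware-free, and pin it down with host-side vectors that later double as acceptance tests for the hand-ported C.

```rust
/// Pure algorithm, no hardware dependencies — the part you develop
/// and verify in Rust, then hand-port to C for the target.
pub fn xor_checksum(data: &[u8]) -> u8 {
    let mut acc: u8 = 0;
    let mut i: usize = 0;
    while i < data.len() {
        acc ^= data[i];
        i += 1;
    }
    acc
}

#[cfg(test)]
mod tests {
    use super::*;

    // These vectors are the contract: the hand-written C port is
    // checked against exactly the same inputs and outputs.
    #[test]
    fn vectors() {
        assert_eq!(xor_checksum(&[0x12, 0x34, 0x56]), 0x70);
        assert_eq!(xor_checksum(&[]), 0);
    }
}
```

The discipline is that the C port never gets its own "fixed" test data—any divergence from the Rust vectors is a porting bug by definition.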
Walkthrough: Rust → C → Z80
Let's do something concrete. We'll take a simple Rust program, transpile it to C with Eurydice, and compile the C for the Z80 with SDCC. I tested every step of this on my machine—what follows is real output, not approximations.
Prerequisites
You'll need:
- Nix (recommended) or OCaml + OPAM for building Eurydice
- SDCC for Z80 compilation
- Rust (Eurydice pins its own nightly via Charon)
On macOS:
```sh
# Install SDCC
brew install sdcc

# Install Nix (if you don't have it)
curl -L https://nixos.org/nix/install | sh
```
Nix is the path of least resistance here. Eurydice depends on specific versions of OCaml, Charon, and KaRaMeL, and the Nix flake pins all of them. You can build everything manually with OPAM, but you'll be chasing version mismatches for an afternoon.
Step 1: Write a Rust Program
Create a small Rust project. The key constraint: it needs to stay within the subset Eurydice handles well. No dyn traits, no complex iterators, no standard library I/O.
```sh
cargo init --name z80demo
cd z80demo
```
Replace src/main.rs with something appropriate for a Z80:
```rust
/// Simple GCD computation — the kind of algorithm
/// you'd actually want on constrained hardware.
pub fn gcd(mut a: u16, mut b: u16) -> u16 {
    while b != 0 {
        let t = b;
        b = a % t;
        a = t;
    }
    a
}

/// Compute LCM using GCD
pub fn lcm(a: u16, b: u16) -> u16 {
    if a == 0 || b == 0 {
        return 0;
    }
    (a / gcd(a, b)) * b
}

/// A lookup table — common pattern in embedded code
static FIBONACCI: [u16; 12] = [
    0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89,
];

pub fn fib_lookup(n: u8) -> u16 {
    if (n as usize) < FIBONACCI.len() {
        FIBONACCI[n as usize]
    } else {
        0
    }
}

fn main() {
    let result = gcd(252, 105);
    assert_eq!(result, 21);

    let l = lcm(12, 18);
    assert_eq!(l, 36);

    let f = fib_lookup(10);
    assert_eq!(f, 55);
}
```
This is deliberately simple: u16 arithmetic (native to the Z80's 16-bit register pairs), no heap allocation, no traits, no closures. It's the kind of code that will transpile cleanly.
Step 2: Extract MIR with Charon
Charon hooks into the Rust compiler to extract its Medium-level Intermediate Representation (MIR). The critical detail I missed on my first attempt: Eurydice requires Charon to be invoked with --preset=eurydice. Without it, Eurydice will reject the output with a cryptic error.
Using Nix, you can run Charon directly without cloning or building anything:
```sh
nix --extra-experimental-features "nix-command flakes" \
  run 'github:aeneasverif/eurydice#charon' -- cargo --preset=eurydice
```
The first run takes a while as Nix fetches and builds Charon's Rust toolchain. Subsequent runs complete in seconds:
```
   Compiling z80demo v0.1.0 (/Users/alexjokela/projects/eurydice/z80demo)
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.25s
```
This produces z80demo.llbc—a 107KB JSON file containing the type declarations, function bodies, and trait implementations in Charon's intermediate format.
If Charon fails, the error usually points to an unsupported Rust feature. The fix is almost always to simplify the Rust code—replace iterators with explicit loops, avoid const generics, use concrete types instead of generics where possible.
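As a sketch of that simplification (my own example, not from the Eurydice test suite), here is the same computation in iterator style and in the explicit-loop style you'd feed to Charon instead:

```rust
/// Iterator style: concise, but adapter chains are a common
/// extraction hazard, and at best they lower to a while loop
/// carrying runtime iterator state.
pub fn sum_squares_iter(xs: &[u16; 8]) -> u16 {
    xs.iter()
        .map(|&x| x.wrapping_mul(x))
        .fold(0u16, |a, b| a.wrapping_add(b))
}

/// Explicit-loop rewrite: same result, but it maps one-to-one onto
/// the straightforward C that SDCC compiles well.
pub fn sum_squares_loop(xs: &[u16; 8]) -> u16 {
    let mut acc: u16 = 0;
    let mut i: usize = 0;
    while i < 8 {
        acc = acc.wrapping_add(xs[i].wrapping_mul(xs[i]));
        i += 1;
    }
    acc
}
```

The two functions are interchangeable on the host, so you can keep the iterator version in tests and feed only the loop version through the pipeline.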
Step 3: Transpile to C with Eurydice
```sh
nix --extra-experimental-features "nix-command flakes" \
  run 'github:aeneasverif/eurydice' -- z80demo.llbc
```
Eurydice processes the LLBC through roughly 30 optimization passes and emits two files:
```
1️⃣ LLBC ➡️ AST
2️⃣ Cleanup
3️⃣ Monomorphization, datatypes
✅ Done
```
Here's the actual generated z80demo.c (comments and headers trimmed for clarity):
```c
#include "z80demo.h"

const Eurydice_arr_f5 z80demo_FIBONACCI = {
    .data = { 0U, 1U, 1U, 2U, 3U, 5U, 8U, 13U, 21U, 34U, 55U, 89U }
};

uint16_t z80demo_fib_lookup(uint8_t n) {
    uint16_t uu____0;
    if ((size_t)n < (size_t)12U) {
        uu____0 = z80demo_FIBONACCI.data[(size_t)n];
    } else {
        uu____0 = 0U;
    }
    return uu____0;
}

/**
 Simple GCD computation — the kind of algorithm
 you'd actually want on constrained hardware.
*/
uint16_t z80demo_gcd(uint16_t a, uint16_t b) {
    while (b != 0U) {
        uint16_t t = b;
        b = (uint32_t)a % (uint32_t)t;
        a = t;
    }
    return a;
}

/**
 Compute LCM using GCD
*/
uint16_t z80demo_lcm(uint16_t a, uint16_t b) {
    if (!(a == 0U)) {
        if (!(b == 0U)) {
            uint16_t uu____0 = a;
            return (uint32_t)uu____0 / (uint32_t)z80demo_gcd(a, b) * (uint32_t)b;
        }
    }
    return 0U;
}
```
And the generated z80demo.h:
```c
#include "eurydice_glue.h"

typedef struct Eurydice_arr_f5_s {
    uint16_t data[12U];
} Eurydice_arr_f5;

extern const Eurydice_arr_f5 z80demo_FIBONACCI;

uint16_t z80demo_fib_lookup(uint8_t n);
uint16_t z80demo_gcd(uint16_t a, uint16_t b);
uint16_t z80demo_lcm(uint16_t a, uint16_t b);
void z80demo_main(void);
```
A few things to notice:
- Rust doc comments are preserved as C comments. That's a nice touch.
- **Arrays are wrapped in structs** (`Eurydice_arr_f5`). This gives C arrays value semantics—you can return and assign them, matching Rust's behavior. The tradeoff is that array access goes through `.data[n]` instead of `[n]`.
- **Arithmetic is widened to `uint32_t`.** Eurydice promotes `u16 % u16` to `uint32_t` to avoid C's integer promotion pitfalls. On a Z80, this means 32-bit math library calls—SDCC handles this, but it's heavier than necessary. A hand-tuned version would keep the modulo at 16 bits.
- **`assert_eq!` becomes `EURYDICE_ASSERT` with a pair struct.** The generated `main()` (which I'm omitting here) creates `const_uint16_t__x2` structs to hold the two comparison operands. It works, but it's verbose compared to a simple `==`.
- **Control flow is preserved.** The `if`/`else` in `fib_lookup`, the `while` loop in `gcd`—they're structurally identical to the Rust original.
Step 4: Adapt for Bare-Metal Z80
Here's where things get practical. Eurydice's eurydice_glue.h includes <stdio.h>, <stdlib.h>, <string.h>, and KaRaMeL headers—none of which exist on a bare-metal Z80. We need a minimal replacement that provides only what the generated code actually uses.
Create eurydice_glue.h in the project directory:
```c
/*
 * Minimal eurydice_glue.h for bare-metal Z80 via SDCC.
 * Replaces the full Eurydice glue header with only what z80demo needs.
 */
#ifndef EURYDICE_GLUE_H
#define EURYDICE_GLUE_H

#include <stdint.h>

/* SDCC Z80: size_t is 16-bit */
#ifndef _SIZE_T_DEFINED
typedef unsigned int size_t;
#define _SIZE_T_DEFINED
#endif

/* On bare metal, assertions just halt the CPU */
#define EURYDICE_ASSERT(test, msg) \
    do { \
        if (!(test)) { \
            __asm \
            halt \
            __endasm; \
        } \
    } while (0)

#endif /* EURYDICE_GLUE_H */
```
This is the key insight for using Eurydice on constrained targets: the glue header is a compatibility layer, not a fundamental dependency. For any specific program, you can replace it with a minimal shim that provides only what that program's generated code actually references.
Now create z80_main.c—our bare-metal wrapper with serial I/O:
```c
#include <stdint.h>

/* Import Eurydice-generated functions */
extern uint16_t z80demo_gcd(uint16_t a, uint16_t b);
extern uint16_t z80demo_lcm(uint16_t a, uint16_t b);
extern uint16_t z80demo_fib_lookup(uint8_t n);

/* Z80 serial output via port 0x01 (e.g., MC6850 ACIA) */
__sfr __at 0x01 serial_data;

void putchar_z80(char c) {
    serial_data = c;
}

void print(const char *s) {
    while (*s) {
        putchar_z80(*s++);
    }
}

void print_u16(uint16_t val) {
    char buf[6];
    uint8_t i = 5;
    buf[i] = '\0';
    if (val == 0) {
        putchar_z80('0');
        return;
    }
    while (val > 0 && i > 0) {
        buf[--i] = '0' + (val % 10);
        val /= 10;
    }
    while (buf[i]) {
        putchar_z80(buf[i++]);
    }
}

void main(void) {
    uint16_t g = z80demo_gcd(252, 105);
    print("GCD(252,105) = ");
    print_u16(g);
    print("\r\n");

    uint16_t l = z80demo_lcm(12, 18);
    print("LCM(12,18) = ");
    print_u16(l);
    print("\r\n");

    uint16_t f = z80demo_fib_lookup(10);
    print("Fib(10) = ");
    print_u16(f);
    print("\r\n");

    __asm
    halt
    __endasm;
}
```
Step 5: Compile with SDCC
```sh
# Compile the Eurydice-generated code
sdcc -mz80 -c --std-c11 -I. z80demo.c

# Compile our Z80 wrapper
sdcc -mz80 -c --std-c11 z80_main.c

# Link — code at 0x0000, data at 0x8000
sdcc -mz80 --code-loc 0x0000 --data-loc 0x8000 \
  -o z80demo.ihx z80_main.rel z80demo.rel

# Convert to raw binary
makebin -s 32768 z80demo.ihx z80demo.bin
```
Both compilation steps complete with zero warnings. The linker produces a 32KB ROM image. According to the memory map, the _CODE segment is 717 bytes—our Rust-originated logic plus I/O wrappers and SDCC's runtime support for 32-bit division.
What the Z80 Assembly Looks Like
Here's the GCD function as SDCC compiled it, straight from the Eurydice-generated C:
```asm
_z80demo_gcd::
	; while (b != 0U)
00101$:
	ld	a, d
	or	a, e
	jr	Z, 00103$
	; b = (uint32_t)a % (uint32_t)t;
	push	de
	call	__modsint
	; a = t;
	pop	hl
	jr	00101$
00103$:
	; return a;
	ex	de, hl
	ret
```
SDCC's register-based calling convention (sdcccall 1) passes the first 16-bit argument in HL and the second in DE, returning results in DE. The GCD loop is tight—test for zero, call the modulo library routine, swap, repeat. The __modsint call is where the uint32_t widening lands; SDCC promotes to 32-bit for the modulo, which adds overhead but ensures correctness.
The Fibonacci lookup is even cleaner:
```asm
_z80demo_fib_lookup::
	; if ((size_t)n < (size_t)12U)
	ld	c, a
	ld	b, #0x00
	ld	a, c
	sub	a, #0x0c
	jr	NC, 00102$
	; return z80demo_FIBONACCI.data[(size_t)n]
	ld	de, #_z80demo_FIBONACCI+0
	ld	l, c
	ld	h, b
	add	hl, hl		; n * 2 (16-bit entries)
	add	hl, de		; base + offset
	ld	e, (hl)
	inc	hl
	ld	d, (hl)
	ret
00102$:
	; return 0
	ld	de, #0x0000
	ret
```
The bounds check compiles to a single SUB/JR NC pair. The array lookup uses ADD HL,HL to compute the 16-bit element offset—exactly what you'd write by hand.
What Just Happened
We took Rust source code, ran two commands (Charon, then Eurydice), got readable C, wrote a 25-line glue header, and compiled for the Z80 with SDCC. Total code size: 717 bytes. No LLVM fork. No TableGen. No long days debugging register allocation.
The entire Eurydice pipeline—from Rust to C—preserves the structure of the original code. The SDCC step is standard Z80 C compilation, unchanged from what you'd do with hand-written C. The main adaptation work is replacing the glue header, which took about five minutes once I understood what the generated code actually referenced.
Comparing the Three Paths
|  | LLVM Backend | Eurydice → C | Manual FFI |
|---|---|---|---|
| Rust coverage | Full `no_std` | Subset (no `dyn`, limited generics) | None (development aid only) |
| Code quality | Native optimized | Depends on C compiler | N/A |
| Maintenance | Track LLVM upstream | Track Eurydice + Charon | Manual sync |
| Automation | Full pipeline | Full pipeline | Manual porting |
| Prerequisites | LLVM expertise | Nix or OCaml | Basic C/Rust |
| Target reuse | All LLVM frontends | C-only output | None |
The right choice depends on your timeline and ambitions. If you're building a serious toolchain for a custom CPU—something you'll maintain for years—the LLVM backend is worth the investment. If you need Rust on a platform that already has a C compiler and you're working with a constrained subset of the language, Eurydice is a compelling shortcut.
The Elephant in the Room
Eurydice works best for small, self-contained programs that avoid complex Rust features. Its primary limitation is Charon, the MIR extractor, which is "routinely foiled by more recent Rust features" according to the LWN article that prompted this exploration. Const generics, complex trait bounds, and advanced pattern matching can all cause extraction failures.
For embedded and retro targets, this might actually be fine. The Rust code you'd write for a Z80—no_std, no allocator, fixed-size buffers, simple arithmetic—is exactly the subset that Eurydice handles well. You're not going to impl Iterator your way through 64KB of address space.
But if your Rust code is complex enough to genuinely benefit from Rust's type system—generics, trait objects, complex lifetime management—you've probably outgrown what Eurydice can transpile. At that point, you need an LLVM backend.
The Eurydice team is actively working on expanding coverage. Dynamic dispatch via vtables is the next major feature. Broader standard library support is an ambitious goal for 2026. The project is dual-licensed under Apache 2.0 and MIT, and accepts outside contributions.
Where This Leaves Us
For my own projects, the LLVM backends for Z80 and Sampo remain the right choice—they support the full no_std Rust language and produce optimized native code. But if someone asked me "how do I get started running Rust on my retro hardware this weekend," I'd point them at Eurydice and SDCC. The barrier to entry dropped from "understand GlobalISel" to "install Nix and run two commands."
That's genuine progress. The path from Rust to weird hardware just got shorter.