Building Language Compilers for the Z80: An Anthology of Retrocomputing Languages

Over the past year, I have been building a collection of programming language compilers and interpreters targeting the venerable Zilog Z80 microprocessor. What started as an experiment in retrocomputing has grown into a comprehensive suite of tools spanning multiple programming paradigms: from the functional elegance of LISP to the object-oriented messaging of Smalltalk, from the structured programming of Pascal and Fortran to the low-level control of C. This anthology documents the common architectural patterns, the unique challenges of targeting an 8-bit processor, and the unexpected joys of bringing modern language implementations to 1970s hardware.

My fascination with the Z80 began in the mid-1990s when I got my first TI-85 graphing calculator. That unassuming device, marketed for algebra and calculus homework, contained a Z80 running at 6 MHz with 28KB of RAM. Discovering that I could write programs in Z80 assembly and run them on this pocket computer was revelatory. I accumulated a small library of Z80 assembly books and spent countless hours learning the instruction set, writing simple games, and understanding how software meets hardware at the most fundamental level. Three decades later, this project represents a return to that formative obsession, now armed with modern tools and a deeper understanding of language implementation.

The RetroShield Platform

The RetroShield is a family of hardware adapters that bridge vintage microprocessors to modern Arduino development boards. The product line covers a remarkable range of classic CPUs: the MOS 6502 (powering the Apple II and Commodore 64), the Motorola 6809 (used in the TRS-80 Color Computer), the Intel 8085, the SC/MP, and the Zilog Z80. Each variant allows the original processor to execute real machine code while the Arduino emulates memory, peripherals, and I/O.

For this project, I focused exclusively on the RetroShield Z80. The Z80's rich instruction set, hardware BCD support via the DAA instruction, and historical significance as the CPU behind CP/M made it an ideal target for language implementation experiments. The RetroShield Z80 connects the actual Z80 chip to an Arduino Mega (or Teensy adapter for projects requiring more RAM), which emulates the memory and peripheral chips. This arrangement provides the authenticity of running on actual Z80 silicon while offering the convenience of modern development workflows.

The standard memory map provides 8KB of ROM at addresses 0x0000-0x1FFF and 6KB of RAM at 0x2000-0x37FF, though the Teensy adapter expands this significantly to 256KB. Serial I/O is handled through an emulated MC6850 ACIA chip at ports 0x80 and 0x81, providing the familiar RS-232 interface that connects these vintage programs to modern terminals.
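For reference, that map is compact enough to write down as a handful of constants. This is a sketch: the names are mine, and the split of ports 0x80/0x81 into control/status and data registers is my assumption of the usual MC6850 convention.

// RetroShield Z80 memory map as constants. Values follow the text above;
// names and the control/data port assignment are illustrative assumptions.
const ROM_START: u16 = 0x0000;
const ROM_END: u16 = 0x1FFF;        // 8KB ROM
const RAM_START: u16 = 0x2000;
const RAM_END: u16 = 0x37FF;        // 6KB RAM on the base configuration
const ACIA_CONTROL_PORT: u8 = 0x80; // MC6850 control/status (assumed)
const ACIA_DATA_PORT: u8 = 0x81;    // MC6850 data (assumed)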

A note for anyone who owns a Z80 RetroShield and wants to run the binaries produced by these compilers on actual hardware: you will need two things. First, bin2c, a program that converts a Z80 binary into a PROGMEM statement you can drop into an Arduino sketch. Second, look at the linked sketch, which contains code for emulating the MC6850 ACIA.

Common Compiler Architecture: Lexer, Parser, AST, Codegen

Every compiler in this collection follows a similar multi-stage architecture, a pattern that has proven itself across decades of compiler construction. Understanding this common structure reveals how the same fundamental approach can target vastly different source languages while producing efficient Z80 machine code.

The Lexer: Breaking Text into Tokens

The lexer (or tokenizer) is the first stage of compilation, responsible for transforming raw source code into a stream of tokens. Each language has its own lexical grammar: LISP recognizes parentheses and symbols, C identifies keywords and operators, Smalltalk distinguishes between message selectors and literals. Despite these differences, every lexer performs the same fundamental task of categorizing input characters into meaningful units.

In our Rust implementations, the lexer typically maintains a position in the source string and provides a next_token() method that advances through the input. This produces tokens like Token::Integer(42), Token::Plus, or Token::Identifier("factorial"). The lexer handles the tedious work of skipping whitespace, recognizing multi-character operators, and converting digit sequences into numbers.
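As a rough illustration, a minimal lexer in that style might look like the sketch below. The token set and structure are simplified for exposition; they are not the exact types used by any of the kz80 compilers.

// Minimal lexer sketch: simplified tokens, illustrative only.
#[derive(Debug, Clone, PartialEq)]
enum Token {
    Integer(i32),
    Identifier(String),
    Plus,
    Star,
    LParen,
    RParen,
    Eof,
}

struct Lexer {
    chars: Vec<char>,
    pos: usize,
}

impl Lexer {
    fn new(src: &str) -> Self {
        Lexer { chars: src.chars().collect(), pos: 0 }
    }

    fn next_token(&mut self) -> Token {
        // Skip whitespace between tokens.
        while self.pos < self.chars.len() && self.chars[self.pos].is_whitespace() {
            self.pos += 1;
        }
        let Some(&c) = self.chars.get(self.pos) else { return Token::Eof };
        self.pos += 1;
        match c {
            '+' => Token::Plus,
            '*' => Token::Star,
            '(' => Token::LParen,
            ')' => Token::RParen,
            '0'..='9' => {
                // Convert a digit sequence into an integer literal.
                let mut n = c.to_digit(10).unwrap() as i32;
                while let Some(d) = self.chars.get(self.pos).and_then(|ch| ch.to_digit(10)) {
                    n = n * 10 + d as i32;
                    self.pos += 1;
                }
                Token::Integer(n)
            }
            c if c.is_ascii_alphabetic() || c == '_' => {
                // Accumulate an identifier such as `factorial`.
                let mut s = String::from(c);
                while let Some(&d) = self.chars.get(self.pos) {
                    if d.is_alphanumeric() || d == '_' { s.push(d); self.pos += 1; } else { break; }
                }
                Token::Identifier(s)
            }
            _ => Token::Eof, // anything else ends this simplified sketch
        }
    }
}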

The Parser: Building the Abstract Syntax Tree

The parser consumes the token stream and constructs an Abstract Syntax Tree (AST) that represents the hierarchical structure of the program. Most of our compilers use recursive descent parsing, a technique where each grammar rule becomes a function that may call other rule functions. This approach is intuitive, produces readable code, and handles the grammars of most programming languages effectively.

For example, parsing an arithmetic expression like 3 + 4 * 5 requires understanding operator precedence. The parser might have functions like parse_expression(), parse_term(), and parse_factor(), each handling operators at different precedence levels. The result is an AST where the multiplication is grouped as a subtree, correctly representing that it should be evaluated before the addition.
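To make that concrete, here is a hedged Rust sketch of the three-level scheme, reusing the illustrative Token type from the lexer sketch above. The AST node names are mine, not those of the actual compilers.

// Recursive-descent sketch for the grammar
//   expression := term   { "+" term }
//   term       := factor { "*" factor }
//   factor     := INTEGER | "(" expression ")"
#[derive(Debug)]
enum Expr {
    Number(i32),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

struct Parser {
    tokens: Vec<Token>,
    pos: usize,
}

impl Parser {
    fn next(&mut self) -> Token {
        let t = self.tokens.get(self.pos).cloned().unwrap_or(Token::Eof);
        self.pos += 1;
        t
    }

    fn peek(&self) -> Token {
        self.tokens.get(self.pos).cloned().unwrap_or(Token::Eof)
    }

    fn parse_expression(&mut self) -> Expr {
        let mut lhs = self.parse_term();
        while self.peek() == Token::Plus {
            self.next();
            let rhs = self.parse_term();
            lhs = Expr::Add(Box::new(lhs), Box::new(rhs));
        }
        lhs
    }

    fn parse_term(&mut self) -> Expr {
        let mut lhs = self.parse_factor();
        while self.peek() == Token::Star {
            self.next();
            let rhs = self.parse_factor();
            lhs = Expr::Mul(Box::new(lhs), Box::new(rhs));
        }
        lhs
    }

    fn parse_factor(&mut self) -> Expr {
        match self.next() {
            Token::Integer(n) => Expr::Number(n),
            Token::LParen => {
                let e = self.parse_expression();
                assert_eq!(self.next(), Token::RParen, "expected ')'");
                e
            }
            other => panic!("unexpected token: {:?}", other),
        }
    }
}

Parsing 3 + 4 * 5 with this sketch yields Add(Number(3), Mul(Number(4), Number(5))): the multiplication sits in its own subtree beneath the addition, exactly the grouping described above.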

Code Generation: Emitting Z80 Machine Code

The code generator walks the AST and emits Z80 machine code. This is where the rubber meets the road: abstract operations like "add two numbers" become concrete sequences of Z80 instructions like LD A,(HL), ADD A,E, and LD (DE),A.

Most of our compilers generate code directly into a byte buffer, manually encoding each instruction's opcode and operands. This approach, while requiring intimate knowledge of the Z80 instruction set, gives us precise control over the generated code and avoids the complexity of an intermediate representation or separate assembler pass.
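As a flavor of what "emitting bytes directly" means in practice, here is a sketch of a byte-buffer emitter. The opcode values are the standard Z80 encodings; the helper names and structure are illustrative rather than the actual API of these compilers.

// Sketch of a byte-buffer code generator. Opcodes are standard Z80 encodings.
struct CodeGen {
    code: Vec<u8>,
}

impl CodeGen {
    fn emit(&mut self, bytes: &[u8]) {
        self.code.extend_from_slice(bytes);
    }

    // LD HL, nn  (0x21, low byte, high byte)
    fn ld_hl_imm(&mut self, value: u16) {
        self.emit(&[0x21, (value & 0xFF) as u8, (value >> 8) as u8]);
    }

    // ADD HL, DE  (0x19)
    fn add_hl_de(&mut self) {
        self.emit(&[0x19]);
    }

    // RET  (0xC9)
    fn ret(&mut self) {
        self.emit(&[0xC9]);
    }
}

A tree walk over an AST like the one sketched earlier then reduces to calls on such helpers: generate the left operand into a register pair, save it, generate the right operand, and emit the add.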

The DAA Instruction and BCD Arithmetic

One of the most fascinating aspects of Z80 programming is the DAA (Decimal Adjust Accumulator) instruction, opcode 0x27. This single instruction makes the Z80 surprisingly capable at decimal arithmetic, which proves essential for implementing numeric types on an 8-bit processor.

What is BCD?

Binary Coded Decimal (BCD) is a numeric representation where each decimal digit is stored in 4 bits (a nibble). Rather than storing the number 42 as binary 00101010 (its true binary representation), BCD stores it as 0100 0010, with the first nibble representing 4 and the second representing 2. This "packed BCD" format stores two decimal digits per byte.

While BCD is less space-efficient than pure binary (you can only represent 0-99 in a byte rather than 0-255), it has a crucial advantage: decimal arithmetic produces exact decimal results without rounding errors. This is why BCD was the standard for financial calculations on mainframes and why pocket calculators (including the famous TI series) used BCD internally.
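To make the packed representation concrete, here is a hedged Rust sketch of how a compiler might pack a decimal literal into the 4-byte, 8-digit format used later in this article. The function name and least-significant-byte-first ordering are illustrative assumptions, not the exact helpers used by these compilers.

// Sketch: pack a value into 4 bytes of packed BCD (8 decimal digits),
// least-significant byte first.
fn to_packed_bcd(mut value: u32) -> [u8; 4] {
    assert!(value <= 99_999_999, "8 packed BCD digits hold at most 99,999,999");
    let mut out = [0u8; 4];
    for byte in out.iter_mut() {
        let low = (value % 10) as u8;   // low nibble: this byte's ones digit
        value /= 10;
        let high = (value % 10) as u8;  // high nibble: this byte's tens digit
        value /= 10;
        *byte = (high << 4) | low;
    }
    out
}

// to_packed_bcd(42) == [0x42, 0x00, 0x00, 0x00]: the low byte is 0100 0010,
// exactly the nibble-per-digit encoding described above.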

How DAA Works

When you perform binary addition on two BCD digits, the result may not be valid BCD. Adding 0x09 and 0x01 gives 0x0A, but 0x0A is not a valid BCD digit. The DAA instruction corrects this: it examines the result together with the half-carry flag (which indicates a carry from bit 3 to bit 4, i.e., from the low nibble to the high nibble) and adds 0x06 to the low nibble, or 0x60 to the high nibble, whenever that nibble exceeds 9 or produced a carry. After DAA, that 0x0A becomes 0x10, correctly representing decimal 10 in BCD.

This process works for both addition (after ADD or ADC instructions) and subtraction (after SUB or SBC instructions, where DAA subtracts 0x06 instead of adding it). The Z80 remembers whether the previous operation was addition or subtraction through its N flag.

BCD in Our Compilers

Several of our compilers use 4-byte packed BCD integers, supporting numbers up to 99,999,999 (8 decimal digits). The addition routine loads bytes from both operands starting from the least significant byte, adds them with ADC (add with carry) to propagate carries between bytes, applies DAA to correct each byte, and stores the result. The entire operation takes perhaps 20 bytes of code but provides exact decimal arithmetic on an 8-bit processor.

Here is a simplified version of our BCD addition loop:

bcd_add:
        LD   B, 4            ; 4 bytes to process
        OR   A               ; Clear carry flag
bcd_add_loop:
        LD   A, (DE)         ; Load byte from first operand
        ADC  A, (HL)         ; Add byte from second operand with carry
        DAA                  ; Decimal adjust
        LD   (DE), A         ; Store result
        DEC  HL              ; Move to next byte
        DEC  DE
        DJNZ bcd_add_loop
        RET

This pattern appears in kz80_c, kz80_fortran, kz80_smalltalk, and kz80_lisp, demonstrating how a hardware feature designed in 1976 still provides practical benefits for language implementation.

The Evolution: From Assembly to C to Rust

The journey of implementing these compilers taught us valuable lessons about choosing the right tool for the job, and our approach evolved significantly over time.

First Attempt: Pascal in Z80 Assembly

Our first language implementation was kz80_pascal, a Pascal interpreter written entirely in Z80 assembly language. This approach seemed natural: if you are targeting the Z80, why not write directly in its native language?

The reality proved challenging. Z80 assembly, while powerful, is unforgiving. Building a recursive descent parser in assembly requires manually managing the call stack, carefully preserving registers across function calls, and debugging through hex dumps of memory. The resulting interpreter works and provides an interactive REPL for Pascal expressions, but extending it requires significant effort. Every new feature means more assembly, more potential for subtle bugs, and more time spent on implementation details rather than language design.

Second Attempt: Fortran 77 in C with SDCC

For kz80_fortran, we tried a different approach: writing the interpreter in C and cross-compiling with SDCC (Small Device C Compiler). This was dramatically more productive. C provided structured control flow, automatic stack management, and the ability to organize code into manageable modules.

The result is a comprehensive Fortran 77 subset with floating-point arithmetic (via BCD), subroutines and functions, arrays, and block IF statements. The C source compiles to approximately 19KB of Z80 code, fitting comfortably in ROM with room for program storage in RAM.

However, this approach has limitations. SDCC produces functional but not always optimal code, and debugging requires understanding both the C source and the generated assembly. The interpreter also requires the Teensy adapter with 256KB RAM, as the Arduino Mega's 4KB is insufficient for the runtime data structures.

The Rust Workbench: Our Final Form

Our breakthrough came with the realization that we did not need the compiler itself to run on the Z80, only the generated code. This insight led to what we call the "Rust workbench" approach: write the compiler in Rust, running on a modern development machine, and have it emit Z80 binary images.

This architecture provides enormous advantages:

Modern tooling: Cargo manages dependencies and builds, rustc catches bugs at compile time, and we have access to the entire Rust ecosystem for testing and development.

Fast iteration: Compiling a Rust program takes seconds; testing the generated Z80 code in our emulator takes milliseconds. Compare this to the multi-minute flash cycles required when the compiler runs on the target.

Comprehensive testing: Each compiler includes both Rust unit tests (testing the lexer, parser, and code generator individually) and integration tests that compile source programs and verify their output in the emulator.

Zero-dependency output: Despite being written in Rust, the generated Z80 binaries have no runtime dependencies. They are pure machine code that runs directly on the hardware.

This approach now powers kz80_lisp, kz80_c, kz80_lua, kz80_smalltalk, kz80_chip8, and retrolang. Each is a standalone Rust binary that reads source code and produces a 32KB ROM image.

The Z80 Emulator

None of this would be practical without a way to test generated code quickly. Our RetroShield Z80 Emulator provides exactly this: a cycle-accurate Z80 emulation with the same memory map and I/O ports as the real hardware.

The emulator comes in two versions: a simple passthrough mode (retroshield) that connects stdin/stdout directly to the emulated serial port, and a full TUI debugger (retroshield_nc) with register displays, disassembly views, memory inspection, and single-step execution. The passthrough mode enables scripted testing, piping test inputs through the emulator and comparing outputs against expected results. The TUI debugger proves invaluable when tracking down code generation bugs.

The emulator uses the superzazu/z80 library for CPU emulation, which provides accurate flag behavior and correct cycle counts. Combined with our MC6850 ACIA emulation, it provides a faithful recreation of the RetroShield environment without requiring physical hardware.

Self-Hosting Compilers: LISP and C

Two of our compilers achieve something remarkable: they can compile themselves and run on the target hardware. This property, called "self-hosting," is a significant milestone in compiler development.

What Does Self-Hosting Mean?

A self-hosting compiler is one written in the language it compiles. The classic example is the C compiler: most C compilers are themselves written in C. But this creates a chicken-and-egg problem: how do you compile a C compiler if you need a C compiler to compile it?

The solution is bootstrapping. You start with a minimal compiler written in some other language (or in machine code), use it to compile a slightly better compiler written in the target language, and iterate until you have a full-featured compiler that can compile its own source code. Once bootstrapped, the compiler becomes self-sustaining: future versions compile themselves.

kz80_lisp: A Self-Hosted LISP Compiler

kz80_lisp (crates.io) includes a LISP-to-Z80 compiler written in LISP itself. The compiler.lisp file defines functions that traverse LISP expressions and emit Z80 machine code bytes directly into memory. When you call (COMPILE '(+ 1 2)), it generates the actual Z80 instructions to load 1 and 2 and add them.

The self-hosted compiler supports arithmetic expressions, nested function calls, and can generate code that interfaces with the runtime's I/O primitives. While not a full replacement for the Rust-based code generator, it demonstrates that LISP is expressive enough to describe its own compilation to machine code.

kz80_c: A Self-Hosted C Compiler

kz80_c (crates.io) goes further: its self/cc.c file is a complete C compiler written in the C subset it compiles. This compiler reads C source from stdin and outputs Z80 binary to stdout, making it usable in shell pipelines:

# printf 'int main() { puts("Hello!"); return 0; }\x00' | \
    retroshield self/cc.bin > hello.bin
# retroshield hello.bin
Hello!

The self-hosted C compiler supports all arithmetic operators, pointers, arrays, global variables, control flow statements, and recursive functions. Its main limitation is memory: the compiler source is approximately 66KB, exceeding the 8KB input buffer available on the Z80. This is a fundamental hardware constraint, not a compiler bug. In theory, a "stage 0" minimal compiler could bootstrap larger compilers.

Why Self-Hosting Matters

Self-hosting is more than a technical achievement; it validates the language implementation. If the compiler can compile itself correctly, it demonstrates that the language is expressive enough for real programs and that the code generator produces working code under complex conditions. For our Z80 compilers, self-hosting also connects us to the history of computing: the original Small-C compiler by Ron Cain in 1980 was similarly self-hosted on Z80/CP/M systems.

The Language Implementations

kz80_lisp

A minimal LISP interpreter and compiler featuring the full suite of list operations (CAR, CDR, CONS), special forms (QUOTE, IF, COND, LAMBDA, DEFINE), and recursive function support. The implementation includes a pure-LISP floating-point library and the self-hosted compiler mentioned above.

kz80_lisp v0.1
> (+ 21 21)
42
> (DEFINE (SQUARE X) (* X X))
SQUARE
> (SQUARE 7)
49

kz80_c

A C compiler supporting char (8-bit), int (16-bit), float (BCD), pointers, arrays, structs, and a preprocessor with #define and #include. The runtime library provides serial I/O and comprehensive BCD arithmetic functions. The self-hosted variant can compile and run C programs entirely on the Z80.

# cat fibonacci.c
int fib(int n) {
    if (n <= 1) return n;
    return fib(n-1) + fib(n-2);
}

int main() {
    puts("Fibonacci:");
    for (int i = 0; i < 10; i = i + 1)
        print_num(fib(i));
    return 0;
}
# kz80_c fibonacci.c -o fib.bin
# retroshield -l fib.bin
Fibonacci:
0 1 1 2 3 5 8 13 21 34

kz80_smalltalk

A Smalltalk subset compiler implementing the language's distinctive message-passing syntax with left-to-right operator evaluation. Expressions like 1 + 2 * 3 evaluate to 9 (not 7), matching Smalltalk's uniform treatment of binary messages. All arithmetic uses BCD with the DAA instruction.

# echo "6 * 7" | kz80_smalltalk /dev/stdin -o answer.bin # retroshield -l answer.bin Tiny Smalltalk on Z80 42

kz80_lua

A Lua compiler producing standalone ROM images with an embedded virtual machine. Supports tables (Lua's associative arrays), first-class functions, closures, and familiar control structures. The generated VM interprets Lua bytecode, with frequently-used operations implemented in native Z80 code for performance.

# cat factorial.lua
function factorial(n)
  if n <= 1 then return 1 end
  return n * factorial(n - 1)
end
print("5! =", factorial(5))
# kz80_lua factorial.lua -o fact.bin
# retroshield -l fact.bin
Tiny Lua v0.1
5! = 120

kz80_fortran

A Fortran 77 interpreter with free-format input, REAL numbers via BCD floating point, block IF/THEN/ELSE/ENDIF, DO loops, subroutines, and functions. Requires the Teensy adapter for sufficient RAM. Written in C and cross-compiled with SDCC.

FORTRAN-77 Interpreter v0.3
RetroShield Z80
Ready.
> INTEGER X, Y
> X = 7
> Y = X * 6
> WRITE(*,*) 'Answer:', Y
Answer: 42

kz80_pascal

A Pascal interpreter implemented in pure Z80 assembly. Provides an interactive REPL for expression evaluation with integer arithmetic, boolean operations, and comparison operators. A testament to the challenges of assembly language programming.

Tiny Pascal v0.1
For RetroShield Z80
(Expression Eval Mode)
> 2 + 3 * 4
= 00014
> TRUE AND (5 > 3)
= TRUE

retrolang

A custom systems programming language with Pascal/C-like syntax, featuring 16-bit integers, 8-bit bytes, pointers, arrays, inline assembly, and full function support with recursion. Compiles to readable Z80 assembly before assembling to binary.

# cat squares.rl
proc main()
    var i: int;
    print("Squares: ");
    for i := 1 to 5 do
        printi(i * i);
        printc(32);
    end;
    println();
end;
# retrolang squares.rl --binary -o squares.bin
# retroshield -l squares.bin
Squares: 1 4 9 16 25

kz80_chip8

A static recompiler that transforms CHIP-8 programs into native Z80 code. Rather than interpreting CHIP-8 bytecode at runtime, the compiler analyzes each instruction and generates equivalent Z80 sequences. Classic games like Space Invaders and Tetris run directly on the hardware.

# kz80_chip8 -d ibm_logo.ch8
200: 00E0  CLS
202: A22A  LD I, 22A
204: 600C  LD V0, 0C
206: 6108  LD V1, 08
208: D01F  DRW V0, V1, 15
20A: 7009  ADD V0, 09
20C: A239  LD I, 239
20E: D01F  DRW V0, V1, 15
...
# kz80_chip8 ibm_logo.ch8 -o ibm.bin
# retroshield -l ibm.bin
CHIP-8 on Z80
[displays IBM logo]

Why Rust for Compiler Development?

The choice of Rust for our compiler workbench was not accidental. Several features make it exceptionally well-suited for this work.

Strong typing catches bugs early. When you're generating machine code, off-by-one errors or type mismatches can produce binaries that crash or compute wrong results. Rust's type system prevents many such errors at compile time.

Pattern matching excels at AST manipulation. Walking a syntax tree involves matching on node types and recursively processing children. Rust's match expressions with destructuring make this natural and exhaustive (the compiler warns if you forget a case).

Zero-cost abstractions. We can use high-level constructs like iterators, enums with data, and trait objects without runtime overhead. The generated compiler code is as efficient as hand-written C.

Excellent tooling. Cargo's test framework made it easy to build comprehensive test suites. Each compiler has dozens to hundreds of tests that run in seconds, providing confidence when making changes.

Memory safety without garbage collection. This matters less for the compilers themselves (which are desktop tools) but more for our mental model: thinking about ownership and lifetimes transfers naturally to thinking about Z80 register allocation and stack management.

Conclusion

Building these compilers has been a journey through computing history, from the Z80's 1976 architecture to modern Rust tooling, from the fundamentals of lexing and parsing to the intricacies of self-hosting. The BCD arithmetic that seemed like a curiosity became a practical necessity; the emulator that started as a debugging aid became essential infrastructure; the Rust workbench that felt like an optimization became the key to productivity.

The Z80 remains a remarkable teaching platform. Its simple instruction set is comprehensible in an afternoon, yet implementing real languages for it requires genuine compiler engineering. Every language in this collection forced us to think carefully about representation, evaluation, and code generation in ways that higher-level targets often obscure.

All of these projects are open source under BSD-3-Clause licenses. The compilers are available on both GitHub and crates.io, ready to install with cargo install. Whether you are interested in retrocomputing, compiler construction, or just curious how programming languages work at the metal level, I hope these tools and their source code prove useful.

The Z80 may be nearly 50 years old, but it still has lessons to teach.

Building a Browser-Based Z80 Emulator for the RetroShield

There's something deeply satisfying about running code on vintage hardware. The blinking cursor, the deliberate pace of execution, the direct connection between your keystrokes and the machine's response. The RetroShield by Erturk Kocalar brings this experience to modern makers by allowing real vintage CPUs like the Zilog Z80 to run on Arduino boards. But what if you could experience that same feeling directly in your web browser?

That's exactly what I set out to build: a complete Z80 emulator that runs RetroShield firmware in WebAssembly, complete with authentic CRT visual effects and support for multiple programming language interpreters.

Try It Now

Select a ROM below and click "Load ROM" to start. Click on the terminal to focus it, then type to interact with the interpreter.

Tip: Click on the terminal to focus it, then type to send input. Try loading Fortran 77 and entering: INTEGER X then X = 42 then WRITE(*,*) X


The RetroShield Platform

Before diving into the emulator, it's worth understanding what makes the RetroShield special. Unlike software emulators that simulate a CPU in code, the RetroShield uses a real vintage microprocessor. The Z80 variant features an actual Zilog Z80 chip running at its native speed, connected to an Arduino Mega or Teensy that provides:

  • Memory emulation: The Arduino's SRAM serves as the Z80's RAM, while program code is stored in the Arduino's flash memory
  • I/O peripherals: Serial communication, typically through an emulated MC6850 ACIA or Intel 8251 USART
  • Clock generation: The Arduino provides the clock signal to the Z80

This hybrid approach means you get authentic Z80 behavior - every timing quirk, every undocumented opcode - while still having the convenience of USB connectivity and easy program loading.

The RetroShield is open source hardware and available on Tindie. For larger programs, the Teensy adapter expands available RAM from about 4KB to 256KB.

The Hardware Up Close

Here's my RetroShield Z80 setup with the Teensy adapter:

RetroShield Z80 with Teensy adapter - overhead view

The Zilog Z80 CPU sits in the 40-pin DIP socket, with the Teensy 4.1 providing memory emulation and I/O handling beneath.

RetroShield Z80 - angled view showing the Z80 chip

RetroShield Z80 - side profile

RetroShield Z80 - full assembly

The physical hardware runs identically to the browser emulator above - the same ROMs, the same interpreters, the same authentic Z80 execution.

Why Build a Browser Emulator?

Having built several interpreters and tools for the RetroShield, I found myself constantly cycling through the development loop: edit code, compile, flash to Arduino, test, repeat. A software emulator would speed this up significantly, but I also wanted something I could share with others who might not have the hardware.

WebAssembly seemed like the perfect solution. It runs at near-native speed in any modern browser, requires no installation, and can be embedded directly in a web page. Someone curious about retro computing could try out a Fortran 77 interpreter or Forth environment without buying any hardware.

Building the Emulator in Rust

I chose Rust for the emulator implementation for several reasons:

  1. Excellent WASM support: Rust's wasm-bindgen and wasm-pack tools make compiling to WebAssembly straightforward
  2. Performance: Rust compiles to efficient code, important for cycle-accurate emulation
  3. The rz80 crate: Andre Weissflog's rz80 provides a battle-tested Z80 core

The emulator architecture is straightforward:

┌───────────────────────────────────────────────────┐
│                  Web Browser                      │
│  ┌─────────────────────────────────────────────┐  │
│  │              JavaScript/HTML                │  │
│  │    - Terminal display with CRT effects      │  │
│  │    - Keyboard input handling                │  │
│  │    - ROM loading and selection              │  │
│  └─────────────────────┬───────────────────────┘  │
│                        │ wasm-bindgen             │
│  ┌─────────────────────▼───────────────────────┐  │
│  │           Rust/WebAssembly Core             │  │
│  │  ┌─────────────┐  ┌───────────────────────┐ │  │
│  │  │  rz80 CPU   │  │  Memory (64KB)        │ │  │
│  │  │  Emulation  │  │  ROM + RAM            │ │  │
│  │  └─────────────┘  └───────────────────────┘ │  │
│  │  ┌───────────────────────────────────────┐  │  │
│  │  │  I/O Emulation                        │  │  │
│  │  │  - MC6850 ACIA (ports $80/$81)        │  │  │
│  │  │  - Intel 8251 USART (ports $00/$01)   │  │  │
│  │  └───────────────────────────────────────┘  │  │
│  └─────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────┘

Dual Serial Chip Support

One challenge was supporting ROMs that use different serial chips. The RetroShield ecosystem has two common configurations:

MC6850 ACIA (ports $80/$81): Used by many homebrew projects including MINT, Firth Forth, and my own Fortran and Pascal interpreters. The ACIA has four registers (control, status, transmit data, receive data) mapped to two ports, with separate read/write functions per port.

Intel 8251 USART (ports $00/$01): Used by Grant Searle's popular BASIC port and the EFEX monitor. The 8251 is simpler with just two ports - one for data and one for control/status.

The emulator detects which chip to use based on ROM metadata and configures the I/O handlers accordingly.
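A rough sketch of that dispatch is shown below. The port numbers follow the configurations just described; which side of each chip is control versus data is my assumption of the usual RetroShield convention, and the structure is illustrative rather than the emulator's actual code.

// Illustrative I/O dispatch between the two serial-chip configurations.
#[derive(Clone, Copy)]
enum SerialChip {
    Mc6850,    // ACIA on ports 0x80 (control/status, assumed) and 0x81 (data)
    Intel8251, // USART on ports 0x00 (data) and 0x01 (control/status)
}

struct IoBus {
    chip: SerialChip,
    rx_byte: Option<u8>, // next byte waiting from the "keyboard"
}

impl IoBus {
    fn port_in(&mut self, port: u8) -> u8 {
        match (self.chip, port) {
            (SerialChip::Mc6850, 0x80) => {
                // 6850 status: bit 1 (TDRE) always ready, bit 0 (RDRF) when a byte waits.
                let rdrf = if self.rx_byte.is_some() { 0x01 } else { 0x00 };
                0x02 | rdrf
            }
            (SerialChip::Mc6850, 0x81) => self.rx_byte.take().unwrap_or(0),
            (SerialChip::Intel8251, 0x00) => self.rx_byte.take().unwrap_or(0),
            (SerialChip::Intel8251, 0x01) => {
                // 8251 status: bit 0 (TxRDY) always ready, bit 1 (RxRDY) when a byte waits.
                let rxrdy = if self.rx_byte.is_some() { 0x02 } else { 0x00 };
                0x01 | rxrdy
            }
            _ => 0xFF, // unmapped port
        }
    }

    fn port_out(&mut self, port: u8, value: u8) {
        match (self.chip, port) {
            (SerialChip::Mc6850, 0x81) | (SerialChip::Intel8251, 0x00) => {
                // Transmit register: forward the byte to the terminal.
                print!("{}", value as char);
            }
            _ => {} // control-register writes ignored in this sketch
        }
    }
}

A real emulator also has to model control-register writes and reset behavior; the sketch only shows the port-level dispatch.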

Memory Layout

The standard RetroShield memory map looks like this:

Address Range   Size   Description
$0000-$7FFF     32KB   ROM/RAM (program dependent)
$8000-$FFFF     32KB   Extended RAM (Teensy adapter)

Most of my interpreters use a layout where code occupies the lower addresses and data/stack occupy higher memory. The Fortran interpreter, for example, places its program text storage at $6700 and variable storage at $7200, with the stack growing down from $8000.

The CRT Effect

No retro computing experience would be complete without the warm glow of a CRT monitor. I implemented several visual effects using pure CSS:

Scanlines: A repeating gradient overlay creates the horizontal line pattern characteristic of CRT displays:

.crt-container::before {
    background: linear-gradient(
        rgba(18, 16, 16, 0) 50%,
        rgba(0, 0, 0, 0.25) 50%
    );
    background-size: 100% 4px;
}

Chromatic aberration: CRT displays have slight color fringing due to the electron beam hitting phosphors at angles. I simulate this with animated text shadows that shift red and blue components:

@keyframes textShadow {
    0% {
        text-shadow: 0.4px 0 1px rgba(0,30,255,0.5),
                    -0.4px 0 1px rgba(255,0,80,0.3);
    }
    /* ... animation continues */
}

Flicker: Real CRTs had subtle brightness variations. A randomized opacity animation creates this effect without being distracting.

Vignette: The edges of CRT screens were typically darker than the center, simulated with a radial gradient.

The font: I'm using the Glass TTY VT220 font, a faithful recreation of the DEC VT220 terminal font from the 1980s. It's public domain and adds significant authenticity to the experience.

The Language Interpreters

The emulator comes pre-loaded with several language interpreters, each running as native Z80 code:

Fortran 77 Interpreter

This is my most ambitious RetroShield project: a subset of Fortran 77 running interpretively on an 8-bit CPU. It supports:

  • REAL numbers via BCD (Binary Coded Decimal) floating point with 8 significant digits
  • INTEGER and REAL variables with implicit typing (I-N are integers)
  • Arrays up to 3 dimensions
  • DO loops with optional STEP
  • Block IF/THEN/ELSE/ENDIF statements
  • SUBROUTINE and FUNCTION subprograms
  • Intrinsic functions: ABS, MOD, INT, REAL, SQRT

Here's a sample session:

FORTRAN-77 Interpreter v0.3
RetroShield Z80
Ready.
> PROGRAM FACTORIAL
  INTEGER I, N, F
  N = 7
  F = 1
  DO 10 I = 1, N
  F = F * I
  10 CONTINUE
  WRITE(*,*) N, '! =', F
  END
Program entered. Type RUN to execute.
> RUN
7 ! = 5040

The interpreter is written in C and cross-compiled with SDCC. At roughly 21KB of code, it pushes the limits of what's practical on the base RetroShield, which is why it requires the Teensy adapter.

MINT (Minimal Interpreter)

MINT is a wonderfully compact stack-based language. Each command is a single character, making it incredibly memory-efficient:

> 1 2 + .
3
> : SQ D * ;
> 5 SQ .
25

Firth Forth

A full Forth implementation by John Hardy. Forth's stack-based paradigm and extensibility made it popular on memory-constrained systems:

> : FACTORIAL ( n -- n! ) 1 SWAP 1+ 1 DO I * LOOP ;
> 7 FACTORIAL .
5040

Grant Searle's BASIC

A port of Microsoft BASIC that provides the classic BASIC experience:

Z80 BASIC Ver 4.7b
Ok
> 10 FOR I = 1 TO 10
> 20 PRINT I * I
> 30 NEXT I
> RUN
1
4
9
...

Technical Challenges

Building this project involved solving several interesting problems:

Memory Layout Debugging

The Fortran interpreter crashed mysteriously when entering lines with statement labels. After much investigation, I discovered the CODE section had grown to overlap with the DATA section. The linker was told to place data at $5000, but code had grown past that point. The fix was updating the memory layout to give code more room:

# Before: code overlapped data
LDFLAGS = --data-loc 0x5000

# After: proper separation
LDFLAGS = --data-loc 0x5500

This kind of bug is particularly insidious because it works fine until the code grows past a certain threshold.

BCD Floating Point

Implementing floating-point math on a Z80 without hardware support is challenging. I chose BCD (Binary Coded Decimal) representation because:

  1. Exact decimal representation: No binary floating-point surprises like 0.1 + 0.2 != 0.3
  2. Simpler conversion: Reading and printing decimal numbers is straightforward
  3. Reasonable precision: 8 BCD digits give adequate precision for an educational interpreter

Each BCD number uses 6 bytes: 1 for sign, 1 for exponent, and 4 bytes holding 8 packed decimal digits.
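As an illustration of that layout, a struct mirroring the 6-byte format might look like the sketch below. The field names, ordering, and sign convention are my assumptions, not the interpreter's actual C declarations.

// Sketch of the 6-byte BCD floating-point layout described above.
struct BcdFloat {
    sign: u8,        // 0 = positive, 1 = negative (assumed convention)
    exponent: i8,    // decimal exponent
    digits: [u8; 4], // 8 packed BCD mantissa digits, most significant byte first
}

impl BcdFloat {
    // Rough reconstruction of the numeric value, for illustration only.
    fn to_f64(&self) -> f64 {
        let mut mantissa = 0u64;
        for &byte in &self.digits {
            mantissa = mantissa * 100 + ((byte >> 4) as u64) * 10 + (byte & 0x0F) as u64;
        }
        let signed = if self.sign == 0 { mantissa as f64 } else { -(mantissa as f64) };
        signed * 10f64.powi(self.exponent as i32)
    }
}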

Cross-Compilation with SDCC

The Small Device C Compiler (SDCC) targets Z80 and other 8-bit processors. While it's an impressive project, there are quirks:

  • No standard library functions that assume an OS
  • Limited optimization compared to modern compilers
  • Memory model constraints require careful attention to data placement

I wrote a custom crt0.s startup file that initializes the stack, sets up the serial port, and calls main().

Running the Emulator

The emulator runs at roughly 3-4 MHz equivalent speed, depending on your browser and hardware. This is actually faster than the original Z80's typical 4 MHz, but the difference isn't noticeable for interactive use.

To try it:

  1. Visit the Z80 Emulator page
  2. Select a ROM from the dropdown (try Fortran 77)
  3. Click "Load ROM"
  4. Click on the terminal to focus it
  5. Start typing!

For Fortran, try entering a simple program:

PROGRAM HELLO
WRITE(*,*) 'HELLO, WORLD!'
END

Then type RUN to execute it.

What's Next

There's always more to do:

  • More ROM support: Expanding to additional retro languages like LISP, Logo, or Pilot
  • Debugger integration: Showing registers, memory, and allowing single-stepping
  • Save/restore state: Persisting the emulator state to browser storage
  • Mobile support: Touch-friendly keyboard for tablets and phones

Source Code and Links

Everything is open source:

The RetroShield hardware is available from 8bitforce on Tindie.

Acknowledgments

There's something magical about running 49-year-old CPU architectures in a modern web browser. The Z80 powered countless home computers, embedded systems, and arcade games. With this emulator, that legacy is just a click away.

The Machine Age, the Avant-Garde, and the New Cognitive Frontier

Gilbreths, Vorticism and the Echoes of Artificial Intelligence in the Twenty-First-Century Knowledge Economy


Introduction

The first decades of the twentieth century were a crucible of technological, scientific and cultural transformation. The steam-driven factory floor, the internal-combustion automobile, the telegraph-to-telephone network, and the nascent film industry all collapsed distance and accelerated the rhythm of everyday life. In that moment of accelerated modernity two seemingly unrelated phenomena emerged on opposite sides of the Atlantic: the Gilbreths' scientific-management laboratory in the United States, and the Vorticist avant-garde in Britain.

Both were responses to a shared "milieu"—a world in which the machine was no longer a peripheral tool but the central fact of existence. The Gilbreths brought the logic of the machine to human motion, dissecting work into its smallest elements (the "therbligs") and re-engineering tasks for efficiency, ergonomics and profit. Vorticists, led by Wyndham Lewis and allied with figures such as Ezra Pound and Henri Gaudier-Brzeska, seized upon the same mechanical dynamism in a visual language of sharp angles, fractured planes and kinetic abstraction.

A century later, the rise of artificial intelligence is reshaping the same terrain, but this time the target is not manual labor on the factory floor; it is knowledge work, the very act of thinking, deciding and creating. Yet the cultural logic that animated the Gilbreths and the Vorticists resurfaces in the AI era: a faith in rationalization, an obsession with breaking complex processes into analyzable units, a belief that design—whether of a workflow, a painting, or an algorithm—can impose order on the chaos of modern life.

This essay weaves together three strands. First, it sketches the broader historical and intellectual atmosphere that nurtured both the Gilbreths and Vorticism. Second, it juxtaposes their concrete practices and aesthetic strategies, drawing out the convergences in their conceptualization of motion, fragmentation, control and progress. Third, it maps these early-twentieth-century dynamics onto the present AI-driven re-organization of knowledge labor, arguing that the same cultural grammar underlies both epochs, even as the material substrates have shifted from bricklaying to neural networks.


1. The Early-Twentieth-Century Milieu

Technological Acceleration

Between 1900 and 1920 the world witnessed a multiplication of speed. The internal-combustion engine made automobiles and aircraft possible; the electric motor powered factories and household appliances; the telephone and radio collapsed geographic distance; the cinema rendered motion visible and repeatable. Historian David Edgerton has shown that these "new machines" were not simply tools but actors that reshaped social relations (Edgerton, The Shock of the Old, 2006). The very perception of time became quantifiable: a stopwatch could now register the fraction of a second it took a worker to raise a hammer, a clerk to type a word, or a runner to cross a track.

Scientific Management and the Quest for Rational Order

Frederick Winslow Taylor published The Principles of Scientific Management (1911), arguing that work could be transformed into a science through measurement, standardization and hierarchical control. Taylor's ideas traveled swiftly across the Atlantic, finding eager audiences in American industry and, later, in British engineering firms. The core premise was that human labor could be rendered as predictable, repeatable data, amenable to optimization.

The Gilbreths—Frank B. Gilbreth Sr. (a mechanical engineer) and Lillian M. Gilbreth (a psychologist)—expanded Taylor's blueprint. They introduced motion-study photography, a method of capturing workers' movements on film, then dissecting each frame to isolate "therbligs," the elementary units of motion (the word itself a reversal of "Gilbreth"). Their work was both scientific and humane: they claimed that eliminating unnecessary motions would reduce fatigue, increase safety and, paradoxically, improve the worker's quality of life. Their 1915 book Motion Study blended engineering diagrams with psychological insight, making the Gilbreths the archetype of industrial ergonomics.

The Cultural Avant-Garde

Concurrently, a wave of artistic experimentation was erupting across Europe. Cubism (Picasso, Braque) deconstructed visual reality into geometric facets; Futurism (Marinetti, Balla) glorified speed, noise and the machine; Constructivism (Tatlin, Rodchenko) championed functional design as a social weapon. In London, a small cadre of writers and painters, disillusioned with the lingering Victorian aesthetic, coalesced around the journal BLAST (1914-1915).

The manifesto of Vorticism, authored chiefly by Wyndham Lewis, declared a desire to capture the "vortex"—the point where energy, motion and form converge. Vorticist works are characterized by hard-edged angularity, stark color contrasts and a sense of centrifugal force. They rejected the lyrical softness of the Pre-Raphaelite tradition and the pastoral nostalgia of the Edwardian era, instead embracing the "hard, machine-like precision" of the new industrial world.

Overlapping Intellectual Currents

Both the Gilbreths and the Vorticists were embedded in a broader intellectual climate that prized measurement, abstraction and the re-creation of reality. The rise of psychophysics, behaviorism, and physiological psychology introduced the notion that human perception and action could be quantified. In parallel, philosophers such as Henri Bergson were wrestling with the concept of duration and the mechanization of time, while sociologists like Georg Simmel explored the "blasé" effect of urban modernity. The shared vocabulary of "efficiency," "speed," "fragmentation" and "design" became the lingua franca of both engineers and artists.


2. Parallel Strategies: From Motion Study to Vortex

The Machine as Central Fact

Both movements privileged the machine not as a peripheral tool but as a defining lens through which to understand humanity. The Gilbreths approached human labor as a component of a larger production system, treating the body like a mechanical part. Their methods of representation—motion-study film frames, chronocyclegraph traces, time-and-motion diagrams—reduced the worker to analyzable data. Their ontological stance held that reality could be reduced to measurable motions, with the machine serving as the baseline condition of life.

The Vorticists operated from a parallel framework but expressed it through aesthetic means. They rendered the human figure and urban landscape as networks of intersecting mechanical forms, employing sharp angular compositions, overlapping planes, and stylized gears and dynamized lines. For them, reality was a flux of forces, and the "vortex" captured the dynamic, mechanized energy of modern existence.

In both cases, the human body was subordinated to, or fused with, a system of motion. For the Gilbreths, a worker's hand was a lever; for the Vorticists, a dancer's limb could be a blade of light cutting through the air.

Fragmentation and Reassembly

The Gilbreths' therbligs (e.g., "reach," "grasp," "move") represent a conceptual atomization of work. By isolating each atomic action, they could re-assemble a sequence that minimized waste and maximized output. This analytical practice mirrors the visual fragmentation employed by Vorticist painters, who broke down objects into geometric primitives before re-constituting them on canvas.

Consider a typical Gilbreth motion-study photograph of a bricklayer: the image is a series of still frames, each showing the worker's arm at a distinct angle. The analyst's task is to trace the trajectory, identify redundant motions, and propose a smoother path. In a Vorticist painting such as Wyndham Lewis's The Crowd (1914-15), the crowd is rendered as a constellation of overlapping triangles and intersecting lines, each fragment suggesting a movement, a direction, a force. The similarity lies not in content but in methodology: a belief that complex reality becomes intelligible when decomposed into simpler parts.

Control, Order and Design

Both camps produced manifestos that served as design blueprints for their respective domains.

The Gilbreths published practical handbooks—Motion Study (1915), Applied Motion Study (1922)—that provided step-by-step protocols for reorganizing factories, hospitals and even homes. Their famous household experiment, depicted in Cheaper by the Dozen (1948), turned family life into a laboratory of efficiency.

The Vorticists issued the BLAST manifesto (1914), a terse proclamation that called for "a new art that will cut away the old, the sentimental, the decorative". It demanded clarity, precision, and a rejection of "softness"—values that echo the Gilbreths' insistence on eliminating "soft" motions that do not contribute to productive output.

Both therefore exerted cultural authority by prescribing how the world should be organized—whether through a Gantt chart or a bold, angular composition.

Ambivalent Faith in Progress

The Gilbreths believed that scientific optimization would lead to a more humane workplace. Yet their work also laid the groundwork for later Taylorist dehumanization, where workers became interchangeable cogs. Their optimism was tempered by the reality that efficiency could be weaponized for profit, not for worker welfare.

Vorticists, especially Lewis, celebrated the "machine aesthetic" but also expressed an undercurrent of skepticism. Lewis's later writings (e.g., The Apes of God, 1930) reveal a cynical view of mass culture and the mechanization of society. The vortex, while a source of energy, can also become a whirlpool of alienation.

Thus, both movements embody a dual vision of modernity: a promise of liberation through order, paired with a fear of loss of individuality.


3. The AI Turn: Re-Engineering Knowledge Work

From Bricklaying to Algorithms

If the Gilbreths turned the physical act of building into a set of measurable motions, today's AI researchers turn the cognitive act of reasoning into data. Machine-learning pipelines ingest millions of text fragments, label them, and train neural networks that can generate, summarize, and evaluate human language. The "therblig" of a knowledge worker—reading, analyzing, drafting—can now be instrumented by click-stream data, eye-tracking, and keystroke dynamics.

Just as a motion-study camera captured the kinematics of a worker, modern digital platforms capture the logistics of a mind at work. The "process mining" tools used in enterprise software map the sequence of digital actions much as Gilbreth charts mapped the sequence of physical actions.

Fragmentation of Cognitive Tasks

AI development follows the same atomization logic that underpinned both the Gilbreths and the Vorticists. Large language models (LLMs) are trained on tokenized text, where each token—often a sub-word fragment—is a basic unit of meaning. The model learns statistical relationships between tokens, then re-assembles them into sentences, paragraphs, or code.

Similarly, the micro-task platforms (e.g., Amazon Mechanical Turk) break down complex knowledge work (data labeling, content moderation) into tiny, repeatable units that can be distributed across a crowd. The "crowd" becomes a modern analog of the bricklayer's workshop, and the platform's algorithmic workflow is the contemporary "assembly line".

Design, Control and the Algorithmic Order

Just as the Gilbreths produced process charts and Vorticists drafted manifestos, AI researchers issue model cards, datasheets for datasets, and ethical guidelines. These documents codify how the system should behave, what data it may use, and how it ought to be evaluated—mirroring the design-by-specification ethos of early scientific management.

The rise of "prompt engineering"—the craft of phrasing inputs to LLMs to obtain desired outputs—can be read as a new form of motion study. Prompt engineers dissect the model's internal "motion" (attention patterns, token probabilities) and rearrange the prompt to optimize the "efficiency" of the model's response.

Ambivalence and Ethical Dilemmas

The Gilbreths' optimism about worker welfare was later undercut by automation-induced job loss and the rise of "scientific" surveillance of labor. Vorticism's celebration of the machine later seemed naïve in the face of the World Wars and the totalitarian use of technology.

AI today reproduces this ambivalence. Proponents hail it as a tool that will free humanity from routine cognition, allowing us to focus on creativity and empathy. Critics warn of algorithmic bias, disinformation, and the erosion of skilled labor. The "vortex" of AI can either be a centrifugal force that propels society forward or a black hole that absorbs human agency.


4. Comparative Synthesis: Themes Across the Century

The Machine as Ontological Baseline

Across all three movements, the machine serves not merely as a tool but as a fundamental framework for understanding human existence. The Gilbreths treated the human body as a component of a larger mechanical system. The Vorticists rendered human figures as geometric, machine-like forms on canvas. Today's AI researchers model human cognition as data pipelines and neural "circuits." Each epoch finds its own way to subordinate organic complexity to mechanical logic.

Fragmentation and Reassembly

The pattern of breaking down complex wholes into analyzable parts, then reconstituting them in optimized form, appears consistently across all three contexts. The Gilbreths isolated "therbligs" from continuous motion. Vorticist artists broke visual reality into planes and reassembled them into the vortex. Modern AI systems tokenize text, distribute cognitive tasks across micro-work platforms, and build modular model components. The underlying faith remains the same: that decomposition reveals the essence of things and enables their improvement.

Design as Control

Each movement produced its own form of prescriptive documentation. The Gilbreths created process charts, standardized tools, and ergonomic workstation designs. The Vorticists issued manifestos prescribing aesthetic order and "hard edges." AI practitioners develop model cards, governance frameworks, and prompt engineering guides. All represent attempts to codify and control complex systems through explicit design principles.

Faith in Progress Tempered by Anxiety

The Gilbreths promised that efficiency would bring both productivity and worker welfare, yet their methods also enabled dehumanization. The Vorticists celebrated speed and mechanical energy while hinting at alienation in their fractured compositions. AI promises cognitive augmentation while raising concerns about surveillance and the erosion of human expertise. Each technological moment carries this dual character: the hope of liberation alongside the fear of submission.

The Shifting Cultural Milieu

The Gilbreths operated within a milieu shaped by Taylorism, psychophysics, mass media, and rapid urbanization. The Vorticists emerged amid Futurism, Cubism, Constructivism, and the upheaval of the First World War. Today's AI revolution unfolds against the backdrop of big data, ubiquitous connectivity, platform capitalism, and post-pandemic remote work. Though the specific historical conditions differ, the structural logic linking these moments remains remarkably stable. What changes is the material substrate—bricks, paint, or bits—and the scale of impact—factory floors, galleries, or global digital ecosystems.


5. The "New Vortex": AI as Contemporary Avant-Garde

Just as Vorticism attempted to visualize the invisible forces of industrial modernity, AI functions as a conceptual vortex that reshapes how we see knowledge. The latent space of a language model can be visualized as a high-dimensional field of probabilities, a kind of abstract energy landscape. Artists and designers now employ AI to generate images (e.g., DALL-E, Midjourney) that echo Vorticist aesthetics: sharp, kinetic, synthetic. The algorithmic brushstroke replaces the painter's line, yet the visual language still speaks of speed, fragmentation, and mechanized beauty.

Moreover, the cultural discourse around AI mirrors the manifestos of early avant-garde movements. Papers such as "The Ethics of Artificial Intelligence" (Bostrom & Yudkowsky, 2014) and corporate statements like Google's AI Principles (2018) function as modern manifestos, setting out a vision of a rational, humane future while warning against the dark vortex of misuse.


6. Implications for the Future of Work and Culture

Re-thinking Efficiency

The Gilbreths taught that efficiency is not merely speed, but the minimization of wasteful motion. In the AI era, efficiency must be re-conceptualized as cognitive economy: reducing unnecessary mental load, automating routine reasoning, and presenting information in ways that align with human attention patterns. However, a purely quantitative approach—optimizing click-through rates or model loss functions—runs the risk of reducing the richness of human judgment, just as early Taylorism reduced workers to data points.

Agency and the "Human-Machine" Hybrid

Both Vorticism and the Gilbreths celebrated the integration of human and machine, yet they also highlighted a tension: the loss of the organic in favor of the mechanical. Today, human-AI collaboration (often called "centaur" models) seeks a synthesis where humans guide, correct, and imbue AI with values, while AI handles scale and pattern detection. The artistic "vortex" becomes a collaborative vortex—a shared space where the algorithm's output is a raw material that the human refines.

Ethical Governance as Modern Manifesto

Just as Vorticist manifestos set out a normative framework for artistic production, AI governance documents aim to define norms for algorithmic behavior. The challenge is to avoid the pitfalls of technocratic paternalism—the belief that a small elite can dictate the shape of society through scientific design, a stance implicit in early scientific management. Democratic participation, interdisciplinary oversight, and transparent "process charts" (e.g., model interpretability dashboards) can help ensure that the AI vortex does not become a black hole of control.


Conclusion

The Gilbreths and the Vorticists were, in their own ways, architects of the modern machine age. The former turned the human body into a calibrated component of industrial systems, while the latter rendered human experience as a kinetic, geometric abstraction. Both operated within a cultural environment that prized measurement, fragmentation, and the belief that design could impose order on a rapidly changing world.

A century later, artificial intelligence stands at a comparable crossroads. The same grammar of fragmentation, reassembly, and control underlies the transformation of knowledge work. Motion-study films have been supplanted by digital telemetry; therbligs have given way to token embeddings; Vorticist canvases now coexist with AI-generated visualizations of latent spaces.

Yet, as history shows, each wave of technological rationalization brings both liberation and alienation. The Gilbreths' optimism about a more humane workplace was later tempered by concerns over mechanistic dehumanization; Vorticism's celebration of the machine was later haunted by the specter of war and totalitarian control. In the AI epoch, we must likewise balance the promise of cognitive augmentation with vigilance against algorithmic opacity, bias, and the erosion of skilled judgment.

The lesson from the early twentieth century is not that the machine should be rejected, but that human agency must remain the central design parameter. If we can learn to treat AI not as a new "vortex" that swallows us, but as a collaborative partner that can be shaped through transparent, ethically grounded processes, we may fulfill the Gilbreths' original hope—more efficient work without sacrificing humanity—and realize a Vorticist vision of a world where form, function, and freedom converge in the bright, kinetic heart of the modern age.



Persistent Conversation Context Over Stateless Messaging APIs

Abstract

Modern AI assistants like ChatGPT have fundamentally changed user expectations around conversational interfaces. Users now expect to have coherent, multi-turn conversations where the AI remembers what was said earlier in the discussion. However, when building AI-powered bots on top of messaging platforms like Signal, Telegram, or SMS, developers face a fundamental architectural challenge: these platforms are inherently stateless. Each message arrives as an independent event with no built-in mechanism for maintaining conversational context.

This paper examines a production implementation that bridges this gap, enabling persistent multi-turn AI conversations over Signal's stateless messaging protocol. We explore the database schema design, the command parsing architecture, and a novel inline image reference system that allows users to incorporate visual context into ongoing conversations.

1. Introduction

1.1 The Statefulness Problem

Large Language Models (LLMs) like GPT-4 and GPT-5 are stateless by design. Each API call is independent—the model has no memory of previous interactions unless the developer explicitly includes conversation history in each request. Services like ChatGPT create the illusion of memory by maintaining conversation state server-side and replaying the full message history with each new user input.

When building a bot on a messaging platform, developers must solve this same problem, but with additional constraints:

  1. Message Independence: Each incoming message from Signal (or similar platforms) arrives as a discrete event with no connection to previous messages.

  2. Multi-User Environments: In group chats, multiple users may be conducting separate conversations with the bot simultaneously.

  3. Asynchronous Delivery: Messages may arrive out of order or with significant delays.

  4. Platform Limitations: Most messaging APIs provide no native support for threading or conversation tracking.

  5. Resource Constraints: Storing complete conversation histories for every interaction can become expensive, both in terms of storage and API costs (since longer histories mean more tokens per request).

1.2 Design Goals

Our implementation targets the following objectives:

  • Conversation Continuity: Users should be able to continue previous conversations by referencing a conversation ID.
  • New Conversation Simplicity: Starting a fresh conversation should require no special syntax—just send a message.
  • Multi-Modal Support: Users should be able to reference images stored in the system within their conversational context.
  • Cost Transparency: Each response should report the API cost and attribute it correctly for multi-user billing scenarios.
  • Thread Safety: The system must handle concurrent conversations from multiple users without data corruption.

2. Database Schema Design

2.1 Conversation Tables

The persistence layer uses SQLite with a straightforward two-table design:

CREATE TABLE gpt_conversations (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    created_at TEXT NOT NULL
);

CREATE TABLE gpt_conversation_messages (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    conversation_id INTEGER NOT NULL,
    created_at TEXT NOT NULL,
    role TEXT NOT NULL,
    content TEXT NOT NULL,
    FOREIGN KEY (conversation_id) REFERENCES gpt_conversations(id)
);

The gpt_conversations table serves as a lightweight header, storing only the conversation ID and creation timestamp. The actual message content lives in gpt_conversation_messages, which maintains the full history of each conversation.

2.2 Schema Rationale

Several design decisions merit explanation:

Minimal Conversation Metadata: The gpt_conversations table intentionally stores minimal information. We considered adding fields like user_id, title, or summary, but found these complicated the implementation without providing sufficient value. The conversation ID alone is enough to retrieve and continue any conversation.

Text Storage for Timestamps: Rather than storing Unix timestamps or Julian day numbers (SQLite's usual datetime conventions), we store ISO 8601 formatted strings. This preserves timezone information (critical for a system serving users across time zones) and keeps the values human-readable when debugging.

Content as Plain Text: The content field stores the raw message text, not a structured format. This keeps the schema simple and avoids premature optimization. When multi-modal content (like inline images) is needed, we resolve references at query time rather than storing binary data in the conversation history.

Foreign Key Constraints: The foreign key relationship between messages and conversations ensures referential integrity and makes cascading deletes straightforward to add (via ON DELETE CASCADE) if conversation cleanup is ever needed.

3. Conversation Management API

3.1 Core Operations

The database abstraction layer exposes three primary operations:

def create_gpt_conversation(first_message: GPTMessage) -> int:
    """Create a new conversation and return its ID."""
    with get_db_connection() as conn:
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO gpt_conversations (created_at) VALUES (?)",
            (pendulum.now("America/Chicago").isoformat(),),
        )
        new_id = cur.lastrowid
        conn.commit()
        add_message_to_conversation(new_id, first_message)
        return new_id

The create_gpt_conversation function creates the conversation record and immediately appends its first message. This ensures that no conversation exists without at least one message, maintaining data consistency.

def add_message_to_conversation(conversation_id: int, message: GPTMessage):
    """Append a message to an existing conversation."""
    with get_db_connection() as conn:
        cur = conn.cursor()
        cur.execute(
            """INSERT INTO gpt_conversation_messages
               (conversation_id, created_at, role, content)
               VALUES (?, ?, ?, ?)""",
            (conversation_id, pendulum.now().isoformat(),
             message.role, message.content),
        )
        conn.commit()

def get_messages_for_conversation(conversation_id: int) -> List[GPTMessage]:
    """Retrieve all messages in chronological order."""
    with get_db_connection() as conn:
        cur = conn.cursor()
        cur.execute(
            """SELECT created_at, role, content
               FROM gpt_conversation_messages
               WHERE conversation_id = ?
               ORDER BY created_at ASC""",
            (conversation_id,),
        )
        rows = cur.fetchall()
        return [GPTMessage(role=row[1], content=row[2]) for row in rows]

3.2 The GPTMessage Data Class

Messages are represented using a simple data class that mirrors the OpenAI API's message format:

@dataclass
class GPTMessage:
    role: str      # "user", "assistant", or "system"
    content: str   # The message text (or structured content for multi-modal)

This alignment with the OpenAI API structure means messages can be retrieved from the database and passed directly to the API without transformation, reducing complexity and potential for bugs.
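
For illustration, replaying a stored conversation only requires mapping each GPTMessage onto a role/content dictionary. The helper below is a minimal sketch; to_api_messages is an illustrative name, not part of the implementation.

def to_api_messages(messages: list[GPTMessage]) -> list[dict]:
    """Map stored GPTMessage objects onto the OpenAI chat message format."""
    return [{"role": m.role, "content": m.content} for m in messages]

# Example: a retrieved history becomes an API-ready payload
history = [
    GPTMessage(role="user", content="Summarize our last discussion."),
    GPTMessage(role="assistant", content="We covered the database schema."),
]
api_messages = to_api_messages(history)  # pass as the `messages` parameter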

4. Command Parsing and Conversation Flow

4.1 Command Syntax

The bot supports an optional conversation ID in its command syntax:

gpt <prompt>                    # Start new conversation
gpt <conversation_id> <prompt>  # Continue existing conversation

This is implemented via a regex pattern that makes the conversation ID optional:

def _process_gpt_command(text: str, command: str, model: GPTModel) -> bool:
    pat = rf"^{command} (\d+ )?\s?(.*)"
    m = re.search(pat, text, flags=re.IGNORECASE | re.DOTALL)
    if not m:
        return False

    conversation_id = m.groups()[0]  # None if not provided
    prompt = m.groups()[1]

4.2 Conversation Branching Logic

The command handler implements distinct paths for new versus continued conversations:

if conversation_id:
    # Continue existing conversation
    signal_archive_db.add_message_to_conversation(
        conversation_id, GPTMessage(role="user", content=prompt)
    )
    messages = signal_archive_db.get_messages_for_conversation(conversation_id)
    conv_id = conversation_id
else:
    # Start new conversation
    first_message = GPTMessage(role="user", content=prompt)
    conv_id = signal_archive_db.create_gpt_conversation(first_message)
    messages = signal_archive_db.get_messages_for_conversation(conv_id)

For continued conversations, we first persist the new user message, then retrieve the complete history. For new conversations, we create the conversation record (which automatically adds the first message), then retrieve it back. This ensures consistency—what we send to the API exactly matches what's stored in the database.

4.3 Response Handling and Storage

After receiving the AI's response, we store it as an assistant message:

gpt_response = gpt_api.gpt_completion(api_messages, model=model)
response_text = gpt_response.get("text", "Error: No text in response")

bot_message = GPTMessage(role="assistant", content=response_text)
signal_archive_db.add_message_to_conversation(conv_id, bot_message)

send_message(
    f"[conversation {conv_id}] {response_text}\n"
    f"cost: \${cost:.4f}, payer: {payer}"
)

The response always includes the conversation ID, making it easy for users to continue the conversation later. Including cost and payer information provides transparency in multi-user environments where API expenses are shared or attributed.

5. Multi-Modal Conversations: Inline Image References

5.1 The Challenge

Signal allows sending images as attachments, but these are ephemeral—they arrive with the message and aren't easily referenced later. For AI conversations, users often want to ask follow-up questions about an image discussed earlier, or reference images from the bot's archive in new conversations.

5.2 The imageid= Syntax

We implemented a lightweight markup syntax that lets users embed image references in their prompts:

gpt imageid=123 What's happening in this image?
gpt 42 imageid=123 imageid=456 Compare these two images

The syntax is intentionally simple—imageid= followed by a numeric ID. Multiple images can be included in a single prompt.

5.3 Implementation

Image references are resolved at request time through a two-stage process:

IMAGE_ID_REGEX = re.compile(r"imageid=(\d+)", re.IGNORECASE)

def _build_inline_image_content(prompt: str) -> tuple[list | str, list[int]]:
    """Convert imageid= references to OpenAI API image payloads."""

    image_ids = IMAGE_ID_REGEX.findall(prompt)
    if not image_ids:
        return prompt, []

    contents: list[dict] = []
    cleaned_prompt = IMAGE_ID_REGEX.sub("", prompt).strip()
    contents.append({"type": "text", "text": cleaned_prompt})

    embedded_ids: list[int] = []
    for raw_id in image_ids:
        image_id = int(raw_id)
        image_result = image_manager.get_image_by_id(image_id)
        if not image_result:
            raise ValueError(f"Image ID {image_id} not found")

        _, image_path = image_result
        image_bytes = image_manager.read_image_bytes(image_path)
        image_b64 = base64.b64encode(image_bytes).decode("utf-8")

        contents.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}
        })
        embedded_ids.append(image_id)

    return contents, embedded_ids

The function extracts image IDs from the prompt, removes the imageid= markers from the text, loads each referenced image from disk, base64-encodes it, and constructs the multi-modal content structure expected by the OpenAI API.

5.4 Applying to Full Conversations

Since conversations may span multiple messages with image references, we apply this transformation to the entire message history:

def _prepare_messages_with_inline_images(
    messages: list[GPTMessage],
) -> tuple[list[GPTMessage], list[int]]:
    """Transform all messages, resolving image references."""

    prepared: list[GPTMessage] = []
    referenced_image_ids: list[int] = []

    for message in messages:
        content = message.content
        if message.role == "user" and isinstance(content, str):
            content, ids = _build_inline_image_content(content)
            referenced_image_ids.extend(ids)

        prepared.append(GPTMessage(role=message.role, content=content))

    return prepared, referenced_image_ids

This approach means the database stores the original imageid= references as plain text, while the actual image data is resolved fresh for each API call. This has several advantages:

  1. Storage Efficiency: We don't duplicate image data in conversation history.
  2. Image Updates: If an image is re-processed or corrected, subsequent conversation continuations automatically use the updated version.
  3. Auditability: The stored conversation clearly shows which images were referenced.

6. Concurrency and Thread Safety

6.1 Threading Model

Each command runs in its own daemon thread to avoid blocking the main message processing loop:

def _process_gpt_command(text: str, command: str, model: GPTModel) -> bool:
    # ... validation ...

    current_user_context = gpt_api.get_user_context()

    def my_func():
        try:
            gpt_api.set_user_context(current_user_context)
            # ... conversation processing ...
        finally:
            gpt_api.clear_user_context()

    thread = threading.Thread(target=my_func)
    thread.daemon = True
    thread.start()
    return True

6.2 User Context Propagation

The system tracks which user initiated each request for cost attribution. Since this context is stored in thread-local storage, we must capture it before spawning the worker thread and restore it inside the thread:

current_user_context = gpt_api.get_user_context()

def my_func():
    try:
        gpt_api.set_user_context(current_user_context)
        # ... API calls use this context for billing ...
    finally:
        gpt_api.clear_user_context()

6.3 Database Connection Safety

SQLite connections are managed via context managers, ensuring proper cleanup even if exceptions occur:

with get_db_connection() as conn:
    cur = conn.cursor()
    # ... operations ...
    conn.commit()

Each database operation acquires its own connection, avoiding issues with SQLite's threading limitations while maintaining data consistency.
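
The paper does not show get_db_connection itself. A minimal sketch of one plausible implementation, assuming the standard library sqlite3 module, a module-level DB_PATH, and a fresh connection per operation:

import sqlite3
from contextlib import contextmanager

DB_PATH = "signal_archive.db"  # assumed filename, not taken from the implementation

@contextmanager
def get_db_connection():
    """Open a fresh SQLite connection per operation and always close it."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute("PRAGMA foreign_keys = ON")  # enforce the schema's FK constraints
    try:
        yield conn
    finally:
        conn.close()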

7. Practical Considerations

7.1 Conversation Length and Token Limits

As conversations grow, they consume more tokens per API call. The current implementation sends the complete history with each request, which can become expensive for long conversations. Production deployments might consider:

  • Summarization: Periodically summarizing older messages to reduce token count.
  • Windowing: Only sending the N most recent messages (a minimal sketch follows this list).
  • Smart Truncation: Using the model to identify and retain the most relevant context.
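
A minimal sketch of the windowing option, keeping any leading system prompt plus the most recent turns (the function name and default window size are illustrative):

def window_messages(messages: list[GPTMessage], max_messages: int = 20) -> list[GPTMessage]:
    """Keep a leading system message (if any) plus the most recent turns."""
    if len(messages) <= max_messages:
        return messages
    head = messages[:1] if messages and messages[0].role == "system" else []
    return head + messages[-(max_messages - len(head)):]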

7.2 Error Handling

The implementation includes robust error handling for common failure modes:

try:
    api_messages, embedded_images = _prepare_messages_with_inline_images(messages)
except ValueError as e:
    logger.error(f"Failed to attach images for GPT request: {e}")
    send_message(str(e))
    return

Invalid image references fail fast with clear error messages rather than sending malformed requests to the API.

7.3 User Experience

The response format provides all information users need to continue conversations:

[conversation 42] Here's my analysis of the image...
cost: $0.0234, payer: jon

Users can immediately reference conversation 42 in their next message to continue the discussion.

8. Conclusion

Building persistent conversational AI over stateless messaging platforms requires careful consideration of data modeling, state management, and user experience. Our implementation demonstrates that a relatively simple database schema combined with thoughtful command parsing can provide a seamless multi-turn conversation experience.

The inline image reference system shows how platform limitations can be overcome through creative syntax design, allowing users to build rich multi-modal conversations without the messaging platform's native support.

This architecture has proven robust in production, handling concurrent users, long-running conversations, and multi-modal content while maintaining data consistency and providing transparency into API costs. The patterns described here are applicable beyond Signal to any stateless messaging platform where persistent AI conversations are desired.

A Bespoke LLM Code Scanner

Building a Nightly AI Code Scanner with vLLM, ROCm, and JIRA Integration

I've been running a ballistics calculation engine: a Rust physics library with several companion components, including a Flask app wrapper with machine learning capabilities, Python bindings, and a Ruby gem. There are Android and iOS apps, too. The codebase has grown to about 15,000 lines of Rust and another 10,000 lines of Python. At this scale, bugs hide in edge cases: division by zero, floating-point precision issues in transonic drag calculations, unwrap() panics on unexpected input.

What if I could run an AI code reviewer every night while I sleep? Not a cloud API with per-token billing that could run up a $500 bill scanning 50 files, but a local model running on my own hardware, grinding through the codebase and filing JIRA tickets for anything suspicious.

This is the story of building that system.

The Hardware: AMD Strix Halo on ROCm 7.0

I'm running this on a server with an AMD Radeon 8060S (Strix Halo APU) — specifically the gfx1151 architecture. This isn't a data center GPU. It's essentially an integrated GPU with 128GB of shared memory, configured to give 96GB to the GPU and leave the rest to system RAM. Not the 80GB of HBM3 you'd get on an H100, but enough to run a 32B parameter model comfortably.

The key insight: for batch processing where latency doesn't matter, you don't need bleeding-edge hardware. A nightly scan can take hours. I'm not serving production traffic; I'm analyzing code files one at a time with a 30-second cooldown between requests. The APU handles this fine.

Hardware Configuration:
- AMD Radeon 8060S (gfx1151 Strix Halo APU)
- 96GB shared memory
- ROCm 7.0 with HSA_OVERRIDE_GFX_VERSION=11.5.1

The HSA_OVERRIDE_GFX_VERSION environment variable is critical. Without it, ROCm doesn't recognize the Strix Halo architecture. This is the kind of sharp edge you hit running ML on AMD consumer hardware.

Model Selection: Qwen2.5-Coder-7B-Instruct

I tested several models:

Model                      Parameters  Context  Quality    Notes
DeepSeek-Coder-V2-Lite     16B         32k      Good       Requires flash_attn (ROCm issues)
Qwen3-Coder-30B            30B         32k      Excellent  Too slow on APU
Qwen2.5-Coder-7B-Instruct  7B          16k      Good       Sweet spot
TinyLlama-1.1B             1.1B        4k       Poor       Too small for code review

Qwen2.5-Coder-7B-Instruct hits the sweet spot. It understands Rust and Python well enough to spot real issues, runs fast enough to process 50 files per night, and doesn't require flash attention (which has ROCm compatibility issues on consumer hardware).

vLLM Setup

vLLM provides an OpenAI-compatible API server that makes integration trivial. Here's the startup command:

source ~/vllm-rocm7-venv/bin/activate
export HSA_OVERRIDE_GFX_VERSION=11.5.1
python -m vllm.entrypoints.openai.api_server \
    --model Qwen/Qwen2.5-Coder-7B-Instruct \
    --host 0.0.0.0 \
    --port 8000 \
    --trust-remote-code \
    --max-model-len 16384 \
    --gpu-memory-utilization 0.85

The --max-model-len 16384 limits context to 16k tokens. My code files rarely exceed 500 lines, and they get truncated before analysis anyway (more on that below), so this is plenty. The --gpu-memory-utilization 0.85 leaves headroom for the system.

I run this in a Python venv rather than Docker because ROCm device passthrough with Docker on Strix Halo is finicky. Sometimes you have to choose pragmatism over elegance.

Docker Configuration (When It Works)

For reference, here's the Docker Compose configuration I initially built. It works on dedicated AMD GPUs but has issues on integrated APUs:

services:
  vllm:
    image: rocm/vllm-dev:latest
    container_name: vllm-code-scanner
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
    group_add:
      - video
      - render
    security_opt:
      - seccomp:unconfined
    cap_add:
      - SYS_PTRACE
    ipc: host
    environment:
      - HSA_OVERRIDE_GFX_VERSION=11.5.1
      - PYTORCH_ROCM_ARCH=gfx1151
      - HIP_VISIBLE_DEVICES=0
    volumes:
      - /home/alex/models:/models
      - /home/alex/.cache/huggingface:/root/.cache/huggingface
    ports:
      - "8000:8000"
    command: >
      python -m vllm.entrypoints.openai.api_server
      --model Qwen/Qwen2.5-Coder-7B-Instruct
      --host 0.0.0.0
      --port 8000
      --trust-remote-code
      --max-model-len 16384
      --gpu-memory-utilization 0.85
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 120s

  scanner:
    build: .
    container_name: code-scanner-agent
    depends_on:
      vllm:
        condition: service_healthy
    environment:
      - VLLM_HOST=vllm
      - VLLM_PORT=8000
      - JIRA_EMAIL=${JIRA_EMAIL}
      - JIRA_API_KEY=${JIRA_API_KEY}
    volumes:
      - /home/alex/projects:/projects:ro
      - ./config:/app/config:ro
      - /home/alex/projects/code-scanner-results:/app/results

The ipc: host and seccomp:unconfined are necessary for ROCm to function properly. The depends_on with service_healthy ensures the scanner waits for vLLM to be fully loaded before starting — important since model loading can take 2-3 minutes.

The scanner Dockerfile is minimal:

FROM python:3.11-slim

WORKDIR /app

RUN apt-get update && apt-get install -y \
    git curl ripgrep \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY agent/ /app/agent/
COPY prompts/ /app/prompts/
COPY config/ /app/config/

CMD ["python", "-m", "agent.scanner"]

Including ripgrep in the container enables fast pattern matching when the scanner needs to search for related code.
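
The article does not show how the scanner calls ripgrep; one way it could pull in related code is a small subprocess wrapper like the sketch below (find_references is a hypothetical helper, not part of the project):

import subprocess

def find_references(symbol: str, repo_path: str, max_lines: int = 50) -> str:
    """Search the repository for a symbol with ripgrep and return matching lines."""
    result = subprocess.run(
        ["rg", "--line-number", "--no-heading", symbol, repo_path],
        capture_output=True,
        text=True,
    )
    # rg exits with code 1 when nothing matches; that simply yields an empty result
    lines = result.stdout.splitlines()[:max_lines]
    return "\n".join(lines)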

The Scanner Architecture

The system has three main components:

┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Systemd       │     │    vLLM         │     │     JIRA        │
│   Timer         │────▶│    Server       │────▶│     API         │
│   (11pm daily)  │     │  (Qwen 7B)      │     │   (tickets)     │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                               │
                               ▼
                    ┌─────────────────────┐
                    │   Scanner Agent     │
                    │ - File discovery    │
                    │ - Code analysis     │
                    │ - Finding validation│
                    │ - JIRA integration  │
                    └─────────────────────┘

Configuration

Everything is driven by a YAML configuration file:

vllm:
  host: "10.1.1.27"
  port: 8000
  model: "Qwen/Qwen2.5-Coder-7B-Instruct"

schedule:
  start_hour: 23  # 11pm
  end_hour: 6     # 6am
  max_iterations: 50
  cooldown_seconds: 30

repositories:
  - name: "ballistics-engine"
    path: "/home/alex/projects/ballistics-engine"
    languages: ["rust"]
    scan_patterns:
      - "src//*.rs"
    exclude_patterns:
      - "target/"
      - "*.lock"

  - name: "ballistics-api"
    path: "/home/alex/projects/ballistics-api"
    languages: ["python", "rust"]
    scan_patterns:
      - "ballistics//*.py"
      - "ballistics_rust/src//*.rs"
    exclude_patterns:
      - "__pycache__/"
      - "target/"
      - ".venv/"

jira:
  enabled: true
  project_key: "MBA"
  confidence_threshold: 0.75
  labels: ["ai-detected", "code-scanner"]
  max_tickets_per_run: 10
  review_threshold: 5

The confidence_threshold: 0.75 is crucial. Without it, the model reports every minor style issue. At 75%, it focuses on things it's genuinely concerned about.

The review_threshold: 5 triggers a different behavior: if the model finds more than 5 issues, it creates a single summary ticket for manual review rather than flooding JIRA with individual tickets. This is a safety valve for when the model goes haywire.

Structured Outputs with Pydantic

LLMs are great at finding issues but terrible at formatting output consistently. Left to their own devices, they'll return findings as markdown, prose, JSON with missing fields, or creative combinations thereof.

The solution is structured outputs. I define Pydantic models for exactly what I expect:

from enum import Enum
from typing import Optional

from pydantic import BaseModel, Field

class Severity(str, Enum):
    CRITICAL = "critical"
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"
    INFO = "info"

class FindingType(str, Enum):
    BUG = "bug"
    PERFORMANCE = "performance"
    SECURITY = "security"
    CODE_QUALITY = "code_quality"
    POTENTIAL_ISSUE = "potential_issue"

class CodeFinding(BaseModel):
    file_path: str = Field(description="Path to the file")
    line_start: int = Field(description="Starting line number")
    line_end: Optional[int] = Field(default=None)
    finding_type: FindingType
    severity: Severity
    title: str = Field(max_length=100)
    description: str
    suggestion: Optional[str] = None
    confidence: float = Field(ge=0.0, le=1.0)
    code_snippet: Optional[str] = None

The confidence field is a float between 0 and 1. The model learns to be honest about uncertainty — "I think this might be a bug (0.6)" versus "This is definitely division by zero (0.95)."

In a perfect world, I'd use vLLM's Outlines integration for guided JSON generation. In practice, I found that prompting Qwen for JSON and parsing the response works reliably:

def _analyze_code(self, file_path: str, content: str) -> List[CodeFinding]:
    messages = [
        {"role": "system", "content": self.system_prompt},
        {"role": "user", "content": f"""Analyze this code for bugs and issues.

File: {file_path}

{content}

Return a JSON array of findings. Each finding must have:
- file_path: string
- line_start: number
- finding_type: "bug" | "performance" | "security" | "code_quality"
- severity: "critical" | "high" | "medium" | "low" | "info"
- title: string (max 100 chars)
- description: string
- suggestion: string or null
- confidence: number 0-1

If no issues found, return an empty array: []"""}
    ]

    response = self._call_llm(messages)

    # Parse JSON from response (handles markdown code blocks too)
    if response.strip().startswith('['):
        findings_data = json.loads(response)
    elif '```json' in response:
        json_str = response.split('```json')[1].split('```')[0]
        findings_data = json.loads(json_str)
    elif '[' in response:
        start = response.index('[')
        end = response.rindex(']') + 1
        findings_data = json.loads(response[start:end])
    else:
        return []

    # Validate each finding with Pydantic
    findings = []
    for item in findings_data:
        try:
            finding = CodeFinding(**item)
            findings.append(finding)
        except ValidationError:
            pass  # Skip malformed findings

    return findings

The System Prompt

The system prompt is where you teach the model what you care about. Here's mine:

You are an expert code reviewer specializing in Rust and Python.
Your job is to find bugs, performance issues, security vulnerabilities,
and code quality problems.

You are analyzing code from a ballistics calculation project that includes:
- A Rust physics engine for trajectory calculations
- Python Flask API with ML models
- PyO3 bindings between Rust and Python

Key areas to focus on:
1. Numerical precision issues (floating point errors, rounding)
2. Edge cases in physics calculations (division by zero, negative values)
3. Memory safety in Rust code
4. Error handling (silent failures, unwrap panics)
5. Performance bottlenecks (unnecessary allocations, redundant calculations)
6. Security issues (input validation, injection vulnerabilities)

Be conservative with findings - only report issues you are confident about.
Avoid false positives.

The phrase "Be conservative with findings" is doing heavy lifting. Without it, the model reports everything that looks slightly unusual. With it, it focuses on actual problems.

Timeout Handling

Large files (500+ lines) can take a while to analyze. My initial 120-second timeout caused failures on complex files. I bumped it to 600 seconds (10 minutes):

response = requests.post(
    f"{self.base_url}/chat/completions",
    json=payload,
    headers={"Content-Type": "application/json"},
    timeout=600
)

I also truncate files to 300 lines, so for longer files the model only sees the beginning. This is a trade-off — I might miss bugs in the back half of long files — but it keeps scans predictable and prevents timeout cascades. I plan to revisit this in future iterations.

lines = content.split('\n')
if len(lines) > 300:
    content = '\n'.join(lines[:300])
    logger.info("Truncated to 300 lines for analysis")

JIRA Integration

When the scanner finds issues, it creates JIRA tickets automatically. The API is straightforward:

def create_jira_tickets(self, findings: List[CodeFinding]):
    jira_base_url = f"https://{jira_domain}/rest/api/3"

    for finding in findings:
        # Map severity to JIRA priority
        priority_map = {
            Severity.CRITICAL: "Highest",
            Severity.HIGH: "High",
            Severity.MEDIUM: "Medium",
            Severity.LOW: "Low",
            Severity.INFO: "Lowest"
        }

        payload = {
            "fields": {
                "project": {"key": "MBA"},
                "summary": f"[AI] {finding.title}",
                "description": {
                    "type": "doc",
                    "version": 1,
                    "content": [{"type": "paragraph", "content": [
                        {"type": "text", "text": build_description(finding)}
                    ]}]
                },
                "issuetype": {"name": "Bug" if finding.finding_type == FindingType.BUG else "Task"},
                "priority": {"name": priority_map[finding.severity]},
                "labels": ["ai-detected", "code-scanner"]
            }
        }

        response = requests.post(
            f"{jira_base_url}/issue",
            json=payload,
            auth=(jira_email, jira_api_key),
            headers={"Content-Type": "application/json"}
        )

The [AI] prefix in the summary makes it obvious these tickets came from the scanner. The ai-detected label allows filtering.

I add a 2-second delay between ticket creation to avoid rate limiting:

time.sleep(2)  # Rate limit protection

Systemd Scheduling

The scanner runs nightly via systemd timer:

# /etc/systemd/system/code-scanner.timer
[Unit]
Description=Run Code Scanner nightly at 11pm

[Timer]
OnCalendar=*-*-* 23:00:00
Persistent=true
RandomizedDelaySec=300

[Install]
WantedBy=timers.target

The RandomizedDelaySec=300 adds up to 5 minutes of random delay. This prevents the scanner from always starting at exactly 11:00:00, which helps if multiple services share the same schedule.

The service unit is a oneshot that runs the scanner script:

# /etc/systemd/system/code-scanner.service
[Unit]
Description=Code Scanner Agent
After=docker.service

[Service]
Type=oneshot
User=alex
WorkingDirectory=/home/alex/projects/ballistics/code-scanner
ExecStart=/home/alex/projects/ballistics/code-scanner/scripts/start_scanner.sh
TimeoutStartSec=25200

The TimeoutStartSec=25200 (7 hours) gives the scanner enough time to complete even if it scans every file.

Sample Findings

Here's what the scanner actually finds. From a recent run:

{
  "file_path": "/home/alex/projects/ballistics-engine/src/fast_trajectory.rs",
  "line_start": 115,
  "finding_type": "bug",
  "severity": "high",
  "title": "Division by zero in fast_integrate when velocity approaches zero",
  "description": "The division dt / velocity_magnitude could result in division by zero if the projectile stalls (velocity_magnitude = 0). This can happen at the apex of a high-angle shot.",
  "suggestion": "Add a check for velocity_magnitude < epsilon before division, or clamp to a minimum value.",
  "confidence": 0.85
}

This is a real issue. In ballistics calculations, a projectile fired at a high angle momentarily has zero horizontal velocity at the apex. Without a guard, this causes a panic.

Not every finding is valid. The model occasionally flags intentional design decisions as "issues." But at a 75% confidence threshold, the false positive rate is manageable — maybe 1 in 10 findings needs to be closed as "not a bug."

Trade-offs and Lessons

What works well:

  • Finding numerical edge cases (division by zero, overflow)
  • Spotting unwrap() calls on Options that might be None
  • Identifying missing error handling
  • Flagging dead code and unreachable branches

What doesn't work as well:

  • Understanding business logic (the model doesn't know physics)
  • Spotting subtle race conditions in concurrent code
  • False positives on intentional patterns

Operational lessons:

  • Start with a low iteration limit (10-20 files) to test the pipeline
  • Monitor the first few runs manually before trusting it
  • Keep credentials in .env files excluded from rsync
  • The 300-line truncation is aggressive; consider chunking for long files

Handling JSON Parse Failures

Despite asking for JSON, LLMs sometimes produce malformed output. I see two failure modes:

  1. Truncated JSON: The model runs out of tokens mid-response, leaving an unterminated string or missing closing brackets.
  2. Wrapped JSON: The model adds explanatory text around the JSON, like "Here are the findings:" before the array.

My parser handles both:

def parse_findings_response(response: str) -> list:
    """Extract JSON from potentially messy LLM output."""
    response = response.strip()

    # Best case: raw JSON array
    if response.startswith('['):
        try:
            return json.loads(response)
        except json.JSONDecodeError:
            pass  # Fall through to extraction

    # Common case: JSON in markdown code block
    if '```json' in response:
        try:
            json_str = response.split('```json')[1].split('```')[0]
            return json.loads(json_str)
        except (IndexError, json.JSONDecodeError):
            pass

    # Fallback: extract JSON array from surrounding text
    if '[' in response and ']' in response:
        try:
            start = response.index('[')
            end = response.rindex(']') + 1
            return json.loads(response[start:end])
        except json.JSONDecodeError:
            pass

    # Give up
    logger.warning("Could not extract JSON from response")
    return []

When parsing fails, I log the error and skip that file rather than crashing the entire scan. In a typical 50-file run, I see 2-3 parse failures — annoying but acceptable.

Testing the Pipeline

Before trusting the scanner with JIRA ticket creation, I ran it in "dry run" mode:

# Set max iterations low and disable JIRA
export MAX_ITERATIONS=5
# In config: jira.enabled: false

python run_scanner_direct.py

This scans just 5 files and prints findings without creating tickets. I manually reviewed each finding:

  • True positive: Division by zero in trajectory calculation — good catch
  • False positive: Flagged intentional unwrap() on a guaranteed-Some Option — needs better context
  • True positive: Dead code path never executed — valid cleanup suggestion
  • Marginal: Style suggestion about variable naming — below my quality threshold

After tuning the confidence threshold and system prompt, the true positive rate improved to roughly 90%.

Monitoring and Observability

The scanner writes detailed logs to stdout and a JSON results file. Sample log output:

2025-11-26 15:48:25 - CODE SCANNER AGENT STARTING
2025-11-26 15:48:25 - Max iterations: 50
2025-11-26 15:48:25 - Model: Qwen/Qwen2.5-Coder-7B-Instruct
2025-11-26 15:48:25 - Starting scan of ballistics-engine
2025-11-26 15:48:25 - Found 35 files to scan
2025-11-26 15:48:25 - Scanning: src/trajectory_sampling.rs
2025-11-26 15:48:25 -   Truncated to 300 lines for analysis
2025-11-26 15:49:24 -   Found 5 findings (>= 75% confidence)
2025-11-26 15:49:24 -     [LOW] Redundant check for step_m value
2025-11-26 15:49:24 -     [LOW] Potential off-by-one error

The JSON results include full finding details:

{
  "timestamp": "20251126_151136",
  "total_findings": 12,
  "repositories": [
    {
      "repository": "ballistics-engine",
      "files_scanned": 35,
      "findings": [...],
      "duration_seconds": 1842.5,
      "iterations_used": 35
    }
  ]
}

I keep the last 30 result files (configurable) for historical comparison. Eventually I'll build a dashboard showing finding trends over time.
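
The retention logic is simple enough to sketch. Assuming timestamped JSON files in a results directory (the path, glob pattern, and keep count here are illustrative):

from pathlib import Path

def prune_old_results(results_dir: str = "results", keep: int = 30) -> None:
    """Delete all but the newest `keep` result files."""
    files = sorted(
        Path(results_dir).glob("scan_results_*.json"),  # assumed naming convention
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    for old_file in files[keep:]:
        old_file.unlink()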

What's Next

The current system is batch-oriented: run once per night, file tickets, done. Future improvements I'm considering:

  1. Pre-commit integration: Run on changed files only, fast enough for CI
  2. Retrieval-augmented context: Include related files when analyzing (e.g., when scanning a function, include its callers)
  3. Learning from feedback: Track which tickets get closed as "not a bug" and use that to tune prompts
  4. Multi-model ensemble: Run the same code through two models, only file tickets when both agree

For now, though, the simple approach works. Every morning I check JIRA, triage the overnight findings, and fix the real bugs. The model isn't perfect, but it finds things I miss. And unlike a human reviewer, it never gets tired, never skips files, and never has a bad day.

Get the Code

I've open-sourced the complete scanner implementation on GitHub: llm-code-scanner

The project includes:

  • Dual scanning modes: Fast nightly scans via vLLM and comprehensive weekly analyses through Ollama
  • Smart deduplication: SQLite database prevents redundant issue tracking across runs
  • JIRA integration: Automatically creates tickets for findings above your confidence threshold
  • Email reports: SendGrid integration for daily/weekly summaries
  • Multi-language support: Python, Rust, TypeScript, Kotlin, Swift, Go, and more

To get started, clone the repo, configure your scanner_config.yaml with your vLLM/Ollama server details, and run python -m agent.scanner. The README has full setup instructions including environment variables for JIRA and SendGrid integration.

Building Cross-Platform Rust Binaries: A Multi-Architecture Build Orchestration System

When developing ballistics-engine, a high-performance ballistics calculation library written in Rust, I faced a challenge: how do I efficiently build and distribute binaries for multiple operating systems and architectures? The answer led to the creation of an automated build orchestration system that leverages diverse hardware—from single-board computers to powerful x86_64 servers—to build native binaries for macOS, Linux, FreeBSD, NetBSD, and OpenBSD across both ARM64 and x86_64 architectures. Now, you are probably wondering why I am bothering to show love for the BSD Trilogy; the answer is simple: because I want to. Sure they are a bit esoteric, but I ran FreeBSD for years as my mail server. I still like the BSDs.

This article explores the architecture, implementation, and lessons learned from building a production-grade multi-platform build system that powers https://ballistics.zip, where users can download pre-built binaries for their platform with a simple curl command.

curl --proto '=https' --tlsv1.2 -sSf https://ballistics.zip/install.sh | sh

The Problem: Cross-Platform Distribution

Rust's cross-compilation capabilities are impressive, but they have limitations:

  • Cross-compilation complexity: While Rust supports cross-compilation, getting it working reliably for BSD systems (especially with system dependencies) is challenging
  • Native testing: You need to test on actual hardware to ensure binaries work correctly
  • Binary compatibility: Different BSD versions and configurations require native builds
  • Performance verification: Emulated builds may behave differently than native ones

The solution? Build natively on each target platform using actual hardware or high-performance emulation.

Architecture Overview

The build orchestration system consists of three main components:

1. Build Nodes (Physical and Virtual Machines)

  • macOS systems (x86_64 and aarch64) - Local builds
  • Linux x86_64 server - Remote build via SSH
  • FreeBSD ARM64 - Single-board computer (Raspberry Pi 4)
  • OpenBSD ARM64 - QEMU VM emulated on x86_64 (rig.localnet)
  • NetBSD x86_64 and ARM64 - QEMU VMs

2. Orchestrator (Python-based coordinator)

  • Reads build node configuration from build-nodes.yaml
  • Executes builds in parallel across all nodes
  • Collects artifacts via SSH/SCP
  • Generates SHA256 checksums
  • Uploads to Google Cloud Storage
  • Updates version metadata

3. Distribution (ballistics.zip website)

  • Serves install script at https://ballistics.zip
  • Hosts binaries in GCS bucket (gs://ballistics-releases/)
  • Provides version detection and automatic downloads
  • Supports version fallback for platforms with delayed releases

Hardware Infrastructure

Single-Board Computers

Orange Pi 5 Max (ARM64)

  • Role: Host for NetBSD ARM64 VM
  • CPU: Rockchip RK3588 (8-core ARM Cortex-A76/A55)
  • RAM: 16GB
  • Why: Native ARM64 hardware for running QEMU VMs
  • Host IP: 10.1.1.10
  • VM IPs:
  • NetBSD ARM64: 10.1.1.15
  • OpenBSD ARM64 (native, disabled): 10.1.1.11

Raspberry Pi 4 (ARM64)

  • Role: FreeBSD ARM64 native builds
  • CPU: Broadcom BCM2711 (quad-core Cortex-A72)
  • RAM: 8GB
  • Why: Stable FreeBSD support, reliable ARM64 platform
  • IP: 10.1.1.7

x86_64 ("rig.localnet")

  • Role: Linux builds, BSD VM host, emulated ARM64 builds
  • CPU: Intel i9
  • RAM: 96GB
  • IP: 10.1.1.27 (Linux host), 10.1.1.17 (KVM host)
  • VMs Hosted:
  • FreeBSD x86_64: 10.1.1.21
  • OpenBSD x86_64: 10.1.1.20
  • OpenBSD ARM64 (emulated): 10.1.1.23
  • NetBSD x86_64: 10.1.1.19

Local macOS Development Machine

  • Role: macOS binary builds (both architectures)
  • Build Method: Local cargo builds with target flags
  • Architectures:
  • aarch64-apple-darwin (Apple Silicon)
  • x86_64-apple-darwin (Intel Macs)

A Surprising Discovery: Emulated ARM64 Performance

One of the most interesting findings during development was that emulated ARM64 builds on powerful x86_64 hardware are significantly faster than native ARM64 builds on single-board computers.

Performance Comparison

  • Native ARM64 (Orange Pi 5 Max): 99+ minutes per build
  • Emulated ARM64 on x86_64: 15m 37s ⚡

The emulated build on rig.localnet (full QEMU ARM64 emulation on an x86_64 host; KVM cannot accelerate a cross-architecture guest) finished roughly six times faster than the native ARM64 hardware. This is because:

  1. The x86_64 server has significantly more powerful CPU cores
  2. QEMU's multi-threaded TCG emulation makes good use of those fast cores
  3. Rust compilation is primarily CPU-bound and benefits from faster single-core performance
  4. The x86_64 server has faster storage (NVMe vs eMMC/SD card)

As a result, the native OpenBSD ARM64 node on the Orange Pi is now disabled in favor of the emulated version.

Prerequisites

SSH Key-Based Authentication

Critical: The orchestration system requires passwordless SSH access to all remote build nodes. Here's how to set it up:

  1. Generate SSH key (if you don't have one):
ssh-keygen -t ed25519 -C "build-orchestrator"
  2. Copy public key to each build node:
# For each build node
ssh-copy-id user@build-node-ip

# Examples:
ssh-copy-id alex@10.1.1.27      # Linux x86_64
ssh-copy-id freebsd@10.1.1.7     # FreeBSD ARM64
ssh-copy-id root@10.1.1.20       # OpenBSD x86_64
ssh-copy-id root@10.1.1.23       # OpenBSD ARM64 emulated
ssh-copy-id root@10.1.1.19       # NetBSD x86_64
ssh-copy-id root@10.1.1.15       # NetBSD ARM64
  3. Test SSH access:
ssh user@build-node-ip "uname -a"

Software Requirements

On Build Orchestrator Machine:

  • Python 3.8+
  • pyyaml (pip install pyyaml)
  • Google Cloud SDK (gcloud command) for GCS uploads
  • SSH client

On Each Build Node:

  • Rust toolchain (cargo, rustc)
  • Build essentials (compiler, linker)
  • curl, wget, or ftp (for downloading source)
  • Sufficient disk space (~2GB for build artifacts)

BSD-Specific Requirements

NetBSD: Install curl via pkgsrc (native ftp doesn't support HTTPS)

# Bootstrap pkgsrc
cd /usr && ftp -o pkgsrc.tar.gz http://cdn.netbsd.org/pub/pkgsrc/current/pkgsrc.tar.gz
tar -xzf pkgsrc.tar.gz
cd /usr/pkgsrc/bootstrap && ./bootstrap --prefix=/usr/pkg

# Install curl
/usr/pkg/bin/pkgin -y update
/usr/pkg/bin/pkgin -y install curl

OpenBSD: Native ftp supports HTTPS

pkg_add rust git

FreeBSD: Use pkg for everything

pkg install -y rust git curl

The ballistics.zip Website and Install Script

How It Works

https://ballistics.zip serves as the primary distribution point for pre-built ballistics-engine binaries. The system uses:

  1. GCS Bucket: gs://ballistics-releases/ - Binary artifacts

  2. CDN: Google Cloud CDN provides global distribution

  3. Install Script: Universal installer that:
     • Detects OS and architecture
     • Downloads appropriate binary
     • Verifies SHA256 checksum
     • Installs to /usr/local/bin

Usage

Basic installation:

curl -sSL https://ballistics.zip/install.sh | bash

Specific version:

curl -sSL https://ballistics.zip/install.sh | bash -s -- --version 0.13.3

Different install location:

curl -sSL https://ballistics.zip/install.sh | bash -s -- --prefix ~/.local

Install Script Architecture

The install.sh script intelligently handles:

Platform Detection:

OS=$(uname -s | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m)

case "$ARCH" in
  x86_64|amd64) ARCH="x86_64" ;;
  aarch64|arm64) ARCH="aarch64" ;;
  *) echo "Unsupported architecture: $ARCH"; exit 1 ;;
esac

PLATFORM="${OS}-${ARCH}"  # e.g., "openbsd-aarch64"

Version Fallback: If a requested version isn't available for a platform, the script automatically finds the latest available version:

# If openbsd-aarch64 0.13.3 doesn't exist, fall back to 0.13.2
AVAILABLE_VERSION=$(curl -sL $BASE_URL/versions.txt | grep "^$PLATFORM:" | cut -d: -f2)

Checksum Verification:

EXPECTED_SHA=$(cat "$BINARY.sha256")
ACTUAL_SHA=$(sha256sum "$BINARY" | awk '{print $1}')

if [ "$EXPECTED_SHA" != "$ACTUAL_SHA" ]; then
  echo "Checksum verification failed!"
  exit 1
fi

Build Orchestration System Deep Dive

Configuration: build-nodes.yaml

The heart of the system is build-nodes.yaml, which defines all build targets:

nodes:
  # macOS builds (local machine)
  - name: macos-aarch64
    host: local
    target: aarch64-apple-darwin
    build_command: |
      cd /tmp && rm -rf ballistics-engine-{version}
      curl -L -o v{version}.tar.gz https://github.com/ajokela/ballistics-engine/archive/refs/tags/v{version}.tar.gz
      tar xzf v{version}.tar.gz
      cd ballistics-engine-{version}
      cargo build --release --target {target}
    binary_path: /tmp/ballistics-engine-{version}/target/{target}/release/ballistics
    enabled: true

  # Linux x86_64 (remote via SSH)
  - name: linux-x86_64
    host: alex@10.1.1.27
    target: x86_64-unknown-linux-gnu
    build_command: |
      cd /tmp && rm -rf ballistics-engine-{version}
      wget -q https://github.com/ajokela/ballistics-engine/archive/refs/tags/v{version}.tar.gz
      tar xzf v{version}.tar.gz
      cd ballistics-engine-{version}
      ~/.cargo/bin/cargo build --release --target {target}
    binary_path: /tmp/ballistics-engine-{version}/target/{target}/release/ballistics
    enabled: true

  # OpenBSD ARM64 emulated (FASTEST ARM64 BUILD!)
  - name: openbsd-aarch64-emulated
    host: root@10.1.1.23
    target: aarch64-unknown-openbsd
    build_command: |
      cd /tmp && rm -rf ballistics-engine-{version}
      ftp -o v{version}.tar.gz https://github.com/ajokela/ballistics-engine/archive/refs/tags/v{version}.tar.gz
      tar xzf v{version}.tar.gz
      cd ballistics-engine-{version}
      cargo build --release
    binary_path: /tmp/ballistics-engine-{version}/target/release/ballistics
    enabled: true

  # NetBSD x86_64 (HTTPS support via pkgsrc curl)
  - name: netbsd-x86_64
    host: root@10.1.1.19
    target: x86_64-unknown-netbsd
    build_command: |
      cd /tmp && rm -rf ballistics-engine-{version}
      /usr/pkg/bin/curl -L -o v{version}.tar.gz https://github.com/ajokela/ballistics-engine/archive/refs/tags/v{version}.tar.gz
      tar xzf v{version}.tar.gz
      cd ballistics-engine-{version}
      /usr/pkg/bin/cargo build --release
    binary_path: /tmp/ballistics-engine-{version}/target/release/ballistics
    enabled: true

Orchestrator Workflow

The orchestrator.py script coordinates the entire build process:

Step 1: Parallel Build Execution

def build_on_node(node, version):
    # Fill in the {version} and {target} placeholders from build-nodes.yaml
    build_command = node['build_command'].format(version=version, target=node['target'])
    if node['host'] == 'local':
        # Local build
        subprocess.run(build_command, shell=True, check=True)
    else:
        # Remote build via SSH
        ssh_command = f"ssh {node['host']} '{build_command}'"
        subprocess.run(ssh_command, shell=True, check=True)
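
The snippet above builds a single node; for the parallel fan-out shown in the build output, a thread-pool dispatch along these lines would work (a sketch of one approach, not necessarily how orchestrator.py does it):

from concurrent.futures import ThreadPoolExecutor, as_completed

def build_all(nodes: list[dict], version: str) -> dict:
    """Run build_on_node for every enabled node concurrently and collect results."""
    enabled = [n for n in nodes if n.get("enabled")]
    results = {}
    with ThreadPoolExecutor(max_workers=max(1, len(enabled))) as pool:
        futures = {pool.submit(build_on_node, node, version): node for node in enabled}
        for future in as_completed(futures):
            node = futures[future]
            try:
                future.result()
                results[node["name"]] = "ok"
            except Exception as exc:
                results[node["name"]] = f"failed: {exc}"
    return results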

Step 2: Artifact Collection

def collect_artifacts(node, version):
    binary_name = f"ballistics-{version}-{node['name']}"

    if node['host'] == 'local':
        shutil.copy(node['binary_path'], f"./{binary_name}")
    else:
        # Download via SCP
        scp_command = f"scp {node['host']}:{node['binary_path']} ./{binary_name}"
        subprocess.run(scp_command, shell=True, check=True)

Step 3: Checksum Generation

def generate_checksum(binary_path):
    with open(binary_path, 'rb') as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()

    with open(f"{binary_path}.sha256", 'w') as f:
        f.write(sha256)

Step 4: Upload to GCS

def upload_to_gcs(version):
    bucket_path = f"gs://ballistics-releases/{version}/"

    # Upload binaries and checksums
    subprocess.run(f"gsutil -m cp ballistics-* {bucket_path}", shell=True)

    # Set public read permissions
    subprocess.run(f"gsutil -m acl ch -u AllUsers:R {bucket_path}*", shell=True)

    # Update latest-version.txt
    with open('latest-version.txt', 'w') as f:
        f.write(version)
    subprocess.run("gsutil cp latest-version.txt gs://ballistics-releases/", shell=True)

Running a Build

Dry-run (test without uploading):

cd build-orchestrator
./build.sh --version 0.13.4 --dry-run

Production build:

./build.sh --version 0.13.4

Output:

Building ballistics-engine v0.13.4
===========================================

Enabled build nodes: 7
- macos-aarch64 (local)
- macos-x86_64 (local)
- linux-x86_64 (alex@10.1.1.27)
- freebsd-aarch64 (freebsd@10.1.1.7)
- openbsd-aarch64-emulated (root@10.1.1.23)
- netbsd-x86_64 (root@10.1.1.19)
- netbsd-aarch64 (root@10.1.1.15)

Starting parallel builds...
[macos-aarch64] Building... (PID: 12345)
[linux-x86_64] Building... (PID: 12346)
...

Build results:
 macos-aarch64 (45s)
 linux-x86_64 (28s)
 freebsd-aarch64 (6m 32s)
 openbsd-aarch64-emulated (15m 37s)   FASTEST ARM64!
...

Uploading to gs://ballistics-releases/0.13.4/
 Uploaded 7 binaries
 Uploaded 7 checksums
 Updated latest-version.txt

Build complete! 🎉
Total time: 16m 12s

Adding New Build Nodes

Interactive Script

The easiest way to add a new node is using the interactive script:

cd build-orchestrator
./add-node.sh

This will prompt you for:

  • Node name (e.g., openbsd-aarch64-emulated)
  • SSH host (e.g., root@10.1.1.23 or local)
  • Rust target triple (e.g., aarch64-unknown-openbsd)
  • Build commands (how to download and build)
  • Binary location (where the compiled binary is located)

Manual Configuration

Alternatively, edit build-nodes.yaml directly:

  - name: your-new-platform
    host: user@ip-address  # or 'local' for local builds
    target: rust-target-triple
    build_command: |
      # Commands to download source and build
      cd /tmp && rm -rf ballistics-engine-{version}
      curl -L -o v{version}.tar.gz https://github.com/...
      tar xzf v{version}.tar.gz
      cd ballistics-engine-{version}
      cargo build --release
    binary_path: /path/to/compiled/binary
    enabled: true

Variables:

  • {version}: Replaced with target version (e.g., 0.13.4)
  • {target}: Replaced with Rust target triple

Setting Up a New VM

Example: OpenBSD ARM64 Emulated

  1. Create VM on host:
ssh alex@rig.localnet
cd /opt/bsd-vms/openbsd-arm64-emulated
  2. Create boot script:
cat > boot.sh << 'EOF'
#!/bin/bash
exec qemu-system-aarch64 \
  -M virt,highmem=off \
  -cpu cortex-a57 \
  -smp 4 \
  -m 2G \
  -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
  -drive file=openbsd.qcow2,if=virtio,format=qcow2 \
  -netdev bridge,id=net0,br=br0 \
  -device virtio-net-pci,netdev=net0,romfile=,mac=52:54:00:12:34:99 \
  -nographic
EOF
chmod +x boot.sh
  3. Create systemd service:
sudo tee /etc/systemd/system/openbsd-arm64-emulated-vm.service > /dev/null << 'EOF'
[Unit]
Description=OpenBSD ARM64 VM (Emulated on x86_64)
After=network.target

[Service]
Type=simple
User=alex
WorkingDirectory=/opt/bsd-vms/openbsd-arm64-emulated
ExecStart=/opt/bsd-vms/openbsd-arm64-emulated/boot.sh
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable openbsd-arm64-emulated-vm.service
sudo systemctl start openbsd-arm64-emulated-vm.service
  4. Configure networking (assign static IP 10.1.1.23)

  5. Install build tools inside VM:

ssh root@10.1.1.23
pkg_add rust git
  6. Test SSH access:
ssh root@10.1.1.23 "cargo --version"
  7. Add to build-nodes.yaml and test:
./build.sh --version 0.13.3 --dry-run

GitHub Webhook Integration (Optional)

For fully automated builds triggered by GitHub releases:

1. Deploy Webhook Receiver to Cloud Run

cd build-orchestrator
gcloud run deploy ballistics-build-webhook \
  --source . \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars GITHUB_WEBHOOK_SECRET=your-secret-here

2. Configure GitHub Webhook

  1. Go to: https://github.com/yourusername/your-repo/settings/hooks
  2. Add webhook:

     • Payload URL: https://ballistics-build-webhook-xxx.run.app/webhook
     • Content type: application/json
     • Secret: Your webhook secret
     • Events: Select "Releases" only

3. Test

Create a new release on GitHub, and the webhook will automatically trigger builds for all platforms!
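
The receiver itself is a small web service. A minimal sketch of the signature check and trigger logic, assuming Flask, a GITHUB_WEBHOOK_SECRET environment variable, and a placeholder call out to build.sh:

import hashlib
import hmac
import os
import subprocess

from flask import Flask, abort, request

app = Flask(__name__)
SECRET = os.environ["GITHUB_WEBHOOK_SECRET"].encode()

@app.route("/webhook", methods=["POST"])
def webhook():
    # Verify the payload really came from GitHub (X-Hub-Signature-256 header)
    signature = request.headers.get("X-Hub-Signature-256", "")
    expected = "sha256=" + hmac.new(SECRET, request.data, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        abort(401)

    event = request.get_json(silent=True) or {}
    if event.get("action") == "published" and "release" in event:
        version = event["release"]["tag_name"].lstrip("v")
        # Placeholder: hand off to the orchestrator for the released version
        subprocess.Popen(["./build.sh", "--version", version])

    return "", 204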

Performance Metrics and Insights

From real-world builds of ballistics-engine v0.13.3:

Platform                    Hardware         Build Time  Notes
macOS aarch64               Apple M1/M2      45s         Native Apple Silicon
macOS x86_64                Intel i7/i9      30s         Cross-compile on Apple Silicon
Linux x86_64                Xeon/EPYC        25s         Fastest overall ⚡
FreeBSD aarch64             Raspberry Pi 4   6m 32s      Native ARM64 hardware
OpenBSD aarch64 (emulated)  x86_64 QEMU      15m 37s     ⚡ FASTEST ARM64
OpenBSD aarch64 (native)    Orange Pi 5 Max  99+ min     Disabled due to slower speed
NetBSD x86_64               x86_64 VM        3m 45s      KVM acceleration
NetBSD aarch64              Orange Pi VM     8m 12s      QEMU on ARM64 host

Key Insights:

  1. x86_64 is fastest: Modern x86_64 CPUs dominate for single-threaded compilation
  2. Emulation wins for ARM64: x86_64 emulating ARM64 beats native ARM64 SBCs
  3. SBCs are viable: Raspberry Pi and Orange Pi work well for native builds, but slower
  4. Parallel execution: Running all 7 builds in parallel takes only ~16 minutes (the longest pole is the emulated OpenBSD ARM64 build at 15m 37s)

Conclusion

Building a custom multi-platform build orchestration system may seem daunting, but the benefits are substantial:

→ Full control: Own your build infrastructure

→ Native builds: Real hardware ensures compatibility

→ Cost-effective: Low operational costs after initial hardware investment

→ Fast iteration: Parallel builds complete in ~16 minutes

→ Flexibility: Easy to add new platforms

→ Learning: Deep understanding of cross-platform development

The surprising discovery that emulated ARM64 on powerful x86_64 hardware outperforms native ARM64 single-board computers has practical implications: you don't always need native hardware for every architecture. Strategic use of emulation can provide better performance while maintaining compatibility.

For projects requiring broad platform support (especially BSD systems not well-served by traditional CI/CD), this approach offers a reliable, maintainable, and cost-effective solution.

Architecture Diagram

graph TB
  subgraph "Trigger Sources"
    GH[GitHub Release<br/>v0.13.x]
    MANUAL[Manual Execution<br/>./build.sh]
  end
  subgraph "Build Orchestrator"
    ORCH[Python Orchestrator<br/>orchestrator.py]
    CONFIG[Build Configuration<br/>build-nodes.yaml]
  end
  subgraph "Build Nodes - Local"
    MAC_ARM[macOS ARM64<br/>Apple Silicon<br/>~45s]
    MAC_X86[macOS x86_64<br/>Rosetta 2<br/>~30s]
  end
  subgraph "Build Nodes - Remote x86_64"
    LINUX_X86[Linux x86_64<br/>alex@10.1.1.27<br/>~25s]
    FREEBSD_X86[FreeBSD x86_64<br/>root@10.1.1.21<br/>~4m]
    OPENBSD_X86[OpenBSD x86_64<br/>root@10.1.1.20<br/>~12m]
    NETBSD_X86[NetBSD x86_64<br/>root@10.1.1.19<br/>~3m 45s]
  end
  subgraph "Build Nodes - Remote ARM64"
    FREEBSD_ARM[FreeBSD ARM64<br/>freebsd@10.1.1.7<br/>~6m 32s]
    OPENBSD_ARM_EMU[OpenBSD ARM64<br/>root@10.1.1.23<br/>Emulated on x86_64<br/>~15m 37s ⚡]
    NETBSD_ARM[NetBSD ARM64<br/>root@10.1.1.15<br/>~8m 12s]
  end
  subgraph "Artifact Collection"
    COLLECT[SCP Collection<br/>Pull binaries from nodes]
    CHECKSUM[Generate SHA256<br/>checksums]
  end
  subgraph "Distribution"
    GCS[Google Cloud Storage<br/>gs://ballistics-releases/]
    WEBSITE[ballistics.zip<br/>Install Script]
  end

  GH -->|webhook| ORCH
  MANUAL -->|CLI| ORCH
  CONFIG -->|reads| ORCH
  ORCH -->|SSH parallel builds| MAC_ARM
  ORCH -->|SSH parallel builds| MAC_X86
  ORCH -->|SSH parallel builds| LINUX_X86
  ORCH -->|SSH parallel builds| FREEBSD_X86
  ORCH -->|SSH parallel builds| OPENBSD_X86
  ORCH -->|SSH parallel builds| NETBSD_X86
  ORCH -->|SSH parallel builds| FREEBSD_ARM
  ORCH -->|SSH parallel builds| OPENBSD_ARM_EMU
  ORCH -->|SSH parallel builds| NETBSD_ARM
  MAC_ARM -->|binary| COLLECT
  MAC_X86 -->|binary| COLLECT
  LINUX_X86 -->|binary| COLLECT
  FREEBSD_X86 -->|binary| COLLECT
  OPENBSD_X86 -->|binary| COLLECT
  NETBSD_X86 -->|binary| COLLECT
  FREEBSD_ARM -->|binary| COLLECT
  OPENBSD_ARM_EMU -->|binary| COLLECT
  NETBSD_ARM -->|binary| COLLECT
  COLLECT --> CHECKSUM
  CHECKSUM --> GCS
  GCS --> WEBSITE

  style OPENBSD_ARM_EMU fill:#90EE90
  style LINUX_X86 fill:#87CEEB
  style GCS fill:#FFD700
  style WEBSITE fill:#FFD700

Diagram Legend

  • Green: Fastest ARM64 build (emulated on powerful x86_64)
  • Blue: Fastest overall build (native Linux x86_64)
  • Yellow: Distribution endpoints

Build Flow

  1. Trigger: GitHub release webhook or manual execution
  2. Parallel Execution: All enabled build nodes start simultaneously
  3. Collection: Orchestrator collects binaries via SCP
  4. Verification: SHA256 checksums generated for integrity
  5. Upload: Binaries and checksums uploaded to GCS
  6. Availability: Install script immediately serves new version

Rockchip RK3588 NPU Deep Dive: Real-World AI Performance Across Multiple Platforms

Introduction

The Rockchip RK3588 has emerged as one of the most compelling ARM System-on-Chips (SoCs) for edge AI applications in 2024-2025, featuring a dedicated 6 TOPS Neural Processing Unit (NPU) integrated alongside powerful Cortex-A76/A55 CPU cores. This SoC powers a growing ecosystem of single-board computers and system-on-modules from manufacturers worldwide, including Orange Pi, Radxa, FriendlyElec, Banana Pi, and numerous industrial board makers.

But how does the RK3588's NPU perform in real-world scenarios? In this comprehensive deep dive, I'll share detailed benchmarks of the RK3588 NPU testing both Large Language Models (LLMs) and computer vision workloads, with primary testing on the Orange Pi 5 Max and comparative analysis against the closely-related RK3576 found in the Banana Pi CM5-Pro.

RK3588 NPU Performance Benchmarks

The RK3588 Ecosystem: Devices and Availability

The Rockchip RK3588 powers a diverse range of single-board computers (SBCs) and system-on-modules (SoMs) from multiple manufacturers in 2024-2025:

Consumer SBCs:

Industrial and Embedded Modules:

Recent Developments:

  • RK3588S2 (2024-2025) - Updated variant with modernized memory controllers and platform I/O while maintaining the same 6 TOPS NPU performance

The RK3576, found in devices like the Banana Pi CM5-Pro, shares the same 6 TOPS NPU architecture as the RK3588 but features different CPU cores (Cortex-A72/A53 vs. A76/A55), making it an interesting comparison point for NPU-focused workloads.

Hardware Overview

RK3588 SoC Specifications

Built on an 8nm process, the Rockchip RK3588 integrates:

CPU:

  • 4x ARM Cortex-A76 @ 2.4 GHz (high-performance cores)
  • 4x ARM Cortex-A55 @ 1.8 GHz (efficiency cores)

NPU:

  • 6 TOPS total performance
  • 3-core architecture (2 TOPS per core)
  • Shared memory architecture
  • Optimized for INT8 operations
  • Supports INT4/INT8/INT16/BF16/TF32 quantization formats
  • Device path: /sys/kernel/iommu_groups/0/devices/fdab0000.npu

GPU:

  • ARM Mali-G610 MP4 (quad-core)
  • 8K@30fps H.265/VP9 decoding
  • 4K@60fps H.264/H.265 encoding

Architecture: ARM64 (aarch64)

Test Platform: Orange Pi 5 Max

For these benchmarks, we used the Orange Pi 5 Max with:

Software Stack:

  • RKNPU Driver: v0.9.8
  • RKLLM Runtime: v1.2.2 (for LLM inference)
  • RKNN Runtime: v1.6.0 (for general AI models)
  • RKNN-Toolkit-Lite2: v2.3.2

Test Setup

I conducted two separate benchmark suites:

  1. Large Language Model (LLM) Testing using RKLLM
  2. Computer Vision Model Testing using RKNN-Toolkit2

Both tests used a two-system approach:

  • Conversion System: AMD Ryzen AI Max+ 395 (16 cores / 32 threads, x86_64) running Ubuntu 24.04.3 LTS
  • Inference System: Orange Pi 5 Max (ARM64) with RK3588 NPU

This reflects the real-world workflow where model conversion happens on powerful workstations, and inference runs on edge devices.

Part 1: Large Language Model Performance

Model: TinyLlama 1.1B Chat

Source: Hugging Face (TinyLlama-1.1B-Chat-v1.0)

Parameters: 1.1 billion

Original Size: ~2.1 GB (505 MB model.safetensors)

Conversion Performance (x86_64)

Converting the Hugging Face model to RKLLM format on the AMD Ryzen AI Max+ 395:

Phase    Time      Details
Load     0.36s     Loading Hugging Face model
Build    22.72s    W8A8 quantization + NPU optimization
Export   56.38s    Export to .rkllm format
Total    79.46s    ~1.3 minutes

Output Model:

  • File: tinyllama_W8A8_rk3588.rkllm
  • Size: 1142.9 MB (1.14 GB)
  • Compression: 54% of original size
  • Quantization: W8A8 (8-bit weights, 8-bit activations)

Note: The RK3588 only supports W8A8 quantization for LLM inference, not W4A16.
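
For reference, the conversion itself boils down to three toolkit calls that map directly onto the Load/Build/Export phases in the table above. This is a hedged sketch: the method and argument names follow the published RKLLM-Toolkit examples as I understand them, so treat them as assumptions and confirm against the documentation for your toolkit version.

# Hypothetical sketch of the Hugging Face -> .rkllm conversion, run on the x86_64 box.
# Method and argument names are assumptions based on RKLLM-Toolkit examples; verify
# against the toolkit release you have installed.
from rkllm.api import RKLLM

llm = RKLLM()

# Load: read the Hugging Face checkpoint (TinyLlama-1.1B-Chat-v1.0 in this test).
ret = llm.load_huggingface(model="./TinyLlama-1.1B-Chat-v1.0")
assert ret == 0, "model load failed"

# Build: W8A8 quantization plus NPU-specific optimization targeting the RK3588.
ret = llm.build(do_quantization=True, quantized_dtype="w8a8", target_platform="rk3588")
assert ret == 0, "build/quantization failed"

# Export: write the deployable .rkllm artifact used on the Orange Pi 5 Max.
ret = llm.export_rkllm("./tinyllama_W8A8_rk3588.rkllm")
assert ret == 0, "export failed"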

NPU Inference Results

Hardware Detection:

I rkllm: rkllm-runtime version: 1.2.2, rknpu driver version: 0.9.8, platform: RK3588
I rkllm: rkllm-toolkit version: 1.2.2, max_context_limit: 2048, npu_core_num: 3
I rkllm: Enabled cpus: [4, 5, 6, 7]
I rkllm: Enabled cpus num: 4

Key Observations:

  • ✅ NPU successfully detected and initialized
  • ✅ All 3 NPU cores utilized
  • ✅ 4 CPU cores (Cortex-A76) enabled for coordination
  • ✅ Model loaded and text generation working
  • ✅ Coherent English text output

Expected Performance (from Rockchip official benchmarks):

  • TinyLlama 1.1B W8A8 on RK3588: ~10-15 tokens/second
  • First token latency: ~200-500ms

Is This Fast Enough for Real-Time Conversation?

To put the 10-15 tokens/second performance in perspective, let's compare it to human reading speeds:

Human Reading Rates:

  • Silent reading: 200-300 words/minute (3.3-5 words/second)
  • Reading aloud: 150-160 words/minute (2.5-2.7 words/second)
  • Speed reading: 400-700 words/minute (6.7-11.7 words/second)

Token-to-Word Conversion:

  • LLM tokens ≈ 0.75 words on average (1.33 tokens per word)
  • 10-15 tokens/sec = ~7.5-11.25 words/second

Performance Analysis:

  • ✅ 2-4x faster than reading aloud (2.5-2.7 words/sec)
  • ✅ 2-3x faster than comfortable silent reading (3.3-5 words/sec)
  • ✅ Comparable to speed reading (6.7-11.7 words/sec)

Verdict: The RK3588 NPU running TinyLlama 1.1B generates text significantly faster than most humans can comfortably read, making it well-suited for real-time conversational AI, chatbots, and interactive applications at the edge.

This is particularly impressive for a $180 device consuming only 5-6W of power. Users won't be waiting for the AI to "catch up" - instead, the limiting factor is human reading speed, not the NPU's generation capability.

Output Quality Verification

To verify the model produces meaningful, coherent responses, I tested it with several prompts:

Test 1: Factual Question

Prompt: "What is the capital of France?"
Response: "The capital of France is Paris."

✅ Result: Correct and concise answer.

Test 2: Simple Math

Prompt: "What is 2 plus 2?"
Response: "2 + 2 = 4"

✅ Result: Correct mathematical calculation.

Test 3: List Generation

Prompt: "List 3 colors: red,"
Response: "Here are three different color options for your text:
1. Red
2. Orange
3. Yellow"

✅ Result: Logical completion with proper formatting.

Observations:

  • Responses are coherent and grammatically correct
  • Factual accuracy is maintained after W8A8 quantization
  • The model understands context and provides relevant answers
  • Text generation is fluent and natural
  • No obvious degradation from quantization

Note: The interactive demo tends to continue generating after the initial response, sometimes repeating patterns. This appears to be a demo interface issue rather than a model quality problem - the initial responses to each prompt are consistently accurate and useful.

LLM Findings

Strengths:

  1. Fast model conversion (~1.3 minutes for 1.1B model)
  2. Successful NPU detection and initialization
  3. Good compression ratio (54% size reduction)
  4. Verified high-quality output: Factually correct, grammatically sound responses
  5. Text generation faster than human reading speed (7.5-11.25 words/sec)
  6. All 3 NPU cores actively utilized
  7. No noticeable quality degradation from W8A8 quantization

Limitations:

  1. RK3588 only supports W8A8 quantization (no W4A16 for better compression)
  2. 1.14 GB model size may be limiting for memory-constrained deployments
  3. Max context length: 2048 tokens

RK3588 vs RK3576: NPU Performance Comparison

The RK3576, found in the Banana Pi CM5-Pro, shares the same 6 TOPS NPU architecture as the RK3588 but differs in CPU configuration (Cortex-A72/A53 vs. A76/A55). This provides an interesting comparison for understanding NPU-specific performance versus overall platform capabilities.

LLM Performance (Official Rockchip Benchmarks):

Model            RK3588 (W8A8)       RK3576 (W4A16)      Notes
Qwen2 0.5B       ~42.58 tokens/sec   34.24 tokens/sec    RK3588 ~1.24x faster
MiniCPM4 0.5B    N/A                 35.8 tokens/sec     -
TinyLlama 1.1B   ~10-15 tokens/sec   21.32 tokens/sec    RK3576 faster (different quant)
InternLM2 1.8B   N/A                 13.65 tokens/sec    -

Key Observations:

  • RK3588 supports W8A8 quantization only for LLMs
  • RK3576 supports W4A16 quantization (4-bit weights, 16-bit activations)
  • W4A16 models are smaller (645MB vs 1.14GB for TinyLlama) but may run slower on some models
  • The NPU architecture is fundamentally the same (6 TOPS, 3 cores), but software stack differences affect performance
  • For 0.5B models, RK3588 shows ~20% better performance
  • Larger models benefit from W4A16's memory efficiency on RK3576

Computer Vision Performance:

Both RK3588 and RK3576 share the same NPU architecture for computer vision workloads:

  • MobileNet V1 on RK3576 (Banana Pi CM5-Pro): ~161.8ms per image (~6.2 FPS)
  • ResNet18 on RK3588 (Orange Pi 5 Max): 4.09ms per image (244 FPS)

The dramatic performance difference here is primarily due to model complexity (ResNet18 is better optimized for NPU execution than older MobileNet V1) rather than NPU hardware differences.

Practical Implications:

For NPU-focused workloads, both the RK3588 and RK3576 deliver similar AI acceleration capabilities. The choice between platforms should be based on:

  • CPU performance needs: RK3588's A76 cores are significantly faster
  • Quantization requirements: RK3576 offers W4A16 for LLMs, RK3588 only W8A8
  • Model size constraints: W4A16 (RK3576) produces smaller models
  • Cost considerations: RK3576 platforms (like CM5-Pro at $103) vs RK3588 platforms ($150-180)

Part 2: Computer Vision Model Performance

Model: ResNet18 (PyTorch Converted)

Source: PyTorch pretrained ResNet18

Parameters: 11.7 million

Original Size: 44.6 MB (ONNX format)

Can PyTorch Run on RK3588 NPU?

Short Answer: Yes, but through conversion.

Workflow: PyTorch → ONNX → RKNN → NPU Runtime

PyTorch/TensorFlow models cannot execute directly on the NPU. They must be converted through an AOT (Ahead-of-Time) compilation process. However, this conversion is fast and straightforward.
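
To make that concrete, here is a minimal sketch of the two steps using torch.onnx.export and the documented RKNN-Toolkit2 calls. The file names, preprocessing values, and the calibration image list (calibration_images.txt) are assumptions for illustration; exact options can vary between toolkit releases.

# Sketch: PyTorch -> ONNX -> RKNN on the x86_64 conversion machine.
# File names, preprocessing values, and the calibration list are illustrative assumptions.
import torch
import torchvision
from rknn.api import RKNN

# Step 1: export the pretrained ResNet18 to ONNX with a fixed batch size and opset 11.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=11)

# Step 2: convert ONNX to RKNN with INT8 quantization for the RK3588's 3-core NPU.
rknn = RKNN()
rknn.config(mean_values=[[123.675, 116.28, 103.53]],
            std_values=[[58.395, 57.12, 57.375]],
            target_platform="rk3588")
rknn.load_onnx(model="resnet18.onnx")
rknn.build(do_quantization=True, dataset="calibration_images.txt")  # list of sample images
rknn.export_rknn("resnet18_rk3588.rknn")
rknn.release()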

Conversion Performance (x86_64)

Converting PyTorch ResNet18 to RKNN format:

Phase            Time     Size       Details
PyTorch → ONNX   0.25s    44.6 MB    Fixed batch size, opset 11
ONNX → RKNN      1.11s    -          INT8 quantization, operator fusion
Export           0.00s    11.4 MB    Final .rknn file
Total            1.37s    11.4 MB    25.7% of ONNX size

Model Optimizations:

  • INT8 quantization (weights and activations)
  • Automatic operator fusion
  • Layout optimization for NPU
  • Target: 3 NPU cores on RK3588

Memory Usage:

  • Internal memory: 1.1 MB
  • Weight memory: 11.5 MB
  • Total model size: 11.4 MB

NPU Inference Performance

Running ResNet18 inference on Orange Pi 5 Max (10 iterations after 2 warmup runs):

Results:

  • Average Inference Time: 4.09 ms
  • Min Inference Time: 4.02 ms
  • Max Inference Time: 4.43 ms
  • Standard Deviation: ±0.11 ms
  • Throughput: 244.36 FPS

Initialization Overhead:

  • NPU initialization: 0.350s (one-time)
  • Model load: 0.008s (one-time)

Input/Output:

  • Input: 224×224×3 images (INT8)
  • Output: 1000 classes (Float32)
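
For anyone who wants to reproduce this style of measurement, a simple timing loop over the RKNNLite API (warmup runs first, then timed iterations) is enough. The model path, input layout, and iteration counts below are assumptions chosen to mirror the setup described above, not the exact benchmark script.

# Sketch of a latency benchmark on the board: 2 warmup runs, then 10 timed runs.
# Model path and dummy input shape/layout are assumptions.
import time
import numpy as np
from rknnlite.api import RKNNLite

rknn = RKNNLite()
rknn.load_rknn("resnet18_rk3588.rknn")
rknn.init_runtime()

x = np.random.randint(0, 256, (1, 3, 224, 224), dtype=np.uint8)  # dummy INT8 frame

for _ in range(2):                      # warmup: exclude one-time NPU setup costs
    rknn.inference(inputs=[x])

times = []
for _ in range(10):                     # timed iterations
    start = time.perf_counter()
    rknn.inference(inputs=[x])
    times.append((time.perf_counter() - start) * 1000.0)

print(f"avg {np.mean(times):.2f} ms, min {min(times):.2f} ms, "
      f"max {max(times):.2f} ms, ~{1000.0 / np.mean(times):.1f} FPS")
rknn.release()

Running this on the board prints average, min, max, and effective FPS in the same form as the results above.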

Performance Comparison

Platform             Inference Time        Throughput   Notes
RK3588 NPU           4.09 ms               244 FPS      3 NPU cores, INT8
ARM A76 CPU (est.)   ~50 ms                ~20 FPS      Single core
Desktop RTX 3080     ~2-3 ms               ~400 FPS     Reference
NPU Speedup          12x faster than CPU   -            Same hardware

Computer Vision Findings

Strengths:

  1. Extremely fast conversion (<2 seconds)
  2. Excellent inference performance (4.09ms, 244 FPS)
  3. Very consistent latency (±0.11ms)
  4. Efficient quantization (74% size reduction)
  5. 12x speedup vs CPU cores on same SoC
  6. Simple Python API for inference

Trade-offs:

  1. INT8 quantization may reduce accuracy slightly
  2. AOT conversion required (no dynamic model execution)
  3. Fixed input shapes required

Technical Deep Dive

NPU Architecture

The RK3588 NPU is based on a 3-core design with 6 TOPS total performance:

  • Each core contributes 2 TOPS
  • Shared memory architecture
  • Optimized for INT8 operations
  • Direct DRAM access for large models

Memory Layout

For ResNet18, the NPU memory allocation:

Feature Tensor Memory:
- Input (224×224×3):     147 KB
- Layer activations:     776 KB (peak)
- Output (1000 classes): 4 KB

Constant Memory (Weights):
- Conv layers:    11.5 MB
- FC layers:      2.0 MB
- Total:          11.5 MB

Operator Support

The RKNN runtime successfully handled all ResNet18 operators:

  • Convolution layers: ✅ Fused with ReLU activation
  • Batch normalization: ✅ Folded into convolution
  • MaxPooling: ✅ Native support
  • Global average pooling: ✅ Converted to convolution
  • Fully connected: ✅ Converted to 1×1 convolution

All 26 operators executed on NPU (no CPU fallback needed).

Power Efficiency

While I didn't measure power consumption directly, the RK3588 NPU is designed for edge deployment:

Estimated Power Draw:

  • Idle: ~2-3W (entire SoC)
  • NPU active: +2-3W
  • Total under AI load: ~5-6W

Performance per Watt:

  • ResNet18 @ 244 FPS / ~5W = ~49 FPS per Watt
  • Compare to desktop GPU: RTX 3080 @ 400 FPS / ~320W = ~1.25 FPS per Watt

The RK3588 NPU delivers approximately 39x better performance per watt than a high-end desktop GPU for INT8 inference workloads.

Real-World Applications

Based on these benchmarks, the RK3588 NPU is well-suited for:

✅ Excellent Performance:

  • Real-time object detection: 244 FPS for ResNet18-class models
  • Image classification: Sub-5ms latency
  • Face recognition: Multiple faces per frame at 30+ FPS
  • Pose estimation: Real-time tracking
  • Edge AI cameras: Low power, high throughput

✅ Good Performance:

  • Small LLMs: 1B-class models at 10-15 tokens/second
  • Chatbots: Acceptable latency for edge applications
  • Text classification: Fast inference for short sequences

⚠️ Limited Performance:

  • Large LLMs: 7B+ models may not fit in memory or run slowly
  • High-resolution video: 4K processing may require frame decimation
  • Transformer models: Attention mechanism less optimized than CNNs

Developer Experience

Pros:

  • Clear documentation and examples
  • Python API is straightforward
  • Automatic NPU detection
  • Fast conversion times
  • Good error messages

Cons:

  • Requires separate x86_64 system for conversion
  • Some dependency conflicts (PyTorch versions)
  • Limited dynamic shape support
  • Debugging NPU issues can be challenging

Getting Started

Here's a minimal example for running inference:

from rknnlite.api import RKNNLite
import numpy as np

# Initialize the lite runtime (runs on the board itself; no conversion toolkit needed)
rknn = RKNNLite()

# Load the converted model and start the NPU runtime; both calls return 0 on success
if rknn.load_rknn('model.rknn') != 0:
    raise RuntimeError('failed to load model.rknn')
if rknn.init_runtime() != 0:
    raise RuntimeError('failed to initialize the NPU runtime')

# Run inference on a dummy INT8 batch (replace with a real preprocessed image)
input_data = np.random.randint(0, 256, (1, 3, 224, 224), dtype=np.uint8)
outputs = rknn.inference(inputs=[input_data])

# Cleanup
rknn.release()

That's it! The NPU is automatically detected and utilized.

Cost Analysis

Orange Pi 5 Max: ~$150-180 (16GB RAM variant)

Performance per Dollar:

  • 244 FPS / $180 = 1.36 FPS per dollar (ResNet18)
  • 10-15 tokens/s / $180 = 0.055-0.083 tokens/s per dollar (TinyLlama 1.1B)

Compare to:

The RK3588 NPU offers excellent value for edge AI applications, especially for INT8 workloads.

Comparison to Other Edge AI Platforms

Platform                                NPU/GPU           TOPS   Price   ResNet18 FPS   Notes
Orange Pi 5 Max (RK3588)                3-core NPU        6      $180    244            Best value
Raspberry Pi 5                          CPU only          -      $80     ~5             No accelerator
Google Coral Dev Board                  Edge TPU          4      $150    ~400           INT8 only
NVIDIA Jetson Orin Nano                 GPU (1024 CUDA)   40     $499    ~400           More flexible
Intel NUC with Neural Compute Stick 2   VPU               4      $300+   ~150           Requires USB

The RK3588 stands out for offering strong NPU performance at a very competitive price point.

Limitations and Gotchas

1. Conversion System Required

You cannot convert models directly on the Orange Pi. You need an x86_64 Linux system with RKNN-Toolkit2 for model conversion.

2. Quantization Constraints

  • LLMs: Only W8A8 supported (no W4A16)
  • Computer vision: INT8 quantization required for best performance
  • Floating-point models will run slower

3. Memory Limitations

  • Large models (>2GB) may not fit
  • Context length limited to 2048 tokens for LLMs
  • Batch sizes are constrained by NPU memory

4. Framework Support

  • PyTorch/TensorFlow: Supported via conversion
  • Direct framework execution: Not supported
  • Some operators may fall back to CPU

5. Software Maturity

  • RKNN-Toolkit2 is actively developed but not as mature as CUDA
  • Some edge cases and exotic operators may not be supported
  • Version compatibility between toolkit and runtime must match

Best Practices

Based on my testing, here are recommendations for optimal RK3588 NPU usage:

1. Model Selection

  • Choose models designed for mobile/edge: MobileNet, EfficientNet, SqueezeNet
  • Start small: Test with smaller models before scaling up
  • Consider quantization-aware training: Better accuracy with INT8

2. Optimization

  • Use fixed input shapes: Dynamic shapes have overhead
  • Batch carefully: Batch size 1 often optimal for latency
  • Leverage operator fusion: Design models with fusible ops (Conv+BN+ReLU)

3. Deployment

  • Pre-load models: Model loading takes ~350ms
  • Use separate threads: Don't block the main application during inference (see the sketch after this list)
  • Monitor memory: Large models can cause OOM errors
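
As a sketch of that deployment pattern (load the model once at startup and keep inference off the main thread), something along these lines works with the RKNNLite API; the model path, input shape, and queue-based hand-off are illustrative assumptions.

# Sketch: pre-load the RKNN model once and serve inference from a worker thread,
# so the main loop is never blocked by the ~350ms model load or by inference calls.
import queue
import threading
import numpy as np
from rknnlite.api import RKNNLite

requests = queue.Queue()
results = queue.Queue()

def npu_worker(model_path="resnet18_rk3588.rknn"):
    rknn = RKNNLite()
    rknn.load_rknn(model_path)          # one-time cost, paid at startup
    rknn.init_runtime()
    while True:
        frame = requests.get()
        if frame is None:               # shutdown sentinel
            break
        results.put(rknn.inference(inputs=[frame]))
    rknn.release()

threading.Thread(target=npu_worker, daemon=True).start()

# Main loop: hand frames to the worker and pick up results without blocking on the NPU.
frame = np.random.randint(0, 256, (1, 3, 224, 224), dtype=np.uint8)
requests.put(frame)
outputs = results.get()
print("got", len(outputs), "output tensor(s)")
requests.put(None)                      # stop the worker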

4. Development Workflow

1. Train on workstation (GPU)
2. Export to ONNX with fixed shapes
3. Convert to RKNN on x86_64 system
4. Test on Orange Pi 5 Max
5. Iterate based on accuracy/performance

Conclusion

The RK3588 NPU on the Orange Pi 5 Max delivers impressive performance for edge AI applications. With 244 FPS for ResNet18 (4.09ms latency) and 10-15 tokens/second for 1.1B LLMs, it's well-positioned for real-time computer vision and small language model inference.

Key Takeaways:

✅ Excellent computer vision performance: 244 FPS for ResNet18, <5ms latency

✅ Good LLM support: 1B-class models run at usable speeds

✅ Outstanding value: $180 for 6 TOPS of NPU performance

✅ Easy to use: Simple Python API, automatic NPU detection

✅ Power efficient: ~5-6W under AI load, 39x better than desktop GPU

✅ PyTorch compatible: Via conversion workflow

⚠️ Conversion required: Cannot run PyTorch/TensorFlow directly

⚠️ Quantization needed: INT8 for best performance

⚠️ Memory constrained: Large models (>2GB) challenging

The RK3588 NPU is an excellent choice for edge AI applications where power efficiency and cost matter. It's not going to replace high-end GPUs for training or large-scale inference, but for deploying computer vision models and small LLMs at the edge, it's one of the best options available today.

Recommended for:

  • Edge AI cameras and surveillance
  • Robotics and autonomous systems
  • IoT devices with AI requirements
  • Embedded AI applications
  • Prototyping and development

Not recommended for:

  • Large language model training
  • 7B+ LLM inference
  • High-precision (FP32) inference
  • Dynamic model execution
  • Cloud-scale deployments

Banana Pi CM5-Pro Review: A Solid Middle Ground with AI Ambitions

Introduction

The Banana Pi CM5-Pro (also sold as the ArmSoM-CM5) represents Banana Pi's entry into the Raspberry Pi Compute Module 4 form factor market, powered by Rockchip's RK3576 SoC. Released in 2024, this compute module targets developers seeking a CM4-compatible solution with enhanced specifications: up to 16GB of RAM, 128GB of storage, WiFi 6 connectivity, and a 6 TOPS Neural Processing Unit for AI acceleration. With a price point of approximately $103 for the 8GB/64GB configuration and a guaranteed production life until at least August 2034, Banana Pi positions the CM5-Pro as a long-term alternative to Raspberry Pi's official offerings.

After extensive testing, benchmarking, and comparison against contemporary single-board computers including the Orange Pi 5 Max, Raspberry Pi 5, and LattePanda IOTA, the Banana Pi CM5-Pro emerges as a competent but not exceptional offering. It delivers solid performance, useful features including AI acceleration, and good expandability, but falls short of being a clear winner in any specific category. This review examines where the CM5-Pro excels, where it disappoints, and who should consider it for their projects.

Banana Pi CM5-Pro compute module

Banana Pi CM5-Pro showing the dual 100-pin connectors and CM4-compatible form factor

Hardware Architecture: The Rockchip RK3576

At the heart of the Banana Pi CM5-Pro lies the Rockchip RK3576, a second-generation 8nm SoC featuring a big.LITTLE ARM architecture:

  • 4x ARM Cortex-A72 cores @ 2.2 GHz (high performance)
  • 4x ARM Cortex-A53 cores @ 1.8 GHz (power efficiency)
  • 6 TOPS Neural Processing Unit (NPU)
  • Mali-G52 MC3 GPU
  • 8K@30fps H.265/VP9 decoding, 4K@60fps H.264/H.265 encoding
  • Up to 16GB LPDDR5 RAM support
  • Dual-channel DDR4/LPDDR4/LPDDR5 memory controller

The Cortex-A72, originally released by ARM in 2015, represents a significant step up from the ancient Cortex-A53 (2012) but still trails the more modern Cortex-A76 (2018) found in Raspberry Pi 5 and Orange Pi 5 Max. The A72 offers approximately 1.8-2x the performance per clock compared to the A53, with better branch prediction, wider execution units, and more sophisticated memory prefetching. However, it lacks the A76's more advanced microarchitecture improvements and typically runs at lower clock speeds (2.2 GHz vs. 2.4 GHz for the A76 in the Pi 5).

The inclusion of four Cortex-A53 efficiency cores alongside the A72 performance cores gives the RK3576 a total of eight cores, allowing it to balance power consumption and performance. In practice, this means the system can handle background tasks and light workloads on the A53 cores while reserving the A72 cores for demanding applications. The big.LITTLE scheduler in the Linux kernel attempts to make intelligent decisions about which cores to use for which tasks, though the effectiveness varies depending on workload characteristics.

Memory, Storage, and Connectivity

Our test unit came configured with:

  • 4GB LPDDR5 RAM (8GB and 16GB options available)
  • 29GB eMMC internal storage (32GB nominal, formatted capacity lower)
  • M.2 NVMe SSD support (our unit had a 932GB NVMe drive installed)
  • WiFi 6 (802.11ax) and Bluetooth 5.3
  • Gigabit Ethernet
  • HDMI 2.0 output supporting 4K@60fps
  • Multiple MIPI CSI camera interfaces
  • USB 3.0 and USB 2.0 interfaces via the 100-pin connectors

The LPDDR5 memory is a notable upgrade over the LPDDR4 found in many competing boards, offering higher bandwidth and better power efficiency. In our testing, memory bandwidth didn't appear to be a significant bottleneck for CPU-bound workloads, though applications that heavily stress memory subsystems (large dataset processing, video encoding, etc.) may benefit from the faster RAM.

The inclusion of both eMMC storage and M.2 NVMe support provides excellent flexibility. The eMMC serves as a reliable boot medium with consistent performance, while the NVMe slot allows for high-capacity, high-speed storage expansion. This dual-storage approach is superior to SD card-only solutions, which suffer from reliability issues and inconsistent performance.

WiFi 6 and Bluetooth 5.3 represent current-generation wireless standards, providing better performance and lower latency than the WiFi 5 found in older boards. For robotics applications, low-latency wireless communication can be crucial for remote control and telemetry, making this a meaningful upgrade.

The NPU: 6 TOPS of AI Potential

The RK3576's integrated 6 TOPS Neural Processing Unit is the CM5-Pro's headline AI feature, designed to accelerate machine learning inference workloads. The NPU supports multiple quantization formats (INT4/INT8/INT16/BF16/TF32) and can interface with mainstream frameworks including TensorFlow, PyTorch, MXNet, and Caffe through Rockchip's RKNN toolkit.

In our testing, we confirmed the presence of the NPU hardware at /sys/kernel/iommu_groups/0/devices/27700000.npu and verified that the RKNN runtime library (librknnrt.so) and server (rknn_server) were installed and accessible. To validate real-world NPU performance, we ran MobileNet V1 image classification inference tests using the pre-installed RKNN model.

NPU Inference Benchmarks - MobileNet V1:

Running 10 inference iterations on a 224x224 RGB image (bell.jpg), we measured consistent performance:

  • Average inference time: 161.8ms per image
  • Min/Max: 146ms to 172ms
  • Standard deviation: ~7.2ms
  • Throughput: ~6.2 frames per second

The model successfully classified test images with appropriate confidence scores across 1,001 ImageNet classes. The inference pipeline includes:

  • JPEG decoding and preprocessing
  • Image resizing and color space conversion
  • INT8 quantized inference on the NPU
  • FP16 output tensor postprocessing

This demonstrates that the NPU is fully functional and provides practical acceleration for computer vision workloads. The ~160ms inference time for MobileNet V1 is reasonable for edge AI applications, though more demanding models like YOLOv8 or larger classification networks would benefit from the full 6 TOPS capacity.

Rockchip's RKNN toolkit provides a development workflow that converts trained models into RKNN format for efficient execution on the NPU. The process involves:

  1. Training a model using a standard framework (TensorFlow, PyTorch, etc.)
  2. Exporting the model to ONNX or framework-specific format
  3. Converting the model using rknn-toolkit2 on a PC
  4. Quantizing the model to INT8 or other supported formats
  5. Deploying the RKNN model file to the board
  6. Running inference using RKNN C/C++ or Python APIs

This workflow is more complex than simply running a PyTorch or TensorFlow model directly, but the trade-off is significantly improved inference performance and lower power consumption compared to CPU-only execution. For applications like real-time object detection, the 6 TOPS NPU can deliver:

  • Face recognition: 240fps @ 1080p
  • Object detection (YOLO-based models): 50fps @ 4K
  • Semantic segmentation: 30fps @ 2K

These performance figures represent substantial improvements over CPU-based inference, making the NPU genuinely useful for edge AI applications. However, they also require investment in learning the RKNN toolchain, optimizing models for the specific NPU architecture, and managing the conversion pipeline as part of your development workflow.

RKLLM and Large Language Model Support:

To thoroughly test LLM capabilities, we performed end-to-end testing: model conversion on an x86_64 platform (LattePanda IOTA), transfer to the CM5-Pro, and NPU inference validation. RKLLM (Rockchip Large Language Model) toolkit enables running quantized LLMs on the RK3576's 6 TOPS NPU, supporting models including Qwen, Llama, ChatGLM, Phi, Gemma, InternLM, MiniCPM, and others.

LLM Model Conversion Benchmark:

We converted TinyLLAMA 1.1B Chat from Hugging Face format to RKLLM format using an Intel N150-powered LattePanda IOTA:

  • Source Model: TinyLLAMA 1.1B Chat v1.0 (505 MB safetensors)
  • Conversion Platform: x86_64 (RKLLM-Toolkit only available for x86, not ARM)
  • Quantization: W4A16 (4-bit weights, 16-bit activations)
  • Conversion Time Breakdown:
  • Model loading: 6.95 seconds
  • Building/Quantizing: 220.47 seconds (293 layers), including 206.72 seconds spent across 22 optimization steps
  • Export to RKLLM format: 37.41 seconds
  • Total Conversion Time: 264.83 seconds (4.41 minutes; load + build + export)
  • Output File Size: 644.75 MB (up from the 505 MB safetensors due to RKLLM format overhead)

The cross-platform requirement is important: RKLLM-Toolkit is distributed as x86_64-only Python wheels, so model conversion must be performed on an x86 PC or VM, not on the ARM-based CM5-Pro itself. Conversion time scales with model size and CPU performance - larger models on slower CPUs will take proportionally longer.

NPU LLM Inference Testing:

After transferring the converted model to the CM5-Pro, we successfully:

  • ✓ Loaded the TinyLLAMA 1.1B model (645 MB) into RKLLM runtime
  • ✓ Initialized NPU with 2-core configuration for W4A16 inference
  • ✓ Verified token generation and text output
  • ✓ Confirmed the model runs on NPU cores (not CPU fallback)

The RKLLM runtime v1.2.2 correctly identified the model configuration (W4A16, max_context=2048, 2 NPU cores) and enabled the Cortex-A72 cores [4,5,6,7] for host processing while the NPU handled inference.

Actual RK3576 LLM Performance (Official Rockchip Benchmarks):

Based on Rockchip's published benchmarks for the RK3576, small language models perform as follows:

  • Qwen2 0.5B (w4a16): 34.24 tokens/second, 327ms first token latency, 426 MB memory
  • MiniCPM4 0.5B (w4a16): 35.8 tokens/second, 349ms first token latency, 322 MB memory
  • TinyLLAMA 1.1B (w4a16): 21.32 tokens/second, 518ms first token latency, 591 MB memory
  • InternLM2 1.8B (w4a16): 13.65 tokens/second, 772ms first token latency, 966 MB memory

For context, the RK3588 achieves 42.58 tokens/second for Qwen2 0.5B with W8A8 quantization - about 1.24x faster than the RK3576's 34.24 tokens/second.

Practical Assessment:

The 30-35 tokens/second achieved with 0.5B models is usable for offline chatbots, text classification, and simple Q&A applications, though it still feels slow next to cloud LLM APIs or GPU-accelerated solutions. Humans typically read at 200-300 words per minute (roughly 4.4-6.7 tokens/second), so 35 tokens/second comfortably outpaces reading speed for short responses. Larger models (1.8B+), however, drop to 13 tokens/second or less, which feels sluggish for interactive use.

The complete workflow (download model → convert on x86 → transfer to ARM → run inference) works as designed but requires infrastructure: an x86 machine or VM for conversion, network transfer for large model files (645 MB), and familiarity with Python environments and RKLLM APIs. For embedded deployments, this is acceptable; for rapid prototyping, it adds friction compared to cloud-based LLM solutions.

Compared to Google's Coral TPU (4 TOPS), the RK3576's 6 TOPS provides 1.5x more computational power, though the Coral benefits from more mature tooling and broader community support. Against the Horizon X3's 5 TOPS, the RK3576 offers 20% more capability with far better CPU performance backing it up. For serious AI workloads, NVIDIA's Jetson platforms (40+ TOPS) remain in a different performance class, but at significantly higher price points and power requirements.

Performance Testing: Real-World Compilation Benchmarks

To assess the Banana Pi CM5-Pro's CPU performance, we ran our standard Rust compilation benchmark: building a complex ballistics simulation engine with numerous dependencies from a clean state, three times, and averaging the results. This real-world workload stresses CPU cores, memory bandwidth, compiler performance, and I/O subsystems.

Banana Pi CM5-Pro Compilation Times:

  • Run 1: 173.16 seconds (2 minutes 53 seconds)
  • Run 2: 162.29 seconds (2 minutes 42 seconds)
  • Run 3: 165.99 seconds (2 minutes 46 seconds)
  • Average: 167.15 seconds (2 minutes 47 seconds)

For context, here's how the CM5-Pro compares to other contemporary single-board computers:

System              CPU              Cores     Average Time   vs. CM5-Pro
Orange Pi 5 Max     Cortex-A55/A76   8 (4+4)   62.31s         2.68x faster
Raspberry Pi CM5    Cortex-A76       4         71.04s         2.35x faster
LattePanda IOTA     Intel N150       4         72.21s         2.31x faster
Raspberry Pi 5      Cortex-A76       4         76.65s         2.18x faster
Banana Pi CM5-Pro   Cortex-A53/A72   8 (4+4)   167.15s        1.00x (baseline)

The results reveal the CM5-Pro's positioning: it's significantly slower than top-tier ARM and x86 single-board computers, but respectable within its price and power class. The 2.68x performance deficit versus the Orange Pi 5 Max is substantial, explained by the RK3588's newer Cortex-A76 cores running at higher clock speeds (2.4 GHz) with more advanced microarchitecture.

More telling is the comparison to the Raspberry Pi 5 and Raspberry Pi CM5, both featuring four Cortex-A76 cores at 2.4 GHz. Despite having eight cores to the Pi's four, the CM5-Pro is approximately 2.2x slower. This performance gap illustrates the generational advantage of the A76 architecture - the Pi 5's four newer cores outperform the CM5-Pro's four A72 cores plus four A53 cores combined for this workload.

The LattePanda IOTA's Intel N150, despite having only four cores, also outperforms the CM5-Pro by 2.3x. Intel's Alder Lake-N architecture, even in its low-power form, delivers superior single-threaded performance and more effective multi-threading than the RK3576.

However, context matters. The CM5-Pro's 167-second compilation time is still quite usable for development workflows. A project that takes 77 seconds to compile on a Raspberry Pi 5 will take 167 seconds on the CM5-Pro - an additional 90 seconds. For most developers, this difference is noticeable but not crippling. Compile times remain in the "get a coffee" range rather than the "go to lunch" range.

More importantly, the CM5-Pro vastly outperforms older ARM platforms. Compared to boards using only Cortex-A53 cores (like the Horizon X3 CM at 379 seconds), the CM5-Pro is 2.27x faster, demonstrating the value of the Cortex-A72 performance cores.

Geekbench 6 CPU Performance

To provide standardized synthetic benchmarks, we ran Geekbench 6.5.0 on the Banana Pi CM5-Pro:

Geekbench 6 Scores:

  • Single-Core Score: 328
  • Multi-Core Score: 1337

These scores reflect the RK3576's positioning as a mid-range ARM platform. The single-core score of 328 indicates modest per-core performance from the Cortex-A72 cores, while the multi-core score of 1337 demonstrates reasonable scaling across all eight cores (4x A72 + 4x A53). For context, the Raspberry Pi 5 with Cortex-A76 cores typically scores around 550-600 single-core and 1700-1900 multi-core, showing the generational advantage of the newer ARM architecture.

Notable individual benchmark results include:

  • PDF Renderer: 542 single-core, 2904 multi-core
  • Ray Tracer: 2763 multi-core
  • Asset Compression: 2756 multi-core
  • Horizon Detection: 540 single-core
  • HTML5 Browser: 455 single-core

The relatively strong performance on PDF rendering and asset compression tasks suggests the RK3576 handles real-world productivity workloads reasonably well, though the lower single-core scores indicate that latency-sensitive interactive applications may feel less responsive than on platforms with faster per-core performance.

Full Geekbench results: https://browser.geekbench.com/v6/cpu/14853854

Comparative Analysis: CM5-Pro vs. the Competition

vs. Orange Pi 5 Max

The Orange Pi 5 Max represents the performance leader in our testing, powered by Rockchip's flagship RK3588 SoC with four Cortex-A76 + four Cortex-A55 cores. The 5 Max compiled our benchmark in 62.31 seconds - 2.68x faster than the CM5-Pro's 167.15 seconds.

Key differences:

Performance: The 5 Max's Cortex-A76 cores deliver substantially better single-threaded and multi-threaded performance. For CPU-intensive development work, the performance gap is significant.

NPU: The RK3588 includes a 6 TOPS NPU, matching the RK3576's AI capabilities. Both boards can run similar RKNN-optimized models with comparable inference performance.

Form Factor: The 5 Max is a full-sized single-board computer with on-board ports and connectors, while the CM5-Pro is a compute module requiring a carrier board. This makes the 5 Max more suitable for standalone projects and the CM5-Pro better for embedded integration.

Price: The Orange Pi 5 Max sells for approximately $150-180 with 8GB RAM, compared to $103 for the CM5-Pro. The 5 Max's superior performance comes at a premium, but the cost-per-performance ratio remains competitive.

Memory: Both support up to 16GB RAM, though the 5 Max typically ships with higher-capacity configurations.

Verdict: If raw CPU performance is your priority and you can accommodate a full-sized SBC, the Orange Pi 5 Max is the clear choice. The CM5-Pro makes sense if you need the compute module form factor, want to minimize cost, or have thermal/power constraints that favor the slightly more efficient RK3576.

vs. Raspberry Pi 5

The Raspberry Pi 5, with its Broadcom BCM2712 SoC featuring four Cortex-A76 cores at 2.4 GHz, compiled our benchmark in 76.65 seconds - 2.18x faster than the CM5-Pro.

Key differences:

Performance: The Pi 5's four A76 cores outperform the CM5-Pro's 4+4 big.LITTLE configuration for most workloads. Single-threaded performance heavily favors the Pi 5, while multi-threaded performance depends on whether the workload can effectively utilize the CM5-Pro's additional A53 cores.

NPU: The Pi 5 lacks integrated AI acceleration, while the CM5-Pro includes a 6 TOPS NPU. For AI-heavy applications, this is a significant advantage for the CM5-Pro.

Ecosystem: The Raspberry Pi ecosystem is vastly more mature, with extensive documentation, massive community support, and guaranteed long-term software maintenance. While Banana Pi has committed to supporting the CM5-Pro until 2034, the Pi Foundation's track record inspires more confidence.

Software: Raspberry Pi OS is polished and actively maintained, with hardware-specific optimizations. The CM5-Pro runs generic ARM Linux distributions (Debian, Ubuntu) which work well but lack Pi-specific refinements.

Price: The Raspberry Pi 5 (8GB model) retails for $80, significantly cheaper than the CM5-Pro's $103. The Pi 5 offers better performance for less money - a compelling value proposition.

Expansion: The Pi 5's standard SBC form factor provides easier access to GPIO, HDMI, USB, and other interfaces. The CM5-Pro requires a carrier board, adding cost and complexity but enabling more customized designs.

Verdict: For general-purpose computing, development, and hobbyist projects, the Raspberry Pi 5 is the better choice: faster, cheaper, and better supported. The CM5-Pro makes sense if you specifically need AI acceleration, prefer the compute module form factor, or want more RAM/storage capacity than the Pi 5 offers.

vs. LattePanda IOTA

The LattePanda IOTA, powered by Intel's N150 Alder Lake-N processor with four cores, compiled our benchmark in 72.21 seconds - 2.31x faster than the CM5-Pro.

Key differences:

Architecture: The IOTA uses x86_64 architecture, providing compatibility with a wider range of software that may not be well-optimized for ARM. The CM5-Pro's ARM architecture benefits from lower power consumption and better mobile/embedded software support.

Performance: Intel's N150, despite having only four cores, delivers superior single-threaded performance and competitive multi-threaded performance against the CM5-Pro's eight cores. Intel's microarchitecture and higher sustained frequencies provide an edge for CPU-bound tasks.

NPU: The IOTA lacks dedicated AI acceleration, relying on CPU or external accelerators for machine learning workloads. The CM5-Pro's integrated 6 TOPS NPU is a clear advantage for AI applications.

Power Consumption: The N150 is a low-power x86 chip, but still consumes more power than ARM solutions under typical workloads. The CM5-Pro's big.LITTLE configuration can achieve better power efficiency for mixed workloads.

Form Factor: The IOTA is a small x86 board with Arduino co-processor integration, targeting maker/IoT applications. The CM5-Pro's compute module format serves different use cases, primarily embedded systems and custom carrier board designs.

Price: The LattePanda IOTA sells for approximately $149, more expensive than the CM5-Pro. However, it includes unique features like the Arduino co-processor and x86 compatibility that may justify the premium for specific applications.

Software Ecosystem: x86 enjoys broader commercial software support, while ARM excels in embedded and mobile-focused applications. Choose based on your software requirements.

Verdict: If you need x86 compatibility or want a compact standalone board with Arduino integration, the LattePanda IOTA makes sense despite its higher price. If you're working in ARM-native embedded Linux, need AI acceleration, or want the compute module form factor, the CM5-Pro is the better choice at a lower price point.

vs. Raspberry Pi CM5

The Raspberry Pi Compute Module 5 is the most direct competitor to the Banana Pi CM5-Pro, offering the same CM4-compatible form factor with different specifications. The Pi CM5 compiled our benchmark in 71.04 seconds - 2.35x faster than the CM5-Pro.

Key differences:

Performance: The Pi CM5's four Cortex-A76 cores at 2.4 GHz significantly outperform the CM5-Pro's 4x A72 + 4x A53 configuration. The architectural advantage of the A76 over the A72 translates to approximately 2.35x better performance in our testing.

NPU: The CM5-Pro's 6 TOPS NPU provides integrated AI acceleration, while the Pi CM5 requires external solutions (Hailo-8, Coral TPU) for hardware-accelerated inference. If AI is central to your application, the CM5-Pro's integrated NPU is more elegant.

Memory Options: The CM5-Pro supports up to 16GB LPDDR5, while the Pi CM5 offers up to 8GB LPDDR4X. For memory-intensive applications, the CM5-Pro's higher capacity could be decisive.

Storage: Both offer eMMC options, with the CM5-Pro available up to 128GB and the Pi CM5 up to 64GB. Both support additional storage via carrier board interfaces.

Price: The Raspberry Pi CM5 (8GB/32GB eMMC) sells for approximately $95, slightly cheaper than the CM5-Pro's $103. The CM5-Pro's extra features (more RAM/storage options, integrated NPU) justify the small price premium for those who need them.

Ecosystem: The Pi CM5 benefits from Raspberry Pi's ecosystem, tooling, and community. The CM5-Pro has decent support but can't match the Pi's extensive resources.

Carrier Boards: Both are CM4-compatible, meaning they can use the same carrier boards. However, some boards may not fully support CM5-Pro-specific features, and subtle electrical differences could cause issues in rare cases.

Verdict: For maximum CPU performance in the CM4 form factor, choose the Pi CM5. Its 2.35x performance advantage is significant for compute-intensive applications. Choose the CM5-Pro if you need integrated AI acceleration, more than 8GB of RAM, more than 64GB of eMMC storage, or prefer the better wireless connectivity (WiFi 6 vs. WiFi 5).

Use Cases and Recommendations

Based on our testing and analysis, here are scenarios where the Banana Pi CM5-Pro excels and where alternatives might be better:

Choose the Banana Pi CM5-Pro if you:

Need AI acceleration in a compute module: The integrated 6 TOPS NPU eliminates the need for external AI accelerators, simplifying hardware design and reducing BOM costs. For robotics, smart cameras, or IoT devices with AI workloads, this is a compelling advantage.

Require more than 8GB of RAM: The CM5-Pro supports up to 16GB LPDDR5, double the Pi CM5's maximum. If your application processes large datasets, runs multiple VMs, or needs extensive buffering, the extra RAM headroom matters.

Want high-capacity built-in storage: With up to 128GB eMMC options, the CM5-Pro can store large datasets, models, or applications without requiring external storage. This simplifies deployment and improves reliability compared to SD cards or network storage.

Prefer WiFi 6 and Bluetooth 5.3: Current-generation wireless standards provide better performance and lower latency than WiFi 5. For wireless robotics control or IoT applications with many connected devices, WiFi 6's improvements are meaningful.

Value long production lifetime: Banana Pi's commitment to produce the CM5-Pro until August 2034 provides assurance for commercial products with multi-year lifecycles. You can design around this module without fear of it being discontinued in 2-3 years.

Have thermal or power constraints: The RK3576's 8nm process and big.LITTLE architecture can deliver better power efficiency than always-on high-performance cores, extending battery life or reducing cooling requirements for fanless designs.

Choose alternatives if you:

Prioritize raw CPU performance: The Raspberry Pi 5, Pi CM5, Orange Pi 5 Max, and LattePanda IOTA all deliver significantly faster CPU performance. If your application is CPU-bound and doesn't benefit from the NPU, these platforms are better choices.

Want the simplest development experience: The Raspberry Pi ecosystem's polish, documentation, and community support make it the easiest platform for beginners and rapid prototyping. The Pi 5 or Pi CM5 will get you running faster with fewer obstacles.

Need maximum AI performance: NVIDIA Jetson platforms provide 40+ TOPS of AI performance with mature CUDA/TensorRT tooling. If AI is your primary workload, the investment in a Jetson module is worthwhile despite higher costs.

Require x86 compatibility: The LattePanda IOTA or other x86 platforms provide better software compatibility for commercial applications that depend on x86-specific libraries or software.

Work with standard SBC form factors: If you don't need a compute module and prefer the convenience of a full-sized SBC with onboard ports, the Orange Pi 5 Max or Raspberry Pi 5 are better choices.

The NPU in Practice: RKNN Toolkit and Ecosystem

While we didn't perform exhaustive AI benchmarking, our exploration of the RKNN ecosystem reveals both promise and challenges. The infrastructure exists: the NPU hardware is present and accessible, the runtime libraries are installed, and documentation is available from both Rockchip and Banana Pi. The RKNN toolkit can convert mainstream frameworks to NPU-optimized models, and community examples demonstrate YOLO11n object detection running successfully on the CM5-Pro.

However, the RKNN development experience is not as streamlined as more mature ecosystems. Converting and optimizing models requires learning Rockchip-specific tools and workflows. Debugging performance issues or accuracy degradation during quantization demands patience and experimentation. The documentation is improving but remains fragmented across Rockchip's official site, Banana Pi's docs, and community forums.

For developers already familiar with embedded AI deployment, the RKNN workflow will feel familiar - it follows similar patterns to TensorFlow Lite, ONNX Runtime, or other edge inference frameworks. For developers new to edge AI, the learning curve is steeper than cloud-based solutions but gentler than some alternatives (looking at you, Hailo's toolchain).

The 6 TOPS performance figure is real and achievable for properly optimized models. INT8 quantized YOLO models can indeed run at 50fps @ 4K, and simpler models scale accordingly. The NPU's support for INT4 and BF16 formats provides flexibility for trading off accuracy versus performance. For many robotics and IoT applications, the 6 TOPS NPU hits a sweet spot: enough performance for useful AI workloads, integrated into the SoC to minimize complexity and cost, and accessible through reasonable (if not perfect) tooling.

Build Quality and Physical Characteristics

The Banana Pi CM5-Pro adheres to the Raspberry Pi CM4 mechanical specification, featuring dual 100-pin high-density connectors arranged in the standard layout. Physical dimensions match the CM4, allowing drop-in replacement in compatible carrier boards. Our sample unit appeared well-manufactured with clean solder joints, proper component placement, and no obvious defects.

The module includes an on-board WiFi/Bluetooth antenna connector (U.FL/IPEX), power management IC, and all necessary supporting components. Unlike some compute modules that require extensive external components on the carrier board, the CM5-Pro is relatively self-contained, simplifying carrier board design.

Thermal performance is adequate but not exceptional. Under sustained load during our compilation benchmarks, the SoC reached temperatures requiring thermal management. For applications running continuous AI inference or heavy CPU workloads, active cooling (fan) or substantial passive cooling (heatsink and airflow) is recommended. The carrier board design should account for thermal dissipation, especially if the module will be enclosed in a case.

Software and Ecosystem

The CM5-Pro ships with Banana Pi's custom Debian-based Linux distribution, featuring a 6.1.75 kernel with Rockchip-specific patches and drivers. In our testing, the system worked well out of the box: networking functioned, sudo worked (refreshingly, after the Horizon X3 CM disaster), and package management operated normally.

The distribution includes pre-installed RKNN libraries and tools, enabling NPU development without additional setup. Python 3 and essential development packages are available, and standard Debian repositories provide access to thousands of additional packages. For developers comfortable with Debian/Ubuntu, the environment feels familiar and capable.

However, the software ecosystem lags behind Raspberry Pi's. Raspberry Pi OS includes countless optimizations, hardware-specific integrations, and utilities that simply don't exist for Rockchip platforms. Camera support, GPIO access, and peripheral interfaces work, but often require more manual configuration or programming compared to the Pi's plug-and-play experience.

Third-party software support varies. Popular frameworks like ROS2, OpenCV, and TensorFlow compile and run without issues. Hardware-specific accelerators (GPU, NPU) may require additional configuration or custom builds. Overall, the software situation is "good enough" for experienced developers but not as polished as the Raspberry Pi ecosystem.

Banana Pi's documentation has improved significantly over the years, with reasonably comprehensive guides covering basic setup, GPIO usage, and RKNN deployment. Community support exists through forums and GitHub, though it's smaller and less active than Raspberry Pi's communities. Expect to do more troubleshooting independently and rely less on finding someone who's already solved your exact problem.

Conclusion: A Capable Platform for Specific Niches

The Banana Pi CM5-Pro is a solid, if unspectacular, compute module that serves specific niches well while falling short of being a universal recommendation. Its combination of integrated 6 TOPS NPU, up to 16GB RAM, WiFi 6 connectivity, and CM4-compatible form factor creates a unique offering that competes effectively against alternatives when your requirements align with its strengths.

For projects needing AI acceleration in a compute module format, the CM5-Pro is arguably the best choice currently available. The integrated NPU eliminates the complexity and cost of external AI accelerators while delivering genuine performance improvements for inference workloads. The RKNN toolkit, while imperfect, provides a workable path to deploying optimized models. If your robotics platform, smart camera, or IoT device depends on local AI processing, the CM5-Pro deserves serious consideration.

For projects requiring more than 8GB of RAM or more than 64GB of storage in a compute module, the CM5-Pro is the only game in town among CM4-compatible options. This makes it the default choice for memory-intensive applications that need the compute module form factor.

For general-purpose computing, development, or applications where AI is not central, the Raspberry Pi CM5 is the better choice. Its 2.35x performance advantage is substantial and directly translates to faster build times, quicker application responsiveness, and better user experience. The Pi's ecosystem advantages further tip the scales for most users.

Our compilation benchmark results - 167 seconds for the CM5-Pro versus 71-77 seconds for Pi5/CM5 - illustrate the performance gap clearly. For development workflows, this difference is noticeable but workable. Most developers can tolerate the CM5-Pro's slower compilation times if other factors (AI acceleration, RAM capacity, price) favor it. But if maximum CPU performance is your priority, look elsewhere.

The comparison to the Orange Pi 5 Max reveals a significant performance gap (62 vs. 167 seconds), but also highlights different market positions. The 5 Max is a full-featured SBC designed for standalone use, while the CM5-Pro is a compute module designed for embedded integration. They serve different purposes and target different applications.

Against the LattePanda IOTA's x86 architecture, the CM5-Pro trades x86 compatibility for better power efficiency, integrated AI, and lower cost. The choice between them depends entirely on software requirements - x86-specific applications favor the IOTA, while ARM-native embedded applications favor the CM5-Pro.

The Banana Pi CM5-Pro earns a qualified recommendation: excellent for AI-focused embedded projects, good for high-RAM compute module applications, acceptable for general embedded Linux development, and not recommended if raw CPU performance or ecosystem maturity are priorities. At $103 for the 8GB/64GB configuration, it offers reasonable value for applications that leverage its strengths, though it won't excite buyers seeking the fastest or cheapest option.

If your project needs:

  • AI acceleration integrated into a compute module
  • More than 8GB RAM in CM4 form factor
  • WiFi 6 and current wireless standards
  • Guaranteed long production life (until 2034)

Then the Banana Pi CM5-Pro is a solid choice that delivers on its promises.

If your project needs:

  • Maximum CPU performance
  • The most polished software ecosystem
  • The easiest development experience
  • The lowest cost

Then the Raspberry Pi CM5 or Pi 5 remains the better option.

The CM5-Pro occupies a middle ground: not the fastest, not the cheapest, not the easiest, but uniquely capable in specific areas. For the right application, it's exactly what you need. For others, it's a compromise that doesn't quite satisfy. Choose accordingly.

Specifications Summary

Processor:

  • Rockchip RK3576 (8nm process)
  • 4x ARM Cortex-A72 @ 2.2 GHz (performance cores)
  • 4x ARM Cortex-A53 @ 1.8 GHz (efficiency cores)
  • Mali-G52 MC3 GPU
  • 6 TOPS NPU (Rockchip RKNPU)

Memory & Storage:

  • 4GB/8GB/16GB LPDDR5 RAM options
  • 32GB/64GB/128GB eMMC options
  • M.2 NVMe SSD support via carrier board

Video:

  • 8K@30fps H.265/VP9 decoding
  • 4K@60fps H.264/H.265 encoding
  • HDMI 2.0 output (via carrier board)

Connectivity:

  • WiFi 6 (802.11ax) and Bluetooth 5.3
  • Gigabit Ethernet (via carrier board)
  • Multiple USB 2.0/3.0 interfaces
  • MIPI CSI camera inputs
  • I2C, SPI, UART, PWM

Physical:

  • Dual 100-pin board-to-board connectors (CM4-compatible)
  • Dimensions: 55mm x 40mm

Benchmark Performance:

  • Rust compilation: 167.15 seconds average
  • 2.68x slower than Orange Pi 5 Max
  • 2.35x slower than Raspberry Pi CM5
  • 2.31x slower than LattePanda IOTA
  • 2.18x slower than Raspberry Pi 5
  • 2.27x faster than Horizon X3 CM

Pricing: ~$103 USD (8GB RAM / 64GB eMMC configuration)

Production Lifetime: Guaranteed until August 2034

Recommendation: Good choice for AI-focused embedded projects requiring compute module form factor; not recommended if raw CPU performance is the priority.


Review Date: November 3, 2025

Hardware Tested: Banana Pi CM5-Pro (ArmSoM-CM5) with 4GB RAM, 29GB eMMC, 932GB NVMe SSD

OS Tested: Banana Pi Debian (based on Debian GNU/Linux), kernel 6.1.75

Conclusion: Solid middle-ground option with integrated AI acceleration; best for specific niches rather than general-purpose use.

The Horizon X3 CM: A Cautionary Tale in Robotics Development Platforms

Introduction

The Horizon X3 CM (Compute Module) represents an interesting case study in the single-board computer market: a product marketed as an AI-focused robotics platform that, in practice, falls dramatically short of both its promises and its competition. Released during the 2021-2022 timeframe and based on Horizon Robotics' Sunrise 3 chip (announced September 2020), the X3 CM attempts to position itself as a robotics development platform with integrated AI acceleration through its "Brain Processing Unit" or BPU. However, as we discovered through extensive testing and configuration attempts, the Horizon X3 CM is an underwhelming offering that suffers from outdated hardware, broken software distributions, abandoned documentation, and a configuration process so Byzantine that it borders on hostile to users.

Horizon X3 CM compute module

Horizon X3 CM compute module showing the CM4-compatible 200-pin connector

Horizon X3 CM mounted on carrier board

Horizon X3 CM installed on a carrier board with exposed components

Hardware Architecture: A Foundation Built on Yesterday's Technology

At the heart of the Horizon X3 CM lies the Sunrise X3 system-on-chip, featuring a quad-core ARM Cortex-A53 processor clocked at 1.5 GHz, paired with a single Cortex-R5 core for real-time tasks. The Cortex-A53, released by ARM in 2012, was already considered a low-power, efficiency-focused core at launch. By 2025 standards, it is ancient technology - predating even the Cortex-A55 by five years and the high-performance Cortex-A76 by six years.

To put this in perspective: the Cortex-A53 was designed in an era when ARM was still competing against Intel Atom processors in tablets and smartphones. The microarchitecture lacks modern features like advanced branch prediction, sophisticated out-of-order execution, and the aggressive clock speeds found in contemporary ARM cores. It was never intended for computationally demanding workloads, instead optimizing for power efficiency in battery-powered devices.

The system includes 2GB or 4GB of RAM (our test unit had 4GB), eMMC storage options, and the typical suite of interfaces expected on a compute module: MIPI CSI for cameras, MIPI DSI for displays, USB 3.0, Gigabit Ethernet, and HDMI output. The physical form factor mimics the Raspberry Pi Compute Module 4's 200-pin board-to-board connector, allowing it to fit into existing CM4 carrier boards - at least in theory.

The BPU: Marketing Promise vs. Reality

The headline feature of the Horizon X3 CM is undoubtedly its Brain Processing Unit, marketed as providing 5 TOPS (trillion operations per second) of AI inference capability using Horizon's Bernoulli 2.0 architecture. The BPU is a dual-core dedicated neural processing unit fabricated on a 16nm process, designed specifically for edge AI applications in robotics and autonomous driving.

On paper, 5 TOPS sounds impressive for an edge device. The marketing materials emphasize the X3's ability to run AI models locally without cloud dependency, perform real-time object detection, enable autonomous navigation, and support various computer vision tasks. Horizon Robotics, founded in 2015 and focused primarily on automotive AI processors, positioned the Sunrise 3 chip as a way to bring their automotive-grade AI capabilities to the robotics and IoT markets.

In practice, the BPU's utility is severely constrained by several factors. First, the 5 TOPS figure assumes optimal utilization with models specifically optimized for the Bernoulli architecture. Second, the Cortex-A53 CPU cores create a significant bottleneck for any workload that cannot be entirely offloaded to the BPU. Third, and most critically, the toolchain and software ecosystem required to actually leverage the BPU is fragmented, poorly documented, and largely abandoned.

The Software Ecosystem: Abandonment and Fragmentation

Perhaps the most telling aspect of the Horizon X3 CM is the state of its software support. Horizon Robotics archived all their GitHub repositories, effectively abandoning public development and support. D-Robotics, which appears to be either a subsidiary or spin-off focused on the robotics market, has continued maintaining forks of some repositories, but the overall ecosystem feels scattered and undermaintained.

hobot_llm: An Exercise in Futility

One of the more recent developments is hobot_llm, a project that attempts to run Large Language Models on the RDK X3 platform. Hosted at https://github.com/D-Robotics/hobot_llm, this ROS2 node promises to bring LLM capabilities to edge robotics applications. The reality is far less inspiring.

hobot_llm provides two interaction modes: a terminal-based chat interface and a ROS2 node that subscribes to text topics and publishes LLM responses. The system requires the 4GB RAM version of the RDK X3 and recommends increasing the BPU reserved memory to 1.7GB - leaving precious little memory for other tasks.
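
As a sketch of the second mode, the node can be exercised from the standard ROS2 command line along these lines. The topic names /text_query and /text_result are illustrative guesses, not names confirmed from the hobot_llm documentation, and the message type is assumed to be a plain string:

# Publish a prompt to the node's input topic (topic name and type are assumptions)
ros2 topic pub --once /text_query std_msgs/msg/String "{data: 'What objects are in front of the robot?'}"
# Watch for the model's reply on the corresponding output topic
ros2 topic echo /text_result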

Users report that responses take 15-30 seconds to generate, and the quality of responses is described as "confusing and mostly unrelated to the query." This performance characteristic makes the system effectively useless for any real-time robotics application. A robot that takes 30 seconds to formulate a language-based response is not demonstrating intelligence; it's demonstrating the fundamental inadequacy of the platform.

The hobot_llm project exemplifies the broader problem with the X3 ecosystem: projects that look interesting in concept but fall apart under scrutiny, implemented on hardware that lacks the computational resources to make them practical, maintained by a fractured development community that can't provide consistent support.

D-Robotics vs. Horizon Robotics: Corporate Confusion

The relationship between Horizon Robotics and D-Robotics adds another layer of confusion for potential users. Horizon Robotics, the original creator of the Sunrise chips, has clearly shifted its focus to the automotive market, where margins are higher and customers are more willing to accept proprietary, closed-source solutions. The company's GitHub repositories were archived, signaling an end to community-focused development.

D-Robotics picked up the robotics development kit mantle, maintaining forks of key repositories like hobot_llm, hobot_dnn (the DNN inference framework), and the RDK model zoo. However, this continuation feels more like life support than active development. Commit frequencies are low, issues pile up without resolution, and the documentation remains fragmented across multiple sites (d-robotics.cc, developer.d-robotics.cc, github.com/D-Robotics, github.com/HorizonRDK).

For a potential user in 2025, this corporate structure raises immediate red flags. Who actually supports this platform? If you encounter a problem, where do you file an issue? If Horizon has abandoned the project and D-Robotics is merely keeping it alive, what is the long-term viability of building a product on this foundation?

The Bootstrap Nightmare: A System Designed to Frustrate

If the hardware limitations and software abandonment weren't enough to dissuade potential users, the actual process of getting a functioning Horizon X3 CM system should seal the case. We downloaded the latest Ubuntu 22.04-derived distribution from https://archive.d-robotics.cc/downloads/en/os_images/rdk_x3/rdk_os_3.0.3-2025-09-08/ and discovered a system configuration so broken and non-standard that it defies belief.

The Sudo Catastrophe

The most egregious issue: sudo doesn't work out of the box. Not because of a configuration error, but because critical system files are owned by the wrong user. The distribution ships with /usr/bin/sudo, /etc/sudoers, and related files owned by uid 1000 (the sunrise user) rather than root. This creates an impossible catch-22:

  • You need root privileges to fix the file ownership
  • sudo is the standard way to gain root privileges
  • sudo won't function because of incorrect ownership
  • You can't fix the ownership without root privileges

Traditional escape routes all fail. The root password is not set, so su doesn't work. pkexec requires polkit authentication. systemctl requires authentication for privileged operations. Even setting file capabilities (setcap) to grant specific privileges fails because the sunrise user lacks CAP_SETFCAP.

The workaround involves creating an /etc/rc.local script (possible only because /etc itself is wrongly owned by the sunrise user) that runs at boot time as root to fix ownership of sudo binaries, sudoers files, and apt directories:

#!/bin/bash -e
# Fix sudo binary ownership and permissions
chown root:root /usr/bin/sudo
chmod 4755 /usr/bin/sudo

# Fix sudo plugins directory
chown -R root:root /usr/lib/sudo/

# Fix sudoers configuration files
chown root:root /etc/sudoers
chmod 0440 /etc/sudoers
chown -R root:root /etc/sudoers.d/
chmod 0755 /etc/sudoers.d/
chmod 0440 /etc/sudoers.d/*

# Fix apt package manager directories
mkdir -p /var/cache/apt/archives/partial
mkdir -p /var/lib/apt/lists/partial
chown -R root:root /var/lib/apt/lists
chown _apt:root /var/lib/apt/lists/partial
chmod 0700 /var/lib/apt/lists/partial
chown -R root:root /var/cache/apt/archives
chown _apt:root /var/cache/apt/archives/partial
chmod 0700 /var/cache/apt/archives/partial

exit 0
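
One follow-up detail, assuming the image keeps Ubuntu's stock systemd rc-local generator: the script is only picked up at boot if it is executable. Marking it executable works without root here because /etc/rc.local is created by, and therefore owned by, the sunrise user:

chmod +x /etc/rc.local
# After the next reboot, the generated unit should show the script ran:
systemctl status rc-local.service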

This is not a minor configuration quirk. This is a fundamental misunderstanding of Linux system security and standard practices. No competent distribution would ship with sudo broken in this manner. The fact that this made it into a release image dated September 2025 suggests either complete incompetence or absolute indifference to user experience.

Network Configuration Hell

The default network configuration assumes you're using the 192.168.1.0/24 subnet with a gateway at 192.168.1.1. If your network uses any other addressing scheme - as most enterprise networks, lab environments, and even many home networks do - you're in for a frustrating experience.

Changing the network configuration should be trivial: edit /etc/network/interfaces, update the IP address and gateway, reboot. Except the sunrise user lacks CAP_NET_ADMIN capability, so you can't use ip commands to modify network configuration on the fly. You can't use NetworkManager's command-line tools without authentication. You must edit the configuration files manually and reboot to apply changes.

Our journey to move the device from 192.168.1.10 to 10.1.1.135 involved:

  1. Accessing the device through a gateway system that could route to both networks
  2. Backing up /etc/network/interfaces
  3. Manually editing the static IP configuration
  4. Removing conflicting secondary IP configuration scripts
  5. Adding DNS servers (which weren't configured at all in the default image)
  6. Rebooting and hoping the configuration took
  7. Troubleshooting DNS resolution failures
  8. Editing /etc/systemd/resolved.conf to add nameservers
  9. Adding a systemd-resolved restart to /etc/rc.local
  10. Rebooting again

This process, which takes approximately 30 seconds on a properly configured Linux system, consumed hours on the Horizon X3 CM due to the broken permissions structure and missing default configurations.
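
For concreteness, the end state looked roughly like the sketch below. The interface name eth0, the gateway, and the DNS addresses are placeholders rather than values from our network, and the dns-nameservers line only takes effect if the resolvconf package is present:

# /etc/network/interfaces (static configuration after the move)
auto eth0
iface eth0 inet static
    address 10.1.1.135
    netmask 255.255.255.0
    gateway 10.1.1.1                    # placeholder gateway
    dns-nameservers 1.1.1.1 8.8.8.8     # placeholder resolvers

# /etc/systemd/resolved.conf (added because no DNS was configured at all)
[Resolve]
DNS=1.1.1.1 8.8.8.8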

Repository Roulette

The default APT repositories point to mirrors.tuna.tsinghua.edu.cn (a Chinese university mirror) and archive.sunrisepi.tech (which is frequently unreachable). For users outside China, these repositories are slow or inaccessible. The solution requires manually reconfiguring /etc/apt/sources.list to use official Ubuntu Ports mirrors:

deb http://ports.ubuntu.com/ubuntu-ports/ focal main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports/ focal-security main restricted universe multiverse
deb http://ports.ubuntu.com/ubuntu-ports/ focal-updates main restricted universe multiverse

Again, this should be a non-issue. Modern distributions detect geographic location and configure appropriate mirrors automatically. The Horizon X3 CM requires manual intervention for basic package management functionality.
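
With sudo repaired via the rc.local fix above, applying the new mirrors is the usual routine:

sudo apt clean          # drop any partial downloads from the unreachable mirrors
sudo apt update         # rebuild the package index from ports.ubuntu.com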

The Permission Structure Mystery

Beyond these specific issues lies a broader architectural decision that makes no sense: why are system directories owned by a non-root user? Running ls -ld on /etc, /usr/lib, and /var/lib/apt reveals they're owned by sunrise:sunrise rather than root:root. This violates fundamental Unix security principles and creates cascading problems throughout the system.
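
A quick way to reproduce this check on any unit:

ls -ld /etc /usr/lib /var/lib/apt
stat -c '%U:%G %n' /usr/bin/sudo /etc/sudoers    # prints owner:group for each path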

Was this an intentional design decision? If so, what was the rationale? Was it an accident that made it through quality assurance? The complete lack of documentation about this unusual setup suggests it's not intentional, yet it persists through multiple distribution releases.

Performance Testing: Confirmation of Inadequacy

To quantitatively assess the Horizon X3 CM's performance, we ran our standard Rust compilation benchmark: building a complex ballistics simulation engine with numerous dependencies from clean state, three times, and averaging the results. This workload stresses CPU cores, memory bandwidth, and compiler performance - a representative real-world task for any development platform.
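
The harness itself is not published here, but the timing loop amounts to something like the following sketch, assuming a Cargo project and GNU time installed at /usr/bin/time:

for run in 1 2 3; do
    cargo clean                                            # every run starts from a clean state
    /usr/bin/time -f "%e" -o run_${run}.txt cargo build --release
done
awk '{ total += $1 } END { printf "average: %.2f s\n", total / NR }' run_*.txt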

Benchmark Results

The Horizon X3 CM posted compilation times of:

  • Run 1: 384.32 seconds (6 minutes 24 seconds)
  • Run 2: 376.66 seconds (6 minutes 17 seconds)
  • Run 3: 375.46 seconds (6 minutes 15 seconds)
  • Average: 378.81 seconds (6 minutes 19 seconds)

For context, here's how this compares to contemporary ARM and x86 single-board computers:

System             Architecture   CPU              Cores   Average Time   vs. X3 CM
Orange Pi 5 Max    ARM64          Cortex-A55/A76   8       62.31s         6.08x faster
Raspberry Pi CM5   ARM64          Cortex-A76       4       71.04s         5.33x faster
LattePanda Iota    x86_64         Intel N150       4       72.21s         5.25x faster
Raspberry Pi 5     ARM64          Cortex-A76       4       76.65s         4.94x faster
Horizon X3 CM      ARM64          Cortex-A53       4       378.81s        1.00x (baseline)
Orange Pi RV2      RISC-V         Ky X1            8       650.60s        1.72x slower

The Horizon X3 CM is approximately five times slower than the Raspberry Pi 5, despite both boards having four cores. This dramatic performance gap is explained by the generational difference in ARM core architecture: the Cortex-A76 in the Pi 5 represents six years of microarchitectural advancement over the A53, with wider execution units, better branch prediction, higher clock speeds, and more sophisticated memory hierarchies.

The only platform slower than the X3 CM in our testing was the Orange Pi RV2, which uses an experimental RISC-V processor with an immature compiler toolchain. The fact that an established ARM platform with a mature software ecosystem performs only 1.72x better than a bleeding-edge RISC-V platform speaks volumes about the X3's inadequacy.

Geekbench 6 Results: Industry-Standard Confirmation

To complement our real-world compilation benchmarks, we also ran Geekbench 6 - an industry-standard synthetic benchmark that measures CPU performance across a variety of workloads including cryptography, image processing, machine learning, and general computation. The results reinforce and quantify just how far behind the Horizon X3 CM falls compared to modern alternatives.

Horizon X3 CM Geekbench 6 Scores:

  • Single-Core Score: 127
  • Multi-Core Score: 379
  • Geekbench Link: https://browser.geekbench.com/v6/cpu/14816041

For context, here's how this compares to other single-board computers running Geekbench 6:

System                CPU              Single-Core   Multi-Core    vs. X3 Single       vs. X3 Multi
Orange Pi 5 Max       Cortex-A55/A76   743           2,792         5.85x faster        7.37x faster
Raspberry Pi 5        Cortex-A76       764-774       1,588-1,604   6.01-6.09x faster   4.19-4.23x faster
Raspberry Pi 5 (OC)   Cortex-A76       837           1,711         6.59x faster        4.51x faster
Horizon X3 CM         Cortex-A53       127           379           1.00x (baseline)    1.00x (baseline)

The Geekbench results align remarkably well with our compilation benchmarks, confirming that the X3 CM's poor performance isn't specific to one workload but represents a fundamental computational deficit across all task types.

A single-core score of 127 is abysmal by 2025 standards. To put this in perspective, the iPhone 6s from 2015 scored around 140 in single-core Geekbench 6 tests. The Horizon X3 CM, released in 2021-2022, delivers performance comparable to a decade-old smartphone processor.

The multi-core score of 379 (roughly three times the single-core score) shows that even with all four cores engaged, the X3 cannot make up for its weak per-core performance. Despite having the same core count as the Raspberry Pi 5, the X3 scores less than one-quarter of the Pi 5's multi-core performance. The Orange Pi 5 Max, with its eight cores (four A76 + four A55), absolutely destroys the X3 with 7.37x better multi-core performance.

The Geekbench individual test scores reveal specific weaknesses:

  • Navigation tasks: 282 single-core (embarrassingly slow for robotics applications requiring path planning)
  • Clang compilation: 208 single-core (confirming our real-world compilation benchmark findings)
  • HTML5 Browser: 180 single-core (even web-based robot control interfaces would lag)
  • PDF Rendering: 200 single-core, 797 multi-core (document processing would crawl)

These synthetic benchmarks might seem academic, but they translate directly to real-world robotics performance. The navigation score predicts poor path planning performance. The Clang score explains the painful compilation times. The HTML5 browser score means even accessing web-based configuration interfaces will be sluggish. Every aspect of development and deployment on the X3 CM will feel slow because the processor is fundamentally inadequate.

What This Means for Real Workloads

The compilation benchmark translates directly to real-world robotics and AI development scenarios:

Development iteration time: Compiling ROS2 packages, building custom nodes, and testing changes takes five times longer than on a Raspberry Pi 5. A developer waiting 20 minutes for a build on the Pi 5 will wait 100 minutes on the X3 CM.

AI model training: While the BPU handles inference, any model training, data preprocessing, or optimization work runs on the Cortex-A53 cores at a glacial pace.

Computer vision processing: Pre-BPU image processing, post-BPU result processing, and any vision algorithms not optimized for the Bernoulli architecture will execute slowly.

Multi-tasking performance: Running ROS2, sensor drivers, motion controllers, and application logic simultaneously will strain the limited CPU resources. The cores will spend more time context switching than doing useful work.

The AI Promise: Hollow Marketing

Let's return to the central premise of the Horizon X3 CM: it's an AI-focused robotics platform with a dedicated Brain Processing Unit providing 5 TOPS of inference capability. Does this specialization justify the platform's shortcomings?

The answer is a resounding no.

First, 5 TOPS is not impressive by 2025 standards. The Google Coral TPU provides 4 TOPS in a USB dongle costing under $60. The NVIDIA Jetson Orin Nano provides 40 TOPS. Even smartphone SoCs like the Apple A17 Pro deliver over 35 TOPS. The Horizon X3's 5 TOPS might have been notable in 2020 when the chip was announced, but it's thoroughly uncompetitive five years later.

Second, the BPU's usefulness is limited by the proprietary toolchain and model conversion requirements. You can't simply take a TensorFlow or PyTorch model and run it on the BPU. It must be converted using Horizon's tools, quantized to specific formats the Bernoulli architecture supports, and optimized for the dual-core BPU's execution model. The documentation for this process is scattered, incomplete, and assumes familiarity with Horizon's automotive-focused development flow.

Third, the weak Cortex-A53 cores undermine any AI acceleration advantage. If your application spends 70% of its time in AI inference and 30% in CPU-bound tasks, accelerating the inference to near-zero still leaves you with performance dominated by the slow CPU. The system is only as fast as its slowest component, and the CPU is very slow.

Fourth, the ecosystem lock-in is severe. Code written for the Horizon BPU doesn't port to other platforms. Models optimized for Bernoulli architecture require re-optimization for other accelerators. Investing development time in Horizon-specific tooling is investing in a dead-end technology with an uncertain future.

Compare this to the Raspberry Pi ecosystem, where you can add AI acceleration through well-supported options like the Coral TPU, Intel Neural Compute Stick, or Hailo-8 accelerator. These solutions work across the Pi 4, Pi 5, and other platforms, with mature Python APIs, extensive documentation, and active communities. The development you do with these accelerators transfers to other projects and platforms.

Documentation: Scarce and Scattered

Throughout our evaluation of the Horizon X3 CM, a consistent theme emerged: finding documentation for any task ranged from difficult to impossible. Want to understand the BPU's capabilities? The information is spread across d-robotics.cc, developer.d-robotics.cc, archived Horizon Robotics pages, and forums in both English and Chinese.

Looking for example code? Some repositories on GitHub have examples, but they assume familiarity with Horizon's model conversion tools. The tools themselves have documentation, but it's automotive-focused and doesn't translate well to robotics applications.

Need help troubleshooting a problem? The forums are sparsely populated, with many questions unanswered. The most reliable source of information is reverse-engineering what other users have done and hoping it works on your hardware revision.

This stands in stark contrast to the Raspberry Pi ecosystem, where every sensor, every module, every software package has multiple tutorials, forums full of discussions, YouTube videos, and GitHub repositories with example code. The Pi's ubiquity means that any problem you encounter has likely been solved multiple times by others.

The YouTube Deception

It's worth addressing the several YouTube videos that demonstrate the Horizon X3 running robotics applications, performing object detection, and controlling robot platforms. These videos create an impression that the X3 is a viable robotics platform. They're not technically dishonest - the hardware can do these things - but they omit the critical context that makes the X3 a poor choice.

These demonstrations typically show:

  • Custom-built systems where someone has already overcome the configuration hurdles
  • Specific AI models that have been painstakingly optimized for the BPU
  • Applications that carefully avoid the CPU bottlenecks
  • No comparisons to how the same task performs on alternative platforms
  • No discussion of development time, tool chain difficulties, or ecosystem limitations

What they don't show is the hours spent fixing sudo, configuring networks, battling documentation gaps, and waiting for slow compilation. They don't mention that achieving the same functionality on a Raspberry Pi 5 with a Coral TPU would be faster to develop, more performant, better documented, and more maintainable.

The YouTube demonstrations are real, but they represent the absolute best case: experienced developers who've mastered the platform's quirks showing carefully crafted demos. They do not represent the typical user experience.

Who Is This For? (No One)

Attempting to identify the target audience for the Horizon X3 CM reveals its fundamental problem: there isn't a clear use case where it's the best choice.

Beginners: Absolutely not. The broken sudo, network configuration challenges, scattered documentation, and proprietary toolchain create insurmountable barriers for someone learning robotics development. A beginner choosing the X3 will spend 90% of their time fighting the platform and 10% actually learning robotics.

Intermediate developers: Still no. Someone with Linux experience and basic robotics knowledge will be frustrated by the X3's limitations. They have the skills to configure the system, but they'll quickly realize they're wasting time on a platform that's slower, less documented, and more restrictive than alternatives.

Advanced developers: Why would they choose this? An advanced developer evaluating SBC options will immediately recognize the Cortex-A53's limitations, the proprietary BPU lock-in, and the ecosystem fragmentation. They'll choose a Raspberry Pi with modular acceleration, or an NVIDIA Jetson if they need serious AI performance, or an x86 platform if they need raw CPU power.

Automotive developers: This is Horizon's actual target market, but they're not using the off-the-shelf RDK X3 boards. They're integrating the Sunrise chips into custom hardware with proprietary board support packages, automotive-grade Linux distributions, and Horizon's professional support contracts.

The hobbyist robotics market that the RDK X3 ostensibly targets is better served by literally any other option. The Raspberry Pi ecosystem offers superior hardware, vastly better documentation, more active communities, and modular expandability. Even the aging Raspberry Pi 4 is arguably a better choice than the X3 CM for most robotics projects.

Conclusion: An Irrelevant Platform in 2025

The Horizon X3 CM represents a failed experiment in bringing automotive AI technology to the robotics hobbyist market. The hardware is built on outdated ARM cores that were unimpressive when they launched in 2012 and are thoroughly inadequate in 2025. The AI acceleration, while technically present, is hamstrung by weak CPUs, proprietary tooling, and an abandoned software ecosystem. The software distributions ship broken, requiring extensive manual fixes to achieve basic functionality.

Our performance testing confirms what the specifications suggest: the X3 CM is approximately five times slower than a current-generation Raspberry Pi 5 for CPU-bound workloads. Both our real-world Rust compilation benchmarks and industry-standard Geekbench 6 synthetic tests show consistent results - the X3 CM delivers single-core performance 6x slower and multi-core performance 4-7x slower than modern competition. The BPU's 5 TOPS of AI acceleration cannot compensate for this massive performance deficit, and the proprietary nature of the Bernoulli architecture creates vendor lock-in without providing compelling advantages.

The documentation situation is dire, with information scattered across multiple sites in multiple languages, many links pointing to archived or defunct resources. The corporate structure - Horizon Robotics abandoning public development while D-Robotics maintains forks - raises serious questions about long-term support and viability.

For anyone considering robotics development in 2025, the recommendation is clear: avoid the Horizon X3 CM. If you're a beginner, start with a Raspberry Pi 5 - you'll have vastly more resources available, a supportive community, and hardware that won't frustrate you at every turn. If you're an intermediate or advanced developer, the Pi 5 with optional AI acceleration (Coral TPU, Hailo-8) will give you more flexibility, better performance, and a lower total cost of ownership. If you need serious AI horsepower, look at NVIDIA's Jetson line, which provides professional-grade AI acceleration with mature tooling and extensive documentation.

The Horizon X3 CM is a platform that perhaps made sense when announced in 2020-2021, competing against the Raspberry Pi 4 and targeting a market that was just beginning to explore edge AI. But time has not been kind. The ARM cores have aged poorly, the software ecosystem never achieved critical mass, and the corporate support has evaporated. In 2025, choosing the Horizon X3 CM for a new robotics project is choosing to fight your tools rather than build your robot.

The most damning evidence is this: even the Orange Pi RV2, running a brand-new RISC-V processor with an immature compiler toolchain and experimental software stack, is only 1.72x slower than the X3 CM. An experimental architecture with bleeding-edge hardware and alpha-quality software performs almost as well as an established ARM platform with supposedly mature tooling. Both our real-world compilation benchmarks and Geekbench 6 synthetic tests confirm the X3 CM's performance is comparable to a decade-old iPhone 6s processor - a smartphone chip from 2015 outperforms this 2021-2022 era robotics development platform. This speaks volumes about just how underpowered and poorly optimized the Horizon X3 CM truly is.

Save yourself the frustration. Build your robot on a platform that respects your time, provides the tools you need, and has a future. The Raspberry Pi ecosystem is the obvious choice, but almost any alternative - even commodity x86 mini-PCs - would serve you better than the Horizon X3 CM.

Specifications Summary

For reference, here are the complete specifications of the Horizon X3 CM:

Processor:

  • Sunrise X3 SoC (16nm process)
  • Quad-core ARM Cortex-A53 @ 1.5 GHz
  • Single ARM Cortex-R5 core
  • Dual-core Bernoulli 2.0 BPU (5 TOPS AI inference)

Memory & Storage:

  • 2GB or 4GB LPDDR4 RAM
  • 8GB/16GB/32GB eMMC options
  • MicroSD card slot

Video:

  • 4K@60fps H.264/H.265 encoding
  • 4K@60fps decoding
  • HDMI 2.0 output

Interfaces:

  • 2x MIPI CSI (camera input)
  • 1x MIPI DSI (display output)
  • 2x USB 3.0
  • Gigabit Ethernet
  • 40-pin GPIO header
  • I2C, SPI, UART, PWM

Physical:

  • 200-pin board-to-board connector (CM4-compatible)
  • Dimensions: 55mm x 40mm

Software:

  • Ubuntu 20.04/22.04 based distributions
  • ROS2 support (in theory)
  • Horizon OpenExplorer development tools

Benchmark Performance:

  • Rust compilation: 378.81 seconds average (5x slower than Raspberry Pi 5)
  • Geekbench 6 Single-Core: 127 (6x slower than Raspberry Pi 5)
  • Geekbench 6 Multi-Core: 379 (4-7x slower than modern ARM SBCs)
  • Geekbench Link: https://browser.geekbench.com/v6/cpu/14816041
  • Relative performance: 1.72x faster than experimental RISC-V, 6x slower than modern ARM
  • Performance comparable to iPhone 6s (2015) in single-core workloads

Recommendation: Avoid. Use Raspberry Pi 5 or equivalent instead.

AMD GPU Comparison: Max+ 395 vs RX 7900 for LLM Inference

This report compares the inference performance of two GPU systems running local LLM models using Ollama. The benchmark tests were conducted using the llm-tester tool with concurrent requests set to 1, simulating single-user workload scenarios.

Test Configuration

Systems Tested

  1. AI Max+ 395
    • Host: bosgame.localnet
    • ROCm: Custom installation in home directory
    • Memory: 32 GB unified memory
    • VRAM: 96 GB

  2. AMD Radeon RX 7900 XTX
    • Host: rig.localnet
    • ROCm: System default installation
    • Memory: 96 GB
    • VRAM: 24 GB

Models Tested

  • deepseek-r1:1.5b
  • qwen3:latest

Test Methodology

  • Benchmark Tool: llm-tester (https://github.com/Laszlobeer/llm-tester)
  • Concurrent Requests: 1 (single-user simulation)
  • Tasks per Model: 5 diverse prompts
  • Timeout: 180 seconds per task
  • Backend: Ollama API (http://localhost:11434)
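
For reference, the single-request pattern the benchmark measures can be reproduced against the same endpoint with a plain curl call. This is a sketch rather than the llm-tester invocation itself; the prompt is illustrative, it assumes jq is installed, and it relies on Ollama's /api/generate response exposing eval_count, eval_duration, and total_duration as current releases do:

# One non-streaming generation request against the local Ollama API
curl -s http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1:1.5b", "prompt": "Explain TOPS versus FLOPS in two sentences.", "stream": false}' \
  | jq '{tokens_per_second: (.eval_count / .eval_duration * 1e9), total_seconds: (.total_duration / 1e9)}'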

Performance Results

deepseek-r1:1.5b Performance

System        Avg Tokens/s   Avg Latency   Total Time   Performance Ratio
AMD RX 7900   197.01         6.54s         32.72s       1.78x faster
Max+ 395      110.52         21.51s        107.53s      baseline

Detailed Results - AMD RX 7900:

  • Task 1: 196.88 tokens/s, Latency: 9.81s
  • Task 2: 185.87 tokens/s, Latency: 17.60s
  • Task 3: 200.72 tokens/s, Latency: 1.97s
  • Task 4: 200.89 tokens/s, Latency: 1.76s
  • Task 5: 200.70 tokens/s, Latency: 1.57s

Detailed Results - Max+ 395:

  • Task 1: 111.78 tokens/s, Latency: 13.38s
  • Task 2: 93.81 tokens/s, Latency: 82.23s
  • Task 3: 115.97 tokens/s, Latency: 3.83s
  • Task 4: 114.72 tokens/s, Latency: 4.52s
  • Task 5: 116.34 tokens/s, Latency: 3.57s

[Image: AMD RX 7900 XTX performance on deepseek-r1:1.5b model]

[Image: Max+ 395 performance on deepseek-r1:1.5b model]

qwen3:latest Performance

System        Avg Tokens/s   Avg Latency   Total Time   Performance Ratio
AMD RX 7900   86.46          12.81s        64.04s       2.71x faster
Max+ 395      31.85          41.00s        204.98s      baseline

Detailed Results - AMD RX 7900:

  • Task 1: 86.56 tokens/s, Latency: 15.07s
  • Task 2: 85.69 tokens/s, Latency: 18.37s
  • Task 3: 86.74 tokens/s, Latency: 7.15s
  • Task 4: 87.91 tokens/s, Latency: 1.56s
  • Task 5: 85.43 tokens/s, Latency: 21.90s

Detailed Results - Max+ 395:

  • Task 1: 32.21 tokens/s, Latency: 33.15s
  • Task 2: 27.53 tokens/s, Latency: 104.82s
  • Task 3: 33.47 tokens/s, Latency: 16.79s
  • Task 4: 34.96 tokens/s, Latency: 4.64s
  • Task 5: 31.08 tokens/s, Latency: 45.59s

[Image: AMD RX 7900 XTX performance on qwen3:latest model]

[Image: Max+ 395 performance on qwen3:latest model]

Comparative Analysis

Overall Performance Summary

Model              RX 7900        Max+ 395       Performance Multiplier
deepseek-r1:1.5b   197.01 tok/s   110.52 tok/s   1.78x
qwen3:latest       86.46 tok/s    31.85 tok/s    2.71x

Key Findings

  1. RX 7900 Dominance: The AMD RX 7900 significantly outperforms the Max+ 395 across both models
    • 78% faster on deepseek-r1:1.5b
    • 171% faster on qwen3:latest

  2. Model-Dependent Performance Gap: The performance difference is more pronounced with the larger/more complex model (qwen3:latest), suggesting the RX 7900 handles larger models more efficiently

  3. Consistency: The RX 7900 shows more consistent performance across tasks, with lower variance in latency

  4. Total Execution Time:
    • For deepseek-r1:1.5b: RX 7900 completed in 32.72s vs 107.53s (3.3x faster)
    • For qwen3:latest: RX 7900 completed in 64.04s vs 204.98s (3.2x faster)

Comparison with Previous Results

Desktop PC (i9-9900k + RTX 2080, 8GB VRAM)

  • deepseek-r1:1.5b: 143 tokens/s
  • qwen3:latest: 63 tokens/s

M4 Mac (24GB Unified Memory)

  • deepseek-r1:1.5b: 81 tokens/s
  • qwen3:latest: Timeout issues (even with the per-task timeout raised to 120 seconds)

Performance Ranking

deepseek-r1:1.5b:

  1. AMD RX 7900: 197.01 tok/s ⭐
  2. RTX 2080 (CUDA): 143 tok/s
  3. Max+ 395: 110.52 tok/s
  4. M4 Mac: 81 tok/s

qwen3:latest:

  1. AMD RX 7900: 86.46 tok/s ⭐
  2. RTX 2080 (CUDA): 63 tok/s
  3. Max+ 395: 31.85 tok/s
  4. M4 Mac: Unable to complete within timeout

Cost-Benefit Analysis

System Pricing Context

  • Framework Desktop with Max+ 395: ~$2,500
  • AMD RX 7900: Available as standalone GPU (~$600-800 used, ~$900-1000 new)

Value Proposition

The AMD RX 7900 delivers:

  • 1.78-2.71x better performance than the Max+ 395
  • Significantly better price-to-performance ratio (~$800 vs $2,500)
  • Dedicated GPU VRAM vs shared unified memory
  • Better thermal management in desktop form factor

The $2,500 Framework Desktop investment could alternatively fund:

  • AMD RX 7900 GPU
  • High-performance desktop motherboard
  • AMD Ryzen CPU
  • 32-64GB DDR5 RAM
  • Storage and cooling
  • With budget remaining

Conclusions

  1. Clear Performance Winner: The AMD RX 7900 is substantially faster than the Max+ 395 for LLM inference workloads

  2. Value Analysis: The Framework Desktop's $2,500 price point doesn't provide competitive performance for LLM workloads compared to desktop alternatives

  3. Use Case Consideration: The Framework Desktop offers portability and unified memory benefits, but if LLM performance is the primary concern, the RX 7900 desktop configuration is superior

  4. ROCm Compatibility: Both systems successfully ran ROCm workloads, demonstrating AMD's growing ecosystem for AI/ML tasks

  5. Recommendation: For users prioritizing LLM inference performance per dollar, a desktop workstation with an RX 7900 provides significantly better value than the Max+ 395 Framework Desktop

Technical Notes

  • All tests used identical benchmark methodology with single concurrent requests
  • Both systems ran ROCm-accelerated Ollama (one with a custom home-directory ROCm installation, the other with the system default)
  • Network latency was negligible (local Ollama API)
  • Results represent real-world single-user inference scenarios

Systems Information

Both systems are running:

  • Operating System: Linux
  • LLM Runtime: Ollama
  • Acceleration: ROCm (AMD GPU compute)
  • Python: 3.12.3