Z80 CP/M: History, Legacy, and Emulation

After reading an article that Adafruit put out on running CP/M on an emulator on an Arduino, I thought I could expand upon the article and add to the story. Enjoy.

In the early years of microcomputers, their processing power was incredibly limited compared to what we are accustomed to today. These devices, which emerged in the 1970s, were designed to be affordable and accessible for individual users, businesses, and small organizations, marking a stark departure from the large and expensive mainframes and minicomputers of the time. However, this accessibility came at a cost: the processing capabilities of these early microcomputers were constrained by the technology of the era, as well as by economic and practical considerations.

One of the initial limitations of early microcomputers was the processor itself. Early models, such as the Altair 8800 and Apple I, relied on 8-bit microprocessors like the Intel 8080 and MOS 6502. These 8-bit processors could typically handle only very simple calculations and operations in comparison to more advanced processors. Clock speeds were also significantly lower; they generally ranged from under 1 MHz to a few MHz. This lack of processing speed constrained the tasks that these computers could perform; complex calculations, large datasets, and intricate simulations were largely beyond their reach.

Memory was another significant limiting factor. Early microcomputers were equipped with a very small amount of RAM, often measured in kilobytes rather than the gigabytes or terabytes commonplace today. The limited RAM constrained the size and complexity of the programs that could be run, as well as the amount of data that could be processed at one time. Users often had to manage their memory use meticulously, choosing which programs and data could be loaded into their precious few kilobytes of RAM.

Storage capacity in early microcomputers was also quite constrained. Hard drives were expensive and uncommon in the earliest microcomputers, which often used cassette tapes or floppy disks for data storage. These mediums offered extremely limited storage capacity, often on the order of a few tens or hundreds of kilobytes. This required users to be extremely judicious with how they used and stored data and software, as the total available storage space was minuscule compared to today's standards.

In addition to hardware limitations, the software available for early microcomputers was often rudimentary due to the limited processing power. Graphical interfaces were virtually non-existent in the earliest microcomputers, with users typically interacting with the system through text-based command-line interfaces. Software applications were often basic and focused on simple tasks, such as word processing or basic spreadsheet calculations. Sophisticated applications like advanced graphics editing, video processing, or 3D modeling were well beyond the capabilities of these early systems.

Against this burgeoning backdrop of the microcomputer revolution, a man by the name of Gary Kildall developed the Control Program for Microcomputers (CP/M) system. CP/M was a pre-MS-DOS operating system. Kildall, while working at Intel, developed a high-level language named PL/M (Programming Language for Microcomputers). He needed a way to test and debug programs written in PL/M on the newly developed Intel 8080 microprocessor. This led to the creation of CP/M. Recognizing the imminent proliferation of different hardware systems, Kildall, with his experience at Intel and knowledge of microprocessors, saw a need for a standardized software platform. Many microcomputers were operating on incompatible systems, and Kildall's solution was CP/M, an operating system designed to work across diverse hardware setups.

At the heart of CP/M's design was its modularity, characterized predominantly by the BIOS (Basic Input/Output System). The BIOS acted as an intermediary layer that handled the direct communication with the hardware, such as disk drives, keyboards, and displays. By isolating system-specific hardware instructions within the BIOS, CP/M maintained a core set of generic commands. This modular architecture meant that to make CP/M compatible with a new machine, only the BIOS needed to be tailored to the specific hardware, preserving the integrity of the rest of the operating system. This modularity enabled rapid porting of CP/M across a wide array of early microcomputers without rewriting the entire OS.
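
The idea is easy to sketch in C. In the sketch below (hypothetical names, not CP/M's or RunCPM's actual interfaces), the portable core calls hardware only through a small table of functions, so a port supplies new table entries rather than a new core:

```c
#include <stdio.h>

/* Hypothetical sketch of BIOS-style modularity: the portable core
   talks to the hardware only through this table of function pointers. */
typedef struct {
    void (*console_out)(char c);   /* write one character to the display   */
    int  (*console_in)(void);      /* read one character from the keyboard */
} bios_t;

/* Machine-specific implementations; porting to new hardware means
   replacing only these functions, never the core below. */
static void host_console_out(char c) { putchar(c); }
static int  host_console_in(void)    { return getchar(); }

/* Portable "core": knows nothing about the hardware, only the table.
   Returns the number of characters written. */
int core_print(const bios_t *bios, const char *s) {
    int n = 0;
    while (s[n])
        bios->console_out(s[n++]);
    return n;
}
```

A port would set up the table once, e.g. `bios_t bios = { host_console_out, host_console_in };`, and the rest of the system would run unchanged.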

Another notable technical feature of CP/M was its file system. CP/M used a disk-oriented file system with a flat directory structure: rather than nested directories, files were organized by drive (A:, B:, and so on) and, in later versions, by numbered user areas. This simple structure allowed users to organize and manage files efficiently on floppy disks. The operating system employed a simple 8.3 filename convention (up to 8 characters for the filename and 3 for the extension) which, though limited by today's standards, was effective for the time. Files were accessed through File Control Blocks (FCBs), a data structure that provided a consistent interface for file operations, further simplifying application development.
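
The FCB layout is simple enough to sketch in C. The struct below shows the first bytes of a CP/M 2.2 FCB, along with a helper of my own (for illustration, not part of CP/M) that space-pads a name into the 8.3 fields the way CP/M stores them:

```c
#include <string.h>
#include <ctype.h>

/* First 16 bytes of a CP/M File Control Block (the full FCB is 36
   bytes, including the allocation map and record counters). */
typedef struct {
    unsigned char drive;     /* 0 = default drive, 1 = A:, 2 = B:, ... */
    char          name[8];   /* file name, space padded, upper case    */
    char          ext[3];    /* file type, space padded                */
    unsigned char extent;    /* current extent number                  */
    unsigned char reserved[2];
    unsigned char rec_count; /* records used in this extent            */
} fcb_t;

/* Illustrative helper: fill the name/ext fields of an FCB from a
   "NAME.EXT" string, padding with spaces as CP/M expects. */
void fcb_set_name(fcb_t *fcb, const char *filename) {
    memset(fcb->name, ' ', 8);
    memset(fcb->ext, ' ', 3);
    int i = 0;
    while (*filename && *filename != '.' && i < 8)
        fcb->name[i++] = (char)toupper((unsigned char)*filename++);
    while (*filename && *filename != '.') filename++;  /* skip overflow */
    if (*filename == '.') filename++;                  /* skip the dot  */
    for (i = 0; *filename && i < 3; i++)
        fcb->ext[i] = (char)toupper((unsigned char)*filename++);
}
```

Calling `fcb_set_name(&f, "stat.com")` leaves `"STAT    "` in the name field and `"COM"` in the extension field.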

CP/M's command-line interface (CLI) was a hallmark feature, providing users with a means to interact with the system and run applications. The CLI, while rudimentary by today's standards, allowed users to navigate the directory structure, execute programs, and manage files. Coupled with a set of basic utilities bundled with the OS, this interface provided an accessible environment for both end-users and developers. For developers, CP/M provided a BDOS (Basic Disk Operating System) interface, allowing applications to be written without deep knowledge of the underlying hardware, thus fostering a rich ecosystem of software tailored for the CP/M platform.

However, CP/M's technical success didn't guarantee lasting market dominance. As it gained traction, Kildall's company, Digital Research, became a major player in the microcomputer software industry. But a missed business opportunity with IBM led to IBM choosing Microsoft's MS-DOS, which bore similarities to CP/M, for its Personal Computer. The story of early personal computing is interesting, and is depicted nicely in Pirates of Silicon Valley (available on DVD). The IBM + MS-DOS choice tilted the scales in the software market, positioning MS-DOS and its successors as major players, while CP/M gradually faded. Nonetheless, CP/M's role in early personal computing is significant, representing a key step towards standardized operating systems.

I wasn't around for the early days of personal computing when CP/M was a big deal. By the time I started exploring computers in the mid-1980s, the Apple IIe was the standard in education, and it was on an Apple IIe that I was first really exposed to personal computers. The Apple IIe was straightforward and easy to use; when I turned it on, I was met with the AppleSoft BASIC interface. In 1992, as I was about to become a teenager, my family purchased its first personal computer from Gateway 2000. Even though I missed the CP/M phase, the Apple IIe provided a solid introduction to the world of computing for me, with the Gateway 2000 being foundational in my ever-growing interest in computers.

Let's get back to CP/M.

The primary architecture CP/M was designed for was the Intel 8080 and its compatible successor, the Zilog Z80. However, CP/M was adapted to run on several different architectures over time. Here's a brief overview of some architectures and their technical specs:

  1. Intel 8080:

    • 8-bit microprocessor
    • Clock speeds typically up to 2 MHz
    • 4.5k transistors
    • 16-bit address bus, enabling it to access 65,536 memory locations
  2. Zilog Z80:

    • 8-bit microprocessor
    • Clock speeds of 2.5 MHz to 10 MHz
    • Around 8.5k transistors
    • 16-bit address bus, 8-bit data bus
    • An enhanced instruction set compared to the 8080, with which it was binary compatible
  3. Intel 8085:

    • 8-bit microprocessor
    • Clock speeds of up to 5 MHz
    • An improved and more power-efficient version of the 8080
    • Included new instructions over the 8080
  4. Zilog Z8000 and Intel 8086/8088:

    • These were 16-bit processors.
    • CP/M-86 was developed for the 8086/8088 as an extension of the original 8-bit CP/M; the Z8000 later received its own port, CP/M-8000.
    • The 8086 had a 16-bit data bus, and the 8088, used in the original IBM PC, had an 8-bit data bus.
  5. Motorola 68000:

    • While not a primary platform for CP/M, there were ports and adaptations made for the 16/32-bit Motorola 68000 series.
    • Used in early Apple Macintosh computers, Atari ST, Commodore Amiga, and others.
  6. Interdata 7/32:

    • This is a lesser-known 32-bit minicomputer for which CP/M was adapted.

We have already looked at the Z80 (in the context of the TI-84+ graphing calculator) as well as the Motorola 68000 (in the context of the TI-89 graphing calculator). Instead of focusing on a specific bare-metal platform such as the RC2014, we will be looking at running a CP/M emulator on Adafruit's Grand Central M4 Express. I would love to get one of the RC2014 kits and run CP/M on bare metal, but for now, we won't be doing that.

We're concentrating on setting up RunCPM on the Grand Central, so we'll only touch on the Z80 briefly. For additional information on the Z80, visit z80.info. The person behind z80.info also wrote an in-depth look at Z80 hardware and assembly language in Hackspace Magazine issues 7 & 8. If you're interested in a comprehensive study of the Z80, consider the books Build Your Own Z80 Computer - Design Guidelines and Application Notes by Steve Ciarcia (you can also grab the PDF here or here) and Programming the Z80 by Rodnay Zaks (you can also grab PDFs here, or here or here or here). Both books have been out of print for decades and are rather expensive on Amazon.


CP/M incorporated wildcard characters in its file naming conventions, a legacy we continue to see in modern systems. Specifically, '?' was used to match any single character, and '*' could match part of or an entire file name or file type.

In terms of commands, many accepted these wildcards, and such a command was labeled as using an ambiguous file reference, abbreviated as "afn". In contrast, commands that required file references to be specific, without the use of wildcards, were termed as using an unambiguous file reference or "ufn". These shorthand terms, "afn" and "ufn", are frequently found in official CP/M documentation and will be adopted for our discussion here.
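
To make the afn idea concrete, here is a small C sketch (my own illustration, not CP/M's actual code) of matching `?` and `*` against 8.3 names. A `*` is treated as filling the rest of its field with `?`, which is how CP/M's documentation describes its expansion:

```c
#include <string.h>
#include <ctype.h>

/* Expand "NAME.EXT" (possibly containing * and ?) into an 11-character
   field: 8 for the name, 3 for the type, space padded. A '*' fills the
   remainder of its field with '?'. */
static void expand(const char *s, char out[11]) {
    memset(out, ' ', 11);
    int i = 0;
    for (; *s && *s != '.' && i < 8; s++) {
        if (*s == '*') { while (i < 8) out[i++] = '?'; break; }
        out[i++] = (char)toupper((unsigned char)*s);
    }
    while (*s && *s != '.') s++;   /* skip to the extension, if any */
    if (*s == '.') s++;
    i = 8;
    for (; *s && i < 11; s++) {
        if (*s == '*') { while (i < 11) out[i++] = '?'; break; }
        out[i++] = (char)toupper((unsigned char)*s);
    }
}

/* Return 1 if the (possibly ambiguous) pattern matches the file name. */
int afn_match(const char *pattern, const char *filename) {
    char p[11], f[11];
    expand(pattern, p);
    expand(filename, f);
    for (int i = 0; i < 11; i++)
        if (p[i] != '?' && p[i] != f[i]) return 0;
    return 1;
}
```

With this rule, `*.COM` matches `STAT.COM`, while `*.TXT` does not.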

Builtin Commands:

  • DIR afn (or simply DIR): Employed to display the names of files that match the specified wildcard pattern.

  • ERA afn: This command is used to delete one or multiple files.

  • REN ufn1=ufn2: As the name suggests, this command allows users to rename a specific file.

  • TYPE ufn: Useful for viewing the contents of an ASCII file.

Standard Programs:

CP/M was equipped with a suite of standard programs, often referred to as Transient Commands. These weren't embedded within the core of CP/M but were accessible to the user as needed. They'd be loaded, run, and then purged from the memory. Several of these commands were fundamental for operations within the CP/M environment. A concise overview of some notable Transient Commands is provided below, though a more exhaustive exploration can be found in the CP/M manual.

  • STAT: This program offers insights into the current disk's status, specifics about individual files, and device assignment details.

  • ASM: A tool for program assembly. It takes a source code input and assembles it to produce an executable.

  • LOAD: Designed for Intel HEX formatted code files, this command loads the code and subsequently converts it into an executable format.

  • DDT: This is CP/M's built-in debugger, essential for diagnosing and resolving program errors.

  • ED: CP/M's text editor, enabling users to create and modify text files within the operating system.

  • SUBMIT: A utility to accept a file containing a list of commands, essentially enabling batch processing.

  • DUMP: A handy tool for those looking to view a file's contents represented in hexadecimal format.

For those eager to dive deeper into the vast ocean of CP/M's capabilities and legacy, the Tim Olmstead Memorial Digital Research CP/M Library is an invaluable resource, housing a trove of information and code associated with CP/M.
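
As a rough modern illustration of what the DUMP transient command produces, the C helper below (hypothetical, and only an approximation of DUMP's exact output format) formats a record of bytes as a hex line with a 16-bit offset:

```c
#include <stdio.h>
#include <string.h>

/* Format one record of up to 16 bytes as a hex line with a 16-bit
   offset, roughly in the spirit of CP/M's DUMP output. Writes into
   'out' and returns the length of the formatted line. */
int dump_line(unsigned short offset, const unsigned char *bytes,
              int count, char *out, size_t outsize) {
    int n = snprintf(out, outsize, "%04X:", offset);
    for (int i = 0; i < count; i++)
        n += snprintf(out + n, outsize - n, " %02X", bytes[i]);
    return n;
}
```

For example, the four bytes `C3 03 EA 00` at offset 0100h format as `0100: C3 03 EA 00`.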

RunCPM is essentially a Z80 emulator that comes packaged with different CP/M versions tailored to function on the emulated Z80. It's a comprehensive toolkit for those interested in delving into Z80 assembly language programming, with the added perk of accessing the Grand Central's IO capabilities. As a bonus, Microsoft Basic is incorporated within the package, and for enthusiasts looking to explore further, various other languages can be sourced online. One such language is Modula-2, which holds significance as Niklaus Wirth's successor to the famed Pascal language.

When it comes to building RunCPM, the approach isn't one-size-fits-all. The build method you opt for is contingent on the target platform. In our case, we're aiming for compatibility with the Grand Central, so the Arduino method is the route we'll take. Begin by launching the RunCPM.ino file within the Arduino IDE (or Visual Studio Code). However, prior to this step, ensure that the IDE is configured to build for the Grand Central. The following are stripped-down instructions for RunCPM from its GitHub repo.

RunCPM - Z80 CP/M emulator

RunCPM is an application which can execute vintage 8-bit CP/M programs on many modern platforms, like Windows, Mac OS X, Linux, FreeBSD, the Arduino DUE and variants like Adafruit's Grand Central M4, and the Teensy or ESP32. It can be built on both 32- and 64-bit host environments and should be easily portable to other platforms.

RunCPM is fully written in C, in a modular way, so porting to other platforms should only be a matter of writing an abstraction layer file for it. No modification to the main code modules should be necessary.

If you miss using powerful programs like Wordstar, dBaseII, mBasic and others, then RunCPM is for you. It is very stable and fun to use.

RunCPM emulates CP/M from Digital Research as closely as possible, the only difference being that it uses regular folders on the host instead of disk images.

Grand Central M4 (GC M4)

  • The board is large, with an Arduino Mega shape and pinout.
  • The front half has the same shape and pinout as Adafruit's Metros, so it is compatible with all Adafruit shields.
  • It's got analog pins, and SPI/UART/I2C hardware support in the same spot as the Metro 328 and M0.
  • It's powered with an ATSAMD51P20, which includes:
    • Cortex M4 core running at 120 MHz
    • Floating point support with Cortex M4 DSP instructions
    • 1MB flash, 256 KB RAM
    • 32-bit, 3.3V logic and power
    • 70 GPIO pins in total
    • Dual 1 MSPS DAC (A0 and A1)
    • Dual 1 MSPS ADC (15 analog pins)
    • 8 x hardware SERCOM (can be I2C, SPI or UART)
    • 22 x PWM outputs
    • Stereo I2S input/output with MCK pin
    • 12-bit Parallel capture controller (for camera/video in)
    • Built-in crypto engines with AES (256 bit), true RNG, Pubkey controller
  • Power the Grand Central with 6-12V polarity protected DC or the micro USB connector to any 5V USB source.
  • The 2.1mm DC jack has an on/off switch next to it so you can turn off your setup easily.
  • The board will automagically switch between USB and DC.
  • Grand Central has 62 GPIO pins, 16 of which are analog in, and two of which are true analog out.
  • There's a hardware SPI port, hardware I2C port, and hardware UART.
  • 5 more SERCOMs are available for extra I2C/SPI/UARTs.
  • Logic level is 3.3V.

The GC M4 comes with native USB support, eliminating the need for a separate hardware USB to Serial converter. When configured to emulate a serial device, this USB interface enables any computer to send and receive data to the GC M4. Moreover, this interface can be used to launch and update code via the bootloader. The board’s USB support extends to functioning as a Human Interface Device (HID), allowing it to act like a keyboard or mouse, which can be a significant feature for various interactive projects.

On the hardware front, the GC M4 features four indicator LEDs and one NeoPixel located on the front edge of the PCB, designed for easy debugging and status indication. The set includes one green power LED, two RX/TX LEDs that indicate data transmission over USB, and a red LED connected to a user-controllable pin. Adjacent to the reset button, there is an RGB NeoPixel. This NeoPixel can be programmed to serve any purpose, such as displaying a status color code, which adds a visually informative aspect to your projects.

Furthermore, the GC M4 includes an 8 MB QSPI (Quad SPI) Flash storage chip on board. This storage can be likened to a miniature hard drive embedded within the microcontroller. In a CircuitPython environment, this 8 MB flash memory serves as the storage space for all your scripts, libraries, and files, effectively acting as the "disk" where your Python code lives. When the GC M4 is used in an Arduino context, this storage allows for read/write operations, much like a small data logger or an SD card. A dedicated helper program is provided to facilitate accessing these files over USB, making it easy to transfer data between the GC M4 and a computer. This built-in storage is a significant feature, as it simplifies the process of logging data and managing code, and it opens up new possibilities for more advanced and storage-intensive projects.

The GC M4 board boasts a built-in Micro SD Card slot, providing a convenient and flexible option for removable storage of any size. This storage is connected to an SPI (Serial Peripheral Interface) SERCOM, providing high-speed data communication. Notably, SDIO (Secure Digital Input Output), a faster interface that is commonly used for SD cards, is not supported on this board. Nevertheless, the availability of a dedicated Micro SD Card slot is a standout feature, as it allows users to easily expand the storage capacity of their projects without any complex setup. This integrated Micro SD Card slot is a substantial advantage when comparing the GC M4 to other boards, such as the Arduino Due. Unlike the GC M4, the Arduino Due does not come with built-in SD card support. For projects that require additional storage or data logging capabilities on the Due, users must purchase and connect an external Micro SD adapter or a shield, which can add to the overall cost and complexity of the setup. The built-in SD Card slot on the GC M4 eliminates the need for such additional components, simplifying project designs and reducing both the cost and the physical footprint of the final build.

This convenient feature underscores the GC M4's design philosophy of providing an integrated, user-friendly experience. By including an SD Card slot directly on the board, the GC M4 encourages broader experimentation with data-intensive applications, such as data logging, file storage, and multimedia processing, which can be essential for a wide range of creative and practical projects.

The board comes pre-loaded with the UF2 bootloader, which presents itself as a USB storage key. Simply drag firmware onto it to program the board; no special tools or drivers are needed. It can be used to load CircuitPython or code built with the Arduino IDE (it is BOSSA v1.8 compatible).

With all of these features, it probably seems like cheating to use this board to get CP/M working. And we will barely be exercising these features. If only Gary Kildall could see how computers and technology have evolved.

Grand Central Specific Adaptations for RunCPM

Arduino digital and analog read/write support was added by Krzysztof Kliś via extra non-standard BDOS calls (see the bottom of cpm.h file for details).

LED blink codes: the GC M4 user LED will blink fast when RunCPM is waiting for a serial connection and will send two repeating short blinks when RunCPM has exited (CPU halted). Other than that, the user LED will indicate disk activity.

RunCPM needs A LOT of RAM and Flash memory by Arduino standards, so you will need to run on Arduinos like the DUE (not the Duemilanove) and similar controllers, like Adafruit's Grand Central. It is theoretically possible to run it on an Arduino which has enough Flash (at least 96K) by adding external RAM to it via some shield, but this is untested, probably slow and would require an entirely different port of RunCPM code. That could be for another day, but if you want to get CP/M running quickly, grab a Grand Central or Due.

You will also need a micro SD ("TF") card.

When using Arduino boards, the serial speed, as well as other parameters, may be set by editing the RunCPM.ino sketch. The default serial speed is 9600 for compatibility with vintage terminals.

You will need to clone the RunCPM repository:

git clone https://github.com/MockbaTheBorg/RunCPM.git -v

In RunCPM.ino, you will want to specify that the Grand Central header file be included:

#include "hardware/arduino/gc.h"

instead of

#include "hardware/arduino/due_sd_tf.h"

Getting Started

Preparing the RunCPM folder :

To set up the RunCPM environment, create a folder that contains both the RunCPM executable and the CCP (Console Command Processor) binaries for the system. Two types of CCP binaries are provided: one for 64K memory and another for 60K memory. On your micro SD card, you will want to create a directory called A, which will need a directory called 0 in it. Place the contents of A.ZIP into 0.

The 64K version of the CCPs maximizes the amount of memory available to CP/M applications. However, its memory addressing ranges are not reflective of what a real CP/M computer would have, making it less authentic in terms of emulating a physical CP/M machine.

On the other hand, the 60K version of the CCPs aims to provide a more realistic memory addressing space. It maintains the CCP entry point at the same loading address that it would occupy on a physical CP/M computer, adding to the authenticity of the emulation.

While the 64K and 60K versions are standard, it is possible to use other memory sizes, but this would necessitate rebuilding the CCP binaries. The source code needed to do this is available on disk A.ZIP. The CCP binaries are named to correspond with the amount of memory they are designed to operate with. For example, DRI's CCP designed for a 60K memory environment would be named CCP-DR.60K. RunCPM searches for the appropriate file based on the amount of memory selected when it is built.

It is important to note that starting with version 3.4 of RunCPM, regardless of the amount of memory allocated to the CP/M system, RunCPM will allocate 64K of RAM on the host machine. This ensures that the BIOS always starts at the same position in memory. This design decision facilitates the porting of an even wider range of CCP codes to RunCPM. Starting from version 3.4, it is essential to use new copies of the master disk A.ZIP, as well as the ZCPR2 CCP and ZCPR3 CCP (all of which are provided in the distribution).

Building dependencies

All boards now use the SdFat 2.x library, from here: https://github.com/greiman/SdFat/ All Arduino libraries can be found here: https://www.arduinolibraries.info/

SdFat library change

If you get a 'File' has no member named 'dirEntry' error, then a modification is needed on the SdFat library's SdFatConfig.h file (line 78 as of version 2.0.2), changing:

  #define SDFAT_FILE_TYPE 3

to:

  #define SDFAT_FILE_TYPE 1

as file type 1 is required for most of the RunCPM ports.

To find your libraries folder, open the Preferences in Arduino IDE and look at the Sketchbook location field.

Changes to Adapt to the Grand Central

Given that the official repository has already integrated the modifications to support the Grand Central, the following changes are primarily to serve educational purposes or as guidance for those intending to adapt the setup for other M4 boards.

All of the following should already be set in RunCPM.ino, but I'll write them out so you can see what changes have been made.

abstraction_arduino.h: For the Grand Central, the alteration pertains to the setting of HostOS:

On line 8, the line:


Should be transformed to:


RunCPM.ino Aligning with the alteration in abstraction_arduino.h, we also need to integrate Grand Central support in this file. Specifically, configurations relating to the SD card, LED interfaces, and the board's designation need adjustment. Insert a branch to the board configuration #if structure at approximately line 28:

  #define USE_SDIO 0
  SdFat SD;
  #define LED 13
  #define LEDinv 0

Due to certain ambiguous factors (perhaps the unique SPI bus configuration for the SD card), initializing the SD card and file system requires a different approach. Thus, following the insertion of the previous snippet, at line 108:

  if (SD.cardBegin(SDINIT, SD_SCK_MHZ(50))) {
    if (!SD.fsBegin()) {
      _puts("\nFile System initialization failed.\n");

This snippet replaces the original:

if (SD.begin(SDINIT)) {

Following these modifications, it's straightforward to get RunCPM functional. For communication, the USB connection acts as the terminal interface. However, take note that not all terminal emulators provide flawless compatibility. Since CP/M anticipates a VT100-style terminal, some features might not behave as expected.

Installing Adafruit SAMD M4 Boards

If you haven't already, you will need to add the Adafruit board definitions to the Arduino IDE. Navigate to File --> Preferences, then copy the URL below and paste it into the Additional Boards Manager URLs field:

  https://adafruit.github.io/arduino-board-index/package_adafruit_index.json

We only need to add one URL in this example, but you can add multiple URLs by separating them with commas.

Preparing the CP/M virtual drives :

VERY IMPORTANT NOTE - Starting with RunCPM version 3.7, the use of user areas has become mandatory. The support for disk folders without user areas was dropped between versions 3.5 and 3.6. If you are running a version up to 3.5, it is advisable to consider upgrading to version 3.7 or higher. However, before making this move, it is important to update your disk folder structure to accommodate the newly required support for user areas.

RunCPM emulates the disk drives and user areas of the CP/M operating system by means of subfolders located under the RunCPM executable’s directory. To prepare a folder or SD card for running RunCPM, follow these procedures:

  • Create subfolders in the location where the RunCPM executable is located. Name these subfolders "A", "B", "C", and so forth, corresponding to each disk drive you intend to use. Each of these folders represents a separate disk drive in the emulated CP/M environment.

  • Within the "A" folder, create a subfolder named "0". This represents user area 0 of disk A:. Extract the contents of the A.ZIP package into this "0" subfolder.

  • When you switch to another user area within CP/M, RunCPM will automatically create the respective subfolders, named "1", "2", "3", etc., as they are selected. For user areas 10 through 15, subfolders are created with names "A" through "F".

It is crucial to keep all folder and file names in uppercase to avoid potential issues with case-sensitive filesystems. CP/M originally supported only 16 disk drives, labeled A: through P:. Therefore, creating folder names representing drives beyond P: will not function in the emulation, and the same limitation applies to user areas beyond 15 (F).
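
The drive and user-area naming rule above is simple to express in C. The helper below is a hypothetical sketch of that rule, not RunCPM's actual code:

```c
/* Map a CP/M drive (0 = A:, up to 15 = P:) and user area (0-15) to
   the host folder names described above: drive letters "A".."P",
   user areas "0".."9" then "A".."F" for areas 10 through 15.
   Returns 1 on success, 0 for out-of-range input. */
int area_folder(int drive, int user, char *drive_name, char *user_name) {
    if (drive < 0 || drive > 15 || user < 0 || user > 15)
        return 0;
    *drive_name = (char)('A' + drive);
    *user_name  = (char)(user < 10 ? '0' + user : 'A' + user - 10);
    return 1;
}
```

So drive 0, user 0 maps to folder A/0, while drive 1, user 12 maps to B/C.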

Available CCPs :

RunCPM can run using its internal CCP or binary CCPs from real CP/M computers. A few CCPs are provided:

  • CCP-DR - Is the original CCP from Digital Research.
  • CCP-CCPZ - Is the Z80 CCP from RLC and others.
  • CCP-ZCP2 - Is the original ZCPR2 CCP modification.
  • CCP-ZCP3 - Is the original ZCPR3 CCP modification.
  • CCP-Z80 - Is the Z80CCP CCP modification, also from RLC and others.

The A.ZIP package includes the source code for the Console Command Processors (CCPs), allowing for native rebuilding if necessary. To facilitate this, SUBMIT (.SUB) files are provided, which are also useful for rebuilding some of the RunCPM utilities.

While the package comes with a set of CCPs, users can adapt additional CCPs to work with RunCPM. If successful in this adaptation, users are encouraged to share their work so it can be potentially added to the package for others to use. By default, RunCPM utilizes an internal CCP. However, if you prefer to use a different CCP, two specific steps must be taken, which are outlined below:

1 - Change the selected CCP in globals.h (in the RunCPM folder). Find the lines that show:

  /* Definition of which CCP to use (must define only one) */
  #define CCP_INTERNAL // If this is defined, an internal CCP will be emulated
  //#define CCP_DR
  //#define CCP_CCPZ
  //#define CCP_ZCPR2
  //#define CCP_ZCPR3
  //#define CCP_Z80

Comment out the CCP_INTERNAL line by inserting two slashes at the line's beginning. Then remove the two slashes at the start of the line containing the name of the CCP you intend to use. Save the file.

2 - Copy a matching CCP from the CCP folder to the folder that holds your A folder. Each CCP selection has two external CCPs, one for 60K and another for 64K. If you have already built the executable, you will need to do it again.

Anytime you wish to change the CCP, you must repeat these steps and rebuild.

IMPORTANT NOTE - CCP-Z80 expects the $$$.SUB to be created on the currently logged drive/user, so when using it, use SUBMITD.COM instead of SUBMIT.COM when starting SUBMIT jobs.

Contents of the "master" disk (A.ZIP) :

The "master" disk, labeled as A.ZIP, serves as the foundational environment for CP/M within RunCPM. It includes the source code for the Console Command Processors (CCPs) and features the EXIT program, which terminates RunCPM when executed.

The master disk also houses the FORMAT program, designed to create a new drive folder, simulating the process of formatting a disk. Importantly, the FORMAT program doesn't affect existing drive folders, ensuring its safe use. Despite its ability to create these drive folders, it doesn't have the capability to remove them from within RunCPM. To remove a drive folder created by the FORMAT program, manual deletion is necessary, which involves accessing the RunCPM folder or SD Card via a host machine.

In addition to these utilities, the master disk contains Z80ASM, a potent Z80 assembler that directly produces .COM files, ready for execution. To further enhance the RunCPM experience, the master disk also includes various CP/M applications not originally part of Digital Research Inc.'s (DRI's) official distribution. A detailed list of these additional applications can be found in the 1STREAD.ME file included on the master disk.


Printing to the PUN: and LST: devices is allowed and will generate files called "PUN.TXT" and "LST.TXT" under user area 0 of disk A:. These files can then be transferred over to a host computer via XMODEM for real physical printing. These files are created when the first printing occurs and are kept open throughout RunCPM usage. They can be erased inside CP/M to trigger the start of a new printing. As of now, RunCPM does not support printing to physical devices.

Limitations / Misbehaviors

The objective of RunCPM is not to emulate a Z80 CP/M computer perfectly, but to allow CP/M to be emulated as closely as possible while keeping its files on the native (host) filesystem.

This will obviously prevent the accurate physical emulation of disk drives, so applications like MOVCPM and STAT will not be useful.

The master disk, A.ZIP, continues to provide the necessary components to maintain compatibility with Digital Research Inc.'s official CP/M distribution. Currently, only CP/M 2.2 is fully supported, though work is ongoing to bring support for CP/M 3.0.

IN/OUT instructions are reserved for communication between the soft CPU's BIOS and BDOS and their equivalent functions within RunCPM, so they cannot be used for other tasks. The video monitor in this emulation environment is assumed to be an ANSI/VT100 emulation, which is the standard for DOS/Windows/Linux terminals. This means CP/M applications hard-coded for different terminals may encounter issues with screen rendering.
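Because the console is assumed to be ANSI/VT100, CP/M applications control the screen with standard escape sequences. A minimal Python sketch of two common ones (these are standard VT100 codes, not anything RunCPM-specific):

```python
# Standard ANSI/VT100 escape sequences of the kind CP/M applications
# emit when they assume a VT100-compatible console.
ESC = "\x1b"

def clear_screen() -> str:
    # ESC [ 2 J erases the entire display
    return ESC + "[2J"

def move_cursor(row: int, col: int) -> str:
    # ESC [ row ; col H positions the cursor (1-based coordinates)
    return f"{ESC}[{row};{col}H"

# Show the raw byte sequences rather than acting on the terminal
print(repr(clear_screen()))
print(repr(move_cursor(5, 10)))
```

An application hard-coded for, say, an ADM-3A terminal would emit entirely different control bytes, which is why such programs render incorrectly under RunCPM.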

When using a serial terminal emulator with RunCPM, it is important to configure the emulator to send either a Carriage Return (CR) or a Line Feed (LF) when the Enter key is pressed, but not both (CR+LF). Sending both can disrupt the DIR listing on Digital Research’s Command Control Processor (CCP), consistent with standard CP/M 2.2 behavior.

RunCPM does not support setting files to read-only or applying other CP/M-specific file attributes. All files within the RunCPM environment will be both visible and read/write at all times, necessitating careful file handling. RunCPM does support setting "disks" to read-only, but this read-only status applies only within the context of RunCPM. It does not alter the read/write attributes of the disk’s containing folder on the host system.

Furthermore, some applications, such as Hi-Tech C, may attempt to access user areas numbered higher than 15 to check for a specific CP/M flavor other than 2.2. This action results in the creation of user areas labeled with letters beyond 'F', which is expected behavior and will not be altered in RunCPM.

CP/M Software

Extensive CP/M software libraries are available online.

With the microSD card inserted and the Grand Central connected, verify that the board and port settings are correct, then build and install RunCPM onto the Grand Central.

RunCPM provides access to Arduino I/O capabilities through CP/M's BDOS (Basic Disk Operating System) interface. Loading the C register with a function number and calling address 5 invokes the additional functionality that has been added to the system.

For these functions, the number of the pin being used is placed in the D register and the value to write (when appropriate) is placed in E. For read functions, the result is returned as noted.


PinMode

LD C, 220
LD D, pin_number
LD E, mode ;(0 = INPUT, 1 = OUTPUT, 2 = INPUT_PULLUP)
CALL 5


DigitalRead

LD C, 221
LD D, pin_number
CALL 5

Returns result in A (0 = LOW, 1 = HIGH).


DigitalWrite

LD C, 222
LD D, pin_number
LD E, value ;(0 = LOW, 1 = HIGH)
CALL 5


AnalogRead

LD C, 223
LD D, pin_number
CALL 5

Returns result in HL (0 - 1023).

AnalogWrite (i.e. PWM)

LD C, 224
LD D, pin_number
LD E, value ;(0 - 255)
CALL 5
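To make the register protocol above concrete, here is a small Python simulation of the dispatch it implies. The function numbers (220-224) come from the text; the dispatcher and the simulated pin table are illustrative assumptions, not RunCPM's actual implementation (which forwards these calls to the real Arduino pinMode/digitalWrite family):

```python
# Simulated dispatch for the RunCPM BDOS I/O extensions described above.
# C selects the function, D is the pin number, E is the mode/value.
INPUT, OUTPUT, INPUT_PULLUP = 0, 1, 2

pins = {}  # pin number -> {"mode": ..., "value": ...}

def bdos_ext(c: int, d: int, e: int = 0) -> int:
    """Dispatch on the C register; returns what A (or HL) would hold."""
    if c == 220:                                   # PinMode
        pins[d] = {"mode": e, "value": 0}
        return 0
    if c == 221:                                   # DigitalRead -> A
        return pins.get(d, {}).get("value", 0)
    if c == 222:                                   # DigitalWrite
        pins.setdefault(d, {"mode": OUTPUT})["value"] = e
        return 0
    if c == 223:                                   # AnalogRead -> HL (0-1023)
        return 512                                 # placeholder reading
    if c == 224:                                   # AnalogWrite / PWM (0-255)
        pins.setdefault(d, {"mode": OUTPUT})["value"] = e
        return 0
    raise ValueError(f"unknown extension function {c}")

# Same sequence as the LED example below: pin 8 to OUTPUT, then HIGH
bdos_ext(220, 8, OUTPUT)
bdos_ext(222, 8, 1)
print(pins[8]["value"])   # 1
```

The two calls at the end mirror the PinMode-then-DigitalWrite sequence that the assembly example in the next section performs.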
Turning on a LED

Using the provided PinMode and DigitalWrite calls, writing code to control an LED, such as turning it on when connected to pin D8, becomes a straightforward task. To accomplish this, one can use the ED editor to create a file named LED.ASM with the necessary code. This file editing can be done directly on your workstation and saved to the SD card, which is a convenient approach given that ED, the editor, hails from a different era of computing and might feel a bit foreign to modern users accustomed to contemporary text editors.

; Turn on a LED wired to pin 8
org 100h    ;start address
mvi c, 220  ;pinmode
mvi d, 8    ;digital pin number
mvi e, 1    ;mode (1 = OUTPUT); also doubles as HIGH for the write below
push d      ;save arguments
call 5      ;call BDOS
pop d       ;restore arguments
mvi c, 222  ;digital write
call 5      ;call BDOS
ret         ;exit to CP/M

Then use the ASM command to assemble it:

A>asm led

RunCPM Version 3.7 (CP/M 2.2 60K)

This produces several files. LED.PRN is a text file containing your assembly language program along with the machine code it assembles to. Each line has 3 columns: address, machine code, and assembly language.

A>type led.prn

0100          org 100h
0100 0EDC     mvi c,220
0102 1608     mvi d,8
0104 1E01     mvi e, 1
0106 D5       push d
0107 CD0500   call 5
010A D1       pop d
010B 0EDE     mvi c, 222
010D CD0500   call 5
0110 C9       ret

There is also now a LED.HEX file. We can use the LOAD command/program to convert it into LED.COM which can be executed.
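LOAD's job is to gather the data records of the Intel HEX file into a flat binary image based at 0100h and write it out as a .COM file. A sketch of that conversion in Python (data records only; `hex_to_com` is a hypothetical name for illustration, not a CP/M tool):

```python
def hex_to_com(hex_lines):
    """Gather Intel HEX data records into a CP/M .COM image.

    A .COM file is just the raw bytes loaded at address 0100h, so each
    record's load address is rebased against 0x0100.
    """
    image = bytearray()
    for line in hex_lines:
        line = line.strip()
        if not line.startswith(":"):
            continue
        raw = bytes.fromhex(line[1:])
        count, rectype = raw[0], raw[3]
        if rectype != 0x00:            # 0x01 = EOF, others ignored here
            continue
        # All record bytes including the checksum must sum to 0 mod 256
        assert sum(raw) & 0xFF == 0, "bad checksum"
        offset = ((raw[1] << 8) | raw[2]) - 0x0100
        if len(image) < offset + count:
            image.extend(b"\0" * (offset + count - len(image)))
        image[offset:offset + count] = raw[4:4 + count]
    return bytes(image)

# A record carrying the two bytes 0E DC (MVI C,220) at address 0100h
com = hex_to_com([":020100000EDC13", ":00000001FF"])
print(com.hex())   # 0edc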

A>load led

BYTES READ    0011

Now it can be executed by typing its name at the prompt:

A>led

which will turn on the LED connected to pin D8.

So now we can read and write digital and analog I/O from Z80 assembly language code that's running on a Z80 emulated on the Grand Central. That seems pretty round-about.

While that's true, the point is to be able to play around with Z80 assembly language (and CP/M in this case) without having to find or build an actual Z80 system (although that can be its own kind of fun).

Closing Thoughts

One of the most lasting legacies of CP/M is its file naming convention: an 8-character filename followed by a 3-character file type (e.g., filename.txt), a standard that was carried into MS-DOS and later Windows. Its command-line interface, with commands like DIR to list files and REN to rename files, has echoes in the MS-DOS command prompt and persists in modern versions of Windows as the Command Prompt and PowerShell. CP/M was notable for being one of the first operating systems that was largely machine-independent, thanks to its separation between the operating system and the BIOS (Basic Input/Output System). This made it relatively easy to port CP/M to different computer systems and paved the way for the concept of a software ecosystem that is not tied to a specific set of hardware, a key principle in modern operating system design.

CP/M played a crucial role in the early days of personal computing; before the dominance of MS-DOS and Windows, CP/M was the de facto standard operating system for early microcomputers, fostering the personal computing revolution by making computers more approachable and useful for everyday tasks. When IBM was developing its first personal computer, CP/M was initially considered the operating system of choice, and although IBM ultimately went with MS-DOS (largely due to cost and timing), MS-DOS itself was heavily influenced by CP/M, with many command-line commands being similar and the overall architecture of MS-DOS bearing a strong resemblance to CP/M. This influence extended as MS-DOS evolved into Windows, making CP/M an indirect ancestor of one of the world’s most widely used operating systems. Even after its decline as a primary OS for general-purpose computers, CP/M found a second life in embedded systems and other specialized computing applications due to its lightweight, efficient design, setting the stage for the importance of compact, efficient operating systems in embedded and specialized computing devices, a category that has grown with the proliferation of IoT (Internet of Things) devices. In summary, CP/M stands as an iconic example of how early innovations in computing continue to have ripple effects that extend far into the future.


Stewart Cheifet and his Computer Chronicles

Stewart Cheifet is a name that carries significant weight in the world of technology broadcasting. For nearly two decades, he was the calm and insightful host of "The Computer Chronicles," a pioneering television series that debuted in the early 1980s on PBS. At a time when computers were transitioning from specialized tools to household staples, Cheifet emerged as a pivotal figure. With a demeanor that was both authoritative and approachable, he served as a trusted guide through the rapidly evolving landscape of personal computing, software development, and digital technology. Each week, Cheifet's show provided viewers with interviews, product reviews, and hands-on demonstrations, delivering invaluable insights in a way that was engaging and accessible to both tech enthusiasts and novices alike. As a technology communicator, Cheifet excelled in his ability to bridge the gap between the complex world of technology and the general public. His journalistic style was characterized by clarity, curiosity, and a deep respect for his audience’s intelligence, regardless of their familiarity with the subject at hand. Cheifet had a knack for asking the questions that viewers themselves might have posed, and his interactions with guests—ranging from tech industry titans to innovative programmers—were marked by an earnest desire to inform, rather than merely impress. Through "The Computer Chronicles," Cheifet didn't just report on the digital revolution; he played a vital role in demystifying it, making technology more accessible and comprehensible to millions of viewers around the world.

"The Computer Chronicles" was a groundbreaking television series that provided viewers with an informative and comprehensive look into the swiftly evolving world of personal computing. Conceived by Stewart Cheifet and co-creator Jim Warren, the show emerged as an earnest attempt to demystify computers and technology for the average person, at a time when such devices were beginning to permeate households and workplaces alike. Each episode of "The Computer Chronicles" offered a deep dive into various aspects of computing, ranging from hardware and software reviews to interviews with industry leaders, providing its viewers with a rare and detailed insight into the burgeoning tech world. The show launched in 1983, initially as a local program on KCSM-TV in San Mateo, California, before gaining nationwide syndication. What started as a modest production with a simple set and straightforward format quickly blossomed into an essential resource for viewers across the country. Under the stewardship of Cheifet and with the early influence of Warren, the show broke new ground, not merely following the tech trends of the time but often anticipating and spotlighting innovations before they reached the mainstream, thereby cementing its status as a must-watch guide in a rapidly changing digital landscape. From its inception, the central mission of "The Computer Chronicles" was to demystify technology for the average person. Cheifet and his team dedicated themselves to creating content that was both educational and accessible, understanding that for many of their viewers, the world of computers was both exciting and daunting. Each episode was crafted to break down complex concepts into easily digestible segments, whether it was explaining the basics of hardware and software, offering tutorials on popular applications, or providing insights into the broader trends of the tech industry.

The show originally aired from 1983 to 2002, a timeframe that was witness to some of the most transformative years in the history of computing. In this span, "The Computer Chronicles" chronicled the transition from bulky, expensive personal computers to sleek, affordable, and ubiquitous devices integral to daily life. It stood as a key resource during an era that saw the rise of the internet, the advent of user-friendly operating systems, and the explosion of software capable of tasks that had previously been the stuff of science fiction. The show was not just a product of its time, but a vital chronicle of a period of rapid technological advancement. The show emerged during a culturally significant era when technology was increasingly intersecting with daily life, but the public's understanding of this technology often lagged behind. This was a time when computers were transitioning from being perceived as intimidating, esoteric machines used only by scientists and engineers, to becoming central to education, communication, and entertainment in the broader culture. The show, in this context, played a pivotal role in helping to shape public perception of what computers could do and in promoting computer literacy at a time when that was becoming an increasingly essential skill. At the core of "The Computer Chronicles" was the mission to educate. Cheifet, along with a rotating roster of co-hosts and guest experts, took complex topics and translated them into language that was accessible to a general audience. Each episode aimed to empower viewers, whether they were tech-savvy enthusiasts or complete novices, with knowledge about the capabilities and potential of computers. In doing so, "The Computer Chronicles" served not only as a guide to understanding the technical developments of the era but also as a lens through which to view the broader cultural shifts that these technologies were driving.

In his early professional life, Cheifet navigated a variety of roles that paved the way for his iconic career in technology broadcasting. His background was in law, but his passion for technology and media quickly became apparent. His unique combination of legal acumen and genuine interest in the burgeoning world of computing offered him a distinct perspective, enabling him to articulate complex technological concepts in a way that was accessible and understandable to a wide audience. This fusion of skills would prove invaluable as he transitioned into a role that required the ability to communicate effectively about an industry that was, at the time, in its nascent stages and shrouded in technical jargon. Cheifet's path into broadcasting was serendipitous. He began working at a public television station in San Francisco in the late 1970s. Initially tasked with handling legal and administrative work, he quickly saw the potential for using television as a medium to educate the public about the rapidly evolving world of computers. Recognizing a gap in public knowledge about technology—a gap that was widening as computers became increasingly integral to both work and daily life—Cheifet became an advocate for the creation of a show that could bridge this divide. This advocacy, coupled with Cheifet’s natural on-camera presence and expertise in technology, led to the birth of "The Computer Chronicles." Under his leadership as host and producer, the show became an essential resource for viewers interested in keeping pace with the technological revolution that was unfolding before their eyes. Cheifet was not just the face of the program; he was its guiding force, curating content that was informative, engaging, and demystifying. 
In this role, he became more than a broadcaster; he became one of the most influential technology communicators of his time, deftly translating the complexities of computing into terms that viewers could not only understand but use to enhance their interaction with the rapidly changing digital world. In an era when personal computing was still a relatively new concept, Cheifet occupied a unique and essential role as a technology communicator. He stood at the intersection of the fast-paced world of technology and the general public, many of whom were just beginning to integrate computers into their daily lives. Cheifet's role was multifaceted: part educator, part interpreter, and part guide. He wasn't simply reporting on technological advancements; he was providing context, offering explanations, and helping viewers make sense of an industry that was revolutionizing society at a breathtaking pace.

Cheifet's talent lay in his ability to bridge the gap between the intricate, often intimidating world of technology and the average person. He recognized that, for many, the world of bits and bytes, processors and modems was a foreign landscape, but one that was becoming increasingly important to navigate. It was this recognition that drove Cheifet to break down complex topics into digestible, relatable segments. With a calm and steady demeanor, he approached each episode as an opportunity to empower his viewers, transforming intimidating jargon into clear and understandable language. Whether discussing the specifics of a new piece of software, the inner workings of a computer, or the broader implications of internet privacy, Cheifet acted as a translator, converting the technical into the practical. In this capacity, he played a pioneering role in tech communication. He understood that technology was not just for the experts; it was becoming a central part of everyone’s life, and thus everyone deserved to understand it. Cheifet saw the potential for technology to be a tool for widespread empowerment and sought to equip people with the knowledge they needed to harness that potential. Through "The Computer Chronicles," he demystified the computer revolution, making it approachable and accessible for viewers of all backgrounds. In doing so, he shaped the way an entire generation came to understand and interact with the technological world, emphasizing that technology was not just a subject for specialists, but a fundamental aspect of modern life that everyone could—and should—engage with.

Cheifet's concept for "The Computer Chronicles" was brought to life through a crucial partnership with Jim Warren, a notable computer enthusiast and the founder of the West Coast Computer Faire, one of the earliest and most significant personal computer conventions. Warren’s extensive connections in the tech community and passion for promoting computing to the general public made him an ideal partner for this venture. Together, Cheifet and Warren conceived of a program that would not simply report on the developments in computing, but would provide hands-on demonstrations, in-depth interviews with industry leaders, and practical advice for consumers — all delivered in a format that was both engaging and informative. The partnership between Cheifet and Warren was symbiotic, drawing on each other’s strengths to create a show that was greater than the sum of its parts. Cheifet, with his calm demeanor, articulate presentation, and background in broadcasting, was the steady hand steering the show's content and tone. Warren, with his deep connections, enthusiasm for computing, and desire to make tech accessible to the public, brought the kind of insider perspective that added depth and authenticity to the program. Together, they created a dynamic and effective team that would go on to shape "The Computer Chronicles" into a beloved and respected institution in the tech world.

To stay relevant and beneficial, "The Computer Chronicles" knew it had to do more than just keep pace with the fast-evolving world of technology; it needed to stay ahead. Cheifet and his team were constantly on the lookout for emerging technologies and trends, often bringing viewers an early look at innovations that would later become commonplace. This forward-looking approach wasn't just about showcasing the latest gadgets and gizmos; it was about helping viewers understand the trajectory of technology and how it could impact their lives in meaningful ways. This focus on anticipating the future of tech was a defining characteristic of the show and a testament to its commitment to empowering its audience. "The Computer Chronicles" was not only a guide but also a trusted advisor for viewers. It assumed a responsibility to deliver not just information, but also critical analysis and advice. Cheifet and his co-hosts didn't shy away from asking hard-hitting questions of their guests, who ranged from tech industry titans to innovative start-up founders. The show took its role as a public educator seriously, aiming to provide viewers with the knowledge they needed to make informed decisions, whether they were purchasing a new piece of hardware, choosing software for their business, or simply trying to understand the social and ethical implications of a new technology. Underlying all of this was a deep respect for the audience. The show never assumed excessive prior knowledge, nor did it oversimplify to the point of condescension. The balance that Cheifet and his team struck—between depth and accessibility, between enthusiasm for technology and a critical eye—was the essence of the show’s enduring appeal. It respected its viewers as curious, intelligent individuals eager to engage with the digital world, and took on the role of guide with humility and grace, always aiming to educate, enlighten, and empower.

In the midst of the rapidly evolving tech landscape, "The Computer Chronicles" managed to spotlight some of the most significant figures and innovations of its time. The interviews conducted on the show were more than just conversations; they were historical records, capturing the insights and visions of the individuals who were shaping the future of technology. Stewart Cheifet’s interviews with Bill Gates explored the rise of Microsoft and the Windows operating system that would come to dominate personal computing. His conversations with Steve Jobs provided a glimpse into the mind of a man whose ideas would revolutionize multiple industries, from personal computers with the Macintosh to animated movies with Pixar, and later, mobile communications with the iPhone. Beyond these famous figures, "The Computer Chronicles" showcased a multitude of other influential personalities in the tech world, such as Gary Kildall, the developer of the CP/M operating system, and Mitch Kapor, the founder of Lotus Development Corporation and the architect of Lotus 1-2-3, a pioneering spreadsheet application that played a key role in the success of IBM's PC platform. These interviews provided viewers with an intimate understanding of the key players in the tech industry and their visions for the future, directly from the source.

The technology showcases on "The Computer Chronicles" were a core part of its mission to educate the public. The program offered hands-on demonstrations of groundbreaking products and software, serving as a critical resource for viewers in a time before the internet made such information widely accessible. For example, the show provided early looks at graphical user interfaces, which made computers more user-friendly and accessible to non-experts; this was a transformative shift in how people interacted with computers. It also featured episodes on emerging technologies such as CD-ROMs, early forms of internet connectivity, and the first portable computers, shedding light on how these innovations would come to be integrated into everyday life. Through these showcases, the program didn't just report on technology; it brought technology into the living rooms of viewers, making the future feel tangible and immediate. The show, in its near two-decade run, was not confined to an American audience. Its international syndication expanded its reach to a global scale, touching the lives of viewers across continents. In a period when access to technology news and developments was limited in many parts of the world, "The Computer Chronicles" stood as a beacon of information. It played an instrumental role in familiarizing international audiences with the developments in Silicon Valley, the emerging global hub of technology. For many overseas, the show became the window through which they glimpsed the cutting-edge advancements in computing and the digital revolution that was reshaping societies.

As the show journeyed through the years, its chronicles mirrored the seismic shift in global tech culture. In the early 1980s, when "The Computer Chronicles" began its broadcast, computers were predominantly seen as large, intimidating machines reserved for business, academia, scientific research, engineering, or the realm of enthusiastic hobbyists. They were more an anomaly than a norm in households. However, as the years progressed and the show continued to share, explain, and demystify each technological advancement, a noticeable transformation was underway. Computers evolved from being hefty, esoteric devices to compact, user-friendly, and essential companions in everyday life. This shift in tech culture was not solely about hardware evolution. The show also highlighted the software revolutions, the birth of the internet, and the early inklings of the digital society that we live in today. "The Computer Chronicles" documented the journey from a time when software was purchased in physical boxes to the era of digital downloads; from the era where online connectivity was a luxury to the age where it became almost as vital as electricity. The show captured the world's transition from disconnected entities to a globally connected network, where information and communication became instantaneous. Reflecting on the legacy of the show, it's evident that its influence transcended mere entertainment or education. It served as a compass, helping global viewers navigate the torrent of technological advancements. By chronicling the shift in tech culture, the show itself became an integral part of that transformation, shaping perceptions, bridging knowledge gaps, and fostering a sense of global camaraderie in the shared journey into the digital age. The show was more than just a television show; it was a comprehensive educational resource that was utilized in a variety of contexts. 
Schools, colleges, and community centers often integrated episodes of the show into their curricula to provide students with real-world insights into the fast-evolving landscape of technology. The detailed product reviews, software tutorials, and expert interviews that were a staple of the program served as valuable supplemental material for educators striving to bring technology topics to life in the classroom. In a period where textbooks could quickly become outdated due to the pace of technological change, "The Computer Chronicles" offered timely and relevant content that helped students stay abreast of the latest developments in the field.

The show didn’t just educate; it inspired. Its unique blend of in-depth analysis, hands-on demonstrations, and approachable dialogue set a standard for technology communication that has had a lasting influence on subsequent generations of tech shows and podcasts. "The Computer Chronicles" proved that it was possible to engage with complex technological concepts in a way that was both rigorous and accessible, a principle that has been embraced by many contemporary tech commentators. Its format, which seamlessly blended product reviews, expert interviews, and thematic explorations of tech trends, has become a template that many tech-focused shows and podcasts continue to follow, a testament to the show's innovative and effective approach to technology journalism. Furthermore, the show was an early example of public media's power to engage in significant educational outreach beyond the traditional classroom setting. Its commitment to public service broadcasting meant that it prioritized content that was not only informative but also genuinely useful for its viewers. Whether helping a small business owner understand the potential of a new software suite, or guiding a parent through the maze of educational tools available for their children, the show was constantly oriented towards empowerment and enrichment. In doing so, it exemplified the potential for technology-focused media to serve as a force for widespread public education and digital literacy.

"The Computer Chronicles" serves as a remarkable and extensive historical document of a pivotal era in the evolution of technology. As it tracked and discussed the innovations of its time, the show unintentionally created a comprehensive and detailed record of the late 20th-century digital revolution. Each episode now stands as a snapshot of a specific moment in tech history, capturing the state of hardware, software, and digital culture at various points in time. From early computers with limited capabilities to the dawn of the internet and the rapid advancement of personal computing devices, "The Computer Chronicles" chronicled not just the technologies themselves, but also the ways in which people engaged with and thought about these new tools. As such, the show provides future generations with a rich, nuanced, and human perspective on a transformative era.

Recognizing the historical and educational value of "The Computer Chronicles," various institutions have taken steps to preserve and make accessible this unique resource. Notably, the Internet Archive, a non-profit digital library offering free access to a vast collection of digital content, hosts a comprehensive collection of episodes from the show. This initiative ensures that the extensive trove of information, insights, and interviews from "The Computer Chronicles" remains available to the public, researchers, and historians. By housing the show in such archives, the program is preserved as a significant part of the public record, a move that acknowledges the profound impact that this show had on shaping public understanding of technology. Each episode is also readily available for viewing on YouTube.

Beyond its archival function, the preservation of "The Computer Chronicles" in repositories like the Internet Archive also invites contemporary audiences to engage with the program anew. For tech enthusiasts, educators, or anyone interested in the history of technology, these archives are a goldmine. They offer an engaging way to explore the trajectory of digital tools and culture, and to better understand the foundations upon which our current, highly interconnected digital world was built. As technology continues to advance at an ever-accelerating pace, the preservation of shows like "The Computer Chronicles" ensures that we maintain a connection to, and understanding of, the roots of our digital age.

Stewart Cheifet has maintained his keen perspective on the ever-evolving world of technology. In recent interviews and statements, he often draws parallels between the early years of personal computing, which "The Computer Chronicles" so meticulously documented, and today's rapidly advancing digital age. Cheifet has remarked on the cyclical nature of tech innovation; where once the personal computer was a revolutionary concept that promised to change the world, today we see similar transformative promises in areas like artificial intelligence, blockchain technology, and quantum computing. He has noted how each new wave of technology brings with it a mix of excitement, skepticism, disruption, and adaptation — patterns that were as evident in the era of "The Computer Chronicles" as they are in today's tech landscape. Cheifet’s views on the evolution of the tech world are informed by a deep historical perspective. He has often spoken about the increasing integration of technology into our daily lives, a trend that "The Computer Chronicles" began tracking at its infancy. In the show’s early days, computers were largely separate from other aspects of life; today, Cheifet observes, they are deeply embedded in everything we do, from how we work and learn to how we socialize and entertain ourselves. This is a transformation that "The Computer Chronicles" both predicted and helped to shape, as it worked to demystify computers and promote digital literacy at a time when the technology was new and unfamiliar to most people.

Furthermore, Cheifet has provided insights on the responsibilities that come with technological advancements. He has emphasized the ethical considerations that technology developers and users must grapple with, particularly as digital tools become more powerful and pervasive. Cheifet has stressed the importance of thoughtful, informed dialogue about the implications of new technologies — a principle that was at the heart of "The Computer Chronicles" and that remains deeply relevant today. As the digital world continues to evolve at a breakneck pace, Cheifet’s voice is a reminder of the need to approach technology with both enthusiasm and critical awareness, values that he has championed throughout his career. His influence on tech journalism and education is profound and enduring. As the host of "The Computer Chronicles," he pioneered a format for technology communication that was both accessible and deeply informative, bridging the gap between the technical community and the general public at a critical juncture in the history of computing. His calm, clear, and insightful manner of presentation turned what could have been complex and intimidating subjects into comprehensible and engaging content. Cheifet’s work helped to demystify the world of computers at a time when they were becoming an integral part of society, making technology accessible and approachable for viewers of all backgrounds and levels of understanding. In this sense, he played a pivotal role in shaping the public’s relationship with technology, promoting a level of digital literacy that was foundational for the internet age.

Beyond journalism, Cheifet's impact reverberates in educational circles as well. "The Computer Chronicles" was not only a popular TV show; it became a valuable educational resource used by teachers and trainers to familiarize students with the world of computers. Even after the show ended, Cheifet continued his role as an educator, engaging with academic communities through lectures and contributions to educational content. By fostering a deeper understanding of technology's role and potential, Stewart Cheifet has left a lasting legacy that goes beyond broadcasting — he has contributed significantly to the culture of tech education and awareness that we recognize as essential in today’s interconnected world. "The Computer Chronicles" stands as an enduring and invaluable record of a transformative era in the history of technology. Its extensive archive of episodes offers a detailed chronicle of the evolution of computing, from the early days of personal computers to the rise of the internet and beyond. In a world where the pace of technological innovation continues to accelerate, "The Computer Chronicles" serves as a foundational document, preserving the context, the excitement, and the challenges of a time when computers were moving from the realm of specialists into the hands of the general public. For today’s tech enthusiasts, it provides a vivid and insightful perspective on how the digital world as we know it was built, offering lessons on innovation, adaptation, and the human side of technological progress.

The show’s enduring relevance is also reflected in its approach to tech journalism — rigorous, curious, and always striving to demystify complex topics for its viewers. "The Computer Chronicles" was more than a show about gadgets; it was a show about the people who made and used those gadgets, and the ways in which technology was starting to reshape society. As such, it offers a model for future tech communicators on how to cover the world of technology in a way that is both deeply informed and broadly accessible. In this sense, "The Computer Chronicles" continues to serve as an essential resource not only for understanding the past, but also for engaging thoughtfully with the future of technology. As a pioneering tech communicator, Cheifet stands as a seminal figure in the landscape of technology journalism and education. For over two decades, through "The Computer Chronicles," he brought the complex world of computers and technology into the living rooms of millions, acting as both a guide and a translator between the burgeoning world of digital innovation and a public hungry to understand and engage with it. With a demeanor that was authoritative yet approachable, Cheifet had an uncanny ability to take intricate, technical topics and distill them into digestible, relatable content. His work has left an indelible mark on how we interact with and think about technology. Today, as we navigate an ever-changing digital environment, the foundational literacy in computing that Cheifet and his show promoted feels not just prescient, but essential. His lasting legacy is apparent not only in the rich archive of "The Computer Chronicles," which continues to be a resource for tech enthusiasts and historians alike, but also in the broader culture of tech journalism and communication. 
Cheifet’s influence can be seen in every tech podcast that seeks to break down complex topics for a general audience, every YouTube tech reviewer who strives to balance expertise with accessibility, and every tech educator who uses media to bring digital skills to a wider community. In a world increasingly shaped by digital tools and platforms, Stewart Cheifet’s pioneering work as a tech communicator remains a touchstone, exemplifying the clarity, curiosity, and humanity that effective technology communication demands.

Motorola 68000 Processor and the TI-89 Graphing Calculator

The Revolutionary Motorola 68000 Microprocessor

In the annals of computing history, few microprocessors stand out as prominently as the Motorola 68000. This silicon marvel, often referred to simply as the "68k," laid the foundation for an entire generation of computing, playing a seminal role in the development of iconic devices ranging from the Apple Macintosh to the Commodore Amiga, and from the Sega Genesis to the powerful workstations of the 1980s, like the Sun-1 workstation, introduced by Sun Microsystems in 1982.

Inception and Background

Introduced to the world in 1979 by Motorola's Semiconductor Products Sector, the Motorola 68000, the first member of a family of 32-bit complex instruction set computer (CISC) microprocessors, emerged as a direct response to the demand for more powerful and flexible CPUs. For a trip back in time, check out this 1986 episode of The Computer Chronicles on RISC vs. CISC architectures. The 1970s witnessed an explosion of microprocessor development, with chips like the Intel 8080 (introduced in 1974), the MOS Technology 6502 (introduced in 1975), and the Zilog Z80 (introduced in 1976) shaping the first wave of personal computers. But as the decade drew to a close, there was a noticeable need for something more: a processor that could handle the increasing complexities of software and pave the way for the graphical user interface and multimedia era. The m68k was one of the first widely available processors with a 32-bit instruction set, a large unsegmented address space, and relatively high speed for the era. As a result, it became a popular design through the 1980s and was used in a wide variety of personal computers, workstations, and embedded systems.

The m68k has a rich instruction set that includes a variety of features for both general-purpose and specialized applications. For example, the m68k has instructions for bit manipulation and binary-coded decimal arithmetic, along with a full complement of instructions for handling interrupts and exceptions. Floating-point arithmetic and memory management were initially provided by companion chips (the 68881/68882 FPU and the 68851 PMMU) and were later integrated on-die in family members such as the 68030 and 68040.

The m68k is a well-documented and well-supported processor. There are a number of compilers and development tools available for the m68k, and it has been supported by a variety of operating systems, including Unix, Linux, and the classic Mac OS.

The m68k is still in use today, albeit to a lesser extent than it was in the 1980s and 1990s. It is still used in some embedded systems, and it is also used in some retrocomputing projects.

The 68k's Distinction

Several factors distinguished the 68k from its contemporaries. At the heart of its design was a 32-bit internal architecture. This was a significant leap forward, as many microprocessors of the era, including its direct competitors, primarily operated with 8-bit or 16-bit architectures. This expansive internal data width allowed the 68k to manage larger chunks of data at once and perform computations more efficiently.

Yet, in a nod to compatibility and cost-effectiveness, the 68k featured a 16-bit external data bus and a 24-bit address bus. This nuanced approach meant that while the chip was designed with a forward-looking architecture, it also remained accessible and affordable for its intended market.

Here's a deeper look into the distinct attributes that set the 68k apart:

  1. 32-bit Internal Architecture: At its core, the 68k was designed as a 32-bit microprocessor, which was a visionary move for its time. While many competing processors like the Intel 8086 and Zilog Z8000 were primarily 16-bit, the 68k's 32-bit internal data paths meant it could process data in larger chunks, enabling faster and more efficient computation. This internal width was a signal to the industry about where the future of computing was headed, and the 68k was at the forefront.

  2. Hybrid Bus System: Despite its 32-bit internal prowess, the 68k was pragmatic in its external interfacing. It featured a 16-bit external data bus and a 24-bit address bus. This choice was strategic: it allowed the 68k to communicate with the then-available 16-bit peripheral devices and memory systems, ensuring compatibility and reducing system costs. The 24-bit address bus meant it could address up to 16 megabytes of memory, a generous amount for the era.

  3. Comprehensive Instruction Set: One of the crowning achievements of the 68k was its rich and versatile instruction set. Starting with 56 instructions, it was not just about the number but the nature of these instructions. They were designed to be orthogonal, meaning instructions could generally work with any data type and any addressing mode, leading to more straightforward assembly programming and efficient use of the available instruction set. This design consideration provided a more friendly and versatile environment for software developers.

  4. Multiple Register Design: The 68k architecture sported 16 general-purpose registers, split equally between data and address registers. This was a departure from many contemporaneous designs that offered fewer registers. Having more registers available meant that many operations could be performed directly in the registers without frequent memory accesses, speeding up computation significantly.

  5. Forward-Thinking Design Philosophy: Motorola designed the 68k not just as a response to the current market needs but with an anticipation of future requirements. Its architecture was meticulously crafted to cater to emerging multitasking operating systems, graphical user interfaces, and more complex application software. This forward-leaning philosophy ensured that the 68k remained relevant and influential for years after its debut.

  6. Developer and System Designer Appeal: The 68k's design was not just about raw power but also about usability and adaptability. Its clean, consistent instruction set and powerful addressing modes made it a favorite among software developers. For system designers, its compatibility with existing 16-bit components and its well-documented interfacing requirements made it a practical choice for a wide range of applications.

Redefining an Era

But perhaps what truly set the 68k apart from its peers was not just its technical specifications but its broader philosophy. Where many processors of the era were designed with a focus on backward compatibility, the 68k looked forward. It was built not just for the needs of the moment, but with an eye on the future—a future of graphical user interfaces, multimedia applications, and multitasking environments. In the context of the late 1970s and early 1980s, the Motorola 68000 was a beacon of innovation. Its architecture represented a departure from many conventions of the time, heralding a new wave of computing possibilities.

In the vibrant landscape defined by the Motorola 68000's influential reign, there exists a contemporary 68k tiny computer: the Tiny68K, a modern homage to this iconic microprocessor. This compact, single-board computer embodies the 68k's forward-thinking design philosophy, serving both as an educational tool and a nostalgic nod to the golden age of computing. Equipped with onboard RAM, ROM, and serial communication faculties, the Tiny68K is more than just a tribute; it's a hands-on gateway for enthusiasts and students to dive deep into the 68k architecture. By offering a tangible platform for assembly programming and hardware design exploration, the Tiny68K seamlessly marries the pioneering spirit of the 68k era with the curiosity of contemporary tech enthusiasts.

But if we step back two and a half decades from the contemporary Tiny68K, we find the Texas Instruments TI-89 graphing calculator. Launched in the late 1990s, and predating the TI-84+ by several years, the TI-89 represented a significant leap forward in handheld computational capability for students and professionals. While the 68k had already etched its mark in workstations and desktop computers, its adoption into the TI-89 showcased its versatility and longevity. This wasn't just any calculator; it was a device capable of symbolic computation, differential equations, and even 3D graphing — functionalities akin to sophisticated computer algebra systems, but fitting snugly in one's pocket. The choice of the 68k for the TI-89 wasn't merely a hardware decision; it was a statement of intent, bringing near-desktop-level computational power to the classroom. The TI-89, with its 68k heart, became an indispensable tool for millions of students worldwide. In this manner, the 68k's legacy took a pedagogical turn, fostering learning and scientific exploration in academic settings globally, further cementing its storied and diverse contribution to the world of computing.

During the late 1990s and early 2000s, as I delved into the foundational calculus studies essential for every engineering and computer science student, I invested in a TI-89. Acquiring it with the savings from my college job, this graphing calculator, driven by the robust 68k architecture, swiftly became an invaluable tool. Throughout my undergraduate academic journey, the TI-89 stood out not just as a calculator, but as a trusted companion in my studies. From introductory calculus to multivariate calculus to linear algebra and differential equations, my TI-89 was rarely out of reach while in the classroom.

The TI-89 was not the only device in my backpack. At the same time in my schooling, my undergraduate university, the University of Minnesota Duluth (UMD), took what was, at the time, a pioneering step in integrating technology into the education process. In 2001, the university instituted a forward-thinking requirement for its science and engineering students: the ownership and use of an HP iPAQ. Laptops were not yet as universal as they are now, and the College of Science and Engineering felt the iPAQ would be a good choice.

In 2001, the popular models of the HP iPAQ were the H3600 series. These iPAQs were powered by the Intel StrongARM SA-1110 processor, which typically ran at 206 MHz. The StrongARM was a low-power, high-performance microprocessor that made it particularly suitable for mobile devices like the iPAQ, providing a balance between performance and battery life.

The StrongARM microprocessor was a result of collaboration between ARM Ltd. and Digital Equipment Corporation (DEC) in the mid-1990s. It was developed to combine ARM's architectural designs with DEC's expertise in high-performance processor designs.

The processor was based on the ARM v4 architecture, a RISC design. It operated at speeds from 160 MHz to 233 MHz and was notable for its minimal power consumption, making it ideal for mobile and embedded systems; some models consumed as little as 1 mW/MHz. With a performance rate nearing 1 MIPS per MHz, it was designed for high-performance tasks. Manufactured using a 0.35-micron CMOS process, the StrongARM featured a 32-bit data and address bus, incorporated both instruction and data caches, and came with integrated features like a memory management unit. It was widely used in devices like the iPAQ, various embedded systems, and network devices. Though its production lifespan was relatively short after Intel acquired DEC's semiconductor operations, the StrongARM showcased the capabilities of ARM designs in merging high performance with power efficiency.

By most measurements, the HP iPAQ and its StrongARM processor had more processing power, more memory, and a subjectively more modern user interface. Despite these impressive characteristics, the requirement and use of the device at UMD was short-lived. Among a number of issues, connectivity and available software made the iPAQ program fall short: the device often couldn't be used on tests, it lacked readily available software for symbolic algebra and calculus, and despite its snappier UI, devices like the TI-89 (and others in the TI-8x family) offered far more intuitive navigation without the need for a stylus pen. By the fall of 2002, I was no longer carrying around this extra device.

The TI-89 was simply better suited for the engineer-in-training. The TI-89 bridged the gap between abstract theoretical concepts and tangible results. But there was more to the device. The TI-89's programmable nature ushered in a culture of innovation. Students and enthusiasts alike began developing custom applications, ranging from utilities to assist in specific academic fields to games that offered a brief respite from rigorous studies. This inadvertently became an entry point for many into the world of programming and software development.

The legacy of the TI-89 extends beyond its lifespan. It's seen in the modern successors of graphing calculators and educational tools that continue to be inspired by its pioneering spirit. It's remembered fondly by a generation who witnessed firsthand the transformative power of integrating cutting-edge technology into education.

Here are some highlights of the TI-89:

  1. Memory Architecture: The TI-89 was no slouch when it came to memory, boasting 2 MB of Flash ROM and 256 KB of RAM (roughly 188 KB of which was available to the user). This generous allocation, especially for a handheld device of its era, allowed for advanced applications, expansive user programs, and data storage.

    • RAM (Random Access Memory): The TI-89 features 256 KB (kilobytes) of RAM. This type of memory is used for active calculations, creating variables, and running programs. It can be cleared or reset, which means data stored in RAM is volatile and can be lost if the calculator is turned off or resets.

    • Flash ROM (Read-Only Memory): The TI-89 boasts 2 MB (megabytes) of Flash ROM. This memory is non-volatile, meaning that data stored here remains intact even if the calculator is turned off. Flash ROM is primarily used to store the calculator's operating system, apps, and other user-installed content. Because it's "flashable," the OS can be updated, and additional apps can be added without replacing any hardware.

    • Archive Space: A portion of the Flash ROM (usually the majority of it) is used as "archive" space. This is where users can store programs, variables, and other data that they don't want to lose when the calculator is turned off or if the RAM is cleared.

  2. Display Capabilities: A 160x100 pixel LCD screen was central to the TI-89's interface. This high-resolution display was capable of rendering graphs, tables, equations, and even simple grayscale images. It was instrumental in visualizing mathematical concepts, from 3D graphing to differential equation solutions.

  3. Input/Output (I/O) Interfaces: The TI-89 was equipped with an I/O port, enabling connection with other calculators, computers, or peripheral devices. This feature facilitated data transfer, software upgrades, and even collaborative work. Additionally, the calculator could be connected to specific devices like overhead projectors for classroom instruction, further emphasizing its role as an educational tool.

  4. Operating System and Software: The calculator ran on an advanced operating system that supported not only arithmetic and graphing functionalities but also symbolic algebra and calculus. Furthermore, the TI-89 could be programmed in its native TI-BASIC language or with m68k assembly, offering flexibility for developers and hobbyists alike. We will go into the OS in more detail later in this write-up.

  5. Expandability: One of the distinguishing features of the TI-89 was its ability to expand its capabilities through software. Texas Instruments, along with third-party developers, created numerous applications for a range of academic subjects, from physics to engineering to finance. Its programmable nature also allowed students and enthusiasts to write custom programs tailored to their needs.

  6. Hardware Extensions: Over the years, peripheral hardware was developed to extend the capabilities of the TI-89. This included items like memory expansion modules, wired and wireless communication modules, and even sensors for data collection in scientific experiments.

  7. Power Management: The TI-89 was designed for efficient power management. Relying on traditional AAA batteries and a backup coin cell battery to retain memory during main battery replacement, it optimized power usage to ensure long operational periods, essential for students during extended classes or examination settings.

The Business Side

The graphing calculator market possesses several unique characteristics. At its core, the market is oligopolistic in nature, with just a handful of brands like Texas Instruments, Casio, and Hewlett-Packard taking center stage. This structure not only restricts consumer choices but also provides these companies with considerable clout over pricing and product evolution.

A significant factor for these devices is the stable demand they enjoy, primarily driven by their use in high school and college mathematics courses. Year after year, there's a consistent need for these tools, ensuring a predictable market. In terms of technological evolution, the graphing calculator hasn't witnessed revolutionary changes. However, there are discernible improvements, such as the integration of color screens, rechargeable batteries, and augmented processing capabilities in newer models. Another dimension to the equation is the regulatory environment, especially in the context of standardized testing. Only particular calculators are permitted in such settings, which can heavily impact the popularity of specific models among students. Yet, as technology advances, these traditional devices face stiff competition from modern smartphone apps and software offering similar functionalities. Although regulations and educational preferences keep dedicated devices relevant, the growing digital ecosystem poses a formidable challenge.

Pricing in this market is interesting as well. Given their essential role in education, these calculators exhibit a degree of price inelasticity. Students, when presented with a need for a specific model by their institutions, often have little choice but to purchase it, irrespective of minor price hikes. This brings us to another vital market feature: the influence of educational institution recommendations. Schools and colleges often have a say in the models or brands their students should buy, like my undergraduate requirement to have an HP iPAQ, thereby significantly shaping purchase decisions.

Prior to 2008, Texas Instruments broke out their Education Technologies business into its own line item in Securities & Exchange Commission (SEC) 10-Q filings. Education Technologies was primarily concerned with graphing calculators. In 2009, the Wall Street Journal highlighted that Texas Instruments dominated the US graphing calculator market, accounting for roughly 80% of sales. Meanwhile, its rival, Hewlett-Packard (HP), secured less than 5% of this market share. The report further revealed that for all of 2007, calculator sales contributed $526 million in revenues and $208 million in profits to TI, making up about 5% of the company's yearly profits. TI has since rolled their Education Technologies division into an "Other" category on their SEC filings. Even without explicitly calling out its graphing calculators business, their technology remains a mainstay in the educational-industrial complex that is the secondary education system in the US. For a student opinion on the monopolistic grip TI has on the market, check out this.

Texas Instruments Operating System

The TI-89, one of Texas Instruments' advanced graphing calculators, operates on the TI-OS (Texas Instruments Operating System), which offers a slew of sophisticated features catering to high-level mathematics and science needs. The TI-OS provides symbolic manipulation capabilities, allowing users to solve algebraic equations, differentiate and integrate functions, and manipulate expressions in symbolic form. It supports multiple graphing modes, including 3D graphing and parametric, polar, and sequence graphing. The system comes equipped with a versatile programming environment, enabling users to write their custom programs in TI-BASIC or Assembly (as was previously mentioned above). Additionally, the OS incorporates advanced calculus functionalities, matrix operations, and differential equations solvers. It also boasts a user-friendly interface with drop-down menus, making navigation intuitive and efficient.

Beyond the aforementioned features, the TI-89's TI-OS also extends its capabilities to advanced mathematical functions like Laplace and Fourier transforms, facilitating intricate engineering and physics calculations. The calculator’s list and spreadsheet capabilities permit data organization, statistical calculations, and regression analysis. Its built-in Computer Algebra System (CAS) is particularly noteworthy, as it can manipulate mathematical expressions and equations, breaking them down step by step – a godsend for students trying to understand complex mathematical procedures.

In terms of usability, the OS supports a split-screen interface, enabling simultaneous graph and table viewing. This becomes especially helpful when analyzing functions and their respective data points side by side. The operating system also supports the ability to install and utilize third-party applications, expanding the calculator's functionality according to the user's requirements.

Connectivity-wise, TI-OS facilitates data transfers between calculators and to computers. This makes it easier for students and professionals to share programs, functions, or data sets. Moreover, with the integration of interactive geometry software, users can explore mathematical shapes and constructions graphically, fostering a more interactive learning environment. Overall, the TI-89's TI-OS is a robust system that merges comprehensive mathematical tools with user-centric design, making complex computations and data analysis both effective and intuitive.

Writing Software on the TI-89

Let's outline a simple Monte Carlo method to estimate π:

The basic idea of the Monte Carlo method is to randomly generate points inside a square and determine how many fall inside a quarter-circle inscribed within that square. The ratio of points that fall inside the quarter-circle to the total number of points generated will approximate π/4. Isn't math just magical?
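Why π/4? For points drawn uniformly from the unit square, the expected fraction that lands inside the quarter-circle is simply the ratio of the two areas:

$$\frac{M}{N} \approx \frac{\text{area of quarter-circle}}{\text{area of unit square}} = \frac{\frac{1}{4}\pi \cdot 1^2}{1^2} = \frac{\pi}{4}$$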

Here's a rudimentary outline:

  1. Create a unit square (x and y each range from 0 to 1).
  2. The quarter-circle within the square is defined by the equation: $$x^2 + y^2 \le 1$$
  3. Randomly generate points (x, y) within the square.
  4. Count how many points fall within the quarter-circle.

If you generate N points and M of them fall inside the quarter-circle, then the approximation for π is:

$$\pi \approx 4 \times \frac{M}{N}$$

Here's a basic implementation in TI-BASIC:

: Prompt N   ; Ask the user for the number of iterations.
: 0→M        ; Initialize M, the number of points inside the quarter-circle.
: For(I,1,N) ; Start a loop from 1 to N.
: rand→X     ; Generate a random number for the x-coordinate between 0 and 1.
: rand→Y     ; Generate a random number for the y-coordinate between 0 and 1.
: If X^2 + Y^2 ≤ 1  ; Check if the point (X, Y) lies inside the quarter-circle.
: M + 1→M    ; If it does, increment the count M.
: End        ; End the loop.
: 4*M/N→P    ; Calculate the approximation for π.
: Disp "Approximation for π:", P  ; Display the result.

When you run this program, you'll input the number of random points (N) to generate. More points will give a more accurate approximation, but the program will run longer. The rand function in TI-BASIC returns a random number between 0 and 1, which is ideal for this method.

Here's a basic M68k assembly outline for the TI-89 (please note this is a high-level, pseudo-code-style representation, as creating an exact and fully functional assembly code requires a more detailed approach):

This was written with the generous help of ChatGPT

    ORG     $0000                   ; Starting address (set as needed)

Start:
    ; Initialize the counters and total points (iterations)
    MOVE.L  #TOTAL_POINTS, D2       ; D2 will be our total iteration counter
    CLR.L   D3                      ; D3 will be our "inside circle" counter

Loop:
    ; Generate random x and y in the range [-1,1]
    JSR     GenerateRandom
    FMOVE   FP0, FP2                ; FP2 is our x-coordinate
    JSR     GenerateRandom          ; FP0 is our y-coordinate

    ; Compute distance from (0,0): sqrt(x^2 + y^2)
    FMUL    FP2, FP2                ; x^2
    FMUL    FP0, FP0                ; y^2
    FADD    FP0, FP2                ; x^2 + y^2
    FSQRT   FP2, FP0                ; sqrt(x^2 + y^2)

    ; Check if point lies inside the circle of radius 1
    FCMP    #1, FP0
    FBLT    InsideCircle            ; If distance < 1, it is inside the circle

NextPoint:
    SUBQ.L  #1, D2                  ; One fewer iteration to go
    BNE     Loop
    BRA     ComputePi

InsideCircle:
    ADDQ.L  #1, D3                  ; Increment the inside-circle counter
    BRA     NextPoint

ComputePi:
    ; Calculate pi: 4 * (points inside circle / total points)
    ; D3 and D2 are longs, so this division is integer-based; multiplying
    ; D3 by 4 first yields an integer estimate of pi (typically 3).
    ASL.L   #2, D3                  ; D3 = D3 * 4
    DIVS    D2, D3                  ; Divide by total points (D2)

    ; Convert the result to a string
    MOVE.L  D3, D0
    JSR     IntToStr                ; Result string will be in A1

    ; Display the result (or do whatever you wish with A1)
    ; ...

GenerateRandom:
    ; RNG using a Linear Congruential Generator: seed = (a*seed + c) mod m
    ; A, C, and M are placeholder constants; a real version would also scale
    ; the result into a float in FP0 before returning.
    MOVE.L  SEED, D0                ; Load current seed into D0
    MULU.L  #A, D0                  ; Multiply seed by 'a'
    ADD.L   #C, D0                  ; Add 'c'
    DIVU    #M, D0                  ; Divide by 'm'; remainder ends up in D1
    MOVE.L  D1, SEED                ; Store new seed
    MOVE.L  D1, D0                  ; Return value in D0
    RTS

IntToStr:
    ; Input:  D0 = integer value to be converted
    ; Output: A1 = pointer to the resulting string
    LEA     buffer+12(PC), A0       ; A0 points just past the end of the buffer
    MOVE.B  #0, -(A0)               ; Null-terminate

    TST.L   D0                      ; Test if D0 is zero
    BNE     NotZero                 ; If not, proceed with conversion
    MOVE.B  #'0', -(A0)             ; Store '0' character
    MOVEA.L A0, A1                  ; Point the result at the '0' character
    RTS

NotZero:
    TST.L   D0                      ; Handle negative numbers
    BPL     LoopConvert
    NEG.L   D0                      ; (sign-character handling omitted for brevity)

LoopConvert:
    ; Convert each digit to a character, least significant digit first
    DIVU    #10, D0                 ; Divide by 10: quotient in D0, remainder in D1
    ADD.B   #'0', D1                ; Convert remainder to ASCII
    MOVE.B  D1, -(A0)               ; Store character, moving backwards in buffer
    TST.L   D0                      ; Check if quotient is zero
    BNE     LoopConvert             ; If not, continue loop
    MOVEA.L A0, A1                  ; A1 points at the result string
    RTS

SEED    DS.L    1                   ; RNG state
buffer  DS.B    12                  ; Space for a 32-bit number + sign + null-terminator

TOTAL_POINTS  EQU  100000           ; Number of iterations for Monte Carlo (change as needed)

An actual implementation may require adjusting the code, especially if you want to make use of system routines to display your shiny new approximation of π. Unless you are trying to squeeze out more performance, writing in m68k assembly is not really practical: you have to track everything manually. Higher-level languages were designed so that you don't have to deal with such low-level bookkeeping. Let's look at C using an older port of the ubiquitous GNU GCC. Don't hold your breath for a port of Rust to the TI-89.

Using the TI-GCC SDK for the TI-89 to write a C program, the Monte Carlo method for estimating the value of π would look like the following block of code. TI-GCC is Windows-only, but it installs and runs quite well under Linux + Wine; I was even able to get these Win32 executables to run on Apple M2 silicon hardware using wine-crossover.

#include <tigcclib.h>  // Include the necessary header for TI-GCC
#include <stdlib.h>    // For rand() and RAND_MAX

#define N 10000  // Number of random points to generate

void _main(void) {
    int i, cnt = 0;
    float x, y;

    for (i = 0; i < N; i++) {
        x = (float)rand() / RAND_MAX;  // Generates a random float between 0 and 1
        y = (float)rand() / RAND_MAX;

        if (x*x + y*y <= 1.0) {
            cnt++;  // The point lies inside the unit quarter circle
        }
    }

    float pi_approximation = 4.0 * cnt / N;
    char buffer[50];
    sprintf(buffer, "Approximated Pi: %f", pi_approximation);

    // Display the result on the calculator's screen
    ST_helpMsg(buffer);
}

This program does the following:

The code begins by including necessary headers and defining a macro, N, to denote the number of random points (10,000) that will be generated. Within the main function _main, two random floating-point numbers between 0 and 1 are generated for each iteration, representing the x and y coordinates of a point. The point's distance from the origin is then checked to determine if it lies within a unit quarter circle. If so, a counter (cnt) is incremented. After generating all the points, an approximation of π is calculated using the ratio of points inside the quarter circle to the total points, multiplied by four. The result is then formatted as a string and displayed on the calculator's screen using the ST_helpMsg function.

  1. Uses the rand() function from stdlib.h to generate random numbers.
  2. Generates N (in this case, 10,000) random points.
  3. Checks if the point lies within the unit quarter circle.
  4. Approximates π using the ratio of points that lie inside the quarter circle.
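
Before dealing with linking and transfer cables, the algorithm itself can be sanity-checked with any desktop C compiler. The following is a host-side sketch, not TI-GCC code; estimate_pi is a hypothetical helper name, and on the calculator the sprintf/ST_helpMsg pair would take the place of ordinary printf output.

```c
#include <stdlib.h>

/* Host-side sketch of the calculator program's Monte Carlo loop.
   The fraction of random points landing inside the unit quarter
   circle approaches pi/4 as n grows, so multiplying by 4 yields
   an estimate of pi. */
double estimate_pi(long n, unsigned int seed) {
    long i, cnt = 0;
    srand(seed);                       /* fixed seed for reproducibility */
    for (i = 0; i < n; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            cnt++;                     /* point is inside the quarter circle */
    }
    return 4.0 * (double)cnt / (double)n;
}
```

With 100,000 points the estimate typically lands within a few hundredths of π, which is plenty to confirm the logic before moving it onto the TI-89.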

To compile and run:

  1. Set up the TI-GCC SDK and compile this program.
    • If you are using Linux on an x86/amd64-based system, you should be able to simply install wine
    • If you are using an older Mac with an amd64-based (Intel) processor, you should be good. You will need to install a few things through brew, but instructions are readily available via Google
  2. Transfer the compiled program to your TI-89.
  3. Run the program on your TI-89.

As you can see, the 68k has a storied history and lived on in the TI-89. You can also see that there was an active community around the TI-89 that even managed to port a C compiler to its m68k. So go out and buy a TI-89 (don't forget a transfer cable) and have some late-1990s and early-2000s tiny computer fun.

An Exploration into the TI-84+

0. Preamble

I came across several TI-85 calculators in a closet in the house I grew up in. These got me thinking: graphing calculators are essentially tiny computers, and potentially the first tiny computers. Long before the Raspberry Pi, Pine64, Orange Pi, Banana Pi, and the long list of other contemporary tiny computers that use modern Arm processors, graphing calculators were using Motorola 68000s and Zilog Z80s. The first Texas Instruments graphing calculator was the TI-81, introduced in 1990; it contained a Z80 processor. I have fond memories of the TI-85 in high school: transferring games and other programs between TI-85s before physics or trigonometry class using a transfer cable -- there was no Bluetooth or WiFi. But the TI-81, TI-85 and, discussed briefly in the introduction, TI-89 are not the subject of this writeup. The subject is, in fact, a graphing calculator that I managed to never use: the TI-84.

1. Introduction

The Texas Instruments TI-84 Plus graphing calculator has been a significant tool in the realm of education since its launch in 2004. Its use spans the gamut from middle school math classrooms to university-level courses. Traditionally, students in algebra, geometry, precalculus, calculus, and statistics classes, as well as in some science courses, have found this calculator to be a fundamental tool for understanding complex concepts. The TI-84 Plus allowed students to graph equations, run statistical tests, and perform advanced calculations that are otherwise difficult or time-consuming to do by hand. Its introduction marked a significant shift in how students could interact with mathematics, making abstract concepts more tangible and understandable. I, being over forty, never used a TI-84+ calculator in any of my schooling. I entered high school in the mid-1990s, and the calculator of choice for math and science was the TI-85, which also utilized a Z80 processor. As I progressed through math and engineering coursework in the early 2000s, I used a TI-89. It was an amazing tool for differential equations and linear algebra. The 89 used an M68k processor; as an aside, I plan on writing a piece on the M68k. Even as I entered graduate school in my mid-30s, my TI-89 found use in a few of my courses.

2. The Humble TI-84+ Graphing Calculator

One might wonder why, nearly two decades later, the TI-84 Plus is still in widespread use. There are several reasons for this. First, its durable design, user-friendly interface, and robust suite of features have helped it withstand the test of time. The device is built for longevity, capable of years of regular use without significant wear or loss of functionality. Second, Texas Instruments has kept the calculator updated with new apps and features that have kept it relevant in a continually evolving educational landscape. Perhaps most importantly, the TI-84 Plus is accepted on all major standardized tests, including the SAT, ACT, and Advanced Placement exams in the U.S. This widespread acceptance has cemented the TI-84 Plus as a standard tool in math and science education, despite the advent of newer technologies. Additionally, there's a significant advantage for students and teachers in having a standardized tool that everyone in a class knows how to use, reducing the learning curve and potential technical difficulties that could detract from instructional time.

1. Model Evolution

  • TI-84 Plus (2004): The original model runs on a Zilog Z80 microprocessor, has 480 kilobytes of ROM and 24 kilobytes of RAM, and features a 96x64-pixel monochrome LCD. It is powered by four AAA batteries and a backup battery.

  • TI-84 Plus Silver Edition (2004): Launched alongside the original, this version comes with an expanded 1.5-megabyte flash ROM, enabling more applications and data storage.

  • TI-84 Plus C Silver Edition (2013): The first model to offer a color display, it comes with a full-color, high-resolution backlit display, and a rechargeable lithium-polymer battery.

  • TI-84 Plus CE (2015): Moves to the compatible but faster Zilog eZ80 processor and boasts a streamlined design, a high-resolution 320x240-pixel color display, a rechargeable lithium-ion battery, and an expanded 3-megabyte user-accessible flash ROM.

2. Texas Instruments Operating System (TI-OS)

TI-OS, the operating system on which all TI-84 Plus models run, is primarily written in Z80 assembly language, with certain routines, particularly floating-point ones, in C. As a single-user, single-tasking operating system, it relies on a command-line interface.

The core functionality of TI-OS involves the management of several key system resources and activities:

  • Input and Output Management: It controls inputs from the keypad and outputs to the display, ensuring the calculator responds accurately to user commands.

  • Memory Management: TI-OS manages the allocation and deallocation of the calculator's memory, which includes the read-only memory (ROM) and random access memory (RAM). This ensures efficient usage of the memory and avoids memory leaks that could otherwise cause the system to crash or slow down.

  • Program Execution: TI-OS supports the execution of programs written in TI-BASIC and Z80 assembly languages. Users can develop and run their own programs, extending the calculator's functionality beyond standard computations.

  • File System: It also handles the file system, which organizes and stores user programs and variables. The file system is unique in that it's flat, meaning all variables and programs exist on the same level with no folder structure.

  • Error Handling: It also manages error handling. When the user enters an invalid input or an error occurs during a computation, TI-OS responds with an appropriate error message.

  • Driver Management: The OS also communicates with hardware components such as the display and keypad via drivers, and facilitates functions such as powering the system on and off, putting it to sleep, or waking it.
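
As a mental model only (none of this is actual TI-OS source, and every name and type code below is a hypothetical illustration), the flat file system described above boils down to a single table of named entries searched linearly, with no directory hierarchy to traverse:

```c
#include <string.h>

/* Conceptual sketch of a flat variable table: every program and
   variable lives at the same level and is found by a linear scan
   of one namespace. Names, sizes, and type codes are illustrative. */
#define MAX_ENTRIES 64

typedef struct {
    char name[9];       /* short variable names: 8 characters + NUL */
    unsigned char type; /* e.g. 0 = real, 5 = program (made-up values) */
    int size;           /* payload size in bytes */
} VarEntry;

static VarEntry table[MAX_ENTRIES];
static int entry_count = 0;

/* Register a new variable or program; returns its index, -1 if full. */
int var_create(const char *name, unsigned char type, int size) {
    if (entry_count >= MAX_ENTRIES) return -1;   /* "memory full" */
    strncpy(table[entry_count].name, name, 8);
    table[entry_count].name[8] = '\0';
    table[entry_count].type = type;
    table[entry_count].size = size;
    return entry_count++;
}

/* No paths, no folders: lookup is a scan of the one flat table. */
int var_find(const char *name) {
    int i;
    for (i = 0; i < entry_count; i++)
        if (strcmp(table[i].name, name) == 0)
            return i;
    return -1;
}
```

The absence of any path component in var_find is the whole point: on a flat file system, a name either exists at the top level or it does not exist at all.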

Texas Instruments periodically releases updates to TI-OS, introducing new features, security updates, and bug fixes, ensuring a continually improved user experience.

3. Software and Functionality

The TI-84 Plus series maintains backward compatibility with TI-83 Plus software, providing access to a wide library of resources. Texas Instruments has fostered third-party software development for the TI-84 Plus series, resulting in a rich variety of applications that expand the calculator's functionality beyond mathematical computations.

3. The Humble Z80 Processor

The Zilog Z80 microprocessor found its way into a myriad of systems, from early personal computers to game consoles, embedded systems, and graphing calculators like the TI-84 Plus. Despite being a nearly 50-year-old technology, it still finds application today, and there are several reasons for this.

The Z80's design is simple, robust, and reliable. Despite being a CISC architecture, it has a relatively small instruction set that is easy to program, which makes it a good choice for teaching purposes in computer science and electronic engineering courses. The Z80 is also relatively inexpensive and energy-efficient, which can be crucial in certain embedded systems applications.

The longevity of the Z80 can also be attributed to its presence in legacy systems. A lot of older, yet still functioning machinery—be it industrial, medical, or scientific—rely on Z80 chips for their operation. Replacing these systems entirely just to update the microprocessor might be prohibitively expensive or practically unfeasible, especially when they continue to perform their intended functions adequately.

The Z80 is not exactly a new piece of technology, and much of the documentation on it is rather old, but there are a number of books available: here, here and here. There is also an official Zilog Z80 CPU User Manual.

4. Z80 Assembly Language: Hello World

Consider the 'Hello World' program in Z80 assembly language:

#include "ti83plus.inc"
.org 9D95h - 2
.db t2ByteTok, tAsmCmp   ; $BB,$6D: token header identifying an assembly program
    ld hl,txtHello
    bcall(_PutS)         ; Print the null-terminated string pointed to by HL
    ret
txtHello:
    .db "Hello World",0
.end

The given code is a Z80 assembly program designed for the TI-84+ calculator, which uses a Z80 processor. The code is meant to display the "Hello World" message on the calculator's screen. Here's an explanation of each part:

  1. #include "ti83plus.inc": This line includes the ti83plus.inc file, which usually contains definitions of constants and routines specific to the TI-83+/TI-84+ calculators. It helps the assembler to understand specific labels, constants, and ROM calls used in the code.

  2. .org 9D95h: The .org directive is used to set the program counter to a specific address, here 0x9D95. It is specifying where in memory the following code should be loaded.

  3. ld hl,txtHello: This line loads the address of the label txtHello into the register pair HL. In this context, it's preparing to display the text string located at that address.

  4. bcall(_puts): The bcall instruction is specific to the TI-83+/TI-84+ calculators and is used to call a routine from the calculator's ROM. In this case, it's calling the _puts routine, which is typically used to print a null-terminated string to the screen. The address of the string is already loaded into HL, so this call will print "Hello World" to the display.

  5. ret: This is the return instruction, which will return to whatever code called this routine. If this code is the main program, it effectively ends the program.

  6. txtHello:: This is a label used to mark the location of the "Hello World" string.

  7. .db "Hello World",0: This directive defines a sequence of bytes representing the ASCII characters for "Hello World", followed by a null byte (0). This null-terminated string is what gets printed by the _puts routine.

  8. .end: This directive marks the end of the source file.

5. Assembling

Downloading, Compiling, and Running the Z80 Assembler SPASM-ng

The Z80 Assembler SPASM-ng is an open-source assembler for the Z80 microprocessor.

Section 1: Downloading SPASM-ng

1.1 Requirements
  • Git (for cloning the repository)
  • A compatible C compiler
1.2 Process
  1. Open the terminal or command prompt.
  2. Clone the repository using the following command: git clone https://github.com/alberthdev/spasm-ng.git
  3. Navigate to the downloaded directory: cd spasm-ng

Section 2: Compiling SPASM-ng

Once downloaded, SPASM-ng needs to be compiled.

2.1 Install dependencies

Suggested packages for Ubuntu/Debian:

  • build-essential
  • libssl-dev
  • zlib1g-dev
  • libgmp-dev
2.2 Compiling on Linux/Unix
  1. Compile the source code: make
  2. Install: sudo make install

Section 3: Running SPASM-ng

Once compiled, SPASM-ng can be used to assemble Z80 programs.

3.1 Basic Usage

The basic command for assembling a file is:

./spasm input.asm output.8xp
3.2 Additional Options

SPASM-ng offers various command-line options for different assembly needs. Run:

./spasm -h

to see all available options.

6. Running the Program

The last step is running the 'Hello World' program on the TI-84+ calculator. The TI-84+ calculator interface has several buttons similar to a physical calculator, which are used to interact with the software. Here's how to execute the program:

To initiate the process, press 2ND followed by 0 (CATALOG) and select Asm(. Next, press the PRGM button on the calculator. This action opens a list of available programs on the calculator. Navigate this list using the arrow keys provided on the calculator's interface.

Once you locate your program—named after your .8xp file—press ENTER. This action displays the name of the program on the calculator's home screen.

Close the parenthesis with ) and press ENTER again to run the program. With this action, the TI-84+ calculator executes the program. If the program has been correctly written and uploaded, you should see the 'Hello World' message displayed on the screen. This signals that your program ran successfully.

The History, Use and Technical Details of QEMU


In today's interconnected world, the need for virtualization technologies is more pronounced than ever. These tools help developers simulate different computing environments, facilitating easy testing and debugging across platforms. QEMU, an abbreviation for Quick Emulator, is one such powerful technology. It's a free and open-source hypervisor that performs hardware virtualization. Since its inception, it has become a cornerstone in the world of emulation, offering support for various hardware platforms and operating systems. This article will delve into the history of QEMU, its uses, and technical aspects.

To be honest, I had shied away from experimenting with QEMU; I had used VMware and VirtualBox for some time. I first utilized VMware in 2007 when a then-friend and I started a software development and consultancy firm. Amazon Web Services had launched just one year prior and was not yet an established name in computing. Cloud computing was not yet called that; I recall reading about Sun Microsystems developing "utility computing": pay as you go. The idea seemed crazy to me at the time. When you needed compute power, you bought physical hardware, in our case a Dell PowerEdge 1950. We did an initial installation of Debian Linux, installed VMware, and then shipped the server to a colocation facility in Miami, FL. There, we would create, manage, and ultimately destroy countless virtual machines over the lifetime of the server as well as the halcyon days of the consultancy.

In an odd chain-reaction of thoughts, I wanted to experiment with getting MacOS 9 running on modern hardware. As it turns out, QEMU comes in quite handy.

History of QEMU

QEMU was initially released in 2003, a creation of French programmer Fabrice Bellard, who is also known for creating other groundbreaking tools like FFmpeg and LZEXE. Bellard's idea was to develop a fast, portable dynamic translator that could make software developed for one machine run on another.

While the initial version of QEMU only supported emulation for x86, ARM, and SPARC architectures, it gradually expanded to cater to various others like PowerPC, MIPS, and more. Throughout its history, QEMU has continually evolved and improved, integrating with other projects like KVM (Kernel-based Virtual Machine) and libvirt, and extending its support to system and user-mode emulation.

Use of QEMU

QEMU has a wide array of applications, ranging from cross-platform development, virtualization, sandboxing, to hardware design and testing.

Cross-Platform Development and Testing: As QEMU can emulate different architectures, it has become an invaluable tool for developers, allowing them to compile and test their code across various platforms without needing physical access to them. This functionality significantly accelerates the software development process, enabling more efficient multi-platform software creation.

Virtualization: In conjunction with KVM, QEMU can run multiple operating systems concurrently on a single hardware host. This feature has driven the rapid growth of cloud computing, where multiple virtual machines are hosted on powerful servers, offering flexibility and scalability to businesses.

Sandboxing: QEMU's ability to emulate an entire system makes it ideal for creating secure sandboxes. This is particularly useful for testing potentially harmful code or software without risking the host system's integrity.

Hardware Design and Testing: In hardware design and testing, QEMU is used to emulate the behavior of different hardware components. It enables hardware designers to simulate and validate their designs before manufacturing the physical components.

Technical Details of QEMU

QEMU can function in two primary modes: system emulation mode and user-mode emulation.

System Emulation Mode: In this mode, QEMU can emulate a full computer system, including a processor and various peripherals. It can boot and run different operating systems and applications compiled for a different CPU, providing complete system isolation. The system mode is beneficial for debugging system code and running unmodified guest operating systems.

User-mode Emulation: In user mode, QEMU can launch individual Linux processes compiled for one CPU on another CPU. This approach is highly beneficial for running binary files from different architectures and is often used to build and test software in a cross-compilation environment.

The technical prowess of QEMU lies in its dynamic binary translation. It translates the binary code of a guest system into the binary code of a host one instruction at a time, creating a cache of translated code to optimize performance. QEMU supports numerous architectures, including x86, ARM, MIPS, SPARC, and PowerPC, among others, either as the host or the guest architecture.
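
The translation cache at the heart of dynamic binary translation can be sketched in a few lines of C. This is purely illustrative (QEMU's Tiny Code Generator is vastly more sophisticated, and every name here is hypothetical), but it shows the core loop: look up the guest program counter in a cache of already-translated blocks, translate only on a miss, and run the cached result on a hit.

```c
#include <stddef.h>

/* Toy model of a dynamic binary translator's hot path. A real
   emulator translates guest machine code into host machine code;
   here a plain function pointer stands in for "host code". */
#define CACHE_SIZE 256

typedef int (*TranslatedBlock)(void);

static struct {
    unsigned long guest_pc;
    TranslatedBlock code;
} cache[CACHE_SIZE];

int translations = 0;                   /* counts actual translation work */

static int dummy_block(void) { return 1; }

/* Pretend to translate the guest block at guest_pc into host code. */
static TranslatedBlock translate(unsigned long guest_pc) {
    (void)guest_pc;
    translations++;
    return dummy_block;
}

/* Look up a guest pc; translation happens only on a cache miss. */
TranslatedBlock lookup_or_translate(unsigned long guest_pc) {
    unsigned idx = guest_pc % CACHE_SIZE;
    if (cache[idx].code == NULL || cache[idx].guest_pc != guest_pc) {
        cache[idx].guest_pc = guest_pc;
        cache[idx].code = translate(guest_pc);
    }
    return cache[idx].code;             /* hits skip translation entirely */
}
```

Running the same guest block twice translates it once; every later execution comes straight from the cache, which is why translated-code caching dominates emulator performance.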

When paired with KVM, QEMU's capabilities expand even further. KVM, a feature of the Linux kernel, facilitates hardware-assisted virtualization. When QEMU is combined with KVM, it uses the CPU's virtualization extensions, offering near-native performance for the emulated guest system.

To manage QEMU and KVM virtual machines, libvirt, a toolkit, is often used. It provides a common, consistent API, simplifying the management of different virtualization technologies.

To put all of this into practice, here is how I got MacOS 9 running under QEMU on Linux:


  1. Installation: sudo apt-get install qemu-system-ppc

    • You may or may not need to install other packages. Google will be your friend.
  2. Download Required Files:

    • Head over to here to grab an ISO of macOS 9.2.2
  3. Prepare a QEMU Disk Image:

    • Create a blank disk image that will serve as the virtual hard drive for macOS 9.2. You can do this using the qemu-img command: qemu-img create -f qcow2 macos9.qcow2 10G
  4. Install macOS 9.2 on QEMU:

    • Launch the QEMU virtual machine with the following command: qemu-system-ppc -M mac99 -m 512 -boot d -cdrom path/to/macos9.iso -hda macos9.qcow2 -netdev user,id=net0 -device sungem,netdev=net0 Note: Replace path/to/macos9.iso with the path to your macOS 9.2 installation ISO or CD image.
  5. Restore:

    • You cannot simply install macOS, you will need to restore. Locate Apple Software Restore on the CD drive. Run the program and follow the instructions.

  6. Launching macOS 9.2:

    • Once the restore is complete, you can start macOS 9.2 using a similar command as before: qemu-system-ppc -M mac99 -m 512 -hda macos9.qcow2 -netdev user,id=net0 -device sungem,netdev=net0 -g 1024x768x32

Keep in mind that macOS 9.2 may not work perfectly on QEMU due to limited hardware emulation and driver support. If you encounter any issues, there might not be straightforward solutions. Use Google.

Final Thoughts

Here are instructions on running MacOS 9 on a couple different flavors of Mac architecture. I personally got MacOS 9 running under Linux and specifically Armbian running on a Pine64 ROCKPro64; this meant that audio and a couple other things did not work.

BIGTREETECH Manta E3EZ: Ender 3 Pro Meets Klipper Part Two

BIGTREETECH Manta E3EZ enclosure (version 6)

It has been a couple months since I last wrote about the Ender 3 Pro inspired 3D printer that will be running Klipper (as opposed to running Marlin Firmware). The project itself is still progressing, albeit slowly. All of the frame parts have had the anodizing stripped off with lye and subsequently painted with an off-white enamel.

The base of the printer is assembled and linear rails have been installed on the Y-axis. Over the last month or so, much of my time for this project has been dedicated to testing materials and correct fitment for the 3D printed components, like the enclosure for the BIGTREETECH Manta E3EZ controller board as well as a set of drawers. I wasted a lot of expensive carbon fiber filament by not prototyping parts first in cheap PLA, but I have since learned my lesson on that front. I have tons of random colors of PLA; why not use a spool of that?

I settled on polycarbonate carbon fiber from Prusa. Why? I like the satin finish, there are barely any extrusion marks, and it is incredibly strong. The model I used for the enclosure is based on Ender 3 (V2) front case for BTT Manta E3 EZ, for stock board/SKR Mini E3, and for Orange Pi Zero 2. You can find all of the model iterations that I did here. As of this writing, I have not included the case cover because I have yet to create it.

Other notable milestones on the project include a shift from Creality Sprite Pro to a BIQU H2O Liquid Cooled Hotend. Why? Because it can handle filaments that require high temperatures, like PEI (PolyEther Imide) which requires extrusion temperatures over 365° C, or PEEK (PolyEtherEtherKetone), which requires extrusion of up to 410° C. There would be other requirements, like a heated enclosure for those types of filament, but that is for another upgrade down the road.

BIGTREETECH Manta E3EZ: Ender 3 Pro Meets Klipper Part One

For this build, as the title suggests, we will be using a Creality Ender 3 Pro as our base. I have written before about the Ender 3 Pro printers (here and here) that I have, but here is a bit of information about why I like this particular printer. The Ender 3 Pro features a sturdy frame made of aluminum extrusions. The frame is easy to assemble and disassemble; we will be doing the latter, but that will be for another post.

The BIGTREETECH E3EZ Manta Mainboard is a 32-bit control board designed for use in 3D printers. It features an ARM Cortex-M4 CPU with a clock speed of 120 MHz, offering higher processing power and more precise control than 8-bit or even slower-clocked 32-bit boards. My other Ender 3 Pro inspired printers are running the 32-bit Creality 3D Printer Ender 3 Silent Motherboard V4.2.7. It is a solid motherboard, and I have had no real issues with it. The Silent Motherboard V4.2.7 boards that I have been running, as I have mentioned in a couple recent posts (here and here), are using a custom-configured Marlin v2.0.x firmware. Out of the box, the Manta E3EZ runs Smoothieware firmware, but we won't be using that, nor will we be using Marlin; we will, instead, be using Klipper.

One of the key features of the E3EZ Manta board is its use of EZ2209 stepper motor drivers. These drivers offer advanced features such as stealthChop2 for silent operation, spreadCycle for dynamic current control, and stallGuard4 for stall detection. This allows for smoother and more precise movement of the printer's axes, resulting, in theory, in higher quality prints. The Silent Motherboard V4.2.7 uses TMC2225 stepper motor drivers; these are much quieter than the HR4988 used on the 8-bit motherboard that originally shipped with a stock Ender 3 Pro.

In a previous post on BIGTREETECH's CB1 compute module, I mentioned BIGTREETECH's Manta E3EZ board as being a great combination for 3D printing. Even though I have a CB1 from the previous review and felt strongly that it would be a good choice, we will, instead, be using a SOQuartz compute module. Here is a quick rundown on the SOQuartz.

The SOQuartz module from Pine64 is a powerful single-board computer designed for embedded systems and IoT applications. It is based on the Rockchip RK3566 SoC, which features a quad-core Arm Cortex-A55 processor running at up to 1.8GHz, along with a Mali-G52 2EE graphics processor. The module being used for this project comes with 4GB of LPDDR4 RAM and will be paired with 16GB of external eMMC storage. It also features a wide range of connectivity options, including Gigabit Ethernet, Wi-Fi 5, Bluetooth 5.0, and support for up to two displays with resolutions of up to 4K@60Hz via HDMI and DisplayPort. Other features of the SOQuartz module include support for up to four USB 3.0 ports, a 40-pin GPIO header, and a dedicated AI accelerator for machine learning applications. With its high-performance specifications and versatile connectivity options, the SOQuartz module is a promising option for a wide range of embedded and IoT applications, like pairing it with a BIGTREETECH Manta E3EZ. The E3EZ will support just about any Raspberry Pi CM4 form factor module.

Why the SOQuartz? I have a strange adoration for things Pine64. I like their boards and compute modules, and I also love their Pinecil soldering iron (which happens to be powered by a RISC-V processor). I have two ROCKPro64 single board computers running as network file storage on my home network; one even has four 10TB drives running in a (software) RAID5 configuration.

Let's get down to brass tacks and look more holistically at this project. We have already discussed the use of an Ender 3 Pro as our starting point, and using a BTT Manta E3EZ for control + Klipper; what else is going to be used? The following is a list of parts, printers, primer and paint for this project.

Part Name Price
X-axis linear rail $110.76
Y-axis linear rails $49.27
Onyehn TL-Smoother Addon Module for Pattern Elimination Motor Filter Clipping Filter 3D Printer Motor Drivers Controller $11.99
BIGTREETECH Direct Nema17 Damper Stepper Motor Steel and Rubber Vibration Dampers with M3 Screw $17.99
AFUNTA 5 Pcs Flexible Couplings 5mm to 8mm Compatible with NEMA 17 Stepper Motors $10.99
Park Sung 3D Printer Heat Bed Leveling Parts, Silicone Column Solid Mounts, Leveling Spring Replacement
[Gulfcoast Robotics] 235x235mm Aluminum Build Plate and 24V 250W Silicone Heater 3-Point Heated Bed Upgrade for Creality Ender 3
PEI Sheet 235mmx235mm and Magnetic Sticker with Adhesive for Creality Ender 3/Ender 3 Pro/Ender 3 V2/Ender 3 S1/Ender 3 S1 pro/Ender 3 neo/Ender 3 v2 neo/Ender 5/Ender 5 Pro/Voxelab Aquila 3D Printer
Zeberoxyz Upgrade 2020 Profile X-axis+4040 Double Slot Profile Y-axis Synchronous Belt Stretch Straighten Tensioner for Creality Ender-3 Pro/Ender3 V2/CR-20 Pro 3D Printer Parts (X2020+Y4040)
BIGTREETECH EZ2209 V1.0 Stepper Motor Driver 5PCS Stepstick Mute EZ2209 Compatible with SKR 3 EZ Manta E3 EZ 3D Printer Controller Main Board
BIGTREETECH Manta E3EZ V1.0 Mainboard 32 Bit Silent Control Board Work with CB1/CM4 Support Klipper Drop-in Motherboard for Ender 3 Compatible with EZ2209 EZ5160 Stepper Motor Driver
Creality Sprite Extruder Pro, Direct Drive Extruder Hotend Kit, 300℃ High Temperature Extruder Kit for Ender 3/Ender 3 V2/Ender 3 Pro/Ender 3 S1/Ender 3 Max/CR-10 Smart Pro 3D Printers
Creality CR Touch Auto Bed Leveling Sensor Kit, Creality 3D Printer Bed Leveling Tool with Metal Push Pin for Ender 3/Ender 3 V2/Ender 3 Pro/3 Max/Ender 5 pro/CR-10 with 32 Bit V4.2.2/V4.2.7 Mainboard
Official Creality New Update Ender 3 Dual Z-axis Upgrade Kit with Metal Power Supply Holder, Stepper Motor and Lead Screw for Ender 3 V2, Ender 3 Pro, Ender 3 3D Printer
Ruby Nozzles for 3D Printers MK8 E3D Prusa Ender3 (E3DV6, 0.4mm) $24.90
Used Creality Ender 3V2/Ender 3 Pro/Ender 3/Ender 3 Neo 3D Printer $171.20
Unrepaired Creality Ender 3 E 3D Printers Ender 3 Pro Upgrade $97.09
64GB eMMC Module $42.62
Threaded-Stud Rubber Bumper with Steel Base Plate - M8 x 1.25mm Size, 30mm OD, 15mm High, 220 lbs. Maximum Load - 3810N137 x4
Uncoated High-Speed Steel General Purpose Tap - Plug Chamfer, M8 x 1.25 mm Thread, 1-1/8" Thread Length - 8305A39 $9.20
Black-Oxide High-Speed Steel Drill Bit - 6.8mm Size, 109mm Overall Length - 2958A114 $4.92
Extra-Fine Marking Punch - with 1/8" Point Diameter - 3451A32
Duttek Micro HDMI to HDMI Coiled Cable, HDMI to Micro HDMI Coiled Cable, Extreme Slim/Thin Micro HDMI Male to HDMI Male Coiled Cable for 1080P, 4K, 3D, and Audio Return Channel (1.2M/4FT)
Wells Lamont unisex adult 14inch PVC Coated Gloves, Green, 2 Count Pack of 1 US $8.20
Rust-Oleum 7793830 Stops Rust Spray Paint, 12 oz, Satin Shell White
Custom Coat Self Etching Acid Etch Primer - 12.9 Ounce Spray Can - Gray $24.99
2 ALAZCO Soft-Grip Handle Heavy-Duty Tile Grout Brush - Acid Proof Extra-Stiff Bristles - Narrow Brush for Hard to Reach Areas Multi-Purpose
Rubbermaid Commercial Products Standard Bus/Utility Box, 4.625-Gallon, Gray $14.99

Some of this was unnecessary. I did not need to buy two Ender 3 Pros, but the first one I bought was missing frame components, and the second one's listing on eBay had photos of the actual contents. The second one will also give me an ample supply of spare parts for the other Ender 3 Pros I have in service. Other items that would be optional are the gloves, primer and paint. The intent is to use a caustic solution (like lye or Drano) to etch away the anodizing on the aluminum, followed by a coat of etching primer and then an off-white paint job. So, what is the total of the above list? I'll just say that the total has crossed over into four-digit territory.

That's it for now. Look for parts two and three of this project build.

3D Printing Polycarbonate + Carbon Fiber

Polycarbonate plastics were first discovered independently in 1953 by Dr. Hermann Schnell at Bayer and Dr. Daniel Fox at the General Electric Company. Fox was trying to develop a new type of material that could be used for electrical insulators, but instead stumbled upon a transparent and highly durable plastic. GE named its version of the new material "Lexan", and it quickly gained popularity due to its superior performance compared to other plastics at the time.

Polycarbonate is a type of thermoplastic polymer that is commonly used in a variety of applications due to its durability, transparency, and heat resistance. The chemical composition of polycarbonate is characterized by a repeating unit containing a carbonate group (-O-(C=O)-O-), in which a carbon atom is bonded to two oxygen atoms in the polymer backbone and double-bonded to a third. The chemical formula for the repeating unit of polycarbonate can be represented as follows:

[-O-C6H4-C(CH3)2-C6H4-O-CO-]n


where "n" represents the number of repeating units in the polymer chain, and C6H4 refers to a phenylene group, which is a benzene ring (C6H5) with one hydrogen atom replaced by a carbon atom. The phenylene groups alternate with the carbonate groups in the polymer chain, resulting in a linear and highly branched structure. This molecular arrangement contributes to the unique properties of polycarbonate, such as its high impact strength, optical clarity, and resistance to UV radiation.

Polycarbonate has a relatively high glass transition temperature (Tg), the temperature at which a polymer transitions from a glassy, rigid state to a rubbery, more flexible state. Below the Tg, polycarbonate is in a glassy state and has a high modulus, meaning it is stiff and brittle. Above the Tg, polycarbonate transitions to a rubbery state and its modulus decreases, making it more flexible and capable of flowing or deforming under stress.

The Tg of polycarbonate typically ranges from about 145°C to 155°C. When polycarbonate is heated above its Tg, it becomes softer and more pliable, allowing it to flow and take on the shape of the mold or container it is placed in. This property makes polycarbonate suitable for a wide range of applications, including injection molding, thermoforming, and extrusion, where it can be melted, shaped, and cooled.

The Tg of polycarbonate also impacts its thermal stability. Polycarbonate has good thermal stability below its Tg, meaning it can withstand relatively high temperatures without significant degradation. Above its Tg, however, its molecular mobility increases, and it may degrade or lose mechanical properties if exposed to prolonged elevated temperatures.
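As a toy illustration of the paragraphs above, here is a tiny classifier that maps a temperature to polycarbonate's state relative to the roughly 145-155°C Tg range; the function and its thresholds are mine, for illustration only:

```python
# Toy classifier for polycarbonate's state relative to its glass
# transition range (~145-155 C, per the text above). The function
# name and the three-way split are illustrative, not a materials API.

PC_TG_RANGE_C = (145.0, 155.0)

def pc_state(temp_c: float, tg=PC_TG_RANGE_C) -> str:
    lo, hi = tg
    if temp_c < lo:
        return "glassy"        # stiff, high modulus, brittle
    if temp_c <= hi:
        return "transition"    # softening through the Tg range
    return "rubbery"           # flexible, can flow or deform

assert pc_state(25.0) == "glassy"       # a finished part at room temperature
assert pc_state(150.0) == "transition"
assert pc_state(300.0) == "rubbery"     # typical printing temperature
```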

In the years that followed the discovery, polycarbonate plastics found widespread use in various industries. One of the early milestones was the development of bulletproof glass in the 1960s, which utilized the high impact resistance of polycarbonate to provide enhanced protection. Polycarbonate was also used to make eyeglass lenses, providing a lightweight and shatter-resistant alternative to glass lenses. Over the years, advancements in processing techniques and material formulations have led to further improvements in polycarbonate properties and expanded its applications in automotive parts, medical devices, and consumer goods, among others. The Lockheed Martin F-22 Raptor's canopy is constructed of specialty polycarbonate.

Enter fused filament fabrication (FFF), or what most call 3D printing. Using pure polycarbonate (whatever that actually means) as a 3D printing material can be extremely frustrating on a desktop or kit 3D printer. You will need to achieve temperatures that most retail printers are not capable of out of the box: an all metal hotend that can handle temperatures at or above 300°C, and a build surface that can exceed 100°C.

If you are one of the few repeat readers of this site, you will know that I am a fan of Creality's Ender 3 Pro model of 3D printers. The Ender 3 Pro has been on the market for years, has a loyal following, and has a near endless array of aftermarket upgrades. Check out Scott Yu-Jan and his awesome Ender 3 Pro modifications. In previous posts (here, here and here), we were printing variants of nylon with fiber reinforced filaments. Like polycarbonate, nylon requires higher temperatures than normal PLA filaments, so you can use the same aftermarket hotend upgrade: a Creality Sprite Pro. This has been a solid upgrade to my Ender 3 Pro printers; I would highly recommend it, even if you are only printing PLA. I have tried other direct drive Ender 3 Pro upgrades, and the Sprite Pro has the shortest piece of internal bowden tube. The ease of loading filament is also a huge plus. There is a lower priced Creality Sprite that is not rated for high temperatures; you might be able to get by with it for printing nylon-based filaments, and it would definitely work extremely well for PLA filaments.

The other significant upgrade is the build surface. I upgraded to a Gulf Robotics silicone-encased, all aluminum, three point build plate. This upgrade might not be strictly necessary, as the stock build plate can reach temperatures of 100°C, but, anecdotally, its heating is not as even as the Gulf Robotics build plate's. The upgrade heats up quicker, distributes heat more evenly, and should tolerate higher temperatures for longer periods than the stock build plate. In addition to the upgraded build plate, I also insulated the underside with Befenybay lightweight insulation and swapped in silicone bed springs. As an aside, I am planning on building and upgrading an Ender 3 Pro and tracking the build time and costs of making a high temperature, advanced materials printer. This will include setting up Klipper firmware. Why Klipper instead of Marlin? You will have to come back to read why.

I will call this out in its own paragraph: you will need to update your Marlin firmware. This article will only get you so far. You will need to modify a few other settings.

Hotend maximum temperature
#define HEATER_0_MAXTEMP 340
Build surface maximum temperature
#define BED_MAXTEMP 170
Hotend temperature deviation amount
Disable runout prevention
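Taken together, these map to a handful of lines in Marlin's Configuration.h. The first two values are from this article; `HOTEND_OVERSHOOT` and `FILAMENT_RUNOUT_SENSOR` are the stock Marlin options that I believe correspond to the last two items, so treat this as a sketch rather than a drop-in diff:

```c
/* Configuration.h (Marlin) - sketch of the four changes listed above.
   HEATER_0_MAXTEMP and BED_MAXTEMP values come from this article;
   HOTEND_OVERSHOOT and FILAMENT_RUNOUT_SENSOR are my assumed mapping
   for the last two items. */

#define HEATER_0_MAXTEMP 340   // hotend maximum temperature
#define BED_MAXTEMP      170   // build surface maximum temperature

// Narrow the allowed deviation above the target temperature; Marlin
// forbids targets above HEATER_0_MAXTEMP - HOTEND_OVERSHOOT.
#define HOTEND_OVERSHOOT 15

// Disable runout handling by leaving the sensor commented out:
//#define FILAMENT_RUNOUT_SENSOR
```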

Download High Temperature Marlin Firmware (April 16, 2023)

Source Code for Firmware


The first two temperature changes just might be too extreme, but I picked subjectively high values because I was running into temperature overshoot faults. With the hotend maximum initially set to 320°C and an overshoot value of 20°C, a gcode temperature of 300°C combined with the variability of the heating element routinely resulted in those faults. By increasing the hotend maximum temperature and narrowing the overshoot value, I was able to run higher temperatures without risking a temperature fault.
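The arithmetic behind those faults can be sketched in a few lines. `max_target` and `fault_margin` are my own illustrative names, not Marlin functions, and the clamping behavior described in the comments is my reading of the firmware:

```python
# Illustrative arithmetic only -- not Marlin source code. My reading is
# that Marlin caps the commanded target at HEATER_0_MAXTEMP minus
# HOTEND_OVERSHOOT, and faults when the thermistor reads above
# HEATER_0_MAXTEMP itself.

def max_target(maxtemp: int, overshoot: int) -> int:
    """Highest target temperature the firmware will accept."""
    return maxtemp - overshoot

def fault_margin(maxtemp: int, gcode_target: int) -> int:
    """Degrees of transient spike tolerated before a MAXTEMP fault."""
    return maxtemp - gcode_target

# Old settings: a 300 C target sat right at the 320/20 cap, leaving only
# 20 C of spike margin; heater variability routinely ate through that.
assert max_target(320, 20) == 300
assert fault_margin(320, 300) == 20

# New settings: 340/15 leaves 40 C of margin at the same 300 C target.
assert max_target(340, 15) == 325
assert fault_margin(340, 300) == 40
```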

The last option, disabling runout prevention, was necessary because I ended up destroying a Sprite Pro. How? Somehow, the printing stopped and the hotend carriage got parked in one place. Every 10 seconds or so, a small amount of filament was extruded. Let this run overnight, and you end up with a blob of reinforced polycarbonate that encases the nozzle, silicone sock, part of the cooling fan nozzle, and part of the heat sink.

In attempting to dislodge the blob of hardened polycarbonate, the two screws that hold the heat block (where the nozzle screws in) to the heat sink snapped. The whole Sprite Pro unit might be salvageable, and I do have a Sprite that can be used for parts, but I will need to expose enough of the screws to turn them with needle-nose pliers.



Let's get down to the brass tacks on using polycarbonate filament. 3DXTech has an easy to print carbon fiber reinforced polycarbonate (or here). All of the reinforced filaments are relatively expensive compared to non-reinforced PLA or ABS filaments. You can start with easy to print polycarbonate filaments to save a little bit of money before taking the jump into reinforced filaments. Using non-reinforced filaments would also allow for the use of a 0.4mm nozzle, which should allow finer details to come through better than with a 0.6mm nozzle.

You will need an enclosure, but it is not necessary to have a heated enclosure. I have found the Gulf Robotics' upgraded build plate heater does a great job of increasing the enclosure's ambient temperature without needing another heating source.

Another tip for successful printing is to use a raft instead of a brim or skirt. Using a raft results in as much build surface contact as possible. Polycarbonate does not stick very well to most surfaces - it will gladly stick to itself. I used an adhesive - Nano Polymer Adhesive (or here). It is not cheap. But with a few layers of adhesive on the glass build surface, a bed temperature of 115°C, and a raft, I have had few issues with prints becoming dislodged. PEI build plate surfaces are also said to have great adhesion properties for polycarbonate filaments.

Good luck and leave a comment if you print, successfully or unsuccessfully, with reinforced polycarbonate filaments.

3D Printing Nylon + Glass Fiber

Pure nylon may not always provide the required strength for certain applications. This is where glass reinforced nylon, carbon fiber reinforced nylon, and Kevlar reinforced nylon come in; they offer improved strength and durability while retaining the advantages of nylon. As we have already covered carbon fiber and Kevlar reinforced nylon, we will explore the discovery of glass reinforced nylon, its technical properties, and its uses in 3D printing.

The discovery of glass fiber reinforced nylon is attributed to several researchers and engineers who were working on improving the mechanical properties of nylon in the 1960s. One of the most notable contributors was Dr. Herman Mark, a polymer scientist who worked at the Polytechnic Institute of New York University. In the early 1960s, Mark and his team discovered that the addition of glass fibers to nylon significantly improved its strength and stiffness. This led to the development of glass fiber reinforced nylon, which was initially used in the automotive industry for components such as engine covers and air intake manifolds. Since then, glass fiber reinforced nylon has found its way into many different applications, including the 3D printing realm.

Glass reinforced nylon is a composite material made of nylon and glass fibers (pretty obvious, isn't it?). The glass fibers are usually added in the form of short fibers or long continuous strands, which are mixed with the nylon during the manufacturing process. The addition of glass fibers to nylon improves its mechanical properties, making it stronger, stiffer, and more resistant to wear and tear. The amount of glass fibers added to nylon can vary, with higher percentages resulting in higher strength and stiffness but lower ductility. Also, because glass is generally colorless, nylon + glass fiber is available in various colors, whereas nylon + carbon fiber only comes in black.

The mechanical properties of glass reinforced nylon make it an ideal material for 3D printing applications. Glass reinforced nylon has excellent strength and stiffness. It also has good chemical resistance, making it resistant to solvents and chemicals, and can withstand high temperatures, making it suitable for applications such as automotive components and electronic enclosures.

One of the biggest challenges of 3D printing with nylon and glass fiber filament is getting the print settings just right. The material requires specific settings to achieve a successful print, and these settings can vary depending on the specific brand and type of filament being used. If the settings are not correct, the print may not adhere properly to the bed, causing warping or detachment during printing. Additionally, nylon with glass fiber filament is more prone to stringing and oozing during printing, which can leave unsightly and difficult-to-remove strands of filament on the printed object.

Another challenge is maintaining a consistent temperature throughout the printing process. Nylon and glass fiber filament require high temperatures to melt and print correctly, but if the temperature is too high or too low, the filament can become brittle and break during printing. Maintaining the right temperature can be difficult, especially for larger or more complex prints, as the filament may cool down or heat up unevenly in different parts of the object.

Post-processing can also be a challenge with nylon and glass fiber filament prints. Because of the material's strength and durability, it can be difficult to sand or smooth the surface of the printed object without damaging it. Additionally, the material is more difficult to paint or coat, which can limit the options for finishing the final product. Also, being chemically resistant to solvents and other chemicals also means it is difficult to vapor smooth objects made with the composites (or even just pure nylon alone).

With all of the background information on nylon with glass fiber out of the way, let's discuss the details of getting functional and great looking printed components.

I found that the Cura settings used for nylon + carbon fiber work very well. The only tweak I made was increasing the print bed temperature by 5°C, which resulted in better adhesion. The other adjustment to the process was making sure the print bed was cleaned before each print - soap and hot water - and applying a fresh thin layer of glue stick to the bed. Glue stick seemed to work better than the more expensive Magigoo for nylon. Since I already covered the Cura settings in the carbon fiber post, I won't duplicate them here.

3D Printing Nylon + Carbon Fiber


What even is carbon fiber? Carbon fiber is a lightweight and extremely strong material that is widely used in a variety of industries due to its exceptional properties. It was invented in the mid-20th century by an American scientist named Roger Bacon, who discovered that carbon fibers could be produced by heating rayon fibers at high temperatures in the absence of oxygen. The resulting carbon fibers were found to be much stronger and more durable than other materials at the time.

Carbon fiber is made by weaving together thin strands of carbon atoms, which are then impregnated with a resin or polymer to create a composite material. The process of creating carbon fiber typically involves several steps, including spinning, stabilization, carbonization, and finishing. In the spinning stage, precursor fibers are created by heating a polymer or other material until it becomes a liquid, which is then extruded through small holes to create long, thin filaments. These filaments are then woven together and placed in an oven to be stabilized, which involves heating the fibers to a high temperature to remove any residual moisture and strengthen them. The stabilized fibers are then carbonized, which involves heating them to an even higher temperature in the absence of oxygen to convert them into pure carbon. Finally, the carbon fibers are coated with a resin or other material to give them additional strength and durability.

The last sentence of that paragraph, "Finally, the carbon fibers are coated with a resin or other material to give them additional strength and durability.", is where we will pick up the story. To make nylon filament with embedded carbon fiber that is useful in a fused filament fabrication (i.e. 3D printing) application, the carbon fibers cannot be long, stringy strands; they have to be chopped. These chopped fibers are then mixed with nylon - either Nylon 6 or Nylon 12 (each has its merits and properties) - to produce a usable and useful filament.

What's Needed

Now with the introduction out of the way, let's get into what you will need to successfully print with nylon + carbon fiber filament.

I am using what started out as a Creality Ender 3 Pro but, since purchasing it, I have thrown a whole lot of upgrades at it. If you do not want to drop that amount of money into your Ender, there are three upgrades needed for printing nylon + carbon fiber (this also applies to all fiber embedded nylon filaments).

First, you will need to have an all metal hotend. Micro-Swiss makes a drop-in all metal hotend that, as of this writing, is about $65. If you have a little more money to spend, I would recommend the Creality Sprite Extruder Pro Upgrade. For $45 more, you can get a direct drive all metal hotend that can survive high temperatures. I like the Creality Sprite extruder because of the short distance between the drive gears and the top of the hotend.

Second, you will need to update the Marlin firmware on your Ender. Previously, I wrote a post on what modifications I have made to the firmware that is running on my Ender 3 Pros.

Third, you will need a tougher nozzle. I use a 0.6mm ruby tipped nozzle. I use a 0.6mm nozzle because I am concerned with fibers getting jammed in the nozzle if the extrusion hole is too narrow. I have read other articles by people who have successfully printed with a 0.4mm nozzle; if you are feeling adventurous, try out a 0.4mm nozzle and report back with a comment on this article.

For around $90, you can get the necessary upgrades to successfully print with fibrous nylon on your Ender. My personal preference is for a few other upgrades that make printing in general easier. I would highly recommend getting a BL Touch or CR Touch for auto-leveling, and a second Z-axis drive - it just makes printing a lot more stable.

On the slicer front, I use UltiMaker's Cura. There are a number of settings I changed:

creality_ender3pro_nylon_+_fiber_-_100%_infill:

[general]
version = 4
name = Nylon + Fiber - 100% Infill
definition = creality_base

[metadata]
type = quality_changes
quality_type = super
setting_version = 21

[values]
adhesion_type = brim
layer_height = 0.25
layer_height_0 = 0.25
material_bed_temperature = 80
support_enable = True
support_type = buildplate

creality_base_extruder_0_#2_nylon_+_fiber_-_100%_infill:

[general]
version = 4
name = Nylon + Fiber - 100% Infill
definition = creality_base

[metadata]
type = quality_changes
quality_type = super
intent_category = default
position = 0
setting_version = 21

[values]
alternate_extra_perimeter = True
cool_fan_speed = 50
infill_line_distance = 1.5
infill_pattern = trihexagon
infill_randomize_start_location = True
infill_sparse_density = 100
ironing_enabled = True
ironing_monotonic = True
line_width = 0.55
material_alternate_walls = True
material_print_temperature = 270.0
retract_at_layer_change = True
skin_monotonic = True
speed_print = 55.0
support_angle = 35
support_brim_line_count = 32
support_brim_width = 8
support_pattern = triangles
top_bottom_pattern = zigzag
wall_thickness = 1.75
zig_zaggify_infill = True

You can also download the configuration export file.

Things to note...

  1. Layer height should be no less than 0.22mm
  2. Reduce the fan speed to help with prevention of warping
  3. To provide (in theory) better layer adhesion, reduce the line width slightly
  4. And of course, make sure to up your hotend temperature and bed temperature. I have the hotend at 270°C and the bed temperature at 80°C.

Final Thoughts

I am using nylon + carbon fiber filament from Amazon. I would also recommend 3DXTech's nylon + carbon fiber filament.

As for what will print well and what will not: sharp details are not going to come through; small models (anything smaller than a 3DBenchy) are not going to turn out well; using Octolapse will result in somewhat stringier prints; use a very clean bed with a fresh layer of glue stick applied; and, of course, have a very level bed.

Post a comment if you have tried printing nylon + carbon fiber on an Ender printer.