3D Printing Nylon + Kevlar

Intro

I am not new to 3D printing, but I am by no means an expert. I have only been creating objects for the last couple of years. Up to this point, I have only used filament made from polylactic acid, more commonly known by its initials: PLA. It is probably the most common filament used by hobby printers. It comes in a million different colors (OK, I haven't actually counted the colors) and dozens of finishes - matte, gloss, metallic, satin sheen and many more. It is relatively easy to print: it does not require an all-metal hotend, a standard 0.4mm brass nozzle is the norm, it does not require high temperatures, and it is relatively inexpensive.

Printing with Nylon with embedded Kevlar is none of those things. Before we get to the filament, let's take a look at Kevlar and nylon and a bit of their histories and properties.

Kevlar is a high-performance synthetic material that is known for its exceptional strength and durability. It was developed by Stephanie Kwolek at the DuPont company in 1965. It is a popular material with a wide range of applications, including body armor, tires, aerospace, firefighting equipment and sporting equipment. It is a type of aramid fiber, a class of synthetic polymer materials characterized by high strength and heat resistance. Aramid fibers are composed of long chains of molecules held together by strong chemical bonds, which give them their exceptional strength and durability. Kevlar, in particular, is known for its high tensile strength - five times greater than steel on an equal-weight basis - and its high temperature resistance; it has no true melting point, decomposing at roughly 500° C rather than melting.

The nylon used in Nylon + Kevlar filament is PA6, or Nylon 6, and as a filament material for 3D printing it is popular for producing high-quality, precise prints with a smooth surface finish. One of the most significant properties of PA6 nylon filament is its high tensile strength, making it ideal for producing functional, light-load-bearing parts. The primary issue with PA6 nylon filament is its tendency to absorb moisture from the air, both before and after printing. Pre-printing, moisture degrades the filament's quality, leading to decreased strength and brittleness, which can affect the performance of printed parts; post-printing, parts can continue to take on water in humid environments, which risks swelling and dimensional changes unless the parts are kept dry or sealed. Additionally, PA6 has a relatively low melting point (around 220° C) compared to other high-performance materials such as polycarbonate, making it printable on a wider range of 3D printers.

Filament

The filament is dark grey in color with a rough texture. With nylon being hygroscopic, and PA6 nylon particularly so, we need to deal with moisture in the filament before printing. I am using an Ivation Countertop Dehydrator Drying Machine set to its highest temperature, 158° F (70° C). I left the filament in the dehydrator for about sixteen hours.

Printing

The print quality of objects made from Nylon + Kevlar filament is questionable at best. I use UltiMaker Cura for model slicing and, unfortunately, Cura does not have default settings for nylon with a 0.6 mm nozzle. It took many iterations of adjusting parameters in Cura to arrive at something close to an acceptable print. Here is the configuration profile that I used. The configuration is a bit of a mess; the material is listed as PLA and the nozzle size as 0.8 mm, but in the available profiles you should get a profile named "Nylon Kevlar".

I am using a 0.6 mm ruby-tipped nozzle; if you use a vanilla brass nozzle, the Kevlar fibers will quickly chew into the nozzle's bore, leaving the opening no longer a perfectly round circle.

I had to modify the Marlin firmware on my heavily modified Ender 3 Pro to allow the hotend temperature to reach at least 270° C and the bed at least 80° C. I attempted printing at 255° C and the filament jammed in the hotend. I settled upon 270° C for the hotend and 80° C for the bed.

I printed eight Benchys (there are nine in the photo; one is a PLA print for comparison), each with different Cura settings. It was trial and error: adjust a single variable, then print a Benchy. Compared to prints from a perfectly tuned Ender 3 Pro running PLA, the difference in quality is stark. I was unable to get anything that remotely resembled a smooth surface; every surface has a rough, sandpaper-like feel. YouTuber 3DP Iceland made a brief video about Nylon + Kevlar, and his results were similar to mine: rough surfaces and very stringy prints.

The first round of tests involved tuning temperatures. As I mentioned, I settled on 270° C for the hotend and 80° C for the bed; there was less stringing at that hotend temperature. The second and third rounds involved adjusting filament retraction on travel moves, which also reduced stringiness. The rest of the tuning covered layer height, flow, extruder movement speed, and so on.

One other setting turned out to be just about a must: a raft instead of a brim or skirt. I used Magigoo for better adhesion. For longer (never successful) prints, even a raft proved insufficient; the edges of the raft curled up from the bed. A wider, tighter brim might be more helpful.

Printing with this filament is very frustrating at times. Good bed adhesion is critical. A clean, wide-enough nozzle is very important. A correctly calibrated nozzle height and a level bed are important too.

This is probably now one of my least favorite materials to print with.


Marlin Firmware - Modified Ender 3 Pro

Just about the only things original and stock on my two Creality Ender 3 Pro 3D printers are the extruded aluminum frames and the control interface with its infinite-turn control knob. Everything else has been replaced: mainboard, extruder hotend and direct filament drive, Z-axis upgrade with an additional stepper motor, auto bed leveling and, of course, the firmware, plus the addition of printer management software, OctoPrint, via a Raspberry Pi 4b. Oh, and a web camera. The incredibly cluttered photo to the left is one of my two heavily upgraded Ender 3 Pro printers.

If you are new to the 3D printer scene, and in particular the world of upgrades and modifications to kit printers, let's step back and have a brief overview. I won't get into the super-weedy details because they have likely been covered ad nauseam.

The gist of 3D printing is: you have filament, which can be made of a whole host of materials, everything from nylon with carbon fiber embedded in it to the more mundane polylactic acid, more commonly called PLA. This filament is softened enough to flow by the hotend and is pushed out of a precision nozzle. The hotend is most often mounted on a series of X- and Z-axis rails. A heated bed is mounted on the Y-axis. All the movement is made possible by stepper motors. The motors and the hotend and bed temperatures are all controlled by a mainboard.

Upgrades

The upgraded mainboard has an STM32 F103 RET6 microcontroller. The upgrade gives you a 32-bit processor versus the original 8-bit, which allows for more complicated firmware builds. The board also has improved, silent stepper motor drivers. In order to fully take advantage of this motherboard and accessories like the CR Touch or BL Touch, you will need to configure and recompile the Marlin firmware. We get to that later in this post.

The upgrades listed below are what I eventually arrived at; along the way there was also a Micro Swiss Direct Drive Extruder.

Upgrade Costs Breakdown

| Part | Cost |
| --- | --- |
| Micro Swiss Direct Drive Extruder | $99.75 |
| Creality Sprite Direct Drive Extruder Pro Kit | $109.99 |
| Micro-Swiss All Metal Hotend Kit | $63.50 |
| Ender 3 Dual Z-axis Upgrade Kit | $35.79 |
| Upgrade X-axis Belt Tensioner | $15.98 |
| Ender 3 Dual Z-axis Upgrade Kit | $35.79 |
| Spring Steel Flexible Build Surface Magnetic Removable Bed Sheet | $15.98 (2x) |
| Creality Ender 3 Pro 32-bit Silent Board Motherboard V4.2.7 | $42.99 |
| Raspberry Pi 4b - 2GB | $45.00 |
| DC 6V 9V 12V 24V to DC 5V 5A Buck Converter Module, 9-36V Step Down to USB 5V | $42.99 |
| Logitech C920x HD Pro Webcam | $69.99 |
| Creality BLTouch V3.1 Auto Bed Leveling Sensor Kit | $47.99 |
| Base model Ender 3 Pro | $236.00 |
| Total | $877.72 |

UPDATE 2023/02/25: I purchased a Creality Sprite Extruder Pro ($109.99). This is an improvement on the Creality Sprite Extruder; it allows for filament temperatures up to 300° C. I have a longer-term project in mind that will require printing with material at or above 260° C.

As you can see, a base model Ender 3 Pro costs $236.00, but throw in an armful of higher-end upgrades (for the retail market), and you suddenly have a setup that has cost nearly $900.00. Yikes! Are all of these upgrades necessary? I would have to say no. The Creality direct drive extruder is well worth the money - never again deal with Bowden tubes. The other two must-have upgrades are the mainboard and adding a CR Touch or BL Touch auto-leveling sensor. Runner-up is the dual Z-axis; it really stabilizes the frame.

Firmware

In order to take advantage of a CR Touch or BL Touch, you will need to configure the firmware to use it. The nozzle-to-probe offset also needs to be changed when using the Sprite direct drive, as the nozzle is in a slightly different location than the stock nozzle. I won't go into all the details, but you can compare Configuration_og.h (the original) with Configuration.h, and Configuration_adv_og.h with Configuration_adv.h. The changes range from enabling the CR Touch/BL Touch and a comprehensive bed leveling system, to adjusting the position of the nozzle and enabling thermal safety features.
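To see everything that changed at a glance, a plain diff works; this sketch assumes the configuration files sit in the repository's usual Marlin/ source directory:

diff -u Marlin/Configuration_og.h Marlin/Configuration.h
diff -u Marlin/Configuration_adv_og.h Marlin/Configuration_adv.h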

git clone https://github.com/ajokela/ender3pro_marlin-2.0.x.git

Open Visual Studio Code and choose Open Folder. Navigate to where you cloned the repository and open it.

If you want to configure and compile your own firmware, check out Marlin and PlatformIO. That will get you started. Once PlatformIO is installed, you can clone the repo and open it in Visual Studio Code.
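If you prefer the command line to the VS Code GUI, PlatformIO Core can build the firmware directly. A sketch, assuming Marlin 2.0.9.x where the 4.2.7 board's environment is named STM32F103RET6_creality (check platformio.ini for the exact name in your tree):

pip install platformio
git clone https://github.com/ajokela/ender3pro_marlin-2.0.x.git
cd ender3pro_marlin-2.0.x
pio run -e STM32F103RET6_creality
# the flashable .bin lands under .pio/build/STM32F103RET6_creality/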

Here are the things that were changed in Configuration.h and Configuration_adv.h

Configuration.h
#define STRING_CONFIG_H_AUTHOR "(Alex, Ender-3 Pro)"
Who made the changes.
#define CUSTOM_MACHINE_NAME "Ender-3 Pro 4.2.7 - fw v2.0.9.3 - 2023-02-23"
I like to put the date and version numbers in the firmware so it is easy to identify the what and the when.
#define HEATER_0_MAXTEMP 315
You will want to be careful with this setting: it is the maximum hotend temperature in Celsius, and it needs to be higher than the default for printing nylon and PET-G. Because of HOTEND_OVERSHOOT, the highest settable target will always be MAXTEMP - HOTEND_OVERSHOOT; with the values here, that is 315 - 20 = 295° C.
DO NOT SET IT THIS HIGH IF YOU HAVE A STOCK HOTEND
#define HOTEND_OVERSHOOT 20
#define BED_OVERSHOOT    15
(°C) Forbid target temperatures over MAXTEMP - OVERSHOOT, for the hotend and the bed respectively.
#define S_CURVE_ACCELERATION
Smoother curve motions
//#define Z_MIN_PROBE_USES_Z_MIN_ENDSTOP_PIN
Comment out because we will be using a CR-Touch or BL-Touch
#define USE_PROBE_FOR_Z_HOMING
Force the use of the probe for Z-axis homing
#define BLTOUCH
Enable BL Touch/CR Touch
#define NOZZLE_TO_PROBE_OFFSET { -10.0, -10.0, 0 }
Move the offset for the Sprite Direct Drive hotend
#define PROBING_MARGIN 15
A little more buffer around the perimeter
#define MULTIPLE_PROBING 2
#define EXTRA_PROBING    1
Add extra probings to eliminate outliers
#define PREHEAT_BEFORE_PROBING
#if ENABLED(PREHEAT_BEFORE_PROBING)
  #define PROBING_NOZZLE_TEMP  200   // (°C) Only applies to E0 at this time
  #define PROBING_BED_TEMP     60
#endif
Require a minimum nozzle and/or bed temperature for probing; bumped to match the preheat temperatures.
#define Y_BED_SIZE 210
#define Z_MAX_POS X_BED_SIZE
Adjust bed size; I ran into problems where the extruder would overshoot the bed.
#define AUTO_BED_LEVELING_UBL
Unified Bed Leveling. A comprehensive bed leveling system combining the features and benefits of other systems. UBL also includes integrated Mesh Generation, Mesh Validation and Mesh Editing systems.
#define ENABLE_LEVELING_AFTER_G28
Always enable leveling immediately after G28.
#define G26_MESH_VALIDATION
Enable the G26 Mesh Validation Pattern tool.
#define GRID_MAX_POINTS_X 6
#define UBL_HILBERT_CURVE
#define UBL_MESH_WIZARD
Use a Hilbert curve distribution for less travel when probing multiple points; the mesh wizard runs several commands in a row to get a complete mesh.
#define LCD_BED_LEVELING
Add a bed leveling sub-menu for ABL or MBL.
#define Z_SAFE_HOMING
Moves the Z probe (or nozzle) to a defined XY point before Z homing.
#define PREHEAT_1_TEMP_HOTEND 200
#define PREHEAT_1_TEMP_BED     60
Bump up the preheat temperatures of hotend and bed
Configuration_adv.h
#define THERMAL_PROTECTION_PERIOD 120        // Seconds
#define THERMAL_PROTECTION_HYSTERESIS 10     // Degrees Celsius
Loosened to avoid false positives from thermal runaway protection.
#define EXTRUDER_RUNOUT_PREVENT
#if ENABLED(EXTRUDER_RUNOUT_PREVENT)
  #define EXTRUDER_RUNOUT_MINTEMP 195
  #define EXTRUDER_RUNOUT_SECONDS 30
  #define EXTRUDER_RUNOUT_SPEED 1500  // (mm/min)
  #define EXTRUDER_RUNOUT_EXTRUDE 5   // (mm)
#endif
Extruder runout prevention: if the machine is idle and the temperature is over MINTEMP, then extrude some filament every couple of SECONDS.
#define HOTEND_IDLE_TIMEOUT
#if ENABLED(HOTEND_IDLE_TIMEOUT)
  #define HOTEND_IDLE_TIMEOUT_SEC (10*60)   // (seconds) Time without extruder movement to trigger protection
  #define HOTEND_IDLE_MIN_TRIGGER   195     // (°C) Minimum temperature to enable hotend protection
  #define HOTEND_IDLE_NOZZLE_TARGET   0     // (°C) Safe temperature for the nozzle after timeout
  #define HOTEND_IDLE_BED_TARGET      0     // (°C) Safe temperature for the bed after timeout
#endif
Hotend idle timeout: prevent filament in the nozzle from charring and causing a critical jam.
#define PROBE_OFFSET_WIZARD
Add Probe Z Offset calibration to the Z Probe Offsets menu
#define PROBE_OFFSET_WIZARD_START_Z -4.0
Enable to init the Probe Z-Offset when starting the Wizard. Use a height slightly above the estimated nozzle-to-probe Z offset.
#define PROBE_OFFSET_WIZARD_XY_POS { X_CENTER, Y_CENTER }
Set a convenient position to do the calibration (probing point and nozzle/bed-distance).
#define LCD_SET_PROGRESS_MANUALLY
Add an 'M73' G-code to set the current percentage
#define USE_M73_REMAINING_TIME
#define ROTATE_PROGRESS_DISPLAY
Use remaining time from M73 command instead of estimation; and Display (P)rogress, (E)lapsed, and (R)emaining time
#define LCD_PROGRESS_BAR
Show a progress bar on HD44780 LCDs for SD printing
#define BINARY_FILE_TRANSFER
Add an optimized binary file transfer mode, initiated with 'M28 B1'
#define BABYSTEP_DISPLAY_TOTAL

#define BABYSTEP_ZPROBE_OFFSET
#if ENABLED(BABYSTEP_ZPROBE_OFFSET)
  #define BABYSTEP_ZPROBE_GFX_OVERLAY
#endif
Display total babysteps since last G28
Combine M851 Z and Babystepping
Enable graphical overlay on Z-offset editor
#define HOST_ACTION_COMMANDS
#if ENABLED(HOST_ACTION_COMMANDS)
  #define HOST_PAUSE_M76             
  #define HOST_PROMPT_SUPPORT        
  #if ENABLED(HOST_PROMPT_SUPPORT)
    #define HOST_STATUS_NOTIFICATIONS
  #endif
  #define HOST_START_MENU_ITEM
  #define HOST_SHUTDOWN_MENU_ITEM
#endif
Tell the host to pause in response to M76
Initiate host prompts to get user feedback
Send some status messages to the host as notifications
Add a menu item that tells the host to start
Add a menu item that tells the host to shut down

Even with all of these add-ons and modifications, the printer remains finicky. It constantly needs adjustments, which is expected to an extent when you are dealing with moving material and high heat.

Does it print well? It depends: upon nozzle wear, the flexibility and moisture content of the filament, and the type of filament. These are all variables that any 3D printer would encounter; I just don't know how big of a deal they would be on another printer. I also have two Creality CR-6 SE printers, and they worked well until they did not. Maybe someday I will get a higher-end printer and be able to do more comparisons.

Download most recent compiled firmware (v2.0.9.3)

BIGTREETECH CB1 - Review

A commenter on the previous review of Raspberry Pi CM4 and pin compatible modules brought to my attention that there exists a fifth module: BIGTREETECH CB1.

My hot take on this system-on-module is that it is underwhelming. The two biggest callouts are the memory size - 1 gigabyte - and the Ethernet - 100 megabits only. The other four modules previously tested all had 4 gigabytes of memory and 1-gigabit Ethernet.

Geekbench Metrics

| Module | Single CPU | Multi CPU |
| --- | --- | --- |
| Raspberry Pi CM4 | 228 | 644 |
| Radxa CM3 | 163 | 508 |
| Pine64 SOQuartz | 156 | 491 |
| Banana Pi CM4 | 295 | 1087 |
| BIGTREETECH CB1 | 91 | 295 |
Features Comparison

| | Raspberry Pi CM4 | Radxa CM3 | Pine64 SOQuartz | Banana Pi CM4 | BIGTREETECH CB1 |
| --- | --- | --- | --- | --- | --- |
| Core | Broadcom BCM2711, quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz | Rockchip RK3566, quad-core Cortex-A55 (ARM v8) 64-bit SoC @ 2.0GHz | Rockchip RK3566, quad-core Cortex-A55 (ARM v8) 64-bit SoC @ 1.8GHz and embedded 32-bit RISC-V CPU | Amlogic A311D, quad-core ARM Cortex-A73 and dual-core ARM Cortex-A53 | Allwinner H616, quad-core ARM Cortex-A53 (ARM v8) 64-bit SoC @ 1.5 GHz |
| NPU | - | 0.8T NPU | 0.8 TOPS Neural Network Acceleration Engine | 5.0 TOPS | - |
| GPU | - | Mali G52 | Mali-G52 2EE Bifrost | Mali-G52 MP4 (6EE) | Mali-G31 MP2 |
| Memory | 1GB, 2GB, 4GB or 8GB LPDDR4 | 1GB, 2GB, 4GB or 8GB LPDDR4 | 2GB, 4GB or 8GB LPDDR4 | 4GB LPDDR4 | 1GB DDR3L |
| eMMC | On module - 0GB to 32GB | On module - 0GB to 128GB | External - 16GB to 128GB | On module - 16GB to 128GB | - |
| Network | 1Gbit Ethernet; option for WiFi 5, Bluetooth 5.0 | 1Gbit Ethernet; option for WiFi 5, Bluetooth 5.0 | 1Gbit Ethernet; WiFi 802.11 b/g/n/ac, Bluetooth 5.0 | 1Gbit Ethernet | 100Mbit Ethernet; 100Mbit WiFi |
| PCIe | 1-lane | 1-lane | 1-lane | 1-lane | - |
| HDMI | 2x HDMI | 1x HDMI | 1x HDMI | 1x HDMI | 1x HDMI |
| GPIO | 28 pin | 40 pin | 28 pin | 26 pin | 40 pin |
| Extras | - | - | - | SATA ports, one shared with USB 3, one shared with PCIe; audio codec | - |
| Geekbench Score - Single CPU | 228 | 163 | 156 | 295 | 91 |
| Geekbench Score - Multi CPU | 644 | 508 | 491 | 1087 | 295 |
| Price as Tested* | $65 | $69 | $49 | $105 | $40 |
| Power Consumption | 7 watts | N/A | 2 watts | N/A | N/A |



If you are wondering what this comparatively underwhelming module could be used for, first, let's take a look at BIGTREETECH. If you have been into the 3D printer kit scene, you might be familiar with the manufacturer: BIGTREETECH is known for its 3D printer mainboards and other 3D-printing-related electronics. The CB1 could easily be dropped in place of a Raspberry Pi for your Creality Ender 3 Pro or other printer kit. You will need a carrier board for it, but it will work.

OctoPrint or Klipper will run just fine on this module. You will most certainly not need 1Gbit Ethernet for printing; a printer laying down fractions of a millimeter at a time consumes gcode at a trickle, and its transmission will not come close to maxing out the bandwidth. Likewise for memory: OctoPrint or Klipper will certainly be more responsive with more, but 1GB will work just fine.
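A rough sanity check on that bandwidth claim, using assumed (not measured) figures of ~100 gcode commands per second at ~40 bytes each:

echo "gcode: $((100 * 40)) B/s vs 100Mbit link: $((100 * 1000 * 1000 / 8)) B/s"
# gcode: 4000 B/s vs 100Mbit link: 12500000 B/s -- over 3000x headroom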

One thing this mostly underwhelming module has going for it is HDMI: it is capable of pumping out 4K video at 60 fps. If you are looking for a module that can do this, pick the CB1. At only $40, it is a bargain compared to the RPi CM4 and compatible modules.

Disk Images for the CB1

Information and instructions on WiFi setup

For some of the CM4 pin-compatible modules, like the Radxa CM3, an eMMC flash-writing utility that I was only able to get working on MS Windows was needed. The CB1 is straightforward in comparison. Simply download an image (link above), and use balenaEtcher, Raspberry Pi Imager or dd to write the image to a microSD card. The image I ultimately used comes with Linux kernel v5.16.1. Like so many Linux distributions for Arm systems, this kernel comes from a BSP, or Board Support Package. It is a fork of mainline Linux built specifically for the CB1 and its associated Arm processor. Given that this is a niche module, and short of a lot of demand for it, the kernel will likely drift as mainline Linux progresses, eventually becoming outdated. But for now, it is a contemporary, relatively new kernel by comparison; contrast that with the semi-official distribution kernel for the Banana Pi CM4, which comes with v4.9.x, a kernel line first released in December of 2016.
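For the dd route mentioned above, a minimal sketch (the image filename is a placeholder; verify the target device with lsblk first, because dd will happily overwrite the wrong disk):

xz -dc cb1_debian_image.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync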

If you stumbled upon this post by way of some 3D-printer-related search, and you just want to write an image to a microSD card and get on with printing awesome stuff on your printer... here is a video with instructions.

If you do not need much compute or memory, and you are mostly interested in a simple 3D printer manager or a barebones HDMI streamer, the CB1, for its price, is pretty good. There is even a drop-in replacement for Ender 3 mainboards, the BIGTREETECH Manta E3EZ V1.0 Mainboard 32 Bit Silent Control Board. This gives you OctoPrint or Klipper for print management, plus Marlin firmware for printer control and gcode execution, all in one board for about $65. That is a great deal given the much-griped-about availability of Raspberry Pi modules and boards, and secondary-market prices, for the small-order and maker crowds.

Finally, Polycube compiles and runs successfully on this module; I will eventually include it in a network routing comparison of Raspberry Pi CM4 pin-compatible modules.

Building a Kernel and Disk Image for the Radxa CM3

With my eventual goal of testing the network and router capabilities of four compute modules that are pin-compatible with the Raspberry Pi CM4, I have been doing setup work. My last few postings (here, here and here) covered getting Polycube - a drop-in replacement for iptables and a number of other utilities that uses eBPF instead of the usual netfilter-based mechanisms - built and running. The objective is to test both netfilter and eBPF routing on the four modules (giving me a collection of eight test sets).

I have Polycube compiled and appearing to function on the Raspberry Pi CM4 and the Pine64 SOQuartz module, and the code compiled and runnable on the Radxa CM3. There is one problem with running Polycube on the CM3: the bpf() syscall was not compiled into the kernel. Even though the code successfully compiled to an executable binary, the necessary kernel hooks are not present. The solution: compile a new kernel and create a new disk image.
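Two quick ways to confirm whether a running kernel has the bpf() syscall compiled in (/proc/config.gz exists only when the kernel was built with IKCONFIG; the /boot path is typical of Debian-style images):

zgrep -E 'CONFIG_BPF(_SYSCALL)?=' /proc/config.gz
grep -E 'CONFIG_BPF(_SYSCALL)?=' /boot/config-$(uname -r)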

If you are a person who is interested in tiny computers of various flavors, you will have noticed that there is an abundance of different distributions out on the internet. For example, for the Pine64 Quartz64 Model A, there are at least three different variant distributions - Plebian Linux, DietPi, and balbes150's Armbian. They all have one thing in common: they all use Debian packages and are, in one sense or another, derivatives of Debian and the Debian ecosystem. If you have used Ubuntu, you have used a distribution that leverages Debian architecture and infrastructure.

The available distributions for Radxa CM3 also use Debian ecosystem components; everything from being able to utilize other arm64 packages, to using the build infrastructure for bundling up things into a handy disk image that can be burned/written to media.

Many single board computer distributions are what is called a "board support package", or BSP for short. A BSP includes low-level boot programs (a first-stage bootloader, prebuilt binaries and Arm TrustZone firmware), a boot program (a second-stage bootloader, like u-boot or Tianocore EFI), an operating system, and compatible drivers that are specific to the board. The BSP is a unique bundling of software specific to a given board or family of boards. Oftentimes, the Linux kernel included with a given BSP has been modified and new drivers have been added. The kernel is essentially a fork that no longer tracks the "main branch" of Linux kernel development, and any upstream changes in the main branch may be difficult or impossible to incorporate. The kernel is, therefore, a snapshot in time that all too often fades into obscurity for lack of attention from the developers or a broader community (if a community exists).

Despite not having the following and community backing of the Raspberry Pi, Radxa does have a well-maintained series of BSP distributions. Many do have their kernels pegged to a specific version within the Linux kernel repository, but much of the userland software is not usually tied to specific features found in specific kernel versions -- unless the software is something like Polycube.

Radxa does a great job of providing build frameworks for both configuring and compiling a new kernel, as well as downloading packages and building a disk image. Let's get started.


The following information is based on this.

  1. As a pregame note, I made a virtual machine using VirtualBox - specifically, Debian 11 - for building the new kernel, in order to prevent any unnecessary contamination of packages, dependencies or the like on my laptop. The building of the distribution image uses Docker and will not pose any issues.

  2. Clone the GitHub repository rockchip-bsp, specifically the stable-4.19-rock3 branch. Then pull in any submodules.

    git clone -b stable-4.19-rock3 https://github.com/radxa/rockchip-bsp.git
    cd rockchip-bsp
    git submodule init
    git submodule update

    The stable-4.19-rock3 branch has support for the following boards:

    • ROCK 3A
    • ROCK 3B
    • Radxa CM3 IO
    • Radxa E23
    • Radxa E25
    • Radxa CM3 RASPCM4IO

    Cloning the repository and checking out the stable-4.19-rock3 branch produces the following directories:

    • build: Some script files and configuration files for building u-boot, kernel and rootfs.
    • kernel: kernel source code; the current version is 4.19.193.
    • rkbin: Prebuilt Rockchip binaries, include first stage loader and Arm TrustZone Firmware.
    • rootfs: scripts to bootstrap a Debian-based rootfs; supports the armhf and arm64 architectures and Debian Jessie, Stretch and Buster.
    • u-boot: u-boot as the second stage bootloader

    There are a few things to note. First, our kernel is version 4.19.193; Polycube requires at minimum v4.15, so with v4.19 we are covered. Second, this repository/project contains scripts to bootstrap and build a disk image. We will not be using this functionality; the supported Debian distributions are too old, and we have been using at least Debian bullseye for all of our Polycube testing.

  3. Install a Linaro toolchain. This is used for compiling code on an x86/amd64 machine and producing arm64 binaries.

    wget https://releases.linaro.org/components/toolchain/binaries/7.3-2018.05/aarch64-linux-gnu/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz
    sudo tar xvf gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu.tar.xz  -C /usr/local/

    Linaro has driven open source software development on Arm since 2010, providing the tools, Linux kernel quality and security needed for a solid foundation to innovate on. Linaro works with member companies and the open source community to maintain the Arm software ecosystem and enable new markets on Arm architecture.

  4. In your user's .bashrc file, append the following line:

    export PATH="/usr/local/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin:$PATH"
    Then source .bashrc to update your PATH variable.
    source ~/.bashrc

    Verify that the Linaro GCC toolchain is visible from your PATH

    which aarch64-linux-gnu-gcc
    /usr/local/gcc-linaro-7.3.1-2018.05-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc
  5. Install a few packages:

    sudo apt-get install gcc-aarch64-linux-gnu \
                  device-tree-compiler libncurses5 libncurses5-dev \
                  build-essential libssl-dev mtools bc python dosfstools
  6. Build u-boot for Radxa CM3 and specifically for use with a Raspberry Pi CM4 carrier/io board.

    ./build/mk-uboot.sh rk3566-radxa-cm3-raspcm4io

    There should be files in out/u-boot

    ls -l out/u-boot
    total 2132
    -rw-rw-r-- 1 alex alex  299008 Feb  1 22:43 idbloader.img
    -rw-rw-r-- 1 alex alex  453056 Feb  1 22:43 rk356x_spl_loader_ddr1056_v1.10.111.bin
    -rw-rw-r-- 1 alex alex 1426944 Feb  1 22:43 u-boot.itb
  7. Configure a new kernel. If you have ever cloned the Linux source code repository, or unarchived a tarball of the source, then configured and compiled the kernel, the following step will be familiar; the build process has been remarkably similar for the better part of twenty-five years. I had not configured and compiled a kernel from source in a very long time, and the process was instantly familiar.

    cd kernel
    export ARCH=arm64
    export CROSS_COMPILE=aarch64-linux-gnu-
    make rockchip_linux_defconfig

    There will be a file named .config. You can either edit this by hand (if you have an idea of what you are doing and what you need to do) or use a handy menu-driven interface. Either way, for my specific need of enabling eBPF, I simply opened .config in an editor and searched for references to BPF; the options I ended up checking are sketched at the end of this step.

    If you want to try the menu-driven method, execute the following:

    make menuconfig

    Save your new configuration (run this whether you edited by hand or used menuconfig)

    make savedefconfig
    cp defconfig arch/arm64/configs/rockchip_linux_defconfig
    cd ..
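    For reference, these are the sorts of BPF-related options you want to end up with; treat the exact list as my assumption of a reasonable eBPF baseline rather than an exhaustive set:

    grep -E 'CONFIG_(BPF|BPF_SYSCALL|BPF_JIT|CGROUP_BPF)=' .config
    CONFIG_BPF=y
    CONFIG_BPF_SYSCALL=y
    CONFIG_BPF_JIT=y
    CONFIG_CGROUP_BPF=y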


  8. Build a kernel

    ./build/mk-kernel.sh rk3566-radxa-cm3-raspcm4io
    This will kick off the compilation of the kernel; obviously, depending upon your build machine, it might take a while.

    You will likely be presented with some configuration questions.

    Given that I am not entirely versed in things-kernel, I answered y to all of the questions. Leave a comment below if you have some insight into the questions that are presented during the build process.

  9. Pack up your new kernel and associated headers into Debian package files (e.g. .deb). The parameters for pack-kernel.sh are: 1) the name of the kernel configuration file (from step #7); 2) a release tag - ebpf in my case; it should be something meaningful.

    ./build/pack-kernel.sh -d rockchip_linux_defconfig -r ebpf
    This will compile the kernel again; this appears to be necessary because this step does not configure the appropriate chip and board as in the previous step.

    ls out/packages/
    linux-4.19.193-ebpf-rockchip-g67a0c0ce87a0_4.19.193-ebpf-rockchip_arm64.changes
    linux-headers-4.19.193-ebpf-rockchip-g67a0c0ce87a0_4.19.193-ebpf-rockchip_arm64.deb
    linux-image-4.19.193-ebpf-rockchip-g67a0c0ce87a0_4.19.193-ebpf-rockchip_arm64.deb
    linux-image-4.19.193-ebpf-rockchip-g67a0c0ce87a0-dbg_4.19.193-ebpf-rockchip_arm64.deb
    linux-libc-dev_4.19.193-ebpf-rockchip_arm64.deb

    These Debian packages will be needed when we build a Debian bullseye distribution.

  10. You will also need to copy rk3566-radxa-cm3-rpi-cm4-io.dtb from the out/kernel directory; this device tree blob is needed when writing a new disk image to the CM3.
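    Something like the following works; the destination is arbitrary (I simply stash it in my home directory):

    cp out/kernel/rk3566-radxa-cm3-rpi-cm4-io.dtb ~/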

    If you do want to assemble an older distribution (Debian buster or stretch), you can follow the steps for Make rootfs image found here. I have a pre-built Debian buster with desktop disk image available here

  11. Change directories to a place outside of the rockchip-bsp directory, and now clone the Radxa rbuild tool

    git clone https://github.com/radxa-repo/rbuild.git

    You will need Docker and associated software packages. Installing these tools should be straightforward, and there are dozens if not hundreds of howtos widely available to assist you. If you do not have the Docker command line tools installed and you are looking for a quick guide, follow these instructions before proceeding.

  12. Make a directory for your kernel packages and copy the packages into it.

    cd rbuild
    I will be using Docker outside of the virtual machine that I used to build the kernel packages; you are free to use the VM for building the bullseye disk image, but I ran into issues and decided to run Docker directly on my laptop. I used scp to copy the kernel packages from the VM into a directory named kernel inside the cloned rbuild repo.
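    A sketch of that copy, using a hypothetical buildvm hostname for the VM (the .deb names come from step 9's out/packages listing):

    mkdir -p kernel
    scp 'user@buildvm:~/rockchip-bsp/out/packages/linux-*-ebpf-rockchip*_arm64.deb' kernel/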
  13. Run rbuild to magically assemble a disk image for you. This will take a while - best to grab some coffee, or lunch, or just go home for the day. There is also a strong chance of network timeouts while downloading the necessary files; I had at least five package downloads fail and kill the whole build process. On my Dell XPS Developer Edition laptop, in a VirtualBox VM, the process took over eight hours. It should be noted that even when there is a timeout, the -r parameter to rbuild caches the necessary Debian packages, so a restart does not begin from scratch.

    ./rbuild -r -k kernel/linux-image-4.19.193-ebpf-rockchip-g67a0c0ce87a0_4.19.193-ebpf-rockchip_arm64.deb radxa-cm3-rpi-cm4-io cli

    ls -l
    total 1434692
    -rw-rw-r-- 1 alex alex       3322 Feb  1 22:17 action.yaml
    drwxrwxr-x 6 alex alex       4096 Feb  2 09:38 common
    drwxrwxr-x 2 alex alex       4096 Feb  1 22:17 configs
    drwxrwxr-x 2 alex alex       4096 Feb  1 22:38 kernel
    -rw-r--r-- 1 alex alex 6442450944 Feb  2 11:48 radxa-cm3-rpi-cm4-io_debian_bullseye_cli.img
    -rw-rw-r-- 1 alex alex        175 Feb  2 11:48 radxa-cm3-rpi-cm4-io_debian_bullseye_cli.img.sha512
    -rwxrwxr-x 1 alex alex      18869 Feb  1 22:17 rbuild
    -rw-rw-r-- 1 alex alex       1542 Feb  1 22:17 README.md
  14. And there we have it: radxa-cm3-rpi-cm4-io_debian_bullseye_cli.img is your new disk image, complete with a custom-compiled kernel with eBPF enabled. We can compress the disk image with xz to make it more manageable.

    xz -z -v radxa-cm3-rpi-cm4-io_debian_bullseye_cli.img
    radxa-cm3-rpi-cm4-io_debian_bullseye_cli.img (1/1)
      3.0 %     5,938.2 KiB / 185.7 MiB = 0.031    10 MiB/s       0:18   9 min 50 s
  15. You can download the kernel and disk image that was built during the writing of this post: https://cdn.tinycomputers.io/radxa-rock3/debian-buster-linux-4.19.193-2a-eBPF-rockchip-rk3566-radxa-cm3-rpicm4io.img.xz

    The device tree file built during the writing of this post: https://cdn.tinycomputers.io/radxa-rock3/linux-image-4.19.193-ebpf-rockchip-g67a0c0ce87a0_4.19.193-ebpf-rockchip_arm64.dtb

  16. For instructions on writing the disk image to eMMC on the Radxa CM3, follow my previous post, Raspberry Pi CM4 and Pin Compatible Modules

More Information on Radxa's build scripts, rbuild documentation and its github repo

Polycube - Complete Installation on Raspberry Pi CM4

  1. Add backports and stretch package locations to /etc/apt/sources.list

    deb https://deb.debian.org/debian bullseye-backports main contrib non-free
    deb https://deb.debian.org/debian/ stretch-backports main contrib non-free
    deb https://deb.debian.org/debian/ stretch main contrib non-free
  2. Update local cache

    sudo apt update
  3. Install packages

    sudo apt-get -y install git build-essential cmake bison flex \
           libelf-dev libllvm9 llvm-9-dev libclang-9-dev libpcap-dev \
           libnl-route-3-dev libnl-genl-3-dev uuid-dev pkg-config \
           autoconf libtool m4 automake libssl-dev kmod jq bash-completion  \
           gnupg2 golang-go-1.19 tmux bc libfl-dev libpcre2-dev libpcre3-dev
  4. Add go to your $PATH in .bashrc; this needs to be done for the root user as well.

    export PATH=/usr/lib/go-1.19/bin:$PATH
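    One way to handle the root half of that, as a sketch (appends the same line to root's .bashrc):

    echo 'export PATH=/usr/lib/go-1.19/bin:$PATH' | sudo tee -a /root/.bashrc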
  5. Verify go is in $PATH

    go version
    go version go1.19.4 linux/arm64
  6. Install pistache - needed for the RESTful control daemon, polycubed

    git clone https://github.com/oktal/pistache.git
    cd pistache
    # known working version of pistache
    git checkout 117db02eda9d63935193ad98be813987f6c32b33
    git submodule update --init
    mkdir -p build && cd build
    cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release -DPISTACHE_USE_SSL=ON ..
    make -j $(getconf _NPROCESSORS_ONLN)
    sudo make install
  7. Install libtins

    cd
    git clone --branch v3.5 https://github.com/mfontanini/libtins.git
    cd libtins
    mkdir -p build && cd build
    cmake -DLIBTINS_ENABLE_CXX11=1 \
     -DLIBTINS_BUILD_EXAMPLES=OFF -DLIBTINS_BUILD_TESTS=OFF \
     -DLIBTINS_ENABLE_DOT11=OFF -DLIBTINS_ENABLE_PCAP=OFF \
     -DLIBTINS_ENABLE_WPA2=OFF -DLIBTINS_ENABLE_WPA2_CALLBACKS=OFF ..
    make -j $(getconf _NPROCESSORS_ONLN)
    sudo make install
    sudo ldconfig
  8. Install libyang

    cd
    git clone https://github.com/CESNET/libyang.git
    cd libyang
    git checkout libyang1
    mkdir build; cd build
    cmake ..
    make
    sudo make install
  9. Clone the polycube repository that contains the necessary changes to config.cpp

    cd
    git clone https://github.com/ajokela/polycube
    cd polycube
    git submodule update --init --recursive
  10. Build prometheus-cpp

    cd src/libs/prometheus-cpp
    mkdir build; cd build
    cmake .. -DBUILD_SHARED_LIBS=ON
    make -j4
    sudo make install
  11. Configure polycube
    cd; cd polycube
    mkdir build; cd build
    cmake .. -DBUILD_SHARED_LIBS=ON \
               -DENABLE_PCN_IPTABLES=ON     \
               -DENABLE_SERVICE_BRIDGE=ON   \
               -DENABLE_SERVICE_DDOSMITIGATOR=OFF \
               -DENABLE_SERVICE_FIREWALL=ON   \
               -DENABLE_SERVICE_HELLOWORLD=OFF     \
               -DENABLE_SERVICE_IPTABLES=ON     \
               -DENABLE_SERVICE_K8SFILTER=OFF     \
               -DENABLE_SERVICE_K8SWITCH=OFF     \
               -DENABLE_SERVICE_LBDSR=OFF     \
               -DENABLE_SERVICE_LBRP=OFF     \
               -DENABLE_SERVICE_NAT=ON     \
               -DENABLE_SERVICE_PBFORWARDER=OFF     \
               -DENABLE_SERVICE_ROUTER=ON     \
               -DENABLE_SERVICE_SIMPLEBRIDGE=ON     \
               -DENABLE_SERVICE_SIMPLEFORWARDER=ON     \
               -DENABLE_SERVICE_TRANSPARENTHELLOWORLD=OFF \
               -DENABLE_SERVICE_SYNFLOOD=OFF     \
               -DENABLE_SERVICE_PACKETCAPTURE=OFF     \
               -DENABLE_SERVICE_K8SDISPATCHER=OFF
    
  12. Build polycube (this will take a while; you might want to use tmux)

    tmux
    make -j4
    To detach from the tmux terminal, press CTRL+b, then d

    To reattach, execute:

    tmux attach -t 0


    Grab a coffee and go stare at your phone for a while.

  13. If all goes well, you should see the following:

    [100%] Building CXX object src/polycubed/src/CMakeFiles/polycubed.dir/load_services.cpp.o
    [100%] Building CXX object src/polycubed/src/CMakeFiles/polycubed.dir/version.cpp.o
    [100%] Linking CXX executable polycubed
    [100%] Built target polycubed
  14. Try to execute polycubed; we should get some sort of error(s)
    sudo src/polycubed/src/polycubed
    [2023-01-27 13:57:22.022] [polycubed] [info] loading configuration from /etc/polycube/polycubed.conf
    [2023-01-27 13:57:22.023] [polycubed] [warning] default configfile (/etc/polycube/polycubed.conf) not found, creating a new with default parameters
    terminate called after throwing an instance of 'spdlog::spdlog_ex'
      what():  Failed opening file /var/log/polycube/polycubed.log for writing: No such file or directory
    Aborted
    
  15. This is progress, and we can handle it by making a directory.
    sudo mkdir /var/log/polycube
  16. Run polycubed again, and you should run into kernel header files not being found
    sudo src/polycubed/src/polycubed 
    [2023-01-27 14:01:21.048] [polycubed] [info] loading configuration from /etc/polycube/polycubed.conf
    [2023-01-27 14:01:21.051] [polycubed] [info] configuration parameters:
    [2023-01-27 14:01:21.051] [polycubed] [info]  loglevel: info
    [2023-01-27 14:01:21.051] [polycubed] [info]  daemon: false
    [2023-01-27 14:01:21.052] [polycubed] [info]  pidfile: /var/run/polycube.pid
    [2023-01-27 14:01:21.052] [polycubed] [info]  port: 9000
    [2023-01-27 14:01:21.052] [polycubed] [info]  addr: localhost
    [2023-01-27 14:01:21.052] [polycubed] [info]  logfile: /var/log/polycube/polycubed.log
    [2023-01-27 14:01:21.052] [polycubed] [info]  cubes-dump-file: /etc/polycube/cubes.yaml
    [2023-01-27 14:01:21.052] [polycubed] [info]  cubes-dump-clean-init: false
    [2023-01-27 14:01:21.052] [polycubed] [info]  cubes-dump-enable: false
    [2023-01-27 14:01:21.052] [polycubed] [info] polycubed starting...
    [2023-01-27 14:01:21.052] [polycubed] [info] version v0.9.0+ [git: (branch/commit): master/75da2773]
    modprobe: FATAL: Module kheaders not found in directory /lib/modules/5.15.61-v8+
    Unable to find kernel headers. Try rebuilding kernel with CONFIG_IKHEADERS=m (module) or installing the kernel development package for your running kernel version.
    chdir(/lib/modules/5.15.61-v8+/build): No such file or directory
    [2023-01-27 14:01:21.092] [polycubed] [error] error creating patch panel: Unable to initialize BPF program
    [2023-01-27 14:01:21.093] [polycubed] [critical] Error starting polycube: Error creating patch panel
  17. We need to get the Raspberry Pi Linux kernel source.
    cd /usr/src
    sudo git clone --depth=1 https://github.com/raspberrypi/linux
    We need to see what kernel version the Raspberry Pi is running
    uname -a
    Linux polycube-network 5.15.61-v8+ #1579 SMP PREEMPT Fri Aug 26 11:16:44 BST 2022 aarch64 GNU/Linux
    We are using 5.15.61-v8+, so we will need to check out the matching branch of the kernel source. First move linux to linux-upstream-5.15.89-v8+ (5.15.89 was the tip of the branch at the time; the directory name itself is not critical)
    sudo mv linux linux-upstream-5.15.89-v8+
    cd linux-upstream-5.15.89-v8+
    Now, check out the correct branch. Branch names take a format like rpi-5.15.y, which at the time corresponded to version 5.15.89
    sudo git checkout rpi-5.15.y
  18. Make a symlink from within /lib/modules to our source directory
    cd /lib/modules/5.15.61-v8+
    sudo ln -s /usr/src/linux-upstream-5.15.89-v8+ build
  19. Build a new kernel to auto-generate the necessary header files.

    cd /usr/src/linux-upstream-5.15.89-v8+
    sudo make ARCH=arm64 bcm2711_defconfig
    sudo make -j4
    Grab another cup of coffee and stare at your phone for a while; this will take some time to complete. Doing this in situ will be slower than cross-compiling on a faster laptop or desktop, but the point of these instructions is not to productionize the process; it is to show how to make polycube compile and run on Arm-based systems, specifically Raspberry Pi 4b or CM4 systems.

    It might be unnecessary to completely recompile the kernel; maybe experiment a bit with it.

  20. Installing and running. Make sure go is available in your PATH for the root user; it is needed to compile polycubectl.

    sudo su -
    cd ~pi/polycube/build
    make install
    make install should finish with a message of Installation completed successfully. Now we can run polycubed and it will find all the associated shared libraries for the functionality we will be investigating in the next post.
    sudo polycubed
    You should get output that looks like this:
    [2023-01-27 17:13:46.791] [polycubed] [info] loading configuration from /etc/polycube/polycubed.conf
    [2023-01-27 17:13:46.793] [polycubed] [info] configuration parameters:
    [2023-01-27 17:13:46.793] [polycubed] [info]  loglevel: info
    [2023-01-27 17:13:46.793] [polycubed] [info]  daemon: false
    [2023-01-27 17:13:46.793] [polycubed] [info]  pidfile: /var/run/polycube.pid
    [2023-01-27 17:13:46.793] [polycubed] [info]  port: 9000
    [2023-01-27 17:13:46.793] [polycubed] [info]  addr: localhost
    [2023-01-27 17:13:46.793] [polycubed] [info]  logfile: /var/log/polycube/polycubed.log
    [2023-01-27 17:13:46.794] [polycubed] [info]  cubes-dump-file: /etc/polycube/cubes.yaml
    [2023-01-27 17:13:46.794] [polycubed] [info]  cubes-dump-clean-init: false
    [2023-01-27 17:13:46.794] [polycubed] [info]  cubes-dump-enable: false
    [2023-01-27 17:13:46.794] [polycubed] [info] polycubed starting...
    [2023-01-27 17:13:46.794] [polycubed] [info] version v0.9.0+ [git: (branch/commit): master/75da2773]
    prog tag mismatch 3e70ec38a5f6710 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    prog tag mismatch 1e2ac42799daebd8 1
    WARNING: cannot get prog tag, ignore saving source with program tag
    [2023-01-27 17:14:03.905] [polycubed] [info] rest server listening on '127.0.0.1:9000'
    [2023-01-27 17:14:03.906] [polycubed] [info] rest server starting ...
    [2023-01-27 17:14:04.010] [polycubed] [info] service bridge loaded using libpcn-bridge.so
    [2023-01-27 17:14:04.050] [polycubed] [info] service firewall loaded using libpcn-firewall.so
    [2023-01-27 17:14:04.149] [polycubed] [info] service nat loaded using libpcn-nat.so
    [2023-01-27 17:14:04.277] [polycubed] [info] service router loaded using libpcn-router.so
    [2023-01-27 17:14:04.340] [polycubed] [info] service simplebridge loaded using libpcn-simplebridge.so
    [2023-01-27 17:14:04.370] [polycubed] [info] service simpleforwarder loaded using libpcn-simpleforwarder.so
    [2023-01-27 17:14:04.413] [polycubed] [info] service iptables loaded using libpcn-iptables.so
    [2023-01-27 17:14:04.553] [polycubed] [info] service dynmon loaded using libpcn-dynmon.so
    [2023-01-27 17:14:04.554] [polycubed] [info] loading metrics from yang files
    

Polycube on Arm-based SBC: Follow-up #2 (WIP)

After emailing three of the committers to the original Polycube project, and receiving short replies from each that basically said "polycube was never tested on an arm-based system and will likely not work without significant effort" as well as "I believe the [polycube] project is no longer active", I wanted to follow through on the former statement and really see how much effort it would take to get a compiled binary of polycubed running on an Arm-based system.

With my previous Work In Progress, I appeared to successfully build and compile an executable, but when run, the program did nothing but consume 100% of one core of the Raspberry Pi's processor.

What does this mean? A hung process consuming 100% of one core feels to me like something stuck in a loop without an exit/break condition ever being met. I started by doing what any ham-handed developer would do: I started at main() in polycubed.cpp and began sprinkling std::cerr << "Code gets to this spot #1" << std::endl; into the code.

I narrowed this initial process hang down to the following:

try {

    if (!config.load(argc, argv)) {
        exit(EXIT_SUCCESS);
    }

    std::cerr << "Configs loaded..." << std::endl;

} catch (const std::exception &e) {

    // The problem of the error in loading the config file may be due to
    // polycubed executed as normal user
    if (getuid())
        logger->critical("polycubed should be executed with root privileges");

    logger->critical("Error loading config: {}", e.what());
    exit(EXIT_FAILURE);
}

Neither of the cerr statements that I added was ever reached. This narrowed the issue down to config.load(argc, argv).

Looking at config.cpp, and specifically at the load(int argc, char *argv[]) method, you will find the following:

bool Config::load(int argc, char *argv[]) {
  logger = spdlog::get("polycubed");

  int option_index = 0;
  char ch;

  // do a first pass looking for "configfile", "-h", "-v"
  while ((ch = getopt_long(argc, argv, "l:p:a:dhv", options, &option_index)) !=
         -1) {
    switch (ch) {
    case 'v':
      show_version();
      return false;
    case 'h':
      show_usage(argv[0]);
      return false;
    case 4:
      configfile = optarg;
      break;
    }
  }

  load_from_file(configfile);
  load_from_cli(argc, argv);
  check();

  if (cubes_dump_clean_init) {
    std::ofstream output(cubes_dump_file);
    if (output.is_open()) {
      output << "{}";
      output.close();
    }
  }

  return true;
}

Through some amateur debugging statements, I determined that while ((ch = getopt_long(...)) != -1) was never ceasing; the while loop never exited. Why would this statement work flawlessly on Intel amd64-based systems and not on Arm64 systems? I was stumped as to why it would matter. However, implementing the while loop as follows got me slightly further in the start-up process:

  while(true) {
    const auto ch = getopt_long(argc, argv, "l:p:a:dhv", options, &option_index);

    switch (ch) {
    case 'v':
      show_version();
      return false;
    case 'h':
      show_usage(argv[0]);
      return false;
    case 4:
      configfile = optarg;
      break;
    }

    if(-1 == ch) {
      break;
    }
  }

Maybe someone with more systems experience and C++ knowledge might have an idea as to why these two blocks of code behave differently when run on different architectures.
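One plausible explanation (my conjecture, not anything confirmed by the Polycube project): getopt_long() returns an int, but ch is declared as a char. On x86-64 Linux, plain char is signed, so the -1 survives the narrowing assignment and the comparison can succeed; on Arm64 Linux, plain char is unsigned, so assigning -1 stores 255 and ch != -1 is always true. Declaring ch as an int would likely also fix the original loops. A minimal demonstration that runs on both architectures:

cat > chartest.c <<'EOF'
#include <stdio.h>

int main(void) {
    char ch = -1; /* mimics storing getopt_long()'s int return value in a char */
    printf("ch != -1 is %s\n", (ch != -1) ? "true" : "false");
    return 0;
}
EOF
cc chartest.c -o chartest && ./chartest
# x86-64 (plain char is signed):   prints "ch != -1 is false", so the loop can exit
# arm64  (plain char is unsigned): prints "ch != -1 is true",  so the loop never exits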

Anyway, being able to get a little farther into the start-up process was a sign that I should keep looking into the issue. Using my bush-league debugging skills (e.g. liberal use of std::cerr), I determined that things were getting bound up in:

load_from_cli(argc, argv);

A look at that method reveals another, similar, while statement:

void Config::load_from_cli(int argc, char *argv[]) {
  int option_index = 0;
  char ch;
  optind = 0;
  while ((ch = getopt_long(argc, argv, "l:p:a:dhv", options, &option_index)) !=
         -1) {
    switch (ch) {
    case 'l':
      setLogLevel(optarg);
      break;
    case 'p':
      setServerPort(optarg);
      break;
    case 'd':
      setDaemon(optarg ? std::string(optarg) : "true");
      break;
    case 'a':
      setServerIP(optarg);
      break;
    case 'c':
      setCertPath(optarg);
      break;
    case 'k':
      setKeyPath(optarg);
      break;
    case '?':
      throw std::runtime_error("Missing argument, see stderr");
    case 1:
      setLogFile(optarg);
      break;
    case 2:
      setPidFile(optarg);
      break;
    case 5:
      setCACertPath(optarg);
      break;
    case 6:
      setCertWhitelistPath(optarg);
      break;
    case 7:
      setCertBlacklistPath(optarg);
      break;
    case 8:
      setCubesDumpFile(optarg);
      break;
    case 9:
      setCubesDumpCleanInit();
      break;
    case 10:
      //setCubesNoDump();
      setCubesDumpEnabled();
      break;
    }
  }
}

Again, I determined that while ((ch = getopt_long(...)) != -1) was never breaking out of the while loop. Changing it to:

  while(true) {

    const auto ch = getopt_long(argc, argv, "l:p:a:dhv", options, &option_index);

    ...

    if(-1 == ch) {
      break;
    }

  }

This did the trick, as it had with the previous while loop. I was able to execute polycubed, but ran into a new error:

[2023-01-26 15:25:19.131] [polycubed] [info] configuration parameters:
[2023-01-26 15:25:19.131] [polycubed] [info]  loglevel: info
[2023-01-26 15:25:19.131] [polycubed] [info]  daemon: false
[2023-01-26 15:25:19.131] [polycubed] [info]  pidfile: /var/run/polycube.pid
[2023-01-26 15:25:19.131] [polycubed] [info]  port: 9000
[2023-01-26 15:25:19.131] [polycubed] [info]  addr: localhost
[2023-01-26 15:25:19.131] [polycubed] [info]  logfile: /var/log/polycube/polycubed.log
[2023-01-26 15:25:19.131] [polycubed] [info]  cubes-dump-file: /etc/polycube/cubes.yaml
[2023-01-26 15:25:19.132] [polycubed] [info]  cubes-dump-clean-init: false
[2023-01-26 15:25:19.132] [polycubed] [info]  cubes-dump-enable: false
[2023-01-26 15:25:19.132] [polycubed] [info] polycubed starting...
[2023-01-26 15:25:19.132] [polycubed] [info] version v0.9.0
modprobe: FATAL: Module kheaders not found in directory /lib/modules/5.15.84-v8+
Unable to find kernel headers. Try rebuilding kernel with CONFIG_IKHEADERS=m (module)
chdir(/lib/modules/5.15.84-v8+/build): No such file or directory
[2023-01-26 15:25:19.180] [polycubed] [error] error creating patch panel: Unable to initialize BPF program
[2023-01-26 15:25:19.188] [polycubed] [critical] Error starting polycube: Error creating patch panel

Next, I grabbed the Linux kernel source from Raspberry Pi's GitHub and set up a symlink for polycubed to find kernel headers:

git clone --depth=1 https://github.com/raspberrypi/linux.git
mv linux linux-upstream-5.15.89-v8+
sudo ln -s /usr/src/linux-upstream-5.15.89-v8+ /lib/modules/5.15.89-v8+/build
sudo ~/polycube/build/src/polycubed/src/polycubed

This results in:

[2023-01-26 15:40:19.035] [polycubed] [info] configuration parameters:
[2023-01-26 15:40:19.035] [polycubed] [info]  loglevel: trace
[2023-01-26 15:40:19.035] [polycubed] [info]  daemon: false
[2023-01-26 15:40:19.036] [polycubed] [info]  pidfile: /var/run/polycube.pid
[2023-01-26 15:40:19.036] [polycubed] [info]  port: 9000
[2023-01-26 15:40:19.036] [polycubed] [info]  addr: localhost
[2023-01-26 15:40:19.036] [polycubed] [info]  logfile: /var/log/polycube/polycubed.log
[2023-01-26 15:40:19.036] [polycubed] [info]  cubes-dump-file: /etc/polycube/cubes.yaml
[2023-01-26 15:40:19.036] [polycubed] [info]  cubes-dump-clean-init: false
[2023-01-26 15:40:19.036] [polycubed] [info]  cubes-dump-enable: false
[2023-01-26 15:40:19.036] [polycubed] [info] polycubed starting...
[2023-01-26 15:40:19.036] [polycubed] [info] version v0.9.0
bpf: Failed to load program: Invalid argument
jump out of range from insn 9 to 37
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0

[2023-01-26 15:40:46.751] [polycubed] [error] cannot load ctrl_rx: Failed to load controller_module_rx: -1
[2023-01-26 15:40:46.800] [polycubed] [critical] Error starting polycube: cannot load controller_module_rx

It is entirely possible that I am including the wrong version of bcc:

BPF Compiler Collection (BCC)

BCC is a toolkit for creating efficient kernel tracing and manipulation programs, and includes several useful tools and examples. It makes use of extended BPF (Berkeley Packet Filters), formally known as eBPF, a new feature that was first added to Linux 3.15. Much of what BCC uses requires Linux 4.1 and above.


I decided to step back and grab a clean copy of polycube from GitHub.

pi@raspberrypi:~/polycube $ git submodule update --init --recursive
pi@raspberrypi:~/polycube/build $ cmake ..  -DENABLE_PCN_IPTABLES=ON \
                                            -DENABLE_SERVICE_BRIDGE=ON \    
                                            -DENABLE_SERVICE_DDOSMITIGATOR=OFF \     
                                            -DENABLE_SERVICE_FIREWALL=ON    \
                                            -DENABLE_SERVICE_HELLOWORLD=OFF   \
                                            -DENABLE_SERVICE_IPTABLES=ON    \
                                            -DENABLE_SERVICE_K8SFILTER=OFF    \
                                            -DENABLE_SERVICE_K8SWITCH=OFF    \
                                            -DENABLE_SERVICE_LBDSR=OFF    \
                                            -DENABLE_SERVICE_LBRP=OFF  \
                                            -DENABLE_SERVICE_NAT=ON   \
                                            -DENABLE_SERVICE_PBFORWARDER=ON   \
                                            -DENABLE_SERVICE_ROUTER=ON    \
                                            -DENABLE_SERVICE_SIMPLEBRIDGE=ON    \
                                            -DENABLE_SERVICE_SIMPLEFORWARDER=ON    \
                                            -DENABLE_SERVICE_TRANSPARENTHELLOWORLD=OFF   \
                                            -DENABLE_SERVICE_SYNFLOOD=OFF   \
                                            -DENABLE_SERVICE_PACKETCAPTURE=OFF     -DENABLE_SERVICE_K8SDISPATCHER=OFF
-- The C compiler identification is GNU 10.2.1
-- The CXX compiler identification is GNU 10.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Version is v0.9.0+ [git: (branch/commit): master/a143e3c0-dirty]
-- Latest recognized Git tag is v0.9.0
-- Git HEAD is a143e3c0325400dad7b9ff3406848f5a953ed3d1
-- Revision is 0.9.0-a143e3c0
-- Performing Test HAVE_NO_PIE_FLAG
-- Performing Test HAVE_NO_PIE_FLAG - Success
-- Performing Test HAVE_REALLOCARRAY_SUPPORT
-- Performing Test HAVE_REALLOCARRAY_SUPPORT - Success
-- Found LLVM: /usr/lib/llvm-9/include 9.0.1 (Use LLVM_ROOT envronment variable for another version of LLVM)
-- Found BISON: /usr/bin/bison (found version "3.7.5")
-- Found FLEX: /usr/bin/flex (found version "2.6.4")
-- Found LibElf: /usr/lib/aarch64-linux-gnu/libelf.so  
-- Performing Test ELF_GETSHDRSTRNDX
-- Performing Test ELF_GETSHDRSTRNDX - Success
-- Could NOT find LibDebuginfod (missing: LIBDEBUGINFOD_LIBRARIES LIBDEBUGINFOD_INCLUDE_DIRS)
-- Using static-libstdc++
-- Could NOT find LuaJIT (missing: LUAJIT_LIBRARIES LUAJIT_INCLUDE_DIR)
-- jsoncons v0.142.0
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Performing Test COMPILER_HAS_HIDDEN_VISIBILITY
-- Performing Test COMPILER_HAS_HIDDEN_VISIBILITY - Success
-- Performing Test COMPILER_HAS_HIDDEN_INLINE_VISIBILITY
-- Performing Test COMPILER_HAS_HIDDEN_INLINE_VISIBILITY - Success
-- Performing Test COMPILER_HAS_DEPRECATED_ATTR
-- Performing Test COMPILER_HAS_DEPRECATED_ATTR - Success
-- The following OPTIONAL packages have been found:

 * BISON
 * FLEX
 * Threads

-- The following REQUIRED packages have been found:

 * LibYANG
 * LLVM
 * LibElf

-- The following OPTIONAL packages have not been found:

 * LibDebuginfod
 * LuaJIT

-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.2")
-- Found OpenSSL: /usr/lib/aarch64-linux-gnu/libcrypto.so (found version "1.1.1n")  
-- Checking for module 'libnl-3.0'
--   Found libnl-3.0, version 3.4.0
-- Checking for module 'libnl-genl-3.0'
--   Found libnl-genl-3.0, version 3.4.0
-- Checking for module 'libnl-route-3.0'
--   Found libnl-route-3.0, version 3.4.0
-- Checking for module 'libtins'
--   Found libtins, version 3.5
-- Found nlohmann_json: /home/pi/polycube/cmake/nlohmann_json/Findnlohmann_json.cmake (Required is at least version "3.5.0")
-- Checking for module 'systemd'
--   Found systemd, version 247
-- systemd services install dir: /lib/systemd/system
-- Configuring done
-- Generating done
-- Build files have been written to: /home/pi/polycube/build
cd ../src/libs/prometheus-cpp
mkdir build; cd build
cmake .. -DBUILD_SHARED_LIBS=ON
make
sudo make install

I made changes to config.cpp to deal with the issue with getopt_long and the while loop. The changes are in my polycube clone.

I also did not have to add any of the #include lines that I had added during my first attempt on a SOQuartz module.

sudo src/polycubed/src/polycubed
[2023-01-26 20:58:06.453] [polycubed] [info] loading configuration from /etc/polycube/polycubed.conf
[2023-01-26 20:58:06.456] [polycubed] [info] configuration parameters:
[2023-01-26 20:58:06.456] [polycubed] [info]  loglevel: info
[2023-01-26 20:58:06.456] [polycubed] [info]  daemon: false
[2023-01-26 20:58:06.456] [polycubed] [info]  pidfile: /var/run/polycube.pid
[2023-01-26 20:58:06.456] [polycubed] [info]  port: 9000
[2023-01-26 20:58:06.456] [polycubed] [info]  addr: localhost
[2023-01-26 20:58:06.456] [polycubed] [info]  logfile: /var/log/polycube/polycubed.log
[2023-01-26 20:58:06.456] [polycubed] [info]  cubes-dump-file: /etc/polycube/cubes.yaml
[2023-01-26 20:58:06.456] [polycubed] [info]  cubes-dump-clean-init: false
[2023-01-26 20:58:06.457] [polycubed] [info]  cubes-dump-enable: false
[2023-01-26 20:58:06.457] [polycubed] [info] polycubed starting...
[2023-01-26 20:58:06.457] [polycubed] [info] version v0.9.0+ [git: (branch/commit): master/a143e3c0-dirty]
prog tag mismatch 3e70ec38a5f6710 1
WARNING: cannot get prog tag, ignore saving source with program tag
prog tag mismatch 1e2ac42799daebd8 1
WARNING: cannot get prog tag, ignore saving source with program tag
[2023-01-26 20:58:23.636] [polycubed] [info] rest server listening on '127.0.0.1:9000'
[2023-01-26 20:58:23.637] [polycubed] [info] rest server starting ...
[2023-01-26 20:58:23.740] [polycubed] [info] service bridge loaded using libpcn-bridge.so
[2023-01-26 20:58:23.779] [polycubed] [info] service firewall loaded using libpcn-firewall.so
[2023-01-26 20:58:23.882] [polycubed] [info] service nat loaded using libpcn-nat.so
[2023-01-26 20:58:24.012] [polycubed] [info] service pbforwarder loaded using libpcn-pbforwarder.so
[2023-01-26 20:58:24.145] [polycubed] [info] service router loaded using libpcn-router.so
[2023-01-26 20:58:24.210] [polycubed] [info] service simplebridge loaded using libpcn-simplebridge.so
[2023-01-26 20:58:24.239] [polycubed] [info] service simpleforwarder loaded using libpcn-simpleforwarder.so
[2023-01-26 20:58:24.282] [polycubed] [info] service iptables loaded using libpcn-iptables.so
[2023-01-26 20:58:24.412] [polycubed] [info] service dynmon loaded using libpcn-dynmon.so
[2023-01-26 20:58:24.412] [polycubed] [info] loading metrics from yang files

The daemon successfully runs. I do, however, need to capture the work I did to put the Linux kernel source headers where the daemon can find them to compile the eBPF code into bytecode.

  1. Clone the Linux repository from Raspberry Pi, https://github.com/raspberrypi/linux, into /usr/src on the Raspberry Pi
  2. In /lib/modules/5.15.84-v8+/, make a symlink named build pointing to /usr/src/linux-upstream-5.15.89-v8+ (see the command sketch below)
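
In shell terms, those two steps look roughly like this; the version strings are taken from my notes above and will vary with the running kernel (check uname -r):

cd /usr/src
sudo git clone --depth=1 https://github.com/raspberrypi/linux
# point the module build directory at the kernel source tree
cd /lib/modules/5.15.84-v8+
sudo ln -s /usr/src/linux-upstream-5.15.89-v8+ build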

That will be it for the Work In Progress posts on polycube; I could attempt to recreate the steps taken, but I feel my notes across three posts should be enough. It is also not as if polycube deployments are in hot demand; there is a strong likelihood that I am the first and only person who has run it on Arm-based hardware. The next post on polycube will be about actually using it, and in particular the drop-in replacement for iptables; that is what I am most interested in.

Windows 3.1 on Raspberry Pi CM4

I got my start with computers in the late 1980s on an Apple IIe. By 1990, my father was bringing home a laptop from work. When he was not working, I would use Microsoft QBasic (here is a JavaScript implementation of QBasic). Three years later, we had a Gateway 2000 desktop computer. It sported a 50MHz Intel 486 with 24MB of RAM and about 512MB of disk space. Also in 1993, I was able to get a real copy of Visual Basic 3 from a friend who had gone off to college; he bought it for me from the campus bookstore.

Fast forward thirty years, and here, in 2023, I'm all about single board computers, and in particular, Arm-based SBCs.

Can one run software that was written thirty years ago that was intended to run on a completely different architecture? The answer is yes, and it is damn simple, too.

sudo apt install dosbox

Download Windows 3.11 from archive.org.

Unzip the archive

Run dosbox

dosbox

Mount the Windows 3.11 directory as drive c:

mount c /home/pi/win3.11
c:
setup

Follow the instructions on the screen.

Installing Visual Basic 3.0 is also simple. Download an ISO from archive.org.

Mount the ISO to a directory in your home directory on the Raspberry Pi, copy the contents into the Windows 3.11 directory tree, and run the installer from within Windows 3.11.

mkdir cdrom
sudo mount -o loop VBPRO30.ISO cdrom
mkdir win3.11/cdrom; cp -R cdrom/* win3.11/cdrom/; chmod -R 755 win3.11/cdrom

I found I needed to restart dosbox in order for the new directory to show up. Repeat mounting /home/pi/win3.11 in dosbox.

mount c /home/pi/win3.11
c:
cd Windows
win

Navigate with File Manager to the C: drive, open the cdrom folder, go to DISK1, and execute SETUP.EXE.

As a helpful note, to release the mouse from dosbox, simply press CTRL+F10.
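
To avoid retyping the mount commands on every launch, they can go in the [autoexec] section of the DosBox configuration file. On my setup that file lives under ~/.dosbox/ with a version-suffixed name such as dosbox-0.74-3.conf; the exact name depends on the DosBox version you installed:

[autoexec]
mount c /home/pi/win3.11
c: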

You might be asking: what's the point of this exercise? Simply because it can be done.

Polycube on Arm-based SBC: Follow-up #1 (WIP)

My first attempt at getting Polycube compiling and running on an Arm-based single board computer ended with the compiling succeeding and the running failing. For that test, I used a Pine64 SOQuartz system-on-module, which is pin-compatible with a Raspberry Pi CM4. Using Plebian Linux, there were a number of hoops to jump through, including having to add extra #include header lines, compile certain prerequisite libraries, and pull in Debian packages from a few different versions of Debian.

Compiling eventually succeeded, producing executables. Enter a Raspberry Pi CM4 running 64-bit Raspberry Pi OS.

The Process

The build process is more straightforward than on Plebian Linux. There is, however, still the need to use packages from several different versions of Debian, because Polycube depends on older packages. These dependencies on old software, and a fairly brittle build pipeline, really do hamper the use and adoption of Polycube.

After installing what appeared to be all of the prerequisite dependencies, the build ultimately failed on a struct that had not been declared.

I ended up having to get a particular version of libyang; specifically v1.0.255. That got me past that error.

Following the rest of my instructions is more or less what I did to successfully compile polycubed. And this is where things ended, just the same as my attempts with the SOQuartz module: running polycubed just hangs, doing nothing more than consuming 100% of one core.

Polycube on Arm-based SBCs: Replacement for IPTables - WIP

The Basics - Work in Progress (WIP)

SPOILER ALERT: I have not been able to successfully execute the polycubed daemon. This effort is still a WIP.

Originally called extended Berkeley Packet Filter, it has since come to be referred to simply as eBPF. Vanilla BPF has been around for decades and has been used as a packet filter, but eBPF is a more recent creation, and it has distinct advantages over existing Linux networking methods when used at industrial or commercial scale.

Historically, the operating system has always been an ideal place to implement observability, security, and networking functionality due to the kernel’s privileged ability to oversee and control the entire system. At the same time, an operating system kernel is hard to evolve due to its central role and high requirement towards stability and security. The rate of innovation at the operating system level has thus traditionally been lower compared to functionality implemented outside of the operating system.

eBPF changes this formula fundamentally. By allowing sandboxed programs to run within the operating system, application developers can run eBPF programs to add additional capabilities to the operating system at runtime. The operating system then guarantees safety and execution efficiency as if natively compiled with the aid of a Just-In-Time (JIT) compiler and verification engine. This has led to a wave of eBPF-based projects covering a wide array of use cases, including next-generation networking, observability, and security functionality.

Why eBPF? Because it has been gaining use within the world of containerized workloads. Within the Linux world, this usually means Docker (I wrote a post on running Docker on a Pine64 Quartz Model A). Observability is huge within the world of commercial-scale container deployments: containers allow millions or tens of millions of virtual workloads to run across thousands or millions of physical systems, and you need insight into what your systems are doing and how they are performing. For tiny computers, eBPF is overkill; the level of complexity it introduces and requires far outweighs the benefits at such a small scale. But using it within the confines of tiny computers could be a good introduction to this very powerful technology.

We will be attempting to use Polycube for our eBPF needs.

Polycube is an open source software framework that provides fast and lightweight network functions such as bridges, routers, firewalls, and others.

Polycube services, called cubes, can be composed to build arbitrary service chains and provide custom network connectivity to namespaces, containers, virtual machines, and physical hosts.

For more information, jump to the project Documentation.
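
To give a flavor of composing cubes, the project documentation shows services being instantiated and wired to interfaces with the polycubectl CLI, roughly like the following. I have not been able to verify this on my hardware yet, and br1, toveth1, and veth1 are placeholder names:

# create a simplebridge cube named br1
polycubectl simplebridge add br1
# add a port to the cube and peer it with an existing interface
polycubectl br1 ports add toveth1 peer=veth1
# inspect the cube's configuration
polycubectl br1 show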

There are two ways in which to run Polycube: 1) via Docker; 2) "baremetal". Docker would be simplest if we were using stable builds of Debian or Ubuntu, but we are not.

For the purposes of this article, I am using a Pine64 SOQuartz module. I have tried both Plebian Linux as well as DietPi.

I will walk through the "baremetal" route because there are no Docker images available for the arm64 architecture.

Installing Polycube - baremetal

1. Install polycube and dependencies:

# add stretch packages to apt
# /etc/apt/sources.list
deb https://deb.debian.org/debian/ stretch-backports main contrib non-free
deb https://deb.debian.org/debian/ stretch main contrib non-free
deb https://deb.debian.org/debian/ stretch-backports-sloppy main contrib non-free
deb https://deb.debian.org/debian/ bookworm main contrib non-free
# Update packages cache
sudo apt update
# Install Polycube dependencies
sudo apt-get install golang-go git build-essential cmake bison flex libelf-dev libpcap-dev \
        libnl-route-3-dev libnl-genl-3-dev uuid-dev pkg-config autoconf libtool m4 \
        automake libssl-dev kmod jq bash-completion gnupg2 libpcre3-dev clang-5.0 \
        clang-format-5.0 clang-tidy-5.0 libclang-5.0-dev libfl-dev \
        iperf luajit arping netperf
# Verify golang is at v1.16 or newer
go version
go version go1.19.3 linux/arm64
# Install pistache - needed for the RESTful control daemon
git clone https://github.com/oktal/pistache.git
cd pistache
# known working version of pistache
git checkout 117db02eda9d63935193ad98be813987f6c32b33
git submodule update --init
mkdir -p build && cd build
cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release -DPISTACHE_USE_SSL=ON ..
make -j $(getconf _NPROCESSORS_ONLN)
sudo make install
# Install libtins
cd
git clone --branch v3.5 https://github.com/mfontanini/libtins.git
cd libtins
mkdir -p build && cd build
cmake -DLIBTINS_ENABLE_CXX11=1 \
 -DLIBTINS_BUILD_EXAMPLES=OFF -DLIBTINS_BUILD_TESTS=OFF \
 -DLIBTINS_ENABLE_DOT11=OFF -DLIBTINS_ENABLE_PCAP=OFF \
 -DLIBTINS_ENABLE_WPA2=OFF -DLIBTINS_ENABLE_WPA2_CALLBACKS=OFF ..
make -j $(getconf _NPROCESSORS_ONLN)
sudo make install
sudo ldconfig
# Install libyang version 1
cd
git clone https://github.com/CESNET/libyang.git
cd libyang
git checkout libyang1
mkdir build; cd build
cmake ..
make
sudo make install

2. Clone the polycube repository:

Note: you will need at least 3GB of disk space to be on the safe side for intermediate build artifacts

cd
git clone https://github.com/polycube-network/polycube.git
cd polycube
git submodule update --init --recursive
# Configure the build for prometheus-cpp
cd ~/polycube/src/libs/prometheus-cpp
mkdir build && cd build
cmake .. -DBUILD_SHARED_LIBS=ON
We need to make one modification to one source file
 # edit the ../core/src/histogram.cc
 vi ../core/src/histogram.cc

Below #include <iterator>, add the following:

#include <limits>

Save ../core/src/histogram.cc

Build prometheus-cpp

make
sudo make install
We need to make a modification to another source file
cd
cd polycube
vi src/polycubed/src/server/Types/lexical_cast.cpp

Add the following directly below #include <string>

#include <limits>

Save src/polycubed/src/server/Types/lexical_cast.cpp

We need to make another modification to a source file
cd
cd polycube
vi src/polycubed/src/server/Resources/Body/ListKey.cpp

Add the following directly below #include <vector>

#include <stdexcept>

Save src/polycubed/src/server/Resources/Body/ListKey.cpp

We need to make another modification to a source file
cd
cd polycube
vi src/services/pcn-dynmon/src/MapExtractor.h

Add the following directly below #pragma once

#include <cstdlib>

Save src/services/pcn-dynmon/src/MapExtractor.h

Configure and build polycube; we will also be building the drop-in equivalent of iptables, polycube iptables:

cd
cd polycube
mkdir -p build && cd build
cmake .. -DENABLE_PCN_IPTABLES=ON
make -j 2
go env -w GO111MODULE=off
sudo make install

Attempt to start polycubed

sudo polycubed

And that is where it gets hung up. The daemon never gets to the point of listening on the default port, tcp/9000. There are also no log messages, even with the loglevel turned up to debug.

Pine64 ROCKPro64 SATA Software RAID5

I love experimenting with all sorts of single board computers (SBCs) and systems on modules (SoMs), like the Raspberry Pi CM4 or Pine64 SOQuartz. This extends even to building a network-attached storage (NAS) device. One of the requirements is the ability to attach a bunch of disks to a tiny computer. The easiest way to accomplish this is with a PCIe SATA controller, which requires an exposed PCIe lane. Luckily, there are a number of systems on modules with PCIe support, and there are also Pine64 single board computers with exposed PCIe lanes: the Quartz64 model A and the ROCKPro64. The former does not have the performance capabilities of the latter, so we will be using the ROCKPro64.

My requirements for a NAS are bare-bones. I want network file system support and, secondarily, Windows SMB support via the open source project Samba. Samba is of secondary importance because the primary use case for this particular NAS is providing additional disk space to other SBCs that are running some flavor of Linux or BSD (like NetBSD).

When the Turing Pi 2 was being promoted on Kickstarter, I pledged to the project and then purchased a 2U shallow-depth rack ITX server case. The project has been moving along but is taking longer than I had anticipated. In the meantime, I decided to re-purpose the server case and use it for a simple NAS.

I purchased four Seagate IronWolf 10TB drives. These are spinning-platter drives, not fancy NVMe drives. NVMe would be cost prohibitive and would ultimately not perform any better here; the bottleneck would be the ROCKPro64 itself.

One of the four drives turned out to be a dud; there are only three in the above picture. The original goal was to have 30TB of usable RAID5 storage across four drives, with 10TB of capacity going to parity. But, because I did not want to spend much more on this project, I opted to return the broken drive and settle for 20TB of storage with 10TB of parity.

The setup is fairly simple. The 2U case, three 10TB drives, a ROCKPro64 4GB single board computer, and a 450W ATX power supply.

Here's a pro-tip on using a power supply without a motherboard: while holding the power connector with the latching tab towards you, use a paper clip, or in my case, a piece of MIG welding wire, to short out the third and fourth connectors. This probably will void the warranty if something happens to the power supply.

Here is a bit of information on the tech specs of the ROCKPro64.

The drive setup is straightforward: three drives running in a RAID5 configuration, producing a total of 20TB of usable storage. Setting up software RAID under Linux is simple; I used this as a foundation for setting up software RAID. We are using software RAID because hardware RAID controllers are expensive and have limited support with single board computers. The one controller that appears to work costs about $600. It is not that I would be against spending that much on a shiny new controller; it is that I do not feel there would be a noticeable benefit over a pure software solution. I will go into the performance characteristics of the software RAID configuration later in this article.

How To Create RAID5 Arrays with mdadm

RAID5 requires at least three drives. As previously mentioned, this gives you n - 1 drives' worth of storage, with one drive's worth of capacity used for parity. The parity is not stored on a single drive; it is distributed across all of the drives, but in total it is equal to the size of one drive. The assumption is that all drives are of equal size.
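
To make the arithmetic concrete for this build of three 10TB drives:

raw capacity    = 3 x 10TB = 30TB
parity overhead = 1 x 10TB = 10TB
usable capacity = (3 - 1) x 10TB = 20TB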

Get a pretty list of all the available disks:

$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
NAME          SIZE FSTYPE  TYPE  MOUNTPOINT
sda           9.1T         disk  
sdb           9.1T         disk  
sdc           9.1T         disk  
mtdblock0      16M         disk  
mmcblk2      58.2G         disk  
└─mmcblk2p1  57.6G ext4    part  /
mmcblk2boot0    4M         disk  
mmcblk2boot1    4M         disk  

You will see we have three "10TB" drives; lsblk reports them as 9.1T because it measures in binary (TiB) units.

To create a RAID5 array with the three 9.1T disks, pass them into the mdadm --create command. We have to specify the device name we wish to create, the RAID level, and the number of devices. We will be naming the device /dev/md0 and including the disks that will build the array:

sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

This will start to configure the array. mdadm uses the recovery process to build it. This process can and will take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sda[0] sdc[3] sdb[1]
      19532609536 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/73 pages [0KB], 65536KB chunk

unused devices: <none>

Building this 20TB array took over 2,000 minutes, or about a day and a half. Once the build is complete, your /proc/mdstat will look similar to the above; [3/3] [UUU] indicates that all three member devices are present and up.

Make a file system on our new array:

$ sudo mkfs.ext4 -F /dev/md0

Make a directory:

$ sudo mkdir -p /mnt/md0

Mount the new array device

$ sudo mount /dev/md0 /mnt/md0

Check to see if the new array is available:

$ df -h -x devtmpfs -x tmpfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/mmcblk2p1   57G  8.7G   48G  16% /
/dev/md0         19T   80K   18T   1% /mnt/md0

Now, let's save our array's configuration:

$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

Update the initramfs so we can use our new array on boot:

$ sudo update-initramfs -u

Add our device to fstab (nofail lets the system boot even if the array is unavailable):

$ echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab

From some of the other documentation, there is a note that dumping the configuration to /etc/mdadm/mdadm.conf before the array is completely built could result in the number of spare devices not being set correctly. This is what mdadm.conf looks like:

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This configuration was auto-generated on Thu, 22 Dec 2022 16:46:51 -0500 by mkconf
ARRAY /dev/md0 metadata=1.2 spares=1 name=k8s-controlplane-01:0 UUID=62e3b0e3:3cccb921:0fc8e646:ac33bd0f

Note: spares=1 is correct in the case of my setup.

By this point, you will want to have waited for the array to be successfully built; my 20TB array took about a day and a half to complete.

Performance

We now have a shiny, new 20TB of available disk space, but what are the performance characteristics of a 20TB software RAID5 array?

We will be using iozone. Iozone has been around for decades; I first used it while in college, when having a rack full of servers was all the rage -- before the creation of utility-scale cloud computing, as well as tiny computers.

Download iozone from iozone.org. Extract and compile:

$ cd iozone3_494/src/current
$ make linux

You will end up with the executable iozone in the src/current directory.

Run iozone:

$ sudo ./iozone -a -b output.xls /mnt/md0

This will output a lot of numbers and take a while to complete. When it is done, you will have an Excel spreadsheet with the results in output.xls.

At this point, I am going to stop giving a step-by-step account of what I executed and ran. Instead, I am going to show some visualizations and analyses of our results. My steps were roughly:

  1. Break out each block of metrics into its own csv file; for example, the first block found in output.xls would be saved in its own csv file; let's call it writer.csv; exclude the header Writer Report from the csv. Repeat this step for all blocks of reports.
Writer Report (values in KB/s; rows are file size in KB, columns are record/transfer size in KB)
        4   8   16  32  64  128 256 512 1024    2048    4096    8192    16384
64  703068  407457  653436  528618  566551                              
128 504196  306992  433727  962469  498114  757430                          
256 805021  475990  850282  594276  571198  582035  733527                      
512 660600  573916  439445  1319250 549959  645703  926116  591299                  
1024    1102176 1053512 610617  704230  902151  1326505 1011817 1161176 919928              
2048    608964  1398145 1329751 822175  1140942 841094  1332432 1308682 1082427 1311879         
4096    1066886 1304093 1168634 946820  1467135 881253  1360802 931521  1047309 1018923 1047054     
8192    955344  1295186 1329052 1354035 1019915 1192806 1373082 1197294 1053501 866339  1116235 1368760
16384   1471798 1219970 2029709 1957942 2031269 1533341 1570127 1494752 1873291 1370365 1761324 1647601 1761143
32768   0   0   0   0   1948388 1734381 1389173 1315295 1848047 1916465 1944804 1646551 1632605
65536   0   0   0   0   1938246 1933611 1638071 1910004 1885763 1876212 1844374 1721776 1578535
131072  0   0   0   0   1969167 1962833 1921089 1757021 1644607 1780142 1869709 1566404 1356993
262144  0   0   0   0   2025197 2037129 2036955 1747487 1961757 1954913 1934085 1718841 1596327
524288  0   0   0   0   2041397 2080623 2087150 2049656 2007421 2005253 1930617 1876761 1813078
  2. I used a combination of Excel (LibreOffice Calc would work, too) and a Jupyter Notebook; see the sketch below. Excel was used to pull out each report's block and save each to a CSV file; the Notebook contains Python code for reading in the CSVs and using matplotlib to produce visualizations.
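
Here is a minimal sketch of the kind of notebook code involved. It assumes writer.csv from the step above was saved with the record sizes as the header row and the file sizes as the first column, and the plotting choices are illustrative rather than exactly what I used:

import pandas as pd
import matplotlib.pyplot as plt

# Rows are file sizes (KB), columns are record sizes (KB),
# values are throughput in KB/s.
df = pd.read_csv("writer.csv", index_col=0)

fig, ax = plt.subplots(figsize=(8, 6))
im = ax.imshow(df.values, aspect="auto", origin="lower")
ax.set_xticks(range(len(df.columns)), labels=df.columns)
ax.set_yticks(range(len(df.index)), labels=df.index)
ax.set_xlabel("Record size (KB)")
ax.set_ylabel("File size (KB)")
ax.set_title("iozone Writer Report (KB/s)")
fig.colorbar(im, ax=ax)
plt.show()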

Select Visualizations

Reads are actually relatively consistent across the file sizes and transfer sizes.