One of the most rewarding aspects of building open-source tools is when users push them beyond your test cases. This week, a helpful early adopter did exactly that with the ballistics-engine CLI, and the results were both humbling and validating.

The Setup

This particular user had built an impressive workflow: CSV files defining gun profiles and location data, shell scripts to iterate through combinations, and a pipeline that generates beautifully formatted drop charts sized for e-ink readers (old Nooks, specifically - brilliant for outdoor use with their daylight-readable screens and long battery life).
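
Their scripts aren't shown here, but the shape of the pipeline is easy to picture. A minimal sketch of that kind of combination loop, assuming hypothetical guns.csv and locations.csv files with a name in the first column and no header row:

#!/bin/sh
# Sketch only: guns.csv and locations.csv are stand-ins for the user's
# actual profile files; layout and column order are assumptions.
while IFS=, read -r gun gun_rest; do
  while IFS=, read -r loc loc_rest; do
    # One run per gun/location combination; in a real script the remaining
    # flags would be filled in from the CSV columns.
    ballistics trajectory --auto-zero 100 --max-range 1530 -o csv --full \
      > "drop_${gun}_${loc}.csv"
  done < locations.csv
done < guns.csv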

Their setup included multiple rifles and locations, with real recorded dope (shooter's slang for verified bullet drop data at specific distances) from actual range sessions at 300, 665, 765, 847, 1004, and 1095 yards. This is exactly the kind of real-world validation that lab testing can't replicate.

Validating the Physics Solver

The user had been comparing the ballistics-engine output against JBM Ballistics, Kestrel AB, and Hornady 4DOF - industry standards with years of refinement. Like most experienced long-range shooters, they had "trued" their ballistic coefficient:

BC: 0.270  BC_ADJ: 0.85    TRUED_BC: 0.2295

For those unfamiliar with practical long-range shooting, "truing" is the process of adjusting your ballistic coefficient (BC) to match real-world observations. Published BCs are measured under specific conditions, and real-world performance varies with barrel harmonics, actual muzzle velocity, atmospheric conditions, and a dozen other factors. Most shooters apply a correction factor - typically 0.85 to 0.95 of the published BC - to get their solver to match their actual dope.
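
Concretely, the trued value above is just the published BC scaled by that factor - something you can compute inline when building a command:

# Trued BC = published BC x correction factor (numbers from the profile above)
TRUED_BC=$(awk 'BEGIN { printf "%.4f", 0.270 * 0.85 }')
echo "$TRUED_BC"   # 0.2295 - the value you'd pass to --bc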

With this trued BC, the physics solver was producing excellent results. Their verdict:

"So far, I've found this solver to be the most accurate (dealing with environmentals) based on what I've actually shot this past weekend and prior."

That's gratifying validation for the core physics engine. But I had something experimental I wanted them to try.

Testing a New Feature: The Online Solver

I had recently added an --online flag to the CLI - a largely untested feature that sends trajectory calculations to a cloud API, where machine learning models apply corrections to the physics-based results. The ML models were trained on Doppler radar measurements and Doppler-derived drag data, and in theory they should correct for the gap between published BCs and observed performance.

But theory and practice are different things. I asked the user if they'd be willing to kick the tires on this new feature with their real-world data.

They agreed, and that's when things got interesting - in both good and bad ways.

The Surprising Result

The user's first tests with --online mode revealed something unexpected. Remember that trued BC? The 0.85 correction factor they'd been applying to match their real-world dope?

They didn't need it anymore.

With the online solver, the raw published BC of 0.27 produced accurate results without any manual adjustment. The ML correction was doing automatically what truing does by hand - accounting for the gap between laboratory-measured BCs and real-world performance.

This was the first real-world validation that the ML enhancement actually works as intended. Not in a lab, not against synthetic test data, but against actual recorded dope from someone who shoots at 300 to 1100 yards and knows exactly where their bullets land.

But There Was a Problem

While the accuracy was spot-on, something else was wrong. When the user started batch-processing their full suite of gun profiles and locations, trajectories were being truncated at varying distances depending on the rifle configuration.

The root cause? When running with --online mode, the --ignore-ground-impact flag wasn't being passed to the Flask API backend. The API has a default ground threshold of -100 meters, so when trajectories dropped below that level, they were terminated early. Steeper trajectories (lighter bullets, lower velocities) hit the threshold sooner, which is why different gun profiles showed truncation at different distances.

Here's an example of the command that was affected:

ballistics trajectory --ignore-ground-impact --mass 140 --diameter 0.264 \
  --wind-speed 5 --wind-direction 90 --humidity 45 --altitude 2506 \
  --sight-height 3.14 --twist-rate 8 --sample-trajectory --sample-interval 9.1440 \
  --latitude 36.6 --auto-zero 100 --max-range 1530 --velocity 2875 \
  --drag-model g7 --bc 0.27 --pressure 27.29 --temperature 31.99 \
  -o csv --full --online

This is exactly why you need real users testing new features. The fix was straightforward: add the ground_threshold parameter to the API client so --ignore-ground-impact is properly respected in online mode.
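
A quick way to smoke-test your own batch runs for this class of bug is to compare row counts between offline and online output. A sketch, reusing a subset of the flags from the command above:

# Offline and online runs should sample the same number of points once
# --ignore-ground-impact is honored in both modes (fixed in v0.13.24).
ARGS='--ignore-ground-impact --mass 140 --diameter 0.264 --velocity 2875
  --drag-model g7 --bc 0.27 --auto-zero 100 --max-range 1530
  --sample-trajectory --sample-interval 9.1440 -o csv --full'
ballistics trajectory $ARGS          | wc -l   # offline row count
ballistics trajectory $ARGS --online | wc -l   # should match, not come up short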

The Cascade of Fixes

Once you start looking, you find more. The investigation uncovered several related issues with the online mode:

v0.13.24: Fixed the ground threshold parameter for online mode

v0.13.25-26: Added weather control parameters for online mode:

  • --enable-weather-zones
  • --enable-3d-weather
  • --wind-shear-model
  • --longitude
  • --shot-direction

v0.13.27-28: Fixed location CSV overrides for humidity and wind direction that were being silently ignored

v0.13.29: Expanded test coverage from 156 to 192 tests to catch similar issues earlier
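
If you want to exercise the new weather controls, here's a hedged example - the coordinate and direction values are illustrative, and I'm assuming the --enable-* options are simple switches:

ballistics trajectory --online --enable-weather-zones --enable-3d-weather \
  --latitude 36.6 --longitude -105.5 --shot-direction 270 \
  --mass 140 --diameter 0.264 --velocity 2875 --drag-model g7 --bc 0.27 \
  --auto-zero 100 --max-range 1530 -o csv --full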

How the Online Solver Works

The --online flag sends your trajectory parameters to the ballistics API. Behind the scenes:

  1. The same Rust-based physics solver runs (via PyO3 bindings to Python)
  2. ML models analyze the trajectory and determine a correction factor
  3. The correction is applied and results are returned
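
From the client's side, that round trip is a single HTTP call. A rough sketch of the request - the endpoint path and JSON field names are illustrative guesses mapped from the CLI flags, not the documented schema (see the API docs linked at the end):

# Hypothetical request shape; only ground_threshold is a parameter name
# mentioned above - the rest is shorthand for the real payload.
curl -s https://api.ballistics.7.62x51mm.sh/v1/trajectory \
  -H 'Content-Type: application/json' \
  -d '{"bc": 0.27, "drag_model": "g7", "velocity": 2875, "mass": 140,
       "diameter": 0.264, "max_range": 1530, "ground_threshold": -1000}'
# The response carries the physics trajectory with the ML correction applied.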

The ML models were trained on:

  • Doppler radar measurements of actual bullet flight
  • Doppler-derived drag coefficient data
  • Environmental correlation data

The correction factors are typically small - often in the 0.95-1.05 range - but they account for the systematic biases that make published BCs imperfect predictors of real-world performance.

The Takeaway

Open-source software thrives on user feedback. This curious early adopter:

  1. Validated the physics solver against established industry tools and real-world data
  2. Agreed to test a brand-new, experimental feature
  3. Found real bugs that only emerge under batch processing conditions
  4. Provided the first confirmation that the ML enhancement eliminates the need for manual BC truing

All of this from someone who described themselves as "a Linux admin, script kitty" who just "cobbles the genius' work together." That's exactly the kind of user who makes software better - someone who uses it in ways the developer didn't anticipate, with real requirements and real data to validate against.

Updating

If you're using the ballistics-engine CLI, update to get these fixes:

cargo install ballistics-engine --force

To enable the online solver (still experimental, but now actually tested):

cargo install ballistics-engine --features online --force

Then add --online to your trajectory commands to use the ML-enhanced solver.

What's Next

The user mentioned they'd relied on web calls to JBM Ballistics for years but ran into throttling. A local solver (with optional cloud enhancement) that they fully control makes their workflow far more reliable.

There's also BallisticsInsight.com for those who prefer a web interface, though the CLI remains the power-user choice for batch processing and integration into custom workflows.

If you're using the ballistics-engine and find issues - or better yet, find that it matches your real-world dope - I'd love to hear about it. Real-world validation is worth more than a thousand unit tests.


The ballistics-engine is open source and available on crates.io. The API documentation is at api.ballistics.7.62x51mm.sh/v1/docs.