At Bodo, we take performance benchmarking seriously. Too seriously. That’s why we’ve decided to throw out all the useless, cherry-picked benchmarks that don’t reflect real customer workloads—and replace them with the most backwards-compatible, enterprise-ready, legacy-approved benchmarking suite in the industry.
While other compute engines flex on trillion-row datasets powered by unicorn-grade GPU clusters in hyperscale cloud regions that don't exist yet, Bodo is the first and only modern compute engine that respects your deeply ingrained technical debt.
So to truly capture the reality of enterprise workloads, we optimized Bodo for the Intel Pentium Pro, an absolute unit from 1995 that still underpins global banking processes, airport baggage systems, and “the server” that everyone in accounting still saves everything to.
CPU: Pentium Pro 200 MHz (256 KB L2 cache)
Memory: 128 MB of ECC RAM
Storage: 38 daisy-chained floppy drives in a custom RAID-F (Floppy) configuration
Optical: Parallel CD-ROM array, zip-tied to the chassis
OS: Windows NT 4.0, Service Pack 1 (SP2 installation still pending—currently at 93% complete after 3 months)
Cooling: One (1) industrial box fan
While our latest benchmarks prove that Bodo can thrive in legacy environments, we built Bodo for compute's future. And yes, we’ve done some modern benchmarks too.
Under the hood, it’s powered by a first-of-its-kind inferential compiler that transforms vanilla Python into massively parallel, high-performance code—automatically. It skips the interpreter and gives you near-C++ level performance with none of the rewrite effort.
Run your existing NumPy/Pandas-style code as-is
Our compiler infers parallelism from your code, so you don’t have to manually manage threads, workers, or partitions
Bodo uses Message Passing Interface (MPI) for true distributed execution
Bodo compiles workloads into native machine code, avoiding overhead of the Python interpreter
Bodo scales across hundreds of cores with near-perfect efficiency
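The list above boils down to one pattern: write ordinary pandas/NumPy code, decorate it, and let the compiler infer the parallelism. Here's a minimal sketch using Bodo's `@bodo.jit` decorator; the `try`/`except` fallback is our own addition so the snippet still runs as plain Python on machines where Bodo isn't installed (say, a Pentium Pro):

```python
import numpy as np
import pandas as pd

try:
    import bodo
    jit = bodo.jit  # compiles the function to parallel native code
except ImportError:
    jit = lambda f: f  # fallback: run as ordinary Python if Bodo is absent

@jit
def group_means(n):
    # Plain pandas code -- no threads, workers, or partitions to manage.
    # Under Bodo, the DataFrame is distributed and the groupby runs in parallel.
    df = pd.DataFrame({
        "key": np.arange(n) % 4,
        "val": np.arange(n, dtype=np.float64),
    })
    return df.groupby("key")["val"].mean()

print(group_means(1_000_000))
```

The function body is unchanged pandas; only the decorator differs between the single-core and the thousand-core version.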
Bodo brings you HPC-level performance with Python-level simplicity.
You can download, install, and run Bodo anywhere—from your laptop to a 1,000-node cluster (and of course an Intel Pentium Pro).
👉 pip install bodo
Take Bodo for a spin, whether that’s a quick experiment via our examples or the job that’s been silently draining your cloud budget for months.
We’d love to hear what you’re building, how you’re using Bodo, and what you think. Got feedback? Weird edge cases? A fun use case we haven’t thought of yet? Be sure to share in our Community Slack—we’re all ears!
Disclaimer: No hardware was harmed in the creation of this benchmark. While Bodo does run on legacy hardware, we still recommend a machine built in this century. Please consult your IT department before running production workloads on devices last updated via CD-ROM.