
BodoBench95:
A Modern Benchmark for the
Timeless Power of the Intel Pentium Pro

At Bodo, we take performance benchmarking seriously. Too seriously. That’s why we’ve decided to throw out all the useless, cherry-picked benchmarks that don’t reflect real customer workloads—and replace them with the most backwards compatible, enterprise-ready, legacy-approved benchmarking suite in the industry.


While other compute engines flex on trillion-row datasets powered by unicorn-grade GPU clusters in hyperscale cloud regions that don't exist yet, Bodo is the first and only modern compute engine that respects your deeply ingrained technical debt.

So to truly capture the reality of enterprise workloads, we optimized Bodo for the Intel Pentium Pro, an absolute unit from 1995 that still underpins global banking processes, airport baggage systems, and “the server” that everyone in accounting still saves everything to.

BodoBench95 Test Setup:


CPU: Pentium Pro 200 MHz (256 KB L2 cache)

Memory: 128 MB of ECC RAM

Storage: 38 daisy-chained floppy drives in a custom RAID-F (Floppy) configuration

Optical: Parallel CD-ROM array, zip-tied to the chassis

OS: Windows NT 4.0, Service Pack 1 (SP2 installation still pending—currently at 93% complete after 3 months)

Cooling: One (1) industrial box fan


Key Findings


Sub-45-minute SQL query execution

Through careful application of low-level assembly tuning, we executed a complex multi-table SQL join in just 44 minutes and 52 seconds. (Note: keyboard and mouse input was disabled for the duration to prevent system collapse.)


Massively Parallel Floppy Storage

We loaded a 50MB dataset using 38 daisy-chained floppy drives. The intern had to manually swap disks, but with practice, we got total load time down to 2 hours, 17 minutes, 43 seconds.


Defragmentation Acceleration

Our new algorithm completes twice as fast if you tap the hard drive in a rhythmic Fibonacci sequence. Tapping it in reverse also enables Turbo Mode.


Certified Y2K ready

Tested by setting the system clock to December 31, 1999. No crashes, but all financial models instantly became more optimistic.


Thermal efficiency

With our upcoming cast-iron heatsink, we cooked an entire chicken dinner with sear rates of 350°F during join operations.


Jokes aside, Bodo is built for the future of AI and analytics

While our latest benchmarks prove that Bodo can thrive in legacy environments, we built Bodo for compute's future. And yes, we’ve done some modern benchmarks too.

Bodo is a high-performance Python-native compute engine for AI, analytics, and large-scale data processing.

Under the hood, it’s powered by a first-of-its-kind inferential compiler that transforms vanilla Python into massively parallel, high-performance code—automatically. It skips the interpreter and gives you near-C++ level performance with none of the rewrite effort.

Zero rewrite

Run your existing NumPy/Pandas-style code as-is

Auto-parallelization

Our compiler infers parallelism from your code, so you don’t have to manually manage threads, workers, or partitions

MPI under the hood

Bodo uses Message Passing Interface (MPI) for true distributed execution

No overhead

Bodo compiles workloads into native machine code, avoiding the overhead of the Python interpreter

Linear scaling

Bodo scales across hundreds of cores with near-perfect efficiency
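To make the zero-rewrite claim concrete, here's a minimal sketch based on the Monte Carlo pi estimate that Bodo's own docs use as a starter example. The `@bodo.jit` decorator name is taken from those docs; it's shown commented out so the snippet runs under plain NumPy even without Bodo installed:

```python
import numpy as np

# import bodo              # requires `pip install bodo`
# @bodo.jit                # uncomment with Bodo installed: the compiler
#                          # parallelizes this function automatically
def calc_pi(n):
    # Monte Carlo estimate: the fraction of random points in the unit
    # square that land inside the quarter circle approximates pi/4.
    np.random.seed(0)
    x = np.random.rand(n)
    y = np.random.rand(n)
    return 4.0 * np.mean(x * x + y * y < 1.0)

print(calc_pi(1_000_000))  # ≈ 3.14
```

The point is that nothing in the function body is Bodo-specific: the same ordinary NumPy code runs serially without the decorator and in parallel with it.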

So whether you're:

  • Cranking out million-row joins on your laptop
  • Running ML pipelines across multi-node clusters
  • Running high-resolution image analysis or financial simulations
  • Looking to replace expensive Spark jobs with something that doesn’t melt your cloud bill
  • Scaling model training inputs
  • Processing Iceberg, Parquet, and other columnar formats in-place

Bodo brings you HPC-level performance with Python-level simplicity.


And the best part?
Bodo is now open source. 

You can download, install, and run Bodo anywhere—from your laptop to a 1,000-node cluster (and of course an Intel Pentium Pro).

👉 pip install bodo

Take Bodo for a spin, whether that’s a quick experiment via our examples or the job that’s been silently draining your cloud budget for months.
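If you want a quick experiment to start from, here's a hedged sketch of what "run your existing pandas code as-is" looks like in practice. Again, the `@bodo.jit` decorator name comes from the Bodo docs and is commented out so the snippet works with plain pandas:

```python
import pandas as pd

# import bodo              # requires `pip install bodo`
# @bodo.jit                # with Bodo installed, the same function is
#                          # compiled and distributed across cores
def daily_totals(df):
    # Ordinary pandas groupby-aggregate; no Bodo-specific rewrites.
    return df.groupby("day", as_index=False)["sales"].sum()

df = pd.DataFrame({"day": ["mon", "mon", "tue"], "sales": [10, 5, 7]})
print(daily_totals(df))
```

Swapping the decorator on and off is a low-effort way to compare serial pandas against Bodo's parallel execution on your own workload.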

We’d love to hear what you’re building, how you’re using Bodo, and what you think. Got feedback? Weird edge cases? A fun use case we haven’t thought of yet? Be sure to share in our Community Slack—we’re all ears!

👉 Join the Community Slack

👉 Read the docs

👉 Check out the benchmarks

Disclaimer: No hardware was harmed in the creation of this benchmark. While Bodo does run on legacy hardware, we still recommend a machine built in this century. Please consult your IT department before running production workloads on devices last updated via CD-ROM.