FoundationDB Benchmark Report

Legacy Documentation

This report was produced using the original Stroppy framework. The methodology, test infrastructure, and tooling differ from the current CLI version.

The Problem

Unlike most system software, the database market is as vibrant today as it was ten or twenty years ago. The hardware revolution (the switch from rotating to solid-state drives, then from SSD to NVM, all within a single decade), together with the advance of hyper-converged architectures and multi-cloud, creates a brave new world for database vendors and consumers.

In parallel with hardware changes, the open source revolution presents users with hundreds of new database offerings and highlights the growth of the on-premise database market. A massive shift towards polyglot persistence, cloud, and multi-cloud drives even bigger growth in both data volume and variety of processing needs.

Financial institutions have been pioneer adopters of database software, yet surprising laggards with NoSQL and the cloud. Concerns about security, manageability, and reliability have kept banks conservative.

After a decade of growth, the NoSQL market has matured. MongoDB added transaction support in 2020. CockroachDB was first released in 2017. FoundationDB, founded in 2013, was acquired by Apple and re-released as open source in 2018.

By 2021, with multiple free, horizontally scalable, transactional NoSQL databases available, the market was seeing a tectonic shift: NoSQL no longer means no transactions. Adoption of this advance required industry benchmarks — and no widely adopted instrument existed to test how well NoSQL databases fare in the historically SQL domain of financial transactions.

The Test

A credible test needed to prove:

  • ACID properties are preserved in a distributed NoSQL environment, including during node and network failures
  • Applications can scale with cluster size
  • Performance is comparable to a vertically scaled DBMS on similar hardware

The test runs a typical financial application: a series of bank money transfers between accounts. The key insight is that no amount of transfers can change the total balance of all accounts.

Three steps:

  1. Data generation — Load bank accounts with initial balances. Store total balance as canonical result.
  2. Money transfers — Run parallel transfers between accounts. This step is combined with nemesis actions (network partitions, hardware failures, topology changes).
  3. Balance verification — Download end balances and verify total hasn't changed.
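The three steps above can be sketched as a self-contained simulation. This is a hypothetical, in-memory stand-in for the database under test, showing only the invariant the benchmark relies on (names like `run_invariant_check` are illustrative, not part of Stroppy):

```python
import random

def run_invariant_check(n_accounts=1000, n_transfers=10_000, seed=42):
    """Sketch of the three-step check; the account store here is an
    in-memory list standing in for the real database under test."""
    rng = random.Random(seed)

    # Step 1: data generation -- accounts with random initial balances.
    # The total balance is stored as the canonical result.
    balances = [rng.randint(0, 1_000_000) for _ in range(n_accounts)]
    canonical_total = sum(balances)

    # Step 2: money transfers -- each moves an amount between two accounts.
    for _ in range(n_transfers):
        src, dst = rng.sample(range(n_accounts), 2)
        amount = rng.randint(0, balances[src])  # never overdraw
        balances[src] -= amount
        balances[dst] += amount

    # Step 3: balance verification -- no sequence of transfers can
    # change the total, so any drift indicates a lost or torn transaction.
    assert sum(balances) == canonical_total
    return canonical_total

run_invariant_check()
```

In the real test the transfers run as concurrent database transactions, so a violated invariant points at broken atomicity or isolation rather than at the workload itself.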

The Subject and the Environment

FoundationDB is a transactional NoSQL database maintained by Apple under the Apache 2.0 license. Its key design property is a service-based, non-homogeneous architecture: storage, transaction log, proxy, and coordinator roles can be placed on different nodes.
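Role placement is driven by per-process classes in foundationdb.conf. A minimal sketch (the ports and layout here are illustrative, not the configuration used in this report):

```ini
; foundationdb.conf (fragment) -- each [fdbserver.<port>] section is one
; process; `class` restricts which roles that process may take.
[fdbserver.4500]
class = storage      ; serves key-value data

[fdbserver.4501]
class = transaction  ; transaction log role

[fdbserver.4502]
class = stateless    ; proxy, resolver, and other stateless roles
```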

Testing goals:

  • Verify ACID properties
  • Compare performance against PostgreSQL
  • Test horizontal scalability

Hardware: Oracle Cloud E3.Flex instances. Network bandwidth 1 Gb/s. Disk bandwidth 1 Gb/s per core.

Failure injection: Chaos Mesh for Kubernetes.

Results

Table 1: Consolidated Results

| # | Vendor | Nodes | VCPU/Node | RAM/Node (GB) | HDD/Node (GB) | Clients | Accounts (M) | Transfers (M) | TPS |
|---|-----------|-------|-----------|---------------|---------------|---------|--------------|---------------|--------|
| 1 | FDB | 3 | 1 | 8 | 100 | 16 | 10 | 100 | 2,263 |
| 2 | FDB+chaos | 3 | 2 | 8 | 100 | 16 | 10 | 100 | 2,189 |
| 3 | FDB | 5 | 1 | 8 | 100 | 512 | 10 | 100 | 7,631 |
| 4 | FDB+chaos | 5 | 1 | 8 | 100 | 512 | 10 | 100 | 7,528 |
| 5 | FDB | 5 | 1 | 16 | 100 | 512 | 100 | 100 | 5,782 |
| 6 | FDB | 20 | 1 | 16 | 100 | 512 | 100 | 100 | 10,854 |
| 7 | FDB | 5 | 1 | 16 | 100 | 512 | 1,000 | 10 | 3,369 |
| 8 | PG | 2 | 3 | 30 | 100 | 128 | 10 | 100 | 2,059 |
| 9 | PG | 2 | 10 | 160 | 100 | 256 | 100 | 100 | 5,915 |

Key observations:

  • Additional VCPU doesn't increase throughput for FDB (Run #2 vs #1)
  • Optimal concurrency: 512 clients for FDB, 128-256 for PostgreSQL
  • Not memory bound — doubling RAM with 10x data set decreased throughput ~30% (Run #5 vs #4)
  • Scaling 4x nodes roughly doubles throughput (Run #6 vs #5)
  • Nemesis runs show no accumulated performance degradation (Runs #2, #4)

Table 2: Latency

| # | Vendor | Avg (ms) | Max (ms) | p99 (ms) |
|---|-----------|----------|----------|----------|
| 1 | FDB | 7 | 241 | 45 |
| 2 | FDB+chaos | 8 | 380 | 52 |
| 3 | FDB | 67 | 856 | 201 |
| 4 | FDB+chaos | 71 | 889 | 227 |
| 5 | FDB | 88 | 934 | 271 |
| 6 | FDB | 47 | 565 | 82 |
| 7 | FDB | 151 | 1,267 | 588 |
| 8 | PG | 62 | 4,511 | 203 |
| 9 | PG | 43 | 3,568 | 133 |
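As a sanity check, the latency and throughput tables are mutually consistent under Little's law: with a closed workload, average latency ≈ concurrent clients / TPS. A quick check against runs #3, #5, and #6 (all 512 clients):

```python
# Little's law for a closed-loop benchmark:
# avg latency (ms) ~= clients / throughput (TPS) * 1000.
runs = {
    3: (512, 7_631, 67),    # (clients, TPS, reported avg latency in ms)
    5: (512, 5_782, 88),
    6: (512, 10_854, 47),
}

for run, (clients, tps, reported_ms) in runs.items():
    predicted_ms = clients / tps * 1000
    # agrees with the reported average to within ~2 ms
    assert abs(predicted_ms - reported_ms) < 2, (run, predicted_ms)
```

This agreement suggests the clients were saturated (always one request in flight each), so the reported TPS reflects the database rather than idle client time.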

Table 3: Data Set Sizes

| Vendor | Accounts (M) | Transfers (M) | Disk Footprint |
|--------|--------------|---------------|----------------|
| FDB | 10 | 100 | 18 GB |
| FDB | 100 | 100 | 32 GB |
| FDB | 100 | 400 | 88 GB |
| FDB | 1,000 | 10 | 127 GB |
| FDB | 100 | 1,000 | 225 GB |
| PG | 10 | 100 | 71 GB |

Nemesis Results

Nemesis tests ran on 3-node and 5-node clusters, simulating network partitions and hardware failures:

  • Killed one node every two minutes (replaced immediately by Kubernetes operator)
  • The victim node was fixed for the 3-node cluster and chosen at random for the 5-node cluster
  • Result: ~5 second availability pause during pod failure (while FoundationDB moved the coordinator role), then normal operation resumed
  • Tests passed with comparable performance and correct resulting balance
  • No accumulated degradation observed during continuous failures
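A pod-kill nemesis of this shape could be expressed as a Chaos Mesh experiment roughly like the following sketch. The namespace, labels, and schedule syntax here are illustrative assumptions (field names follow the Chaos Mesh v1 PodChaos API), not the report's actual manifests:

```yaml
# Chaos Mesh PodChaos sketch: kill one FDB pod every two minutes;
# the Kubernetes operator replaces it immediately.
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: fdb-pod-kill
spec:
  action: pod-kill
  mode: one                       # pick a single victim pod at random
  selector:
    namespaces: ["fdb"]           # illustrative namespace
    labelSelectors:
      app: foundationdb           # illustrative label
  scheduler:
    cron: "@every 2m"             # one kill every two minutes
```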

Conclusions

Over hundreds of hours of testing across different clouds, topologies, and adverse actions:

  • ACID verified — Unable to make FoundationDB lose transactions under any test condition
  • Fault tolerant — Database continued working normally after restoring from degraded state
  • Performance — Small cluster comparable to 3-core replicated PostgreSQL
  • Scalability — 4x nodes roughly doubled throughput (not linear, but good for correlated workloads)

The cluster doesn't scale linearly, but the result is considered good for the correlated workload of money transfers. Configurations outside scope: larger clusters (hundreds/thousands of cores), different workload types, background activities like backup/restore.