Benchmark methodology

How performance numbers are measured and reported.

Principles

Reported throughput values are intended to be reproducible and conservative: strict validation enabled, deterministic execution, and no reliance on GPU acceleration.
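
As a concrete illustration, the sketch below shows the kind of environment pinning these principles imply. The names are hypothetical, not the real harness; the point is that the workload is seeded, the run is single-machine, and no GPU is visible to the process.

# Sketch of what "strict validation, deterministic execution, CPU-only" means
# in practice. pin_environment() is a hypothetical helper, not the real harness.
import os
import random

SEED = 42  # fixed seed so the synthetic workload is identical across runs

def pin_environment() -> None:
    """Pin the knobs that the reported numbers assume."""
    random.seed(SEED)                        # deterministic workload generation
    os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide GPUs from anything that would use one
    # Strict validation stays at its default (enabled); nothing here relaxes it.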

What we publish

We publish raw JSON outputs (where safe), the exact commands used, and the rules used to label results.

Public artifacts

Performance figures

  • Synthetic strict-consensus apply peak: ~159k TPS (signed transfers, strict apply semantics, deterministic ordering).
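
To make clear what an apply-TPS figure measures, the sketch below shows the assumed arithmetic: pre-generated signed transfers applied in a tight loop under strict validation, timed with a wall clock, with the count divided by elapsed seconds. The function names are illustrative, not the real harness; the published peak is presumably the best such ratio across the measured configurations.

# Illustrative arithmetic behind an apply-TPS number: transactions applied
# divided by wall-clock seconds spent in the apply phase only (no generation,
# no network, no ingest). measure_apply_tps() is not the real harness.
import time

def measure_apply_tps(transactions, apply_fn):
    """Time strict apply over a pre-generated, deterministically ordered batch."""
    start = time.perf_counter()
    for tx in transactions:                  # every tx fully validated (strict apply)
        apply_fn(tx)
    elapsed = time.perf_counter() - start
    return len(transactions) / elapsed       # the value reported as TPS for this batch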

Hardware

Publish hardware details alongside results (CPU model, core count, RAM, OS, kernel). Keep the runtime single-machine and CPU-only.
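
A minimal, standard-library sketch for capturing those fields next to each result (on Linux, /proc/cpuinfo gives a more precise CPU model than platform.processor()):

# Standard-library sketch for recording the published hardware fields.
# RAM via sysconf is Linux-specific; adapt for other platforms.
import json
import os
import platform

def hardware_report() -> dict:
    ram_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return {
        "cpu_model": platform.processor() or platform.machine(),
        "core_count": os.cpu_count(),
        "ram_gib": round(ram_bytes / 2**30, 1),
        "os": platform.system(),
        "kernel": platform.release(),
    }

print(json.dumps(hardware_report(), indent=2))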

Exact commands

Generate a fresh POR-style report bundle (includes the synthetic TPS JSON):

python scripts/praxisnet_por1_verify.py --out-dir data/artifacts/praxisnet_por1/$(date -u +%Y%m%d_%H%M%S) \
  --synthetic-accounts 2048 \
  --synthetic-batches 1000,5000,10000
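
Before quoting any figure from the bundle, it helps to read the synthetic TPS JSON back out and check what was actually measured. The snippet below is a sketch only: the directory layout and field names are assumptions, not the script's documented schema.

# Illustrative only: list the synthetic TPS entries from a freshly generated
# bundle before quoting any of them. The directory layout and the field names
# ("synthetic_tps", "batch_size", "tps") are assumptions, not the script's schema.
import json
import pathlib
import sys

out_dir = pathlib.Path(sys.argv[1])  # e.g. data/artifacts/praxisnet_por1/<timestamp>
for report in sorted(out_dir.glob("*.json")):
    data = json.loads(report.read_text())
    for entry in data.get("synthetic_tps", []):
        print(report.name, entry.get("batch_size"), entry.get("tps"))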

Publish the public verification bundle (copies the required reports and exports the chain JSONL and genesis artifacts):

python scripts/praxisnet_publish_artifacts.py --version v1 \
  --praxisnet-data-dir data/praxisnet_testnet \
  --out-root artifacts/public/praxisnet
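
As an optional follow-up (not something the publish script does itself), recording a SHA-256 digest for every published file lets verifiers confirm they received exactly what was published. The path below assumes the bundle lands under <out-root>/<version>.

# Optional integrity step (not performed by the publish script itself):
# a SHA-256 digest for every published file, so verifiers can confirm they
# received the exact artifacts. Assumes the bundle lands under <out-root>/<version>.
import hashlib
import pathlib

root = pathlib.Path("artifacts/public/praxisnet/v1")
for path in sorted(p for p in root.rglob("*") if p.is_file()):
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    print(f"{digest}  {path.relative_to(root)}")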

Important: never present throughput measured at the HTTP ingest boundary as “validator TPS”. Label synthetic, apply, validator, and end-to-end numbers separately.
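
One way to make that labeling rule hard to get wrong is to attach an explicit kind tag to every published number and reject the invalid combination at the point of labeling. The sketch below is illustrative; the tag names mirror the categories above, but no such module exists in the repo.

# Sketch of the labeling rule: every published number carries exactly one kind
# tag, and nothing measured at the HTTP ingest boundary may be tagged "validator".
# The names are illustrative; no such module exists in the repo.
from enum import Enum

class TpsKind(Enum):
    SYNTHETIC = "synthetic"      # generated workload, no network involved
    APPLY = "apply"              # strict apply loop only
    VALIDATOR = "validator"      # full validator pipeline
    END_TO_END = "end_to_end"    # client submission through finalized state

def label_result(tps: float, kind: TpsKind, measured_at_ingest: bool) -> dict:
    if measured_at_ingest and kind is TpsKind.VALIDATOR:
        raise ValueError("HTTP-ingest throughput must not be labeled validator TPS")
    return {"tps": tps, "kind": kind.value}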