TLDR: During the interim phase, we took Hyperscale from a codebase to a reproducible, large-scale test network.
- Public test sustained 500,000 transactions per second, with peaks above 800k TPS
- Transactions were real swaps, executed across 128 shards
- Cross-shard atomic transactions were preserved at scale
- Private tests demonstrated linear scaling, doubling throughput when moving from 64 to 128 shards
- Internal tests ran on commodity Amazon Web Services infrastructure, and community nodes joined in for the public test
- Code, tooling, and logs are ready to be published to enable independent reproduction
This post closes the interim phase and explains what happens next.
When I stepped in after Dan’s passing, the goal wasn’t to “take over” Hyperscale. It was to make sure the work didn’t stall, to revive testing, and to get Hyperscale to a point where it could stand on its own: understandable, reproducible, and ready for the community to take forward.
This is the final post in this series. It’s a short look back at what we did during the interim phase, and a clear description of what happens next.
Before Dan open sourced the first parts of Hyperscale, he walked me through the codebase and showed me how to build and run it. After his passing, I started experimenting, and quickly discovered that the public repository contained enough to understand the architecture, but not enough to run a full network. Once I got access to the full codebase and Dan’s latest work, the early phase was a mix of trial, error, and slowly building intuition.
Hyperscale has many layers that need to align at exactly the right moment, and even with Dan’s documentation, it took time to learn how the system wanted to be run.
From there, progress came from doing structured experiments, and iterating quickly with both the private testing group and the Foundation DevOps team. Over the course of the interim phase, we went from compiling and running a singleton build to hitting 100k transactions per second, then 250k, and then pushing close to 400k in private testing environments with a wide mix of nodes.
That work culminated in a public test where we sustained 500,000 transactions per second and saw peaks above 800k TPS.
The headline number matters, but what mattered more was what the number represented.
This was not a lab-only test running on specialized hardware. The workload was not simple transfers. These were swap transactions, executed across shards, at extreme scale. The public test validated the core property Hyperscale was designed for: linear scalability without giving up cross-shard atomicity.
Public test setup
For the public test, we ran Hyperscale with 128 shards on Amazon Web Services alongside community nodes, all on the same network, and anyone could participate. The network consisted of 384 bootstrap nodes, 40 validator nodes, and 6 load-generation (spam) nodes. The bootstrap and validator nodes were m6i.xlarge instances with 4 CPU cores and 16 GB of RAM: genuinely commodity hardware. The 6 spam nodes needed a bit more firepower, so we upgraded them to m6i.12xlarge instances with 48 cores and 192 GB of RAM. Each spam node was capable of driving roughly 100,000 swap transactions per second, with one additional node held in reserve to increase pressure if needed.
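To keep the setup easy to picture, here is a tiny Python sketch that simply restates that fleet and tallies it. The numbers are the ones above, copied from this post, not output from the tooling that will be published later.

```python
# Fleet used for the public test, restated from the figures in this post.
fleet = {
    # role: (node count, instance type, CPU cores, RAM in GB)
    "bootstrap": (384, "m6i.xlarge", 4, 16),
    "validator": (40, "m6i.xlarge", 4, 16),
    "spam": (6, "m6i.12xlarge", 48, 192),
}
shards = 128

total_nodes = sum(count for count, *_ in fleet.values())
print(f"{total_nodes} nodes serving {shards} shards")  # 430 nodes serving 128 shards
```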
Earlier that same week, private tests demonstrated the linear-scaling claim explicitly. We ran Hyperscale at roughly 250k TPS on 64 shards, then repeated the test at 500k TPS on 128 shards, observing the same per-shard throughput. That result matters more than any single peak number, because it confirms that adding shards increases total throughput proportionally.
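The arithmetic behind that statement is worth spelling out: if scaling is linear, per-shard throughput stays flat as shards are added, and with the round numbers from those private runs it does.

```python
# Per-shard throughput from the two private runs (round numbers).
runs = [(64, 250_000), (128, 500_000)]  # (shards, total TPS)

for shards, total_tps in runs:
    print(f"{shards:3d} shards -> {total_tps / shards:,.0f} TPS per shard")
# 64 shards -> 3,906 TPS per shard
# 128 shards -> 3,906 TPS per shard
```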
Lessons Learned
One of the strongest lessons for me during this phase was that sending transactions at scale can be harder than validating them. The workload generator becomes a system in itself. One of the major breakthroughs came when we identified that part of our transaction generation pipeline was effectively single-core, which meant that throwing more hardware at the problem didn’t help. We rewrote the spam tooling to use all available cores and rebuilt parts of the bootstrap process so that wallets and pools could be generated more efficiently. That was one of the moments where the path forward became much clearer.
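The real spam tooling will be published with the rest of the code, so treat the following as an illustrative sketch only. It shows the general shape of the fix, spreading transaction generation across every available core with Python's multiprocessing; the names here (build_swap_tx, generate_batch) are hypothetical stand-ins, not the actual pipeline's API.

```python
import os
from multiprocessing import Pool

def build_swap_tx(seed: int) -> bytes:
    """Hypothetical stand-in for building and signing one swap transaction."""
    return seed.to_bytes(8, "big")

def generate_batch(seeds: range) -> int:
    """Build one batch of transactions on a single worker; return how many were built."""
    return len([build_swap_tx(s) for s in seeds])

def generate_parallel(total_txs: int, workers: int | None = None) -> int:
    """Spread transaction generation across all available cores."""
    workers = workers or os.cpu_count() or 1
    # Strided ranges so every seed in [0, total_txs) is covered exactly once.
    chunks = [range(i, total_txs, workers) for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(generate_batch, chunks))

if __name__ == "__main__":
    print(generate_parallel(100_000), "transactions generated")
```

The point is less the specific code than the principle: once generation is embarrassingly parallel, adding cores (or more spam nodes) actually translates into more pressure on the network.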
Another thing that became obvious quickly is that automation isn’t optional. Manually managing nodes stops working once you go past a small handful. Terraform and Ansible-based automation was a turning point for scaling experiments quickly and making tests repeatable. It didn’t just make things faster; it made the work reproducible by others, which matters far more than hitting a single number once.
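As one small illustration of what repeatable means in practice, and again not the actual tooling, a hypothetical helper like this can render an Ansible inventory from a node list, so the same playbooks can drive a ten-node experiment or a four-hundred-node one without hand edits:

```python
# Hypothetical sketch: render an INI-style Ansible inventory from (role, address) pairs.
from collections import defaultdict

def render_inventory(nodes: list[tuple[str, str]]) -> str:
    """Group nodes by role and emit one [group] section per role."""
    groups: dict[str, list[str]] = defaultdict(list)
    for role, address in nodes:
        groups[role].append(address)
    lines: list[str] = []
    for role, addresses in groups.items():
        lines.append(f"[{role}]")
        lines.extend(addresses)
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    nodes = [("validator", "10.0.0.11"), ("validator", "10.0.0.12"), ("spam", "10.0.1.5")]
    print(render_inventory(nodes))
```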
Next steps
That brings me to the most important part of this post: what comes next. The most important deliverable from the interim phase isn't a blog series or one performance milestone. It's reproducibility. Over the coming period, the Foundation is keen to open source the remaining code, and the documentation and operational material will be published so that others can run their own tests. That includes the setup, network configuration, tooling, and the guidance needed to reproduce the environment and workload. There isn't an exact ETA on the open sourcing, as agreement is needed with some parties outside the Foundation, but I know the Foundation team is working to resolve this.
Once the setup and code are open sourced, if you have working knowledge of Terraform and Ansible, and access to an AWS account or similar infrastructure, you'll be able to stand up your own network and run the same style of tests yourself. That's intentional. The goal is to move Hyperscale forward in the open, where developers can verify results, challenge assumptions, improve tooling, and explore the next iterations without needing to be inside a closed environment.
None of this work happened in isolation. The private testing group kept showing up for iterative tests, the Foundation DevOps team provided a controlled environment and automation support when we needed to move faster, and the public test was only possible because node runners and community members joined in, helped onboard others, debugged installs, and kept the whole thing moving. I’m grateful for the effort and patience from everyone involved.
For me personally, this post marks a meaningful line under the interim phase. This was never about chasing a big number on a dashboard. It was about proving that Hyperscale works, that it scales the way it claims to scale, and that it can continue without being dependent on a single person. The next chapter is community-led. The code, docs, and tooling being made available are the handover, and what comes next, whether that's deeper validation, alternative implementations, new experiments, or pushing beyond the current limits, will be decided by the builders and node runners who now own the future of the network.
The mission continues, in the open.

