In this first update, Timan, the Interim Hyperscale Lead, looks back at the first steps in the journey to getting the Hyperscale research network up and running again.
Before Dan open-sourced the first parts of the Hyperscale code in March this year, I had helped him with a code review, during which he showed me how to compile and run it. After his passing, I decided to take things further and experiment on my own. That’s when I realised not every piece had been published. Dan often said he did not want to open source everything, since that would be giving away the golden eggs. The public codebase was enough to understand the architecture, but not enough to run a full network.
First Steps, and the Two Problems I Had to Solve
A few weeks later, when Adam asked me to take a deeper look at the code, he gave me access to the private repositories and a backup of Dan’s work folders, including his latest uncommitted changes. Dan wrote clean, well-documented code and followed a rigorous commit rhythm, so everything fit together quickly. With the full codebase in hand, I was able to compile the latest version and run a singleton build, a special mode that spins up an entire network on a single machine.
Even with the full codebase and Dan’s documentation, it was not immediately clear how all the moving parts were meant to work together. Having the ingredients and the recipe is one thing. Cooking a five-star meal is something else entirely. Hyperscale has many layers that need to align at exactly the right moment, so the early steps were mostly trial, error, and slowly building an intuition for how the system wanted to be run.
As I dug deeper, two technical challenges surfaced almost immediately:
- How do you send the kind of “spam,” as Dan called it, needed to saturate the network with complex transactions and swaps?
- How do you set up a network of multiple nodes?
I decided to attack the spam problem first, because a multi-node network is pointless unless you can feed it transactions.
There is a console inside a Hyperscale node that accepts administrative commands. By inspecting the code I found tools that could be used to submit transactions. I started small: one transaction per second, then ten, then a hundred. Those tests hit a wall fast. The simple transactions I tried wrote to a single contract, and that created a bottleneck. There was also a script to generate wallets, liquidity pools, and swaps, but that too reached a ceiling.
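To make that ramp-up concrete, here is a minimal sketch of that kind of load generator. The endpoint, payload shape, and contract names are placeholders of my own, not the actual console commands or node APIs; the only point is the stepwise rate increase and spreading writes across many contracts so no single contract becomes the bottleneck.

```python
import itertools
import random
import time

import requests  # assumes an HTTP submission path; the real console uses its own commands

# Hypothetical node endpoint and payload shape -- the actual Hyperscale tooling differs.
NODE_URL = "http://localhost:8080/api/transactions"
CONTRACTS = [f"swap_pool_{i}" for i in range(64)]  # many targets instead of one hot contract


def submit(contract: str) -> None:
    """Submit one illustrative transaction that writes to the given contract."""
    payload = {"contract": contract, "op": "swap", "amount": random.randint(1, 100)}
    requests.post(NODE_URL, json=payload, timeout=5)


def ramp(rates=(1, 10, 100), seconds_per_step=30) -> None:
    """Increase the submission rate step by step, as in the early tests."""
    targets = itertools.cycle(CONTRACTS)
    for rate in rates:
        interval = 1.0 / rate
        deadline = time.time() + seconds_per_step
        while time.time() < deadline:
            submit(next(targets))  # rotate targets to avoid a single-contract bottleneck
            time.sleep(interval)   # naive pacing; a real load generator would batch and pipeline


if __name__ == "__main__":
    ramp()
```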
Learning to Build a Proper Universe
To create a Hyperscale network, you configure a “Universe.” The universe defines shards, bootstrap nodes, round and epoch timings, and more. The configuration is encoded, together with the initial transactions, into a partly binary string. Understanding the exact configuration Dan used was not trivial, even with his documentation.
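As a rough mental model, a universe definition covers fields along these lines. The field names and values below are my own illustration rather than the actual schema, and the real encoding is partly binary, not the base64-wrapped JSON used here.

```python
import base64
import json

# Illustrative only: these field names and values are assumptions, not Hyperscale's schema.
universe = {
    "shards": 12,
    "bootstrap_nodes": ["node-0.example.net:30000", "node-1.example.net:30000"],
    "round_duration_ms": 500,
    "epoch_rounds": 1024,
    "genesis_transactions": ["<encoded transaction>", "..."],
}

# The configuration and the initial transactions travel together as one encoded string
# that every node boots from; base64-wrapped JSON stands in for the real encoding here.
universe_blob = base64.b64encode(json.dumps(universe).encode()).decode()
print(universe_blob[:60], "...")
```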
The Hyperscale and private testing Telegram groups were invaluable. Dan regularly shared his thoughts and bits of knowledge. In one message, he shared some API endpoints available in the nodes. One of those endpoints exposes the universe configuration. By booting Dan’s latest configuration and fetching that endpoint, I could read the exact configuration he had used in his last test. That filled in a lot of gaps.
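In practice, that meant booting a node with Dan’s configuration and reading it back out. The sketch below shows the idea; the endpoint path and port are placeholders, not the real API route.

```python
import json

import requests

# Placeholder address; the actual path came from Dan's messages and the node's internal API.
NODE_API = "http://localhost:8080"


def fetch_universe(node_api: str = NODE_API) -> dict:
    """Read the universe configuration a running node was booted with."""
    response = requests.get(f"{node_api}/api/universe", timeout=5)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(json.dumps(fetch_universe(), indent=2))
```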
Iterating Tests with the Private Group and Foundation DevOps
I reached out to the private testing group for a first private test. Like a first attempt at a new dish, it was not yet what it could be: the test crashed. Over the next weeks, we ran multiple iterations. Each test taught me something new about configuration, transaction load patterns, node counts, and timing. Sometimes we pushed too hard. Other times, we did not have enough nodes. Progress was messy, but steady.
To move faster, I asked the Foundation DevOps team for a set of nodes, so I would not always have to rely on the private testing group. Running in a controlled environment let me trade robustness for speed when needed and push far harder than we could on ad hoc setups. In that environment, I started reaching higher swap rates: first 10k swaps per second, then 20k. But a conceptual shift was still necessary.
Dan’s final public tests ran with 200 nodes spread across 32 shards. Running on eight to twelve nodes, I should not expect his total TPS. Hyperscale is designed to scale linearly, so the right metric is TPS per shard. Once I focused on TPS per shard, I saw results comparable to Dan’s last public test. That meant it was not the protocol holding us back; it was scale.
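The normalisation itself is simple arithmetic. The figures below are invented purely to show the comparison; they are not measurements from any test.

```python
def tps_per_shard(total_tps: float, shards: int) -> float:
    """Normalise a measured total throughput by the number of shards."""
    return total_tps / shards


def expected_total_tps(per_shard_tps: float, shards: int) -> float:
    """Project total throughput for a different shard count, assuming linear scaling."""
    return per_shard_tps * shards


# Hypothetical numbers for illustration only.
reference = tps_per_shard(total_tps=160_000, shards=32)   # a large 32-shard run -> 5000.0 per shard
print(reference)
print(expected_total_tps(reference, shards=12))            # the fair target on 12 shards -> 60000.0
```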
Automating Nodes and Scaling to 100k TPS
Up to that point, I had logged in to each node manually and issued commands. With more than 12 nodes, that became impossible. The DevOps team had already been working on this problem with Dan and helped me adopt their node management automation based on Ansible and Terraform. That automation was a game-changer.
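For a sense of how even a basic scripted fan-out compares to logging in by hand, here is a minimal sketch. The host names and console command are placeholders, and the actual workflow drives provisioning and commands through Terraform and Ansible rather than raw SSH.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical host names; the real inventory comes from the DevOps automation.
NODES = [f"hs-node-{i:02d}.example.net" for i in range(36)]


def run_on_node(host: str, command: str) -> str:
    """Run one console command on one node over SSH and return its output."""
    result = subprocess.run(
        ["ssh", host, command],
        capture_output=True, text=True, timeout=60,
    )
    return f"{host}: {result.stdout.strip() or result.stderr.strip()}"


def run_everywhere(command: str) -> None:
    """Fan the same command out to every node in parallel instead of one login at a time."""
    with ThreadPoolExecutor(max_workers=16) as pool:
        for line in pool.map(lambda host: run_on_node(host, command), NODES):
            print(line)


if __name__ == "__main__":
    run_everywhere("hyperscale-console status")  # placeholder command
```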
Booting 36 nodes across 12 shards took me to 100k swaps per second. That was the milestone I set for myself at the start. Hitting it was a huge moment. It validated my understanding of Dan’s architecture and his design assumptions. It also validated the documentation he left behind. Everything fits together the way it is supposed to when you combine the right configuration, the right identity keys, and a scalable node-provisioning workflow.
What Surprised Me the Most
I expected my journey to a first small-scale test to be a two to three-month learning process. In reality, it moved faster than I thought. Dan’s clean code and thorough documentation made a massive difference. The private testing group and the Foundation DevOps team accelerated the work even more. But the biggest lesson was conceptual: focus on where the limitations actually are. For example, if you measure total TPS without normalising per shard, you can misdiagnose the problem and pursue the wrong optimisations. It’s important to keep an open mind when attacking these complex challenges.
The Mission Continues
We are now past the 100k TPS mark and working toward 250k. That means more nodes, more shards, and better tooling to generate transaction load in realistic patterns. I am improving the scripts to send spam, so it is easier to run robust, large-scale tests across hundreds of nodes. I am also documenting everything I learned, so people joining later will have a shorter path from compile to a 100k test.
This is a team effort built on top of Dan’s work. I am grateful for the access I have been given, for all the help from the private testing community, and for the Foundation DevOps team that helped make large-scale automation possible. Hyperscale is about linear scalability and atomic composability, and the tests so far show the design works. Now it is time to scale up further, refine our tooling, and make Hyperscale ready for broader public testing.
The mission continues.


