Radix Technical AMA with Founder Dan Hughes – 2nd February 2021
Every two weeks, Radix DLT’s Founder Dan Hughes hosts a technical ask-me-anything (AMA) session on the main Radix Telegram Channel. There are always great discussions around Cerberus, Radix’s next-generation consensus mechanism, general design approaches to the network, and of course broader industry questions.
Thank you to everyone who submitted questions and joined our AMA. You can find the full transcript below.
This week’s questions & answers.
access to the quorum certificate of a command?
All state transitions require a quorum of agreement or rejection. A Quorum Certificate for every state transition within the atom will be available at all validators that execute any of those transitions. A contract will be executing and invoking state transitions, so yes, it stands to reason that it will have access to the output certificates from those transitions.
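To make the idea concrete, here is a minimal sketch of what a quorum certificate conceptually bundles together. The class name, fields, and threshold check are illustrative assumptions, not Radix’s actual data structures; the 2f + 1 threshold is the standard BFT quorum rule.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a quorum certificate: a state-transition
# identifier plus the validator votes that approved it. Names and
# fields are illustrative only, not Radix's real structures.

@dataclass
class QuorumCertificate:
    transition_hash: str                      # hash of the state transition voted on
    votes: set = field(default_factory=set)   # IDs of validators that signed

    def has_quorum(self, validator_set_size: int) -> bool:
        # Standard BFT threshold: strictly more than 2/3 of validators,
        # i.e. at least 2f + 1 out of n = 3f + 1.
        return 3 * len(self.votes) > 2 * validator_set_size

qc = QuorumCertificate("0xabc", votes={"v1", "v2", "v3"})
print(qc.has_quorum(4))  # True: 3 of 4 votes exceeds 2/3
```

A contract inspecting such a certificate could then condition further state transitions on `has_quorum` being satisfied.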
of starvation for a transaction involving busy shards? How are transactions in the same shard prioritized?
This is possible if there is no mechanism to provide some guard against it. The solution will likely be validators that weakly sync their mempools with other validators on shared events and use a priority queue to ensure that all validator sets process any atom in question at around the same time. This is something I’ve recently developed and is implemented in the Cassandra research platform, and it works wonderfully in situations where there is an uneven load or high latencies 😊.
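The weakly-synced mempool with a priority queue described above can be sketched roughly as follows. This is an illustrative toy, not the Cassandra research platform’s implementation; the ordering key (first-seen timestamp, then atom hash) is an assumption about how independent validators could converge on the same processing order.

```python
import heapq

# Illustrative sketch of the anti-starvation idea: validators weakly
# sync mempools on shared events and pop atoms from a priority queue
# keyed on a shared value, so all validator sets process a given atom
# at roughly the same time regardless of local load.

class Mempool:
    def __init__(self):
        self._heap = []   # min-heap of (first_seen, atom_hash)

    def add(self, first_seen: float, atom_hash: str):
        heapq.heappush(self._heap, (first_seen, atom_hash))

    def sync(self, other: "Mempool"):
        # Weak sync: merge in entries a peer has seen that we lack.
        ours = set(self._heap)
        for entry in other._heap:
            if entry not in ours:
                heapq.heappush(self._heap, entry)

    def next_atom(self) -> str:
        # Oldest shared event first, so a busy shard cannot starve it.
        return heapq.heappop(self._heap)[1]

a, b = Mempool(), Mempool()
a.add(1.0, "atomA")
b.add(2.0, "atomB")
a.sync(b)
print(a.next_atom())  # atomA (earliest first-seen wins everywhere)
```

Because every validator that syncs ends up with the same (timestamp, hash) ordering, an atom touching a busy shard still reaches the front of each relevant queue at about the same time.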
Simply, I’ve used it for years, know it intimately, and it had all the features needed with acceptable performance. If any of that changes, the persistence layer is fairly abstract, so swapping it out for an alternative isn’t too hard. Fun fact: Bitcoin used BerkeleyDB in its early codebases until it was swapped for LevelDB around 2012, IIRC.
I have a question about the existence of Radix’s “future-proofing”. I would hope the basic data structure (the atom) contains an extra bit or byte whose meaning is “this is an atom of format version 1”. This would leave open the possibility of extending the atom data structure, say when RPN4 is ready, to include such things as a snark field. I hope you can confirm that some form of “future-proofing” will be in RPN1.
Yes, of course, all components are versioned so that we can roll out updates and maintain compatibility with legacy components. The backwards compatibility is required in the event of replay where a new node joins and needs to replay part of the history to validate state and become synced.
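A minimal sketch of the versioning idea, assuming a leading format-version byte on each serialized atom (the field layout and the hypothetical v2 “proof” field are illustrative, not Radix’s actual wire format):

```python
import struct

# Sketch: each serialized atom leads with a format-version byte so
# newer nodes can still parse and replay atoms written under older
# formats. All field names here are illustrative assumptions.

def serialize_atom(version: int, payload: bytes) -> bytes:
    return struct.pack("B", version) + payload

def deserialize_atom(blob: bytes) -> dict:
    version, payload = blob[0], blob[1:]
    if version == 1:
        return {"version": 1, "payload": payload}
    if version == 2:
        # A hypothetical later format might append a 32-byte proof
        # field; legacy v1 atoms stay readable for history replay.
        return {"version": 2, "payload": payload[:-32], "proof": payload[-32:]}
    raise ValueError(f"unknown atom format version {version}")

atom = serialize_atom(1, b"transfer")
print(deserialize_atom(atom)["version"])  # 1
```

A new node replaying history simply dispatches on the version byte, which is what keeps backwards compatibility cheap.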
look, it seems to be asynchronous with multiple steps and not low latency? In particular, what is the difference from Elrond’s composability (why do you consider that multi-step and high latency)?
Because there is an order-book which, if trades are settled with a matching engine, requires a total order. Any trade that is settled changes the order book and potentially the price of the buy/sell, the spread, etc. EtherDelta didn’t have a matching engine IIRC; as a user you simply looked at the pending trades and “picked” the one that suited you best. But that leads to some “interesting” edge cases, shall we say. To be a real DEX, IMO, requires a matching engine, and matching engines require total order, therefore it’s an asynchronous process.
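The total-order requirement can be seen in a minimal price-time priority matching engine (an illustrative toy, not any particular DEX’s engine): every fill mutates the book, so the result of each order depends on the exact sequence of all orders before it.

```python
import heapq

# Minimal sketch of why a matching engine needs a total order: each
# incoming order is matched against the current book, and the book's
# state depends on every order processed before it. Process the same
# orders in a different sequence and the fills can differ.

class Book:
    def __init__(self):
        self.asks = []   # min-heap of (price, arrival_seq, qty)
        self._seq = 0

    def sell(self, price: float, qty: int):
        heapq.heappush(self.asks, (price, self._seq, qty))
        self._seq += 1

    def buy(self, limit: float, qty: int):
        fills = []
        # Match against the best (lowest) asks at or under the limit.
        while qty and self.asks and self.asks[0][0] <= limit:
            price, seq, avail = heapq.heappop(self.asks)
            take = min(qty, avail)
            fills.append((price, take))
            qty -= take
            if avail > take:
                heapq.heappush(self.asks, (price, seq, avail - take))
        return fills

book = Book()
book.sell(10.0, 5)
book.sell(9.0, 5)
print(book.buy(10.0, 5))  # [(9.0, 5)]: best ask fills first
```

Swap the arrival order of the two sells and intermediate book states differ, which is exactly why the book must live under one total order rather than be spread across shards.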
liquidity pool. You can’t do that if you have a limited amount of shards (like on ETH2), but in the end, we can’t scale a single liquidity pool? Am I right that the throughput of one DEX liquidity pool (for swapping) is limited by the throughput of one shard? (Would make sense because you need one single valid counter of tokens which needs to be consistent – not much different to a central implementation; we can’t use eventual consistency). What do you think is the max TPS per shard we can achieve (@Blind5ight said something around 3000 TPS)?
I may add at this point that for further scaling we can also shard at the smart contract level and simply launch multiple liquidity pools with the same token pair (maybe automatically). If those 3000 TPS weren’t enough anymore, we would probably have enough liquidity to split it into multiple pools with independent arbitraging between them. This approach is probably widely applicable to other types of dApps, e.g. lending crypto for stablecoins, where you can lend XRD for USDC. A dynamic interest rate could be used for arbitraging by making the interest rate dependent on the amount of USDC in the contract: the less USDC there is in the lending contract, the higher the interest rate. The key to this whole aspect is that we have a nearly infinite number of shards.
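The dynamic interest rate described in the question above could be sketched as a simple utilization curve. The formula and all parameter values here are illustrative assumptions, not any real protocol’s parameterization:

```python
# Sketch of the questioner's dynamic-rate idea: the emptier the USDC
# side of a lending contract, the higher the rate, attracting deposits
# and letting arbitrage level utilization across parallel pools.
# base_rate and max_rate are hypothetical values.

def interest_rate(usdc_balance: float, usdc_capacity: float,
                  base_rate: float = 0.02, max_rate: float = 0.50) -> float:
    utilization = 1.0 - usdc_balance / usdc_capacity  # 0 = full, 1 = empty
    return base_rate + (max_rate - base_rate) * utilization

print(interest_rate(1_000_000, 1_000_000))  # 0.02: pool full, base rate
print(interest_rate(0, 1_000_000))          # 0.5: pool empty, max rate
```

With each pool in its own shard, the rate differential alone steers liquidity between them without any cross-shard coordination.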
In the case of a traditional DEX with an order book, that order book needs to live in a shard. There is no means to guarantee a total order if bits of the book are spread around multiple shards, and it would add a lot of latency/overhead to constantly sync them in some way.
That, of course, leads to the issue mentioned that there would then be a “cap” on what that DEX pair could handle in terms of throughput within that order book, which would be equal to the maximum performance of the validator set that serves it.
Having multiple books in multiple shards and allowing arbitrage to level them out is a valid idea, but really all that is doing is moving the latency/overhead from a protocol level to a market level.
I’d wager that in big markets with a number of books, the majority of trades would actually be “arb book trades” rather than actual pair trades. I’m not sure if that’s a good or bad thing, as in the worst case, your real trades end up being bottlenecked by all the arb trades, and you’re back where you started.
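The “arb trades levelling things out” point can be made concrete with a toy model. For simplicity this sketch uses two constant-product pools rather than order books, and ignores fees; it finds, by bisection, the arbitrage trade size that equalizes the two venues’ marginal prices:

```python
# Illustrative sketch of arbitrage levelling two venues: an arbitrageur
# buys X from the cheaper constant-product pool and sells it into the
# dearer one until marginal prices match, generating extra trades that
# reflect no real demand. Fee-free toy model, not a real DEX design.

def arb_size(pool1, pool2, eps=1e-9):
    # pool1 = (x1, y1) must be the cheaper venue for asset X.
    (x1, y1), (x2, y2) = pool1, pool2
    k1, k2 = x1 * y1, x2 * y2
    lo, hi = 0.0, x1 * 0.999
    while hi - lo > eps:
        dx = (lo + hi) / 2
        p1 = k1 / (x1 - dx) ** 2   # pool 1 price after buying dx of X
        p2 = k2 / (x2 + dx) ** 2   # pool 2 price after selling dx of X
        if p1 < p2:
            lo = dx                # pool 1 still cheaper: trade more
        else:
            hi = dx
    return lo

# Pool 1 prices X at 1.0, pool 2 at 4.0; arbitrage pulls both to a
# common level, costing one trade on each venue.
dx = arb_size((100.0, 100.0), (100.0, 400.0))
print(round(dx, 2))  # 33.33
```

Every such levelling trade consumes throughput on both venues, which is the sense in which the overhead moves from the protocol level to the market level.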
(like Hyperledger) when it comes to real usage in production (not just proof of concepts), are there any plans for Radix to provide some sort of permissioned solutions for enterprises based on the Radix DLT, in the future?
I believe there are two pieces to why permissioned ledgers are still so attractive to corporations in a lot of cases.
1. There isn’t currently a permissionless ledger that can compete with a permissioned variant on the metrics of throughput vs efficiency vs security. We can go some way to levelling that out, but that then leads to #2.
2. Data ownership is a BIG thing. Like REALLY big. There is a lot of information that I, as a business, corporate, or government, do not want to be publicly available. Encryption doesn’t help: what is securely encrypted now may not be in 10, 20, or 40 years. What is seen cannot be unseen.
For reason #2, there will always be permissioned ledgers. And in answer to the question: perhaps, if there is demand for it. For now, permissionless is king 😊.
without further context, those analogies are meaningless. Could you expand a little on what vertical and horizontal mean as per the article?
Vertical scaling is having a truck full of pizzas that need to be delivered within an hour. The pizzas are so good that more people want pizzas, but they must still be delivered within an hour, so we get a bigger, faster truck. As demand increases, we need an even bigger and faster truck. Eventually, the truck’s size is ridiculous, and it needs to travel at the speed of light to deliver all the pizzas in time.
Horizontal scaling is having a van, and when it’s full, buying another regular-sized, regular-speed van. But over time you have hundreds of them, all delivering some of the pizzas.
serving the network during the epoch? If yes, what would be the value of the grace period in mainnet? Or does the network select the next available node from the available list?
Guessing this is referring to slashing of stake. Nodes crashing, going offline, etc., are accounted for. Unlike double voting, these situations are difficult to prove: if the node has crashed, there is no incriminating evidence, because it isn’t able to produce any. The best that can be done is a subjective decision that the node in question is gone, and gone for too long. As a time period is required to come to that conclusion collectively, simple crashes and short-term network outages won’t result in any slashing or penalties.
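The subjective liveness check described above amounts to something like the following sketch. The grace-period value is purely hypothetical, not a mainnet constant:

```python
import time

# Illustrative sketch of grace-period liveness slashing: a node is only
# flagged for penalties once it has been unreachable for longer than a
# grace period, so crashes and short outages go unpunished.

GRACE_PERIOD = 600.0  # seconds; hypothetical value, not a mainnet constant

def should_penalize(last_seen: float, now: float) -> bool:
    return (now - last_seen) > GRACE_PERIOD

now = time.time()
print(should_penalize(now - 30, now))    # False: short outage tolerated
print(should_penalize(now - 3600, now))  # True: gone for too long
```

In practice each validator would reach this conclusion from its own observations, and only a collective agreement that the node exceeded the grace period would trigger penalties.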
As devs can quickly adapt to nightly builds looking at the results going forward after mainnet.
The testing suite is extensive, covering unit, integration, and system-level tests and everything in between. Check out the GitHub if you want some idea of what all that looks like and how it all hangs together and is used.
(e.g. below 1% slippage)? Now the other shards need to know the new shifted price. How does this work?
That is only a relevant problem if the DEX is a liquidity-pool-based one and not a book-based DEX. If it’s a pool-based CFMM/AMM, then there are some tricks that can push the throughput higher, but you’d still ultimately reach a throughput cap for that pool. Interesting problems, no real solutions I’m afraid (yet).
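For context on the slippage figure in the question, here is a sketch of the standard constant-product (x · y = k) swap used by pool-based CFMMs/AMMs. It shows why every trade moves the price, which is why the pool’s state must live under one consistent counter and inherits that shard’s throughput cap:

```python
# Sketch of a fee-free constant-product swap: the invariant x * y = k
# fixes how much Y comes out for a given X in, so every trade shifts
# the marginal price, and all trades must serialize on one pool state.

def swap_out(x_reserve: float, y_reserve: float, dx: float) -> float:
    k = x_reserve * y_reserve
    return y_reserve - k / (x_reserve + dx)

def slippage(x_reserve: float, y_reserve: float, dx: float) -> float:
    spot = y_reserve / x_reserve                      # price before trade
    exec_price = swap_out(x_reserve, y_reserve, dx) / dx
    return 1.0 - exec_price / spot

# In this model, trading ~1% of reserves incurs roughly 1% slippage.
print(round(slippage(1_000_000, 1_000_000, 10_000), 4))  # 0.0099
```

Keeping slippage under 1% therefore bounds trade size relative to pool depth, but does nothing to lift the pool’s own serialization bottleneck.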
This was a new field not so long ago, but now there are a number of options in the form of things known as VRF (verifiable random function). Basically, it’s some complex fancy maths that allows anyone to validate the randomness of a value, and that it was generated correctly (not via what is called grinding).
A bunch of projects use them (IIRC Cardano uses one developed by Algorand, which has on its team one of the people who first created a VRF).
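To make two of the properties mentioned above concrete, here is a toy illustration only; it is not a VRF. A real VRF (e.g. ECVRF) additionally produces a proof that anyone can verify against the public key alone, which this hash-based toy entirely omits:

```python
import hashlib

# Toy illustration of two VRF properties: determinism (same key + input
# always yields the same output) and grinding-resistance (once the key
# is committed, the output for a given seed is fixed, so a validator
# cannot shop around for a favourable value). NOT a real VRF: it lacks
# the publicly verifiable proof that defines the primitive.

def toy_vrf(secret_key: bytes, seed: bytes) -> bytes:
    return hashlib.sha256(secret_key + seed).digest()

sk = b"validator-secret"
out1 = toy_vrf(sk, b"epoch-42")
out2 = toy_vrf(sk, b"epoch-42")
print(out1 == out2)  # True: deterministic, so no grinding over outputs
```

The hard part that real VRF constructions solve, and this toy does not, is letting everyone else check that output without ever learning the secret key.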
That covers all the questions Dan had time to answer this session. If you are keen to get more insights from Dan’s AMAs, the full history of questions & answers can easily be found by searching the #AMA tag in the Radix Telegram Channel. If you would like to submit a question for the next session, just post a message with your question in the main Telegram channel, using the #AMA tag!