Radix AMA – 17th March 2020

Radix DLT
26th March 2020

Thank you to everyone who submitted questions and joined our AMA. You can find the full transcript below.

AMA Transcript

Piers: Okay. All right. Welcome everyone to the Radix AMA about the Cerberus WP! We’ve collected a load of questions on Reddit and a few other forums, which I’d already added to the column on the right-hand side where you can see the questions.

To get ahead of the order that’s already here – if you just click upvote, it will move the questions upwards. You can also ask your own questions by submitting a question in the box at the bottom right. You can ask any extra questions or follow up questions. This really is a forum for you just to, uh, get some more clarity on some things that you want to know.

So without further ado, I am going to pass this over to Dan, so he can start going through, the questions that we’ve got. 

When is the next white paper coming out? 

Dan: So that would be the updated economics. 

Yeah. So obviously with proof of stake for the RPN networks, we’ve had to modify some of the economics, mainly around emissions.

Piers: We don’t have a set date, but there’s already been a few drafts, it’s going to be out pretty soon. The changes are relatively minor, essentially the addition of an emission and a couple of other things that we’ve had to sort of move around, but I don’t expect any major surprises. 

Dan: And a few other bits and pieces. So, yeah, like not a massive change to the baseline economics. It’s just more just bolting on some of the stuff that we need to have to support proof of stake for the RPN networks. 

How does Cerberus balance liveness and safety? What do you think is more important for a secure network?

Dan: Great question – and potentially a loaded question too. So, Cerberus being a multi-decree consensus mechanism – generally if you get the network into a particular state, then it will know if there’s an attack going on, etc. – it prioritizes safety over liveness, so that you can’t roll back transactions, for example. Bitcoin-style blockchains tend to prioritize liveness over safety, hence proof of work is your safety guard. This really depends on what your network is built to do – there’s no right or wrong answer here. In multi-decree networks, safety is generally prioritized over liveness. In probabilistic networks with single-decree stuff, liveness is usually prioritized over safety. And there are pros and cons to both, so it’s just a matter of looking at what it is you’re trying to do, what you’re trying to get out of the network, and trading off between those two things depending on your architecture. Cerberus prioritizes safety over liveness.

Piers: What do you think the, um, importance for security is in prioritizing safety over liveness? 

Dan: There’s no real definitive answer for that. It really depends on what you’re trying to achieve, what your network looks like, and where your safety bounds are in terms of your Sybil protection and all that kind of stuff.

Obviously a secure network is paramount, whatever you do. Prioritizing safety over liveness just gives some nice advantages: an attacker can’t arbitrarily pick a moment in time and then do a deep reorg of the ledger history based on what he wants to have happened – which you can do in Bitcoin, because it prioritizes liveness and your proof of work is your eventual safety, basically. But the trade-off is, when you do have deep partitions that are separated for a long period of time, then the longer they’re separated, the more difficult it is to merge them back together.

Whereas with something that prioritizes liveness and is more probabilistic in the way that it drives things, you can usually merge those partitions in some way so that you don’t have a persistent fork – but again, there’s no right answer. It just depends on what you want the network to do and what your requirements are.

This is also just the easiest route for us to get RPN-1 out while preserving safety, because then we don’t have to think about merges and having a lot of partition tolerance and stuff as well.

What is the patent situation and how will they work with open source? 

Dan: You can have a patent and you can have an open-source project as well. It just depends how you construct the licensing around your open source. You can have an open-source project with some sort of restrictions depending on what somebody is using it on.

So if they’re using it for a particular purpose and they’re building from the Radix codebase, that’s fine. But if somebody builds Cerberus or Tempo or anything else that we have a patent on without using our codebase, then they would infringe the patent, which gives us the right to sue. How you manage that is very flexible and very open. So there’s no problem being open source with a patent.

What are some of the key milestones on the road to RPN-1?

Piers: We actually have it on our GitHub and Knowledge Base, which has the basic milestones.

Dan: So, um, at the moment we’re working on building the foundational guts of Cerberus – all of the logic and all of the components that drive the event coordination. Nodes need to be coordinated – when do they vote? When do they do certain things? So at the moment we’re working kind of localhost, building all of that. I believe Florian has now started to move on to some of the other tasks that open up to multi-node stuff.

Um, he’s been doing quite a lot of work on some of that boilerplate plumbing down at the fundamental levels. Once we get Cerberus functioning multi-node, we can start to do things like system testing and catching all of those orchestration issues that we may see between nodes talking to each other – the leader proposing something, the other nodes being able to vote on it, knowing when to vote, and making sure that the vote is valid and that there’s a supermajority and all that.
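The supermajority Dan mentions is the standard BFT threshold: strictly more than two-thirds of the validator set must vote for a proposal before it commits. A minimal sketch of that check (hypothetical names, not the actual Radix codebase):

```python
def supermajority_threshold(n: int) -> int:
    """Smallest vote count strictly greater than 2n/3.

    With n = 3f + 1 validators this tolerates f Byzantine nodes.
    """
    return (2 * n) // 3 + 1

def is_committed(votes: set, validator_set: set) -> bool:
    """A proposal commits once valid votes reach the supermajority."""
    valid_votes = votes & validator_set  # ignore votes from unknown nodes
    return len(valid_votes) >= supermajority_threshold(len(validator_set))
```

With 100 validators the threshold works out to 67 votes; with 4 validators (tolerating one fault) it is 3.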

After that we then need to start thinking about things like the global shard, where the validator set gets registered, and how that all comes into play. So that’s probably one of the things that will come shortly after the multi-node stuff, along with actually driving proof of stake and those kinds of things.

But we can chop that into two tasks: we can run a global shard and have it permissioned for our own internal development, so we don’t have to be staking and all that kind of stuff to test the fundamentals of validator set selection and how those validators are used for particular events.

Then we step forward from there to the proof-of-stake side of things, where you can stake, unstake, delegate and all that kind of stuff. Further along those milestones, after all that, there are things like the fees that we need to do. We need to build the emission stuff and the general stuff – testing and integrating the Radix engine and tightening up anything that comes out of that. So we have two to three major milestones first, which is all really heavy stuff: single-node consensus, multi-node consensus, economics, emission, staking, validator set.

From there on out, it’s more around the peripheral stuff, like your networking layer and your API layer, how the Radix engine integrates with all of that, and any additional features that we need there. So yeah, it’s generally split up into about four large milestones for RPN-1.

Is the view number mentioned in the Cerberus Whitepaper a logical clock of sorts?

Dan: It is and it isn’t at the same time – if you take the strict definition of what a logical clock is, then it is. But it’s a different kind of logical clock from the one most people who know Radix are familiar with from Tempo.

So in Tempo, the logical clocks were local clocks that each node would run and strictly increment every time it saw something. Whereas in RPN-1, it’s more of a global shared clock – as in a view number: everybody’s aligning to the latest view number. And then when we move to sharded, those view numbers are more kind of emergent logical clocks.

So you’ve got a particular event with some particular particles in particular shards, and each of those particles essentially has its own view number, or logical clock, if you like. So once we’re fully sharded, the logical clocks are very short-lived.

Depending on what definition you look at, some may regard them as logical clocks, some may not. I personally do; Florian will give you a different answer.
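The contrast Dan draws can be sketched in code: a Tempo-style logical clock is local to each node and ticks on every event it sees (Lamport-style), while a view number is a single shared counter that every validator aligns to, advanced on each round or leader change. A toy illustration, purely for contrast (both classes are hypothetical, not the actual implementation):

```python
class TempoStyleClock:
    """Local logical clock: each node increments on every event it sees."""
    def __init__(self):
        self.value = 0

    def observe(self, other_value: int = 0) -> int:
        # Lamport-style: jump ahead of anything we've seen, then tick.
        self.value = max(self.value, other_value) + 1
        return self.value

class View:
    """Shared view number: every validator aligns to the latest view."""
    def __init__(self):
        self.number = 0

    def next_view(self) -> int:
        # Advanced on leader rotation or timeout, in lockstep network-wide.
        self.number += 1
        return self.number
```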

When can we expect an update on the leader in the denial of service resistance section of the Cerberus Whitepaper?

Dan: We actually have a meeting tomorrow that may touch on some of this. All the leader DoS stuff ties in quite heavily with validator selection and the global shard and how all that’s managed.

There are a number of meetings that we need to have to drill down into the granularity of what that looks like. We have some broad-strokes architecture of what the global shard looks like, but there are some little details that we need to get into a bit more. Everybody’s had a lot to do with the first milestone, so it didn’t seem sensible to start this work yet, especially as everybody’s got quite good pace at the moment.

Well, we are getting to the point where some of these questions need to be answered – so there’ll be a series of meetings over the next couple of weeks – this is a great question for the next AMA. 

Piers: I suppose a follow-up question from myself: how critical do you see leader DoS resistance being in the first versions of the Radix Network?

Dan: I think as we move from RPN-1 to RPN-3, DoSing a leader becomes much more difficult.

For RPN-1, we’ve got to have a fairly small, quite tightly controlled validator set, just because of how RPN-1 works. RPN-2 will potentially be a little bit better, because we do have multiple Cerberus instances running, so you can potentially have a lot more nodes in the network serving these different instances. So your risk of DoS there drops a bit more, because you’ve got more nodes and you don’t necessarily know who’s going to be the next leader – and by the time you figure that out and DDoS them, the event’s probably confirmed. And then once we get to fully sharded, you’ve got to have a lot more information.

You can collect all this information, but your window to act upon it just gets much smaller as we step through the different RPN networks. And, you know, if you’re the one generating the event, there are some tricks you can do, but really you’re just DDoSing your own event, so…

The risk reduces as we move through. So at the beginning, it’s very much a fixed set, those validators are mainly responsible for everything, and it’s a fairly small number. But as we scale up and scale out, you’ll have a lot more diversity around the leaders leading consensus for a particular event, and you won’t necessarily know who that leader is going to be either.

Is the plan to develop a Dan based Sybil strategy in parallel with POS and when it is time for RPN-3 to possibly switch to a Dan based Sybil strategy?

Dan: I’m assuming they’re talking about SMS. People ask this a lot as well, right? I mean, PoS is good enough for us for now. We’ve got a lot of work to do, and we want to get it done as quickly as possible. Of course, I would like to move to a more novel Sybil mechanism based on all of the research and hard work that we did over the past six to eight months.

It’s certainly not very high up on the roadmap at the moment, and there’s still a lot of research to do there as well. At the moment it’s all hands on deck – I, and everybody else right now, don’t have a lot of spare time, and we’re probably not going to get much for the foreseeable future to dig into this stuff.

Even when we’re there, you’d probably need another good six months, from where SMS was left off, to dig back into it and bring it to maturity – really battle-test it, get peer reviews and other sets of eyes on it to make sure that it’s good, because it is quite a novel twist. It’s not like we’re building on much of anything that came before.

We can’t bring along much of that existing knowledge or confidence, and of what already exists in SMS, there isn’t a lot that is tried and tested. So there’s quite a lot of work to get that to commercial grade.

Piers: It’s a nice-to-have, right? But in the battle for making sure that you are the protocol of choice for X – whatever X happens to be, whether it’s DeFi or a non-government money or a store of value – the Sybil protection mechanism often isn’t the deciding factor at the moment.

Dan: Where the Sybil strategy would come into play is if PoS really centralizes – that’s probably the main catalyst where you would think, right, okay, this has just got way too centralized; there’s way too much control in the hands of the few.

That doesn’t happen overnight either. I mean, proof of stake, aside from all of its recursive issues in terms of the difficulty of making it secure – from an efficiency point of view, you can’t get much more efficient than what proof of stake does. There’s some stake, you know who owns the stake, you just shuffle that set based on some function and take a number of validators out of it, and there are your validators.

That’s pretty trivial to do – from a consensus point of view: give me some validators, boom, there you go. Real nice and easy, real quick. But, you know, proof of stake does centralize over time. So that would be the main reason, I think, to actually switch it out in the future – if it ends up where you’ve just got 20 whales controlling it, then that definitely is an idea.
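The “shuffle that set based on some function” step Dan describes can be sketched as a deterministic, stake-weighted draw from a shared seed – every honest node running the same function with the same inputs derives the same validator set, with no extra coordination. (A toy illustration; `select_validators` and its parameters are hypothetical, not Radix’s actual selection function.)

```python
import hashlib
import random

def select_validators(stakes: dict, seed: bytes, k: int) -> list:
    """Pick up to k validators, weighted by stake, from a shared seed."""
    rng = random.Random(hashlib.sha256(seed).digest())
    candidates = dict(stakes)
    chosen = []
    for _ in range(min(k, len(candidates))):
        names = sorted(candidates)            # deterministic order
        weights = [candidates[n] for n in names]
        pick = rng.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del candidates[pick]                  # draw without replacement
    return chosen
```

Because the draw is seeded from shared data, it doubles as the “some function” that all nodes can agree on; larger stakes are proportionally more likely to be selected.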

I saw in a roadmap that RPN-1 will use blocks. Does this mean a usual blockchain data structure, or is it different? 

Dan: Yeah. So, um, I believe that was a little bit of an error in the paper – I don’t think it’s supposed to reference them as blocks as such. But we are having some discussions for RPN-1 – because of the reduced validator set size, batching events into something that may look like blocks would actually make our lives a little bit easier. So this is a pending question. Originally this was a no, then we had a discussion and it was still a no, and then we did some analysis and the throughput we could achieve on RPN-1 wasn’t quite what we were aiming for.

Batching these things into something that is essentially almost a block of stuff would push that throughput for RPN-1 a bit higher. With RPN-2 you’re then moving into multiple instances, and everything is organized into the shards it will be in, and you can’t really use blocks at that stage, so they would be more of a temporary stopgap.

Piers: If it was batched into blocks, would it look like a blockchain data architecture? So would those blocks subsequently be chained together, or would it look different? 

Dan: Um, so from the fundamental data architecture and the database on your node, it wouldn’t be stored as a blockchain as such – it would be more of an overlay mapping: this batch of things goes into this block. But in the database, in particle and shard space, they would be stored as if they weren’t in blocks. It just means that you can reduce message complexity and latency and stuff when you only have the small set of validators that we need for RPN-1 to get out the door.

Batching those things into essentially blocks – from a consensus and network point of view, in terms of transferring data and getting agreement on those things – just helps us push the throughput a little bit more. From a foundational consensus point of view – how things are managed, how the Radix engine manages things – it would behave very much as RPN-2 and RPN-3 would.

It’s just more of an efficiency tweak that we need to do now, and it’s short-lived for RPN-1, because you can’t batch things into blocks when you move beyond RPN-1.
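The stopgap described above – agreeing on batches of events rather than individual events, so the small RPN-1 validator set runs fewer consensus rounds – can be sketched like this (a hypothetical structure; per the answer above, the batch-to-event mapping would really live as an overlay in the database):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Batch:
    """A block-like batch: one consensus round covers many events."""
    events: List[str] = field(default_factory=list)

def batch_events(pending: List[str], max_batch: int) -> List[Batch]:
    """Group pending events so each consensus round amortises its
    message overhead across up to max_batch events."""
    return [Batch(pending[i:i + max_batch])
            for i in range(0, len(pending), max_batch)]

def rounds_needed(n_events: int, max_batch: int) -> int:
    # One round per batch instead of one round per event.
    return -(-n_events // max_batch)  # ceiling division
```

Ten pending events with a batch size of four need three consensus rounds instead of ten – that amortisation is where the throughput gain comes from.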

What is the plan for marketing on Radix and creating some awareness and maybe some hype right at the token sale?

Piers: Yeah. I mean, marketing is a key part of our strategy for getting token distribution. One of the things that we have constantly choked on is timelines, because you can ramp up your marketing spend and then not actually be able to deliver on the timelines you were aiming for, for delivery of the technology that goes alongside it.

So we’ve often put together strategies that involved a certain amount of marketing spend – either over the top or content-based, or working with marketing partners, PR firms, all that kind of stuff – but we’ve tended not to actually push the button on them. We’re still assessing what is going on with the markets at the moment with regards to Covid-19.

But assuming the timeline stays static, we’ll be ramping up some marketing spend. We’re working with some partners already on PR. We will also be rolling out an ambassador program – working with some of the top-tier universities in America and then the rest of the world.

We’re also doing R&D with UC Davis – basically working with leading PhDs and professors in the blockchain space on specifically the kind of consensus that we’re working on, and having them look at our technology and our solutions and do research on things like mathematical proofs and peer reviews, which obviously helps the reputation of the mechanics of what Radix has created to build this high-scalability system in the first place.

That all helps in terms of building a marketing message around the credibility of Radix and what we’ve created, and that will flow into the bigger marketing funnel.

We’ve continued to build our mailing list, and we’ve actually seen a fairly significant uptick over the last two to three months – by another 15 to 20%.

While you don’t see so much in the paid marketing space – coverage, click-through marketing, banner ads and stuff like that – we’ve actually been significantly improving our numbers for direct marketing, and we continue to do that with our funnels. We’re also going to be updating our website, coming along at the end of this month or the start of the next, with a much simpler click-through design, much more focused around the upcoming token sale. That’s all in process, and you’ll start seeing some of the results of it alongside the updated token economics.

When can we alpha test Cerberus related stuff?

Dan: So realistically that’s probably going to be after the second milestone, which is all the multi-node stuff. You can’t really test much on this until you’ve got all that plumbing in place where nodes are actually talking to each other – no real timeline on that at the moment; it’s still moving around quite a bit depending on what progress we make one week to the next. But as soon as that’s on the cards, I’ll obviously let you all know, because it helps us a lot with testing, and it means we don’t burn as much cash on good old Amazon. It’s a more real test as well – you know, Amazon and Google have their own dedicated fibre from continent to continent, pretty much, and that’s not really what we want to test. So yeah, probably at some point after the second milestone – all the multi-node stuff and the global shard stuff figured out – we’ll be in a position to start alpha testing. So watch this space for that.

Will some transactions be free?

Dan: Um, I don’t know yet. We need to look at the fee model a little bit more.

What’s the problem with free transactions? How do you prevent spam if they’re free? We had a bit of an internal discussion not long ago about using proof of work for that, but using proof of work as spam prevention isn’t great either, because depending on what hardware you’re using, everybody’s paying a different cost for their spam prevention. From an iPhone, I can’t do as many PoWs, whereas if I’m a dedicated attacker and I want to screw around with the network and I use a GPU, I can do orders of magnitude more than the iPhone user can, for orders of magnitude less money. So free transactions are a bit of an issue. I mean, it’d be nice to have them for some things.
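The hardware asymmetry Dan describes shows up clearly in a hashcash-style sketch: the expected work doubles with each difficulty bit, but the cost per hash differs by orders of magnitude between a phone and a GPU, so a fixed difficulty prices spam very unevenly. (A toy illustration of the idea that was discussed, not a proposed Radix mechanism.)

```python
import hashlib
import itertools

def mine(payload: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 of payload+nonce has the required
    number of leading zero bits (hashcash-style). Expected work is
    about 2**difficulty_bits hashes, regardless of hardware speed."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def check(payload: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verifying the stamp is one hash - cheap for everyone."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

The asymmetry: a GPU doing orders of magnitude more hashes per joule pays orders of magnitude less per stamp than a phone, which is exactly why a uniform PoW throttle is a poor spam price.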

It’s something we need to revisit. 

Piers: I think it’s one of those things where someone has to bear that cost somewhere. So you’ve really got to weigh whether or not to add free transactions to a system that already has very, very cost-effective transactions on Radix.

Right? You’re already starting from a very low, cheap base. And you can put quite a lot into an atom as well – it’s not just the fact that there’s one thing in there.

Dan: I mean, like you were saying, who bears the cost of that? If you have free transactions, then it’s the node operators that bear the cost.

Because if somebody is spamming the network and your anti-spam mechanism isn’t good enough, or it gets breached, then node operators essentially need machines that can keep up, which makes the per-node cost expensive, which means you get centralization of your nodes: the cost of entry to operate goes up while the cost of use goes down. So, you know, we don’t really want that either.

Piers: I think there would need to be a very clear reason for free transactions that drives the ecosystem… so there are two ways of looking at a network.

One is: I just want businesses to use the network. The other is to look at it from the point of view of how the ecosystem feeds the ecosystem. So you think about stuff on Ethereum where, you know, MakerDAO creates DAI, and DAI then goes into other smart contracts like Compound, or, um, how synthetics are then traded on Uniswap.

These things will feed each other. They will feed the ecosystem. 

And if there’s something where you can say – oh look, there’s a flywheel here: if we can make this function cheaper, it’s actually going to drive more adoption in the ecosystem and more users to come in – that’s a really good reason to think about essentially subsidizing a function on your network.

Dan: Then you need some governance mechanisms to be able to agree, investigate and determine which of those features or applications or use cases are the ones driving that flywheel – it pushes the problem to a different place.

Who is going to run 100 or more nodes on RPN-1 and 2? Why is the number 100, and how will this minimum be maintained?

Dan: With RPN-2, it doesn’t necessarily have to be a hundred – depending on how we move forward, we likely don’t need such a strict constraint on the number of nodes that can validate. And the idea for RPN-2, as well as 3, is that it opens up the validator set, so it does become much more decentralized than RPN-1. The reason we need to have a constrained validator set in RPN-1 is that RPN-1 is the simplest possible version of Cerberus, so that we can prove the tech and get to market.

Which means that it runs completely unsharded. There are shards at the data level, but at the operations level, all nodes essentially serve all shards, and there’s no way to configure that in RPN-1.

To keep the complexity low in RPN-1 there’s only a single instance of Cerberus, whereas in RPN-2, you can have many, many Cerberus instances running on the same machine in parallel with each other. 

From RPN-2 onwards it’s massively multithreaded.

The complexity of multithreading right now is quite high, and obviously we want to get to market as quickly as possible, so that puts off that requirement for a while.

If we go in single-threaded, we can’t have too many validators, because if we had, say, a thousand, then your message complexity and authentication complexity blows up, and you don’t want that. So while it isn’t the perfect situation in terms of decentralization for RPN-1, it does mean that we can get to market much quicker than if we were building RPN-2 as our first go-to-market, but there is that trade-off: you need to keep the validator set small. So RPN-1 will have a look or flavour of some kind of DPoS.
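The scaling pressure behind that constraint is easy to put numbers on: in a naive all-to-all voting round, message count grows quadratically with validator count, whereas leader-aggregated designs (HotStuff-style, for example) keep it linear. A back-of-envelope sketch:

```python
def all_to_all_messages(n: int) -> int:
    """Every validator sends its vote to every other validator: O(n^2)."""
    return n * (n - 1)

def leader_aggregated_messages(n: int) -> int:
    """Votes go to the leader, which broadcasts one aggregate: O(n)."""
    return 2 * (n - 1)
```

At 100 validators, all-to-all is 9,900 messages per round; at 1,000 it is 999,000, while the leader-aggregated count at 1,000 stays at 1,998 – which is why a thousand-strong set is off the table for a single-threaded RPN-1.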

Piers: It isn’t that it has to have a minimum of a hundred – it’s that we will be making sure at least a hundred validators are allowed in RPN-1. If fewer than a hundred validators joined at the start, it would still function with, say, 20 validators. It’s a maximum we will allow, and the minimum-maximum we’re aiming for is a hundred, if that makes sense – it could be more, but we’re pretty sure it’s going to be a hundred. If it’s less, the network will still run.

It’s just that you can’t do any more than the hundred, and if you get to a hundred, then it starts getting to the point where delegation makes more sense, because then you’re essentially competing to be in that hundred.

Dan: There are a few stepping stones to get from RPN-1 to two and then three.

Um, but I mean, it’s going to be a young network. RPN-1 isn’t going to need thousands of TPS, and it’s not going to need to be super-duper decentralized. They’re nice things to have, sure, but we’ve got a roadmap to get there.

Yeah, let’s just get something out, get it running, and test the core fundamentals of the technology. If there are any problems, a single unsharded Cerberus instance is much easier to debug and fix than something that’s multi-instance. That’s also a nice thing about what Cerberus enables us to do: stage it and release these things with increasing complexity, but have the confidence that the stuff those increasingly complex things build on top of is really, really solid. Instead of trying to boil the ocean in one go, we can boil it a lake at a time.

I have a feeling that in RPN-3 shards might be attacked by malicious nodes pretty quickly if amount staked in these shards is quite low, close to threshold. Is there such a danger? How does the Cerberus protocol make the life of a malicious node more difficult? 

Dan: It’s a good question, and it’s one that needs about an hour in its own right for me to answer, but I’ll try to do a compressed five-minute version. Then, if you want to bring it up in the Telegram chats or any other discussion forum and get a bit deeper into it afterwards, we can do that.

So in RPN-3, the shards are more ephemeral – each particle essentially lives in its own shard. So for me to attack a particular shard, what I’m really doing is attacking a particular particle, and that particle might just be five XRD. Then when the owner of those five XRD spends them, that shard is essentially dead unless there’s a collision, which is, you know, super low odds.

If there was a collision – one in 2^256, if it happened – then that shard would kind of revive with a new particle in there. For me to actually attack that particular shard, I’ve got to put my nodes, my validators, across that shard, and I’d need to already know about a particle there that I wish to attack and kind of stall liveness on – because that’s the only real thing I can do: I can only really stall liveness on that particular shard with that particle. So if I wanted to attack you but I didn’t know where any of your particles were, or you made a transaction that changed the state of the particles you owned, then I may have put some validators in a particular place, but in the meantime you’ve spent some funds, which changed which particles you have, and therefore you have a different set of shards that constitute your account.

So I’ve got to de-stake, which takes a period of time, and re-stake, which takes a period of time, and in the meantime you’ve spent some more, and it’s like – oh s**t, he keeps moving these particles around, and I can’t keep moving my validators because of the time it takes them to do that. That’s the simple breakdown for a quick five minutes. It’s part of the reason why the shards and the Cerberus instances are ephemeral: so that I can’t easily go shard shopping – which is what we tend to call it – where I put all my validators, all of my stake, all of my Sybil power in one place because I want to attack it. Now, even if I’m successful, all I can really do is stall liveness on that particular particle. And just because I can stall liveness on that particular particle doesn’t mean that I can stall liveness on other particles, so you could still represent your transaction with some different particles, and it would go through because I didn’t have validators on those particle shards.

Can you give any comments regarding testing Cerberus at ExpoLab? Quoting Mohammad Sadoghi’s words it was “part of their work” and “in real-world environments”.

Piers: So we’re going to be releasing some more details about our work with Mo – basically, he and his team have put together what essentially amounts to a consensus testing bed. It’s a combination of simulated real-world environments – Byzantine actors, faulty actors, things going on and offline – with a sort of mathematical modelling of systems.

So it’s a novel approach – we’ve got this system, Cerberus; let’s put it into their test environment, which is a step between actually deploying a system in the real world and what would be considered a traditional simulated environment. It goes further than the latter, but isn’t quite all the way to the former.

Dan: So it’s kind of like fuzz testing for consensus mechanisms: do this, send that over there, do this wrong, and do that wrong.

So if anyone’s familiar with fuzz testing a REST API or an API endpoint or something – it’s kind of similar in what it’s doing. It’s just throwing things into the network to try and cause bad stuff to happen, and then it sees how the network responds: whether it crashes or goes down before the security bound that you would expect. So if you need your 33%, but everybody starts failing at 20%, then there’s clearly something wrong somewhere, and all that kind of stuff.
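That idea can be sketched as a schematic harness: inject a random fraction of faulty validators per round, and assert the network only loses liveness once faults cross the stated 1/3 BFT bound. (Hypothetical interfaces for illustration, not ExpoLab's actual framework.)

```python
import random

def fuzz_round(n_validators: int, fault_fraction: float,
               rng: random.Random) -> bool:
    """Simulate one voting round where a random subset of validators
    misbehaves (drops or corrupts its vote). Returns True if the round
    still reaches a >2/3 supermajority of honest votes."""
    faulty = rng.sample(range(n_validators),
                        int(n_validators * fault_fraction))
    honest_votes = n_validators - len(faulty)
    return honest_votes * 3 > 2 * n_validators

def security_bound_holds(n_validators: int, fault_fraction: float,
                         trials: int = 100, seed: int = 0) -> bool:
    """Expect liveness in every trial while faults stay under the
    1/3 BFT bound; a failure below the bound flags a protocol bug."""
    rng = random.Random(seed)
    return all(fuzz_round(n_validators, fault_fraction, rng)
               for _ in range(trials))
```

With 100 validators, a 20% fault rate should always pass, while a 40% fault rate sits past the bound and is expected to fail – mirroring the "need 33% but failing at 20%" check Dan describes.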

UC Davis is an interesting partner for this, because for us to build those testing environments takes a lot of time – for each consensus mechanism you need to put a lot of time and effort into building your framework and then the specific components that you need. They’ve already got a lot of that done, and Cerberus slides very nicely into what they already have. So as well as getting eyes on it from academics and professors, and all of the knowledge that they have, we also get access to their really good testing framework.

Piers: It was really interesting working with Mo on this. He was super excited about the Cerberus paper when we presented it to him – he said it was right at the cutting edge of everything in the consensus space right now, and he wanted to get his team working with us as soon as possible. So that was really great. We know the academic process moves slowly; I don’t expect anything in the next couple of months. But it’s great to have that going on in the background, adding more certainty to what we already believe to be an incredibly robust approach to creating linear scalability in a decentralized environment.

Some think that you “threw everything away” from Radix and “replaced it with established algorithms and concepts” etc etc. Is it the case? What are the main benefits of Radix? Why is it still better?

Dan: Okay, so yeah, this comes up a bit as well. We really didn’t throw that much away, to be honest.

We still have the Radix Engine and all the composability work that went into it. The Radix Engine carries a lot of the application-layer functionality we need in order to make Radix successful. It’s no good being able to do a million TPS if there’s no way for anybody to use it.

The shard model – we spent a lot of time on the shard model and figuring out how to do state sharding. All of that knowledge and experience carries over, and the shard model and state sharding model of Cerberus borrow a lot from Tempo. I would even argue it’s an improvement, because there were lessons learned – things identified in the past that we could improve on from the Tempo shard model.

So all the shard model work has come over. We also learned a lot about gossip and how to do that very efficiently, because that’s what Tempo really needed.

We learned things like how to do fault detection, which will probably come in useful later on for stake slashing and related mechanisms in the RPN-2 and RPN-3 networks. There’s also a lot there on how to detect certain behaviours and how to make it very difficult for nodes to hide their malicious behaviour.

Then there are things we’re going to use here like verifiable random functions. We learned a number of ways to do that with Tempo, and a lot of that carries over as well.

And then there’s the criticism itself – “you replaced it with established algorithms and concepts!”

Logical clocks were established, and the Merkle trees that made up a lot of the Radix temporal proof were an established algorithm and concept too. Everybody builds mainly on the shoulders of giants and then adds their own little bits and pieces. Then somebody else comes along, builds on that collective set of shoulders, does their thing, becomes part of the collective, and so on and so forth.
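As a side note on those "established concepts": a Merkle tree is just pairwise hashing up to a single root, so tampering with any leaf changes the root. A minimal illustrative sketch – not Radix's actual temporal-proof code, and the function name is hypothetical:

```python
import hashlib

def merkle_root(leaves):
    """Compute a Merkle root by pairwise hashing; an odd node is paired with itself."""
    level = [hashlib.sha256(leaf.encode()).digest() for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            right = level[i + 1] if i + 1 < len(level) else level[i]
            nxt.append(hashlib.sha256(level[i] + right).digest())
        level = nxt
    return level[0].hex()
```

Changing a single leaf (say `"c"` to `"x"`) produces a completely different root, which is what makes the structure useful for compact proofs over event histories.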

Some things were thrown away, sure – but a lot of it wasn’t.

Piers: I think the answer to “what are the main benefits of Radix?” is what it’s always been – linear scalability of not just transactions, but state for smart contracts. This is something so critical for the industry, and no one else has a good solution for it.

We’ve just seen what happened with MakerDAO when the market crashed and Ethereum became congested: people holding collateralized contracts that should have seen a maximum loss of 13% walked away with zero, because of that congestion. Smart contracts don’t scale, and no one has those answers.

It’s a combination of how we have created consensus scalability, but also how we’ve then put things like the Radix Engine on top of that to make it composable – so it can actually deliver these kinds of functionalities in a way that scales and can work for the world.

Dan: It doesn’t really matter how you get there – linear scale, state sharding, et cetera. Who really cares how it looks under the hood? No one’s going to care how it does what it does, as long as it does it.

As long as we achieve that, the how isn’t so much of a question, so long as it meets the why and gives you the what. That was very cryptic. You know what I mean?

Trying to sound smart with the whys and the whats, and ultimately failing!

I hope you guys have hammered down all the windows and doors or closed yourselves in a bunker while the virus seeks its next victim. Is everyone on the team okay?

Piers: We have one potential COVID-19 victim on our team. Fortunately, he’s young and healthy and should pull through. The Stoke office and the London office are working from home.

The rest of the team is basically remote. We’ve got some people stranded in other countries at the moment, but fortunately, because we set ourselves up as a mainly remote-working team, it hasn’t impacted our work. We still do stand-ups and so on.

Bouncing ideas around: is there a way of building a paid streaming platform on top of Cerberus? Some kind of pay-per-byte streaming?

Piers: Yeah, absolutely. We’ve played with some ideas like this. If you go on our blog, you can see the Radflix idea – that was about owning content rather than streaming it.

You can see how that could quickly extend to streaming. For things like that, I think state channels are probably going to be a cheaper way of doing it than putting every single transaction on the ledger – but yes, absolutely, you can do things like that.

If you want to build one of those, then we absolutely want to see it and play around with it. That would be awesome. 
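To give a sense of why state channels fit pay-per-byte streaming: the viewer signs a new cumulative off-ledger IOU per chunk streamed, and only the final balance ever touches the ledger. Here is a toy unidirectional channel sketch – HMAC stands in for real digital signatures, and the class and its fields are hypothetical, not a Radix API:

```python
import hashlib
import hmac

class PaymentChannel:
    """Toy unidirectional payment channel: off-ledger IOUs, one on-ledger settlement."""

    def __init__(self, deposit, secret_key):
        self.deposit = deposit      # tokens locked on-ledger when the channel opens
        self.key = secret_key       # stands in for the payer's signing key
        self.latest = (0, None)     # (cumulative amount owed, signature)

    def _sign(self, amount):
        return hmac.new(self.key, str(amount).encode(), hashlib.sha256).hexdigest()

    def pay_for_chunk(self, price_per_byte, chunk_size):
        # Off-ledger update: free and instant, capped by the locked deposit
        amount = min(self.latest[0] + price_per_byte * chunk_size, self.deposit)
        self.latest = (amount, self._sign(amount))
        return self.latest

    def settle(self):
        # On-ledger settlement: verify the latest IOU and pay the streamer once
        amount, sig = self.latest
        assert hmac.compare_digest(sig, self._sign(amount))
        return amount
```

Streaming three 100-byte chunks at 1 token per byte produces three signed IOUs off-ledger but only a single 300-token settlement transaction, which is the cost saving the answer above alludes to.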

Algorand was chosen to power Marshall Islands’ national digital currency. Do you think Radix might be the choice one day too?

Piers: Yeah, sure. We’ve spoken with a number of governments, and with organizations that are working with governments on building digital currencies.

Scalability for countries as small as the Marshall Islands is not really an issue, especially given the degree of digitization of the country itself.

These kinds of peripheral state currencies are going to have a lower impact on the overall economy, because you’d need to see a lot of behavioural change in the economy itself before you could really take advantage of them. More high-tech countries – Singapore, China, Sweden and other members of the European Union – are going to see more impact.

Most of this is still in the trial phase – working out what a specification needs to look like, what performance looks like, and so on. When Radix is in the later stages of development and we have a production network available, we’re absolutely going to be going after these kinds of opportunities, armed with the knowledge we’ve already built up from having these conversations.

Piers: We’re going to be cut off very shortly, so we should probably wrap it up. Thank you very much for everyone’s questions, and we look forward to seeing everyone next time and answering a few more questions about Radix.

Dan: Yeah. Ciao. Bye.