Radix AMA – 12th February 2020

Radix DLT
14th February 2020

Thank you to everyone who submitted questions and joined our AMA. You can find the full transcript below.

AMA Transcript

Piers: All right. Hi everyone. So, we’re in BigMarker. You can add your questions in the Q&A side panel here, and you can rank the questions, so upvote the ones that you want us to answer. We’ve already collected some ahead of time, so we’re going to start on those first. Any that you add, we’ll try to answer.

This will finish exactly on top of the hour, so it will be exactly an hour. Anything we don’t get to, we will go through and answer afterwards and add to an update blog about the AMA. So let’s start off with the first one!

Can you give us the earliest possible date for when we expect to get a believable date for the publication of a new timeline document? Even a document without dates, but just a spec depicting the linear order of events and the dependencies on prior events would be helpful.

Dan: We’ve been doing a lot of work on theory stuff, so we’re putting in the final bits and pieces to a white paper that explains Cerberus, also a lot of development planning around the tasks that need to be done to implement Cerberus and all the little bits and pieces that it needs as quickly as possible.

We don’t have any concrete timelines at the moment, so it’s going to be a bit fuzzy for a while. But with the move to open source repositories, people will be able to see the progress and start to make their own estimates, and as you get close to having beta tests and MVPs, maybe we can actually nail down a sensible date.

We got to the stage where I don’t like talking about dates, because it didn’t work out very well, so Sophie will kill us if we do that.

According to the Radix way of communicating, you have decided to go open source by default with every new repo you release. In the past you had planned on going closed source for essential parts of the code, in order to prevent copycats from making minor changes to your code and competing with you. How do you plan on dealing with the threat of such copycats? Also, what does repo mean?

Dan: Okay, so repo is repository – a code repository, so the things that you see on GitHub.

So, the shift away from staying closed source to going this open source route is really predicated on the fact that we need to show progress. Way back a number of years ago, before there was a Radix team and everything, the community was a lot more engaged, and everybody knew a bit more about what was going on, and we’d have these discussions…

We really want to try and get back to that, because it was quite productive. It also just allows people to know a bit more about what’s going on, rather than focusing on some arbitrary dates in the future. It means that we can also use you guys to test our releases.

The main reason for the switch to open source is to try and move that confidence in what we’re actually doing a bit higher, and also so we can start to show off some of the stuff that we have developed that nobody even sees.

There’s just been so much work that we’ve done on various bits and pieces of tech and research that no one even knows we have. It can be very useful to open that up and say: look, you know, we haven’t just been **** around for the past 10 years – we’ve actually done all this cool stuff. That’ll be very useful.

Cerberus itself is a little bit of a shift away from some of the boil-the-ocean stuff that Tempo was trying to do.

We want to stage it. The main component of Cerberus that’s really very novel is how we take a three-phase commit Byzantine fault tolerant consensus mechanism and then fold that into our sharding theory, which we’ve tested out pretty well with things like the 1M TPS test.

We’ve spoken about all of that theory, tech and research quite a lot already – so if anybody was going to steal it, they’re probably already building something with those ideas. The Radix Engine is at a point where it’s very complete, so it doesn’t make sense to keep it closed source.

Now it’s open, you could actually take it and bolt something else onto it as well. So it’s a very complete thing.

The three-phase commit BFT algorithm is also fairly mature now. Anyway, there’s not really anything that we’re going to be open sourcing from this point forward that I think is super novel and isn’t already known about.

Piers: Yeah. I think there’s also a wider concept here. The reason that public blockchains and public DLTs give people confidence is that there’s no single point of failure, including in the code base. Someone can come along and be sure that that thing is always going to be public domain and always going to be available.

Even if Dan dies or we disappear or anything like that. We’re trying to be the go-to platform for everyone, and we do want to involve people in that journey. And we’ve always been conflicted: it’s concentrate and get things done, versus allow people to come on that journey with you.

The way in which we’ve done things before meant we lost that thing that was so great about Radix at the start, when we were able to work more immediately with the community and make something better as a result. So it really is a combination of wanting to show people what we’re working on and that it’s progressing, but also wanting to give people more of an opportunity to get involved.

Dan: Also, an important point is that it was always the intention for it to go open source anyway – maybe just earlier than expected.

Why is the weak atom issue such a big issue? I mean, why not just refuse all weak atoms and force the nodes or clients to resubmit the atoms? This, in my opinion, will mostly affect those who want to game the system. A weak atom with no conflict will never be a weak atom.

Dan: Without getting too deep into the technicals of what Tempo does and how it does it – the main issue is that you can have two atoms that enter the network at different points, and they propagate across the network, hit the nodes during gossip, and build these temporal proofs. So if you’ve got a large network, and let’s say you’ve got 5,000 nodes on one and 5,000 nodes on the other, you can’t just have some simple mechanism like: “hey, these two things just happened at the same time, we all need to just forget about it.”

You’ve got to send a lot of messages between all those nodes, and you just don’t know if those messages get there, or if some of the nodes are going to ignore those messages or keep that info. So it’s not just as simple as saying: oh yeah, there are these two things in the network, we can flip and start again. There’s a lot of complexity around that.

Piers: As with all of these things, you can end up building a Rube Goldberg machine that is overly complicated to solve something. You go: okay, we’ve got this problem with this bit, so we need to do this extra thing that then adds all of this message complexity, and then we need to do this other thing that adds onto that messaging.

You get to the point where, even though from a very mechanical point of view it’s maybe solving the problem, from a network health point of view it may end up causing massive DDoS problems and things like that. So you just move one problem to another. We spent a lot of time looking at how we could solve those things, and ultimately came to the decision that it wasn’t going to create a system that was elegant, secure, safe and live, and that would always have those guarantees. And so, you know, we will be speaking more about what we’ve done in the past, what didn’t work and what we learned from it.

That’s a route we went down before – we were like: okay, can we find a way of patching this particular hole? But we found that there were consequences to how you would do that in any given mechanism – that was a lot of the uncertainty we were having in the dev team.

Dan: So after we found this weak atom problem, we spent a good two or three months trying to hit it on the head with a quick fix.

The fixes for it are quite drastic. There was a lot of surgery involved to implement them, and it still left unknowns, because a lot of the concepts of Tempo were almost extravagant anyway. That’s the point where we thought: you know what, we’re trying to do all these things at the same time, and while we could fix this, it might bring forward the things that were unknown – so six months later we might find some other issue and have to go back again. Everybody wants us to get to a scalable network, so let’s just focus on doing that one thing first.

Piers: Yeah. It’s about focusing on the outcome rather than the mechanism.

If PoS is able to get to 1 million transactions per second, how can you even shard a PoS consensus system? It’s never been done before. At least Tempo, with its logical clocks and temporal proofs, was designed for it.

Dan: Okay, so proof of stake is a sybil protection mechanism, and it doesn’t really have any bearing on how fast your consensus mechanism is. The two are very separate things, right? The sybil mechanism guards against sybil attacks, and your consensus mechanism uses the sybil mechanism, in this case, to provide a set of validators that will be used to validate a particular transaction. Where the PoS side is changing, it’s changing at fairly low frequency – you’ve got staking times and unstaking times, so that just ticks along, with the same nodes in that set fairly constantly. Your consensus mechanism is just pulling from that information to figure out: okay, which nodes am I going to talk to now for this particular thing, and which nodes am I going to talk to just after, for that particular thing.

And so, you’re not actually sharding the proof of stake sybil mechanism. It doesn’t even fit into the equation of what you’re doing. One thing you do need is that if you’re using PoS, then all the nodes in the network need to know which nodes have how much staked, whether there’s any unstaking, etc. Because that’s a fairly slow-cycling process, it doesn’t really put a lot of load onto the network – all nodes in the network, regardless of how many shards they are serving, don’t have a large amount of load to deal with there. It stays fairly static.
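To make that separation concrete, here is a minimal sketch (all names are hypothetical, not Radix code): a slowly changing stake registry that every node replicates, and a consensus layer that deterministically derives a validator set from it per transaction, without any extra coordination messages.

```python
import hashlib

# Hypothetical stake registry: node_id -> staked amount. Updates are low
# frequency (staking/unstaking periods), so replicating it everywhere is cheap.
STAKE_REGISTRY = {
    "node_a": 500,
    "node_b": 300,
    "node_c": 150,
    "node_d": 50,
}

def validators_for(tx_id: str, k: int = 3) -> list[str]:
    """Pick k validators for a transaction, weighted by stake.

    Each node ranks every candidate by a stake-weighted hash of the
    transaction id; because the registry and the hash are shared, every
    node derives the same validator set independently.
    """
    def weight(node: str) -> float:
        h = hashlib.sha256(f"{tx_id}:{node}".encode()).digest()
        r = int.from_bytes(h[:8], "big") / 2**64   # uniform in [0, 1)
        return r ** (1.0 / STAKE_REGISTRY[node])   # stake-weighted rank
    return sorted(STAKE_REGISTRY, key=weight, reverse=True)[:k]

print(validators_for("tx-123"))
```

The key property Dan is describing is that the registry changes slowly and independently of consensus: consensus only reads it, so it adds essentially no load regardless of how many shards a node serves.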

So consensus is just pulling that validator information from those nodes, and it makes the assumption that the sybil mechanism has done its job.

Piers: The start of that conversation is stated relatively clearly in the Cerberus paper.

Dan: Uh, yeah, it is. I don’t think it goes too in depth on it, because PoS is just a sybil mechanism.

So we did some thought experiments after we’d nailed down a kind of grounding theory of what Cerberus does – experiments around, okay, instead of proof of stake we do proof of work here, and what effects does that have.

Obviously we don’t want to use proof of work, because it’s expensive and centralizes everything.

Will the public definitely be able to buy tokens? E.g. there’s no chance of hanging around for years only to find out that only accredited investors will be able to buy? And when do you think this might be – some time before launch?

Piers: This is something that we’ve been spending a lot of time on with our lawyers. We like the concepts of fair distribution and making sure that everyone has a fair opportunity to participate, especially when you start thinking about proof of stake rather than something like proof of reputation or proof of importance.

Then that really does become an important question, and it is a very high priority for us. So everything that we can do legally, we will be doing. I think that there are some good routes forward here, but that will be coming out later. But yeah, I understand where that question is coming from, and it is something that we’re working hard on.

When can we expect an alpha/beta of the new PoS implementation? And how does the shift influence the work for developers, or anyone working with the current APIs?

Dan: So the intention is to keep them as close as possible to what they already are, so that any impact on people working with the APIs is minimal. Those API endpoints interface with the Radix Engine, not so much the consensus layer, so they stay pretty static – maybe with some additional ones to deal with proof of stake.

Piers: I think of it a little bit like a layer cake. We really tried to modularize everything, which means that a lot of the work we’ve done at the higher levels – where developers would be living and doing their stuff – we’ve tried to keep as abstract as possible from how the underlying system works. So you can think of the sybil implementation influencing your validator selection and things like that; your consensus algorithm working with the data model for that sharded environment; then the atom model being the way in which you agree on what state transitions are happening; and then the libraries that go on top of that. We’re not changing the atom model itself and how that functions, which means that the API endpoints and all of the things that you’d be doing as a developer stay constant.
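The layer-cake idea can be sketched roughly like this (all class and method names are illustrative, not Radix APIs): each layer only sees the interface of the layer below it, so swapping the sybil mechanism leaves the atom model and client APIs untouched.

```python
from typing import Protocol

class SybilMechanism(Protocol):
    """Bottom layer: supplies eligible validators (e.g. from a stake table)."""
    def eligible_validators(self) -> list[str]: ...

class ProofOfStake:
    def __init__(self, stakes: dict[str, int]):
        self.stakes = stakes
    def eligible_validators(self) -> list[str]:
        return [node for node, stake in self.stakes.items() if stake > 0]

class Consensus:
    """Middle layer: uses the sybil layer but knows nothing of its internals."""
    def __init__(self, sybil: SybilMechanism):
        self.sybil = sybil
    def finalize(self, atom: dict) -> bool:
        # Stand-in rule: an atom can be finalized once enough validators exist.
        return len(self.sybil.eligible_validators()) >= 3

# The API / atom-model layer depends only on Consensus, so replacing
# ProofOfStake with a different sybil mechanism would not change it.
consensus = Consensus(ProofOfStake({"a": 10, "b": 5, "c": 1}))
print(consensus.finalize({"type": "transfer"}))
```

This is the design choice Piers describes: the higher layers (APIs, libraries, atom model) stay constant because they never reach through the abstractions below them.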

We know that people have already started building on stuff and we do want to make sure that there’s as little impact as possible to the people who’ve already built stuff using the Radix network. 

Dan: So, alpha/beta of the PoS implementation – development has already started this week. We’ve got some more task breakdowns and things we’ve been working on to figure out which bits and pieces we need to develop.

There’s quite a bit to do to build the fundamentals of Cerberus, so we’re not really looking at any kind of alpha/beta testing before, say, July. Then, yeah, we can probably start with a beta test (Piers: depending on how development goes) – it’s not a promise! Assuming our development cycle runs pretty smoothly, it’s possible.

Piers: Yeah. And again, this is why we’re trying to open things up – we want to give people the information so they can see how things are going in the development cycle, and can see the areas of uncertainty and what still needs to be done before, you know, an XYZ milestone can be achieved.

Ultimately, our number one goal is delivery of a network in as short a timeframe as possible, but with the caveat that it has to be secure enough that we have high confidence people aren’t going to lose their money. So there’s a combination of things and testing that we have to do to make sure that’s true.

Will the atom model work with Cerberus?

Yes. 

Going PoS will influence the currently proposed token distribution. Do you have any ideas around current issues with existing PoS implementations, like manipulation by voting cartels, pools, and whale centralization?

Dan: Yeah, so this is one of the pain points.

We went back and forwards on this for quite a while when thinking about what a sybil mechanism should look like in order to get to a launchable network ASAP. There are just some things about PoS you can’t fix. Centralization is in the nature of it, so it is going to happen.

So those aren’t the kinds of things we’re going to try to tackle. Some elements of PoS and its issues become a bit easier when it’s not on a blockchain. Forks, for example: nothing at stake attacks become a little bit more constrained – you have to be a bit more specific, as an attacker, about what it is you can actually do and how you can leverage nothing at stake to your advantage.

You still have things like slashing that you can do. Things become a little bit simpler as well, because you haven’t got all these multiple forks to figure out – which is the real one, which are you slashing on? A multi-phase commit BFT algorithm prefers safety over liveness.

All that means is that once you’ve got an agreement, it’s very difficult to cause a fork. And if a fork were to happen, safety is preferred, so it’s easier – you don’t have these two different forks, and if nodes drift too far apart, you just end up with what’s called a partition instead, where you have a small network and a large network. The large network continues, and the smaller network does as well, but there’s no way to reconcile those networks – so even if you were voting on both, it’s essentially two different networks anyway.
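A toy illustration of "safety over liveness" (this is a generic BFT quorum rule, not the Cerberus protocol itself): a node commits only with votes from a quorum of 2f+1 out of n = 3f+1 validators. Under a partition, the minority side can never reach quorum, so it stalls rather than committing a conflicting fork.

```python
def quorum(n: int) -> int:
    """Quorum size for n = 3f+1 validators tolerating f Byzantine faults."""
    f = (n - 1) // 3
    return 2 * f + 1

def try_commit(votes: int, n: int) -> str:
    if votes >= quorum(n):
        return "commit"   # safe: any two quorums of 2f+1 must intersect
    return "stall"        # no progress (liveness lost), but no fork (safety kept)

n = 10                    # f = 3, so quorum = 7
print(try_commit(7, n))   # majority partition keeps committing -> "commit"
print(try_commit(4, n))   # minority partition stalls forever   -> "stall"
```

The intersection property is why the minority partition cannot fork: two quorums of 2f+1 out of 3f+1 always share at least f+1 validators, at least one of which is honest and will not vote for conflicting outcomes.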

Piers: Yeah, I mean, at that point it becomes social work and reconciliation. I think the emphasis you’re going to see a lot in how we describe what we’re building is: good enough, and then pathways to do better. So, getting to a stable public network that allows decentralization of a form – i.e. anyone can join the network if they want to, and anyone can participate in the network if they want to, given the ability to stake – that’s a good enough start. There are still a number of things that need to be solved, and we’re not the only team trying to solve them. These are problems for any of the proof of stake implementations you see in the world today, and we’re happy to be putting our team’s brains, along with the community’s brains, towards them as well. But ultimately, it doesn’t prevent us from getting to a public decentralized network, albeit with some things that still need to be solved along the way.

How soon may we expect an alphanet to play with? What is the finality with the new PoS implementation – we were under five seconds before. What do you expect with Cerberus?

Dan: Again, this kind of goes back to a previous comment: proof of stake doesn’t really affect any of that kind of stuff. It doesn’t affect finality, throughput or anything like that. So the finality will be in a similar ballpark – maybe a bit faster, maybe a bit slower. We don’t know until we build it and test it, but it’s not going to be too far from what we’ve previously said.

Piers: Yeah. And about five seconds is probably perfectly reasonable.

What is the best case scenario for mainnet launch?

Piers: It depends what you’re talking about. I suppose the best case scenario is that we have an MVP at the first drop – we’ve called it MVP zero – that is stable and running as expected, and that we haven’t needed to make any massive changes that prevent us from following the scalability roadmap: MVP2, MVP3, et cetera.

So it’s just making sure that that initial base is a really solid foundation for an upgrade path to reasonable scalability, and then linear scalability at the end point. That fundamentally is the difference in how we’re building here: we are building with scalability in mind, with a data structure that is sharded from the start and built around these fundamental principles that we learned from Tempo and all the other things we built before it. But yeah, the best-case scenario for a mainnet is a stable mainnet that people can use and start getting underway on immediately.

Are the currently designed Radix libraries still valid, do they need a big rework?

Piers: We’re keeping the atom model, so no.

Why did Zalan leave the company?

For those of you who don’t know, he was our head of DevOps. He’s a phenomenally talented person who really relishes the problem set of very large networks. Ultimately, it got to the point where he wasn’t really able to apply his very particular skillset to what we were doing, because we’d had a number of delays.

Unfortunately, he decided to move on to a new challenge – he’s now working with a smart car company that already has a lot of cars on the road, on how to make sure they can deploy more effectively. We love Zalan, he’s a great guy, and we’re still in contact.

He still helps us out from time to time, but ultimately the company wasn’t at the stage that would justify him staying.

Isn’t it important to have an alpha/beta already running before the OTC is available?

Dan: Is it?

Piers: I mean, it adds confidence, right?

Dan: But still, you could see the progress in the repos…

Piers: So one of the mistakes that we have made in the past is focusing, from the point of view of technology delivery, on short term marketing requirements, to the detriment of actually delivering the thing that we’re supposed to be delivering.

The MVP zero first drop has been cut right back to what we consider to be the absolute bare bones of what a network needs to be, while still preserving the atom model and that easy API programmability. So, you know, if it so happens that our MVP zero drop gets to the point where we can run an alpha or beta, then absolutely!

What we don’t want to do is shift the development objectives away from actually delivering a public network. Because everyone can see the code and the tests that we’ll be running, by the time that comes around, halfway through this year, there should be a fair amount for people to look at and play with, and hopefully an alpha/beta network. But if it doesn’t get there, it would be foolish, given our previous experience, to move the development objective away from delivery of the mainnet, because that’s fundamentally what we want to be focusing on.

PoS takes a lot of work to implement, and it has many pieces of infrastructure that may not transition to other systems – staking pools, rewards. How much of it can be transitioned to the next system?

Dan: Okay, so there’s no definite timeline on at what point we would want to switch out PoS. I think that’s probably trying to look a bit too far into the future – if it’s functioning fine and is able to keep the network secure, do you really want to go and change it just for the sake of it? Probably not.

Piers: I think that’s a community-led priority decision, right? The network comes out, and there’s a finite amount of developer resources in the company, and there’ll be a finite number of people in the community who care about different things. There are so many things that need to be solid for a public network – everything from programmability and functionality, to scalability, to sybil protection mechanisms and all of that kind of stuff.

We are not putting a timeline on that, because when we get there, we don’t know what everyone’s priorities are going to look like. It may be that people are actually quite happy with it, because the priority is really: we want to get to this throughput, because we’re now getting bottlenecked at, you know, 10,000 or 50,000 transactions per second because of the way the network is being used.

That has to be a community/developer led decision – everyone will be able to be part of that conversation.

Dan: Yeah. I mean, it comes down to this: if we have an alternative that doesn’t offer any immediate improvement over the other issues that are pending, then it makes sense to postpone it until later. If we have something that improves on whatever is causing problems – centralization in the network, or something like that – then maybe it’s more prudent to implement it. I think we just have to wait and see: what we need to do versus what we want to do.

Given that you’re going fully open source, how will you be monetizing consultancy and implementation services?

Monetization comes directly from the tokens – much like many of the other open-source public network platforms like Ethereum, Neo or Stellar. Ultimately, the way we’ve built the economics really does tie the value of the token quite strongly to what’s happening on the network. We really want to maintain our aligned incentives there and make sure that the public network is great.

Regarding the MVP before alpha/beta: my pondering regarding OTC availability before alpha/beta is linked to thinking that a running client may enhance the price of XRD. So if the MVP comes first, maybe OTC should be available during the MVPs.

Piers: Yeah. Like again, we’ll see how the development roadmap plays out.

We’ve got a bit of time between now and then. The main thing that we want to do as a company is make sure that the team has everything they need to get started and that they’re completely focused on that. Then we will adjust our expectations and communications going forward as the team works along the timelines they’re assessing, based on the objectives that we’ve set out.

Tell us what you know, and what you know that you don’t know, about SMS?

Dan: So what do we know about SMS… It seems to do what we’d like it to do, which is that it penalizes attackers by making everybody behave as an attacker. What sybil mechanisms generally do is this: you have your rational actors and your irrational actors, and they look very different to each other, so their behaviors are also very different. SMS is designed to conflate the two, so that everybody has to perform the same activities – the honest actor doesn’t act differently to a dishonest one; it’s just how the network is. If you boil it down to brass tacks, if everybody’s attacking the network, all attacks essentially become long range.

So I can’t just come into the network now, throw some computing power at it, and in the next hour have control of the network. It does not work that way. The more compute power I push at the network, the stronger the network resists me, because I get my identity slashed, my stake slashed and all that kind of stuff.

We did quite a lot of testing around that, and we built a lot of theory and simulators and all the other stuff to go with it. On the surface it looks good – it passes the tests, and all the formal function stuff checks out. However, it’s such a paradigm shift that there’s also a lot of thinking about what we might be missing – what don’t we know? It’s still risky.

Piers: Yeah. It’s also like you’re adding two degrees of R&D, rather than one degree of R&D, to a project that you want to get delivered. And the more uncertainty you build in, the more of a shifting sand you get. It’s still a really interesting theory.

It approaches the sybil problem in a very different way. But from all of the polls we’ve done and the talking we’ve done with various community members – just understanding what developer requirements are and what people find problematic in other platforms – it generally doesn’t come down to sybil. It comes down to things like: “oh, it’s really difficult to build this kind of smart contract”, or “the throughput is really poor when I need it”, or “I’m worried I’m going to get bottlenecked in the future if I’m really successful”, or “it’s really expensive to use”. And these are not simple problems.

These are problems to do with the platforms as they exist. The sybil problem is, I think, in some ways an existential problem: what we want collectively is to know that the network can’t arbitrarily be taken over and attacked such that all of the things of value on that platform are suddenly at risk. And it’s about moving attacks from short range to long range – all attacks as they are at the moment, in proof of work or proof of stake, are short range attacks: I can just go and buy a lot of hashing power, or I can go and buy a load of stake, and enter the network that way.

That’s a problem, but it’s not so much of a problem that it’s preventing people from building things, doing things and creating great businesses. So I think it’s one of those things where, yeah, we’re going to need to get to it, but it’s a problem that only really exists when the network becomes socially important, rather than just academically important, or important to some groups of people who are already aligned with the existing mechanisms like staking or proof of work.

Dan: So, yeah, the three main things that SMS gives you are that it’s very centralization resistant, it makes all attacks long range, and it is efficient. But all those things only become very important in super-scaled networks later – there’s nothing it has that PoS can’t do to start with.

So, for example, in PoS you see all of your liquidity moving from stake into DeFi products, because people make more money from providing liquidity and loans than from staking – that’s been happening. And at that point it’s like: oh crap, maybe we need to think about switching out proof of stake, because all that stake going into DeFi is actually reducing the security of the network. That’s not going to happen in our network immediately; it takes some time. So there’s room for people to come on and start building those things, and having SMS by then would be great. But there’s no need for it at the start, and proof of stake is a much less risky option, because we know what we need to do and what we can and can’t do. There are still unknown unknowns with proof of stake, sure, but not as many as with SMS.

Do you already work with any third party projects that you’re excited about? Can we get any clues as to what they are working on? Or is it too early?

Piers: Yes, we work with a lot of different projects. We’ve had to go back and be very honest and upfront with them and say: look, we’re going to be delayed because we found this technical problem, and we don’t want your project to be stopped by us. Some of the partners have been like: “that’s fine, because you guys are delivering a specification that we actually can’t get anywhere else. We can’t get this linear scalability, we can’t get the response times and the availability that we know we want long term”.

I’d say that’s the minority, though. For the majority of projects, we’ve been actively helping and saying: look, we’re not going to be able to deliver in the timeframes we thought, so let’s help you look at alternatives so that you can go ahead and be successful, because ultimately that’s what this is about. We’re building a platform to try and make people successful; if that’s delayed, we want to make sure those people don’t get stuck because of us. In the process, we’ve learned a lot about what people do and don’t want when they’re building on platforms. We’ve spoken with everyone from small governments trying to issue their own state currency, to people trying to incentivize influencers on social networks – which needs massive scalability – to advertising platforms and all that kind of stuff.

And that’s really exciting. It’s really good to dive into those user stories, understand what the requirements are, and make sure that what we’re building really matches them from a builder’s point of view. We’ve gone back into focused development mode to make sure that this actually delivers what it needs to, long term, for the people we expect to use it.

Is PoS going to be able to get to 1MTPS?

Dan: Yeah. PoS doesn’t affect it, as I’ve already stated before. Cerberus gets to 1MTPS. You’ll see for yourself once the whitepaper is out that it can linearly scale. It has some interesting properties that allow it to do that, where previous BFT algorithms haven’t been able to, and it all comes down to good data architecture, which is something that I’ve believed in for a very long time. Before Radix I built all kinds of iterations that split the state, put it into little silos, and organized it in a way whereby you could use it to scale and shard.

Piers: I’ve been experimenting with an analogy, which I don’t think I’ve tested on you yet. The analogy is: if you think of a traditional blockchain as being a CPU, it has to sequentially work on things that are fed to it, whereas you can think of Cerberus as a GPU, where you can have multiple parallel compute instances.

The compute is more specific – it’s not general compute like a CPU – but as a result of being able to parallelize that compute and sort it into related and unrelated state changes, you are able to get massively more parallel throughput.

Dan: Yeah, that analogy works. 
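To make the CPU/GPU analogy concrete, here is a minimal toy sketch – not Radix code, and the shard-assignment function is an invented stand-in – showing why sorting transactions into unrelated groups lets throughput grow: a single sequential chain must order every transaction, while independent shards only pay for their busiest member.

```python
# Toy illustration of the CPU vs GPU analogy above. Not Radix code;
# shard_of() is a made-up deterministic mapping for demonstration.
from collections import defaultdict

def shard_of(state_key, num_shards):
    # Deterministically map a piece of state to a shard.
    return sum(state_key.encode()) % num_shards

def group_by_shard(transactions, num_shards):
    # Transactions touching state in different shards are unrelated,
    # so each group can be processed in parallel.
    shards = defaultdict(list)
    for tx in transactions:
        shards[shard_of(tx["state_key"], num_shards)].append(tx)
    return shards

def sequential_steps(transactions):
    # "CPU" model: one global chain must order every transaction.
    return len(transactions)

def parallel_steps(transactions, num_shards):
    # "GPU" model: shards advance independently, so the cost is bounded
    # by the busiest shard, not the total transaction count.
    shards = group_by_shard(transactions, num_shards)
    return max(len(txs) for txs in shards.values())

txs = [{"state_key": f"account-{i % 8}", "amount": 1} for i in range(80)]
print(sequential_steps(txs))   # 80 ordered steps on one chain
print(parallel_steps(txs, 8))  # 10 steps across 8 independent shards
```

The point of the sketch is only the shape of the scaling: add more shards of unrelated state and the per-shard workload shrinks, which is the linear-scaling property claimed for Cerberus.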

Wikipedia describes PoS with many variations on the way PoS has been implemented, in the article and its references. It also addresses the kinds of attacks PoS is vulnerable to. Which implementation variation do you like and why? How confident are you that you are not replacing one set of problems with another?

Piers: That’s a really big question. We’re going to be starting to do some podcasts about specific areas of technology, and discussions like this are going to be a great place for that. I think Dan’s already referred to one area where this has an advantage, which is that you don’t really end up having the nothing-at-stake problem with the way we’ve built the consensus system – that’s one of the big problems that currently exists in blockchain land.

Dan: You just don’t get it with multi-phase commitments. Some issues with PoS that you get on blockchain you don’t get here, and some issues that you didn’t get on blockchain you do get here.

Piers: It’s a great question, and one that we don’t have time to jump into in full. But yeah, more information will be coming out around both Cerberus and our PoS implementation as well.

Is the Dex still in the mix for the first release? If that works at global scale for a wide range of fungible assets, then that is in itself a killer app.

Dan: I would love it to be, but I’m not going to try and boil the ocean anymore. So, as Piers said early on, the initial MVP drop is cut back and just provides the fundamentals for moving along our scalability roadmap – being able to deploy tokens and simple apps and stuff. Radix Engine can do some of that; nothing too exotic, because of our focus. The reason we’re narrowing that focus down so much is that we’ve done community polls, we’ve spent time with people on Discord and on Telegram, and we ask anybody we talk to: what is the most exciting thing about Radix to you?

Everybody comes back with scale as the number one answer.

Piers: The Dex does come in a second or third drop. So this is one of these things that we need to be discussing as a community: once we get to our first drop, is the next level of scalability, or removing PoS and putting in a different Sybil mechanism, still the priority, or is something like the Dex actually the priority?

I would also like – and this is something that we won’t be able to do immediately – for us to be able to share some of our thinking on how things like this would be architected in our system, and give the community more opportunity to help development along on things like that, if it’s something they really care about.

I think that’s really interesting – there are some really cool things you can do once you have that sharded environment. For things like the Dex, a big element goes into how the sharding model is done, but it’s not number one on our priority roadmap.

Another reason that you probably don’t want that immediately is because the Dex is only really useful if there are assets on the platform. A Dex is a function of the wealth of the system: both the different assets that are available, and the amount that people want to be able to trade or exchange those assets.

So the first thing that really needs to come before a Dex is making Radix a really great way of building scalable assets, which is what the Radix Engine was designed to be fantastic at. And then, as that ecosystem emerges, you can start to think about how we can make the system better for those people, for the functions they need.

The wonderful thing about public ledgers is this interoperability between assets, this ability for you to move between them and exchange them – that is a killer app.

Dan: Yeah. It’s like, where would EtherDelta be without ERC20, right?

Are there already useful apps lined up or close to being lined up?

Piers: We’re working on a few (Dan: Wallets?). You need to be able to do some very simple things on a platform, and you want some standard things to be there. So yeah, all of the things that we think are absolutely essential, we are already working on or have planned in the works.

As we get to the point where we have a stable specification for how the system works, for people who want to be building their third-party wallets and third-party applications, we can then actually start to give them a way to build on the platform again – then we’ll definitely be re-engaging.

But for the time being, again, this is all about focus on the technology and getting it to a point where people can usefully build on it.

What is the current token distribution? Is the current token distribution static, or will you consider alterations?

Piers: So the share of who is going to get what in the genesis atom is going to stay as it is in the economics paper. However, there are a few things that we need to address – distribution is not great at the moment. We want to make sure that we’re addressing ways of making distribution more open off-platform. This ties into a question from before.

It’s very likely that we’re going to need to introduce something like an emission to the platform for the PoS to work properly. So there are going to be some discussions around that. Again, you know, we’re very happy to hear any suggestions that people have from the community.

That’s one of the next things that we need to be really looking at: given the fact that we’re moving to PoS, how can we make sure that the security of the network isn’t going to be a problem with the way the distribution currently works?

PoS will lead to greater centralization. Most of the GC members are expected to run nodes. Should we open the debate and increase the shares of the GC? This would provide greater security at the early stages, and they would be able to benefit from their early investment.

Dan: I don’t fully understand what that’s asking. 

Piers: Yeah. I think it’s saying that there’s a worry about arbitrary control by something like the foundation at the start. I think that every network has an initial bootstrapping problem: how do you make sure that, at the start, the network is secure?

This is something that we’re going to be making sure of. This comes into the distribution questions – maybe the foundation needs to distribute more to the community between now and when the platform is live. Again, these are questions that we need to be analyzing with respect to how PoS functions.

Dan: There is a big problem there as well; we need to figure out what that looks like with PoS. Then there’s also the consideration that, of course, not all of your currency is going to be used to stake. So in the investigations we did around PoS, one of the things we looked at was what percentage of the currency supply is used to stake against the network.

That was a bit crude, and I haven’t spent too long on it, but to put a ballpark on it, it’s usually around three to five percent, with some edge cases and exceptions depending on what the network is. It’s not a great deal, really. It kind of parallels with things like Bitcoin as well.

How will Radix be better than Ethereum’s Solidity?

Dan: So, I mean, they are two very different things from the outset. Solidity is a Turing-complete language, and the Radix Engine is built around composability. There are benefits, pros and cons, with both of those.

With Solidity, you can pretty much build anything – you could build Pac-Man in Solidity, you could build a complete game. With that come cons, which are bugs for a start, and case-based conditions: how much gas is this going to require, because there’s a loop in here, and all kinds of edge cases – a nightmare to develop for. It works like that generally with Turing-complete languages.

Piers: So I think the buildability component of that is pretty big, but there are also two other components, which I think are less well understood, with how the atom model works: so Dan talks about composability, but there’s also the scalability of the applications you build.

So in Ethereum you’re required to run the entire virtual machine for every single thing or program that is being run on the network, which in turn creates massive amounts of congestion. If you then move to Ethereum’s sharding model theory – the Casper implementation – you have all of this problem of moving things across shards, because it’s not state-sharded, so your transitions between shards become very, very heavy on your main chain, and you end up having all of these bottlenecks that come just by virtue of how the smart contract engine has been built, and that are difficult to get over. The other side of this is composability, which we’ve written an article about on our blog – please go and check it out.

Nevertheless, essentially it’s this way in which you can make assets on the same ledger talk to each other and interact, and fundamentally that’s a really difficult thing to do with Solidity and the VM. What Radix does is make the assets much more composable with each other, which means that when you want to start building more complicated things, where you’re composing these things together, you have so much more flexibility and scalability with something like the Radix Engine and the Radix network than you do with something like Solidity.

So look, Solidity and Ethereum have really blazed the trail on how to do this, but the fundamental decisions on how they’ve built that architecture – the decision to go with an account model and the decision to go with Turing completeness – mean that they’re constantly going to be up against scalability problems, and constantly up against problems where these applications don’t really speak to each other very well.

Dan: Because you’ve got to carry the state around and you’ve got to execute all of it, and it’s kind of complicated state – it’s not like you can chop it up very easily – the Radix Engine is instead built around state flipping.

So basically, if I want this to go to that, you’re just flipping the states of these particular things, and in the way you put those things together you can do complex things a lot more lightly. They’re lighter bricks, but you can put them together in a particular way to make something nice. That whole composability thing, and the way it sits with the atom model, means you can shard that stuff as well, which is also difficult to do with Turing completeness.
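A rough sketch of the state-flipping idea described above – this is an illustration only, not actual Radix Engine code, and `Particle` and `transfer` are invented names for the demonstration: instead of executing a general program against a big shared state, a transfer just flips one piece of state from live to consumed and emits new live pieces.

```python
# Toy illustration of "state flipping" (UTXO-like consumable state).
# Not Radix Engine code; names here are invented for the example.
from dataclasses import dataclass

@dataclass
class Particle:
    owner: str
    amount: int
    up: bool = True  # up = live/spendable state; down = consumed

def transfer(particle, new_owner, amount):
    """Flip an UP particle DOWN and emit the resulting UP particles."""
    if not particle.up:
        raise ValueError("state already consumed")
    if amount > particle.amount:
        raise ValueError("insufficient amount")
    particle.up = False  # flip the old state down
    outputs = [Particle(new_owner, amount)]
    if particle.amount > amount:  # change back to the sender
        outputs.append(Particle(particle.owner, particle.amount - amount))
    return outputs

coin = Particle("alice", 10)
out = transfer(coin, "bob", 3)
print(coin.up)                              # False: old state consumed
print([(p.owner, p.amount) for p in out])   # [('bob', 3), ('alice', 7)]
```

Because each transition only touches the particles it explicitly names, unrelated transitions never contend on shared state – which is what makes this style of state machine shardable in a way a global VM state is not.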

How will the Radix Engine be better than Algorand’s TEAL?

Dan: I haven’t even had the time to look at TEAL. I’ll take that as a note that I should go and have a look at it.

Some time back, Zalan was showing some examples of how Prometheus and Grafana implementations can give an awesome monitoring view of running a node, or several of them – all sorts of statistics, graphics and curves. Is this something you have planned?

Piers: Yeah. Our new head of DevOps, Shambu, will absolutely be involved in making sure that running nodes is as smooth as it was before; much of the tooling we’ve been using we’ll use again. It focuses more on how the box is performing rather than how the box is running.

Dan: That was super handy for the 1MTPS tests. All the same conditions apply when I’m running tests at home with Cerberus as well, all hooked into the API endpoints, which are much the same.

Piers: Yeah, I think we’ve got through everything with seven minutes to spare. Thanks for everyone’s questions – anything you want to say in closing?

Dan: It’s tough, solving the scalability problem. Just as you think you’ve got it, the little bastard comes up behind you and says no! Hopefully you’ve all read the documents that have been put together in the blog posts about this whole journey.

If it was easy, somebody would have been doing 1MTPS three, four, five years ago. So it just highlights the magnitude of the problem we’re trying to solve, and I know it’s disappointing. It’s disappointing for me when things go wrong and we have to go back to the drawing board on some things, or shift timelines around. It’s annoying and it’s frustrating, but we’re all moving forward.

Piers: How do you feel overall about Cerberus and the future?

Dan: We can still make it. If I didn’t feel that, then I wouldn’t be able to sit here and have these conversations with you guys, the community, and everybody else. So I’m still confident, I’ve still got that fire, but at the same time I’m allowed to be disappointed.

Piers: It’s hard to kill your babies. 

Dan: It is hard to kill your babies.

Piers: The big learning that we’ve taken with this approach is not just how you can create something that builds towards the future, but how you can have milestones of deliverables that actually serve a need in the market, and build a community that can tell you what they actually need from that as you go. Ultimately, how useful these platforms are going to be, and how important they’re going to be to the world, is going to be decided by everyone who decides to use them.

The decision to use one of these platforms is really going to come down to whether or not it serves your need, and that’s something we’re going to constantly have to be conscious of. Doing it this way allows us both to give people more visibility on us getting towards something, and it gives us a less ambitious first deliverable.

You know, we’re not going to be doing a million transactions per second to start off with – we’re not going to get even close to that – but it is going to be a platform where you can see a clear roadmap from where it is in the first version to how it’s going to get there, and that allows us to build up to that scale while other people are using the network, to the point where it’s necessary to deliver that scale.

Because ultimately, you know, if we deliver a million transactions per second on day one – no one’s going to be using a million transactions per second on day one; that’s the entire transactional throughput of the world. It’s something we can build incrementally towards, and it’s something that we want to invite many more people in to come and share that journey with us as well.

Dan: One thing, looking at it in retrospect: we didn’t quite get that with Tempo, for various reasons. But I remember when we hit 50 TPS on the first alpha test way back in 2013, and everybody went crazy because it was 50 TPS. We could not have imagined 1MTPS tests on anything.

Piers: I was a naysayer inside. I was like, yeah, a thousand is enough!

Dan: There is progress. If it had kind of flat-lined, then maybe I’d be questioning more as well. But there’s a clear way we’re trying to progress, in terms of what we’re able to do, what we’ve learned, and what we’re able to put together.

Piers: So. Yeah. Okay. All right. Thank you very much everyone and have a wonderful rest of your week.