Radix - Tempo
Dan Hughes
25th September 2017
In January 2009, Satoshi Nakamoto introduced the first successful implementation of a permissionless and trustless Distributed Ledger Technology (DLT), the Blockchain. It provided a secure, reliable and decentralized method to timestamp events, providing total order, without the need for overarching third-party regulation or trusted entities. The ability to timestamp events in such a manner allowed Satoshi to solve the "double spend" problem and implement the concept of a digital currency. This manifested as the digital currency Bitcoin [1].
Bitcoin, and its success, has inspired others to take Satoshi's blockchain and apply it in many innovative ways. Ethereum is a notable example: first proposed by Vitalik Buterin [2] in late 2013, it allows arbitrary Turing-complete logic to be executed upon a blockchain, enabling smart applications, contracts, and even autonomous organizations to exist and function in a decentralized environment.
However, blockchain implementations face several challenges, including scalability, efficiency and security, which are now beginning to hinder their progression into the mainstream. Alternative architectures have been developed, the most notable being the DAG (Directed Acyclic Graph), and while a DAG provides some improvements, at sufficient scale it too succumbs to many of the same problems as blockchain.
A method is needed for reaching trustless consensus, across a distributed ledger, that can scale in an efficient, unbounded linear fashion. In this paper, we propose a solution to the double-spending problem using a peer-to-peer network of nodes with logical clocks to generate a temporal proof of the chronological order of events. The system is secure provided that nodes possessing a historical record of the generated temporal proofs can participate.
Further, by providing the ability to execute arbitrary scripts, a fully expressive, Turing complete, state change system may be built on the new consensus paradigm. However, this paper does not deal with the developer environment for decentralised applications on Radix, which will be explored in detail in a later paper.
The Tempo Ledger consists of three fundamental components:
A networked cluster of nodes
A global ledger database distributed across the nodes
An algorithm for generating a cryptographically secure record of temporally ordered events.
An instance of Tempo is called a Universe and any event within a Universe, such as a message or transaction, is represented by an object called an Atom.
All Atoms contain at least one endpoint destination, represented by an endpoint address. Endpoint addresses are derived from an identity, such as a user's public key, and are used to route events through the network.
Atoms generally take the form of either Payload Atoms or Transfer Atoms. An example of a Payload Atom is a communication, sent to one or more parties, like an email or an instant message. Transfer Atoms are used to transfer the ownership of an item, such as currency, to another party.
Atoms may also contain other Atoms, as well as various other data, depending on their purpose. This extra data might include conditional destinations, owners, participants, associations and application meta-data. Exotic Atom variants can be created for specific application purposes if required.
Figure 1: Standard Atom types & basic structure
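The Atom structure described above can be sketched as follows; the class and field names are illustrative assumptions for this example, not Radix's actual data model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of the Atom types described above; field names
# are illustrative assumptions, not the actual Radix wire format.

@dataclass
class Atom:
    destinations: List[str]          # endpoint addresses derived from identities
    signature: Optional[bytes] = None

@dataclass
class PayloadAtom(Atom):
    payload: bytes = b""             # e.g. an email or instant-message body

@dataclass
class TransferAtom(Atom):
    atoms: List[Atom] = field(default_factory=list)  # nested Atoms (e.g. Consumers/Consumables)
    metadata: dict = field(default_factory=dict)     # application meta-data, associations, etc.

# A Payload Atom carrying a message to two endpoint destinations.
msg = PayloadAtom(destinations=["addr_alice", "addr_bob"], payload=b"hi Bob")
```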
Clients may create and submit Atoms to the network via any node they are connected to. A submitted Atom is then processed by the network and, if valid, a Temporal Proof is constructed for, and associated with, that Atom from that point forward.
Tempo relies heavily on eventual consistency to achieve a total ordering of events.
The Tempo ledger is a distributed database which stores all Atoms that exist in a Universe. It is designed to be horizontally scalable, supports semi-structured data, and can update entries.
A local ledger instance operating on a node can be configured to store all, or part, of the global ledger. A subset of the global ledger is known as a shard. The total shard space is configurable per Universe, but is immutable once deployed. Nodes can reconfigure to support any subset of the shard space, helping to ensure that the Universe can handle large load requirements without requiring expensive hardware to operate a node. Critically, this enables performance constrained IoT devices to participate as first-class citizens in a Universe.
Sharding is a fundamental design feature of Radix, and it demands a robust approach for guaranteeing that Atoms are in the correct shards, and an efficient method for determining which nodes will retain copies of which Atoms.
Considering that all Atoms must have at least one endpoint in their destinations, we can derive a shard ID using the destination, truncated to the shard space dimensions via a modulo operator. Some Atoms, such as Transfer Atoms, may have multiple destinations and therefore will be present in multiple shards.
This is by design, as an Atom that is present in multiple shards increases the redundancy and availability of that Atom. A further benefit is that any Atom that performs an inter-shard transfer is present in both the previous owner's and new owner's shards. This, in part, eliminates the need for a global state and mitigates any expensive inter-shard state verification operations needed to prevent "double spends".
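The derivation above can be sketched as follows; the hash function and shard-space size are assumptions made for illustration, since the paper only specifies that the destination is truncated to the shard space via a modulo operator:

```python
# Illustrative sketch: deriving a shard ID from an endpoint address by
# truncating it to the shard space with a modulo operator, as described above.
import hashlib

SHARD_SPACE = 2**10  # per-Universe constant, immutable once deployed (assumed size)

def shard_id(endpoint_address: str) -> int:
    # Interpret a digest of the address as an integer, then truncate.
    digest = hashlib.sha256(endpoint_address.encode()).digest()
    return int.from_bytes(digest, "big") % SHARD_SPACE

def shards_for(destinations):
    # A Transfer Atom with multiple destinations lands in multiple shards,
    # increasing that Atom's redundancy and availability.
    return {shard_id(d) for d in destinations}

ids = shards_for(["addr_alice", "addr_bob"])
```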
While Payload Atoms are relatively simple, comprising some arbitrary data, destination(s) and a signature, Transfer Atoms are more complex.
An owned item is represented by a Consumable. Ownership is defined as a sequence of Consumables, which provide an auditable history of owners over time. Consumables are a subclass of Atom.
To transfer ownership of an Item contained in an Atom to Bob, Alice creates a Consumer, which references the Consumable that specifies her as the current owner, and signs it with her identity. Consumers are also a subclass of Atom, and identify a Consumable that is to be "consumed".
She also creates a new Consumable, which contains the Item being transferred, along with the identity of the new owner: Bob. The Consumer and Consumable are packaged into a new Atom and submitted to the network for verification.
Figure 2: Ownership Transfer
Any node that receives Alice's Atom can now trivially validate that Alice is indeed the current owner of Item. This is performed by validating the signature of the submitted Consumer against the owner information present in the last Consumable for Item held in the node's local ledger. If the signature validates successfully, then Alice must be the current owner. The transfer will then execute and Bob becomes the new owner.
Some transfer operations may require that Item not be transferred in its entirety, as is common with currency. Consumables can be configured to allow partial transfers of an item, if the item specification allows it. In this instance Alice would create two Consumables: one to Bob for the principal, and another back to herself for the remainder. Similarly, multiple Consumers may be used to reference many Consumables owned by Alice and transfer them all to Bob in one execution, thus guaranteeing atomicity and reducing network load.
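A minimal sketch of this Consumer/Consumable flow, assuming a simple quantity field and a stubbed signature (a real implementation would sign with Alice's key):

```python
# Hypothetical sketch of the Consumer/Consumable transfer flow above.
# Signatures are stubbed with an identity string for brevity.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Consumable:
    item: str
    quantity: int
    owner: str          # identity of the current owner

@dataclass
class Consumer:
    consumable: Consumable
    signed_by: str      # stand-in for a cryptographic signature

def transfer(current: Consumable, sender: str, recipient: str,
             amount: int) -> Tuple[Consumer, List[Consumable]]:
    assert current.owner == sender, "only the current owner may spend"
    assert amount <= current.quantity, "cannot spend more than is owned"
    consumer = Consumer(consumable=current, signed_by=sender)
    outputs = [Consumable(current.item, amount, recipient)]
    if amount < current.quantity:  # partial transfer: remainder back to sender
        outputs.append(Consumable(current.item, current.quantity - amount, sender))
    return consumer, outputs

# Alice sends 60 of her 100 units to Bob; 40 return to herself.
alice_coin = Consumable("coin", 100, "alice")
consumer, outputs = transfer(alice_coin, "alice", "bob", 60)
```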
To ensure swift delivery of events to all nodes in a shard, Tempo employs a Gossip protocol to communicate information around the network. Gossip protocols have proven to be an efficient and reliable means of achieving mass propagation of information in a peer-to-peer network.
Nodes broadcast information about their configuration, such as a set of shards they wish to receive events and state information for, and any network services they may offer (such as relay and discovery) allowing further optimization of information delivery. They may also broadcast metadata about the other peers they are connected to, further assisting in the routing of information and events.
Nodes within the network adopt a "best effort" approach to keeping their local ledgers up to date via the active synchronization and gossip protocols. When receiving an Atom via any of these means, a node will perform validation of the Atom against its local ledger. If a provable discrepancy is discovered, a node can communicate this information to its neighbouring nodes allowing them to act and resolve the discrepancy.
Though reliable, this approach will undoubtedly lead to occasions where events are missed and the state of an item may be incorrect in some local ledger instances. To resolve these inconsistencies, nodes rely on detectable causal history anomalies triggered by events. They can then query other nodes to obtain missing information and achieve eventual consistency with the rest of the network regarding an event and its subsequent state.
For example, a node receives an Atom that results in an inconclusive validation procedure, perhaps due to a reference to a Consumable that the node does not have. The node can then query its neighbouring nodes to return any dependency Atom(s) that reference that Consumable and revalidate. The node then has consistent state for the Consumable.
For Atoms to be validated correctly, they need to be routed to the nodes that contain the associated shards allowing the causal history of any Consumables, state and other information to be verified.
Endpoint destinations provide the required routing information to ensure that Atoms are received by appropriate nodes via the gossip communications layer.
Consider the example of Alice transferring Item to Bob. Alice included her endpoint destination, which indicates she is transferring from Shard A, and included Bob's endpoint destination, which indicates she is transferring to Shard B. Nodes storing Shard A need to be aware of the event of Alice's spend, of Bob's receipt, and of the state of Item in each shard. After the event, nodes storing Shard A no longer need to be aware of any future changes to the state of Item (unless it is sent again to Shard A). The responsibility for Item's state has transferred to any nodes storing Shard B. If Bob should then spend Item to an owner in another shard, the responsibility of maintaining the state of Item will once again change.
Figure 3: Gossip of Atom targeting Shards
Processing only events that affect state within a node's subset of the global ledger, and the shifting responsibility of state maintenance, greatly reduces total state processing overhead. This is key to the scaling performance of Tempo.
The foundation of Tempo consensus is Logical Clocks, which are a simple means of providing a relative, partial ordering of events within a distributed system [3].
Within Tempo, all nodes have a local logical clock; an ever-increasing integer value representing the number of events witnessed by that node. Nodes increment their local logical clock when witnessing an event which has not been seen previously. Upon storing an event the node also stores its current logical clock value with it. This record can then be used to help validate the temporal order of past events if required.
For any given node within Tempo, only the receipt of an Atom that has not been previously witnessed by that node is classed as an "event".
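The logical clock behaviour described above can be sketched as follows, assuming a simple in-memory ledger keyed by Atom hash:

```python
# Minimal sketch of the per-node logical clock described above: the clock
# increments only for Atoms not previously witnessed, and the clock value
# at receipt is stored alongside the event.
class Node:
    def __init__(self):
        self.logical_clock = 0
        self.ledger = {}           # atom hash -> clock value at receipt

    def witness(self, atom_hash: str) -> int:
        if atom_hash not in self.ledger:   # only unseen Atoms are events
            self.logical_clock += 1
            self.ledger[atom_hash] = self.logical_clock
        return self.ledger[atom_hash]

n = Node()
n.witness("atom-1")
n.witness("atom-2")
n.witness("atom-1")                # duplicate: the clock must not advance
```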
Temporal proof provisioning
A Universe is split into Shards, where nodes are not required to store a complete copy of the global ledger or state. However, without a suitable consensus algorithm that allows nodes to verify state changes across the shards they maintain, "double spending" would be a trivial exercise, where a dishonest actor could spend the same item on two different shards.
Temporal Proofs provide a cheap, tamper resistant, solution to the above problem.
Before an event can be presented to the entire network for global acceptance, an initial validation of the event is performed by a subset of nodes which, if successful, results in a Temporal Proof being constructed and associated with the Atom, and a network-wide broadcast of the Atom and its Temporal Proof.
Using Alice's transfer of Item to Bob as an example, the process starts with Alice selecting a node she is connected to and submitting her Atom, requesting that a Temporal Proof of a specific length be created.
Upon receiving the request, the node will, if it is storing either Alice's or Bob's shard, perform a validation of the Atom. If it holds a copy of Alice's shard, it will ensure that Item has not already been spent by Alice. If any provable discrepancy is found, such as Item having already been spent by Alice, or the Atom being badly constructed, processing of the Atom will fail. Otherwise, the node will determine a set of directly connected nodes which are storing either shard, select one at random, and forward it the submission request. If a suitable node is not found, the node will search through its node graph and associated metadata to discover viable relay(s) with connections to nodes maintaining the required shards. Once a suitable candidate is discovered, the node will append a space-time coordinate (l, e, o, n) and a signature of Hash(l, e, o, n) to the Temporal Proof (creating a new one if none is yet present), where l is the node's logical clock value for the event, o is the ID of the observer node, n is the ID of the next node, and e is the event hash. The node will then transmit the Atom and the current Temporal Proof to the candidate node.
Figure 4: Temporal Proof
Upon receiving the submission, the next node will also validate the Atom and, if successful, will select a subsequent node to forward the submission to, append its own coordinate and signature to the Temporal Proof, and transmit the Atom and the Proof onward. The process repeats until the required number of nodes have participated in the Temporal Proof, or a provable discrepancy is discovered by any node involved in the process.
Figure 5: Temporal Proof provisioning and gossip of Atom
Temporal Provisioning of the Atom in the above example will produce a sequence of coordinates, one per participating node, each comprising a Logical Clock value, the Event hash, the Observer ID and the Next Observer ID.
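A single provisioning hop can be sketched as follows, assuming a coordinate laid out as (logical clock, event hash, observer ID, next-observer ID) and an HMAC standing in for a real per-node signature:

```python
# A sketch of one provisioning hop: each participating node appends a
# space-time coordinate and a signature over it to the growing Temporal
# Proof. The field layout and the HMAC stand-in are assumptions.
import hashlib
import hmac
from dataclasses import dataclass
from typing import List

@dataclass
class Coordinate:
    l: int       # node's logical clock value for the event
    e: str       # event (Atom) hash
    o: str       # observer node ID
    n: str       # next observer node ID
    sig: bytes = b""

def append_hop(proof: List[Coordinate], clock: int, event_hash: str,
               observer: str, next_node: str, key: bytes) -> List[Coordinate]:
    # Sign the coordinate fields so the proof is tamper resistant.
    payload = f"{clock}|{event_hash}|{observer}|{next_node}".encode()
    sig = hmac.new(key, payload, hashlib.sha256).digest()
    proof.append(Coordinate(clock, event_hash, observer, next_node, sig))
    return proof

proof: List[Coordinate] = []
append_hop(proof, 41, "atom-hash", "node-P", "node-Q", b"node-P-key")
append_hop(proof, 97, "atom-hash", "node-Q", "node-R", b"node-Q-key")
```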
The length of a Temporal Proof defines how many nodes should be part of the provisioning process. A length that is too short reduces the efficiency of resolving conflicts between Atoms should they arise, and may result in an Atom not being correctly verified, requiring it to undergo temporal order determinism at each node. Lengths that are very long unnecessarily increase the bandwidth load within the network, as well as the time taken for an Atom to become final.
Once the Temporal Proof length has been determined, if the Atom being transmitted has any dependencies or Consumables, the network can also optimise node selection to improve the future speed of verifying that transfer. This is because an auditable causal history can easily be created if a node that was involved in validating a previous transaction, upon which this transaction relies, is included in the new temporal proof.
In simple terms, if Alice sends Item to Bob, and Bob then sends Item to Carol, it is highly beneficial for network efficiency if one of the nodes involved in creating the Temporal Proof for the Alice → Bob transfer is also part of the Temporal Proof for the Bob → Carol transfer.
(Table: Temporal Proof participants for successive Alice → Bob, Bob → Carol, Carol → Dan and Dan → Frank transfers, with nodes shared between consecutive proofs.)
Achieving Temporal Proof causal history is relatively simple: if, when taking part in Temporal Provisioning, any candidate nodes available to a node are also part of the Temporal Proof of any dependencies of the Atom, the node will select one of those at random as a priority, provided it is not already part of the Temporal Provisioning for the Atom.
To increase the likelihood of creating a Temporal Proof with these properties, the length is again an important factor. For most purposes a modest length, scaled to an estimate of the number of nodes present in the network at that time, should be sufficient.
In the event of a conflict (e.g. a double spend) or anomaly between two events, the coordinates of a Temporal Proof may be used to construct a vector clock [4] using the logical clock component of each. One or more vector clocks can then be used to determine the partial order of the associated Atoms.
Given two vector clocks, VC_A and VC_B, for two Atoms A and B, where each element holds the logical clock value recorded by a particular node for the corresponding event, it is trivial to determine that the Atom associated with VC_A was presented first to the network when both vector clocks have a common node whose logical clock value in VC_A is less than its value in VC_B.
Vector clocks follow a simple ruleset to determine order: VC_A is less than (or before) VC_B if every common element of VC_A is less than or equal to the corresponding element of VC_B, and at least one element of VC_A is strictly smaller.
Vector clock comparisons provide a simple and efficient method to determine order in many cases of conflict without requiring the use of more expensive means. However, should a pair of vector clocks not contain a common node between them, they are said to be concurrent, and an additional mechanism is required for these situations.
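The ruleset above can be sketched as a comparison over the nodes common to both clocks; representing a vector clock as a dictionary of node ID to logical clock value is an assumption made for illustration:

```python
# Sketch of the vector clock comparison described above: a clock precedes
# another if every shared component is <= its counterpart and at least one
# is strictly smaller; with no common node, the clocks are concurrent.
def compare(vc_a: dict, vc_b: dict) -> str:
    common = set(vc_a) & set(vc_b)
    if not common:
        return "concurrent"          # requires commitment-based resolution
    if all(vc_a[n] <= vc_b[n] for n in common) and \
       any(vc_a[n] < vc_b[n] for n in common):
        return "A before B"
    if all(vc_b[n] <= vc_a[n] for n in common) and \
       any(vc_b[n] < vc_a[n] for n in common):
        return "B before A"
    return "concurrent"

# A common node ("n1") saw Atom A at clock 5 but Atom B at clock 7.
order = compare({"n1": 5, "n2": 9}, {"n1": 7, "n3": 2})
```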
Given two concurrent vector clocks, VC_A and VC_B, which a node has received, the node has two mechanisms available to determine which is the earlier Temporal Proof.
The first mechanism references the node's local ledger, which it can trust to return honest information regarding the events it has seen thus far and not subvert the truth.
The node searches for events after VC_A in which nodes from VC_A were involved, and attempts to discover nodes from VC_B that are part of those events' Temporal Proofs, comparing the logical clock values.
Should the node discover within its local ledger an event whose vector clock, VC_C, contains a node in common with VC_A and another in common with VC_B, the temporal order of VC_A and VC_B can be resolved: if the logical clock of the node common with VC_A is greater in VC_C than in VC_A, then VC_C was created after VC_A; and if the logical clock of the node common with VC_B is less in VC_C than in VC_B, then VC_A and VC_C were created before VC_B.
Should the initial search not provide any results, event vector clocks can be linked together in a sequence to increase the likelihood of discovering the solution. If VC_C contained a node from VC_A but no nodes from VC_B, the node could extend the search to include the other nodes present in VC_C.
Resolving concurrent events in this manner is simple, fast, efficient and independent of any further external information being required at the time of validation. It is sufficient to resolve a large percentage of concurrent events providing that the node is an active participant and is storing one or more shards relevant to the conflict.
If a resolution is not available at the time of conflict, the node will store both conflicting Atoms in its local ledger, marking them according to which it saw first, and employ a second event ordering mechanism: Commitment Order Determination.
For light nodes storing only their own events, such as IoT devices, commitments are the only way in which they can determine order.
To assist with total order determination of events, nodes declare to the network a periodic commitment of all events they have seen.
This commitment is produced either when a node takes part in Temporal Provisioning for an event, or at will over an arbitrary interval. A commitment is a Merkle hash [5] constructed from the events a node has witnessed since submitting its previous commitment, with the first leaf being the last commitment the node submitted, producing a linked sequence of commitments over time.
Figure 6: Commitment Sequence
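The chained commitment construction can be sketched as follows, assuming SHA-256 and duplication of the last leaf for odd-sized levels (both assumptions for the example):

```python
# Sketch of the commitment scheme described above: a Merkle root over the
# Atom hashes witnessed since the last commitment, with the previous
# commitment as the first leaf, producing a linked sequence over time.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                    # duplicate last leaf if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def commit(prev_commitment: bytes, witnessed_hashes) -> bytes:
    # The previous commitment is the first leaf, chaining commitments.
    return merkle_root([prev_commitment] + list(witnessed_hashes))

c1 = commit(b"\x00" * 32, [h(b"atom-1"), h(b"atom-2")])
c2 = commit(c1, [h(b"atom-3")])               # linked to the prior commitment
```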
If the node is taking part in a Temporal Provisioning process, the commitment is included as an additional element of the node's temporal coordinate, extending the space-time coordinate. The commitment is tamperproof, as the coordinates are signed by the producing nodes.
Figure 7: Temporal Proof with commitment
A node may be requested at any time to provide information enabling verification of any commitments it has produced. It should deliver all the relevant Atom hashes to the requesting node, allowing that node to reconstruct the commitment hash and verify it. Requesting nodes can then take appropriate action in the event of a fraudulent commitment being detected.
This uncertainty of when a commitment verification may be requested also prevents nodes from tampering with their logical clock values, as all commitments have a logical clock value associated with them and so tampering is easily detectable.
For example, if the logical clock value associated with a node's first commitment was 100, and the value associated with its second was 200, then the second commitment should contain 100 items. If a requesting node is not returned 100 hashes when verifying, tampering with the logical clock may have occurred.

Commitments are also used to provide a secondary mechanism to determine the temporal order of events. Imagine a node has received an Atom, A, that conflicts with another Atom, B. The node contacts one of its neighbours and queries it for any commitment information corresponding to Atom A. The neighbour responds with: its commitment containing Atom A; a set of Atoms which were witnessed after Atom A within the same commitment; its logical clock values for Atom A and those Atoms; and the leaves of the Merkle hash. With this information the node can verify the logical clock values of the returned Atoms, the integrity of the commitment, and that the returned Atoms are part of it.
Figure 8: Commitment Validation
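The logical clock tamper check from the example above can be sketched as a simple count comparison:

```python
# Sketch of the tamper check described above: the gap between the logical
# clock values attached to two consecutive commitments must equal the
# number of Atom hashes the node returns for the later commitment.
def clock_consistent(prev_clock: int, this_clock: int, returned_hashes) -> bool:
    return len(returned_hashes) == this_clock - prev_clock

# Commitment 1 at clock 100, commitment 2 at clock 200: expect 100 items.
ok = clock_consistent(100, 200, ["hash"] * 100)
suspect = clock_consistent(100, 200, ["hash"] * 97)  # too few: possible tampering
```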
Once this information has been obtained, the node can query the node which delivered the conflicting Atom, B. It requests that this node return commitment and logical clock information for Atom B and any of the previously returned Atoms, as well as the leaves of the relevant Merkle hash, allowing the information to be verified.
Comparing the logical clock values returned by the two nodes for these Atoms then establishes which of the conflicting Atoms happened before the other.
As commitment and logical clock values are now available, resolution is simple: Atom A was before Atom B if any of the Atoms returned by the neighbour are present within a commitment delivered by the node which supplied Atom B, where that node's logical clock value for any such Atom is less than its logical clock value for Atom B. If Atom A was the last Atom within a commitment, the neighbour may return a subsequent commitment, or the node can contact another neighbour; the likelihood of Atom A being the last Atom within all commitments for all nodes is infinitesimal. Should the queried node not have any of the returned Atoms available in its local ledger, the node can query other neighbouring nodes about Atom A via the same process. If a resolution is not available at the time of the conflict, the node will store Atom A and Atom B in its local ledger, marking them according to which it saw first, until a resolution is determined or discovered.
This paper has proposed a method for determining total order of events within a distributed system, without relying on trust, that is scalable, efficient and agnostic to its operating environment. To reduce overhead and increase performance, our solution defines a structured, shardable architecture that limits state transfer information to only the members of the network that need it. A peer-to-peer network provides the means for communication and information routing via a reliable and robust gossip protocol. Nodes maintain a logical clock when witnessing events and periodically produce a tamper-proof commitment representing those events. When witnessing events, nodes collaborate to create Temporal Proofs which contain verifiable space-time coordinates, which are used to construct vector clocks and determine when an event was first seen by the network. Nodes can join and leave the network at will, relying on detectable consistency anomalies to synchronize and bring them up to date.
1. S. Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System, 2008: https://bitcoin.org/bitcoin.pdf ↩
2. V. Buterin, Ethereum: A Next-Generation Smart Contract and Decentralized Application Platform, 2014: https://ethereum.org/pdfs/EthereumWhitePaper.pdf ↩
3. L. Lamport, Time, Clocks, and the Ordering of Events in a Distributed System, 1978: http://lamport.azurewebsites.net/pubs/time-clocks.pdf ↩
4. C. J. Fidge, Timestamps in Message-Passing Systems that Preserve the Partial Ordering, 1988: http://zoo.cs.yale.edu/classes/cs426/2012/lab/bib/fidge88timestamps.pdf ↩
5. R. C. Merkle, Merkle Tree, 1979: https://en.wikipedia.org/wiki/Merkle_tree ↩