Mon. Jun 21st, 2021

    Authors: Fabrice Le Fessant and Thomas Sibut-Pinot, OcamlPro

    In this article, after a reminder of what a blockchain is, we want to emphasize the importance that performance in this area will have in the coming years. In particular, we focus on the bandwidth and latency of a blockchain, obtained through two mechanisms that are still rarely deployed: immediate finality, which guarantees that the addition of a block is immediately irreversible, and state sharding, which allows the blockchain’s load to be distributed among all its validators.

    We will also introduce you to Free TON, a young blockchain that is one of the few in which these two mechanisms are available on a functional and mature platform. The Dune Network community, born of a fork of Tezos [1], has decided to team up with this blockchain, contributing its validation and tooling skills to Free TON’s exceptional performance.

    Introduction

    At a recent blockchain webinar hosted by the Blockchain working group (GT) of the Systematic cluster, discussions turned to the importance of the number of validators of a public blockchain. For example, a blockchain like Tezos uses about 400 validators, while others use only a few dozen. One might mistakenly conclude that the greater the number of validators, the more powerful and efficient the blockchain. It is almost the opposite: while a large number of validators is required to guarantee the blockchain’s resistance to certain attacks, that same number is an obstacle to efficiency, unless a particular mechanism, called sharding, is used. Only a few blockchains today have this mechanism, which, according to our analysis in this article, is necessary to ensure the usability of a blockchain over time and thus the stability of the applications that use it. It is also almost impossible to add this mechanism retroactively: as many companies choose a blockchain on which to deploy their future applications, this criterion should be among the first they consider. At the end of this article, we introduce some blockchains that offer this mechanism, such as Free TON.

    This number of validators is historically linked to the decentralization criterion so dear to Satoshi Nakamoto, the inventor of Bitcoin. The last decade has seen a kind of Cambrian explosion of blockchains and decentralized consensus algorithms, many of which have turned out to be successful. In parallel with the refinement of these algorithms, significant efforts have been made to reduce the practical limitations of blockchain applications.

    Over the past few years at OCamlPro, we have accumulated considerable experience in developing applications for Tezos and Dune Network [2]. As one would expect given the evolution of IT over the past decades, we emphasize the importance of two criteria: on the one hand, performance (computing power, storage, and bandwidth for the maximum number of users); on the other hand, latency (the speed at which a transaction is guaranteed, which is crucial for real-time application interactivity). In addition to these expected problems, there is another, blockchain-specific problem: finality, a term that we will define later.

    After a brief excursion into history and a reminder of the first steps of blockchain, we will look, with some examples of operating blockchains, at how these performance problems are solved in practice, in particular through the concept of sharding. We will see that sharding involves completely rethinking the architecture of contracts and their interactions: if we want scalability, it is not enough to adapt existing smart contracts. In short, we need to choose the right high-level infrastructure.

    Recalling Blockchain: Bitcoin And The Basics Of Blockchain

    Blockchain can be described as a distributed database.

    Unlike an ordinary database, its specificity is that at any time its state can be recomputed by replaying all operations since its creation. These transactions, or operations, are stored in cryptographically linked blocks starting from the initial block, called the genesis block. This cryptographic chaining between blocks guarantees the immutability of the blockchain; in other words, past blocks can never be changed in the future.
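
    To make this chaining concrete, here is a minimal sketch in OCaml of a hash-linked chain, using the standard library’s MD5-based Digest module purely for illustration (real blockchains use stronger hashes such as SHA-256): any change to a past block changes its hash and breaks every subsequent link.

        (* A block records operations and the hash of its predecessor. *)
        type block = {
          index : int;               (* position in the chain *)
          prev_hash : string;        (* hex hash of the previous block *)
          operations : string list;  (* the transactions of this block *)
        }

        (* Hash a canonical string rendering of the block's fields. *)
        let hash_block b =
          Digest.to_hex
            (Digest.string
               (string_of_int b.index ^ b.prev_hash
                ^ String.concat ";" b.operations))

        (* The genesis block has no predecessor, by convention "0...0". *)
        let genesis = { index = 0; prev_hash = String.make 32 '0'; operations = [] }

        (* Appending links the new block cryptographically to its predecessor. *)
        let append prev operations =
          { index = prev.index + 1; prev_hash = hash_block prev; operations }

        (* Verify the whole chain by recomputing every link from genesis. *)
        let rec valid = function
          | [] | [ _ ] -> true
          | a :: (b :: _ as rest) -> b.prev_hash = hash_block a && valid rest

        let () =
          let b1 = append genesis [ "alice->bob:5" ] in
          let b2 = append b1 [ "bob->carol:2" ] in
          assert (valid [ genesis; b1; b2 ])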

    Servers (nodes, miners, validators) create new blocks at regular intervals and must agree, using a consensus protocol, to select only one block to follow the previous one. Depending on the blockchain, the computing power required to run a validator ranges from that of a simple desktop computer to a powerful server, or even a network of specialized processors. Thus, the first Bitcoin blocks were probably “mined” (the term used for block production) on Satoshi Nakamoto’s laptop in 2009. Today, entire hangars filled with specialized ASIC machines, consuming electricity equivalent to that of entire countries, are required for any profitable mining operation on the Bitcoin network.

    Using the term “transaction” for these operations is no accident: the term exists in both the database world and economics. In economics, completing a transaction means transferring money from one party to another; in accounting terms, it means modifying a ledger (such as a bank ledger) by withdrawing from the buyer’s balance what is deposited into the seller’s account. A database transaction is a generalization of this particular kind of transaction, the financial transaction: it is a modification of a general ledger, which is a database. In Bitcoin, the two meanings of the word “transaction” essentially coincide, since blocks contain only financial transactions. These transactions are denominated in bitcoins, the associated (crypto)currency, which is inseparably the purpose and the fuel of the blockchain.
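
    As an illustration of a financial transaction expressed as a database modification, here is a small OCaml sketch (names and amounts are made up): a transfer debits one balance and credits another, and either both changes apply or the transaction is rejected.

        (* The ledger is a simple map from account names to balances. *)
        module Ledger = Map.Make (String)

        let balance ledger who =
          Option.value ~default:0 (Ledger.find_opt who ledger)

        (* Debit [src] and credit [dst] atomically. *)
        let transfer ledger ~src ~dst ~amount =
          if balance ledger src < amount then Error "insufficient balance"
          else
            Ok
              (ledger
              |> Ledger.add src (balance ledger src - amount)
              |> Ledger.add dst (balance ledger dst + amount))

        let () =
          let ledger = Ledger.add "alice" 10 Ledger.empty in
          match transfer ledger ~src:"alice" ~dst:"bob" ~amount:5 with
          | Ok l -> Printf.printf "bob now holds %d\n" (balance l "bob")
          | Error e -> print_endline e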

    A blockchain is also fully distributed. The Holy Grail that Satoshi Nakamoto sought and found was a decentralized digital currency, one whose functioning did not depend on any particular party, which ruled out storing its transaction database on any one server. So he decided the blockchain would be replicated on all nodes of the Bitcoin network. This means, and this is important for what follows, that all nodes store all transactions of the Bitcoin network since its creation in January 2009: several hundred gigabytes in raw format alone. This does not necessarily mean that Bitcoin does not scale, but some blockchain applications need to find other solutions.

    Obviously, it is crucial that all nodes, after a short period of discussion, hold strictly identical copies of the blockchain: any disagreement between nodes about the existence or parameters of a transaction would be fatal to confidence in Bitcoin. We therefore need a protocol to coordinate nodes, which may be arbitrarily numerous and which generally neither know nor trust each other.

    The solution proposed by Satoshi Nakamoto is often called the “Nakamoto consensus” or, in its original form, “proof-of-work”. It consists of making block producers — the “miners” — compete in solving mathematical problems that are intentionally useless and expensive: on average, every ten minutes, the winner of this race gets the right to produce the next block and is rewarded for his efforts by the first transaction of that block. This reward is an incentive for miners to contribute to the production of Bitcoin blocks by spending, as long as it remains profitable, electricity and mining equipment.

    The average regularity of block production, every ten minutes, is achieved by constantly adjusting the mining difficulty, that is, for a machine of fixed power, the difficulty of finding a solution to the current mathematical problem for block mining. When the blockchain “sees” that blocks are being produced too quickly — for example, because of the increased power of the latest specialized chips — it increases this difficulty. It is a constant arms race, in which more electricity is consumed today than in all of Argentina.
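
    The principle can be illustrated with a toy proof-of-work loop in OCaml (MD5 stands in for Bitcoin’s double SHA-256, and the difficulty is simply the number of leading zeros required of the hex hash; each extra zero multiplies the expected work by 16, which is the knob the network turns to keep block time steady):

        (* Search for a nonce such that hash(data ^ nonce) starts with
           [difficulty] zeros: useless, expensive, and easy to verify. *)
        let mine ~difficulty data =
          let target = String.make difficulty '0' in
          let rec try_nonce n =
            let h = Digest.to_hex (Digest.string (data ^ string_of_int n)) in
            if String.sub h 0 difficulty = target then (n, h)
            else try_nonce (n + 1)
          in
          try_nonce 0

        let () =
          let nonce, h = mine ~difficulty:4 "block payload" in
          Printf.printf "nonce=%d hash=%s\n" nonce h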

    In addition, the necessary equipment is less and less within the reach of small structures. Consequently, the Bitcoin network has for years been centralized around a few huge mining “farms”. De facto, they control its governance, that is, the process by which changes to Bitcoin’s operation are decided, collectively determined by whichever software the majority of miners (weighted by computing power) choose to run.

    Finality of transactions, that is, their permanent, irreversible character once they are included in the blockchain, is an important criterion. After all, a seller does not want to risk having the payment canceled after handing over the goods and watching the buyer leave. Yet finality in proof-of-work is only probabilistic: it depends on the quantity of “work”, that is, on the computing power (hence hardware and electricity) that would be needed to create an alternative chain and convince the validators, i.e. all the nodes, to adopt it. If the blockchain is properly decentralized, that is, if no single entity holds a critical share of the computing power, this amount of work is usually a deterrent: the finality of a given Bitcoin transaction is highly probable, but the existence of counter-examples is enough to be a problem in practice. Most applications wait until several blocks are built on top of a transaction before considering it final, which can take tens of minutes or even hours. In the past, this has not prevented attacks on smaller blockchains, the so-called 51% attacks, in which the finality of a transaction was overturned, allowing the attacker to recover his funds despite all the precautions of the application. We will see later that there are blockchains that provide instant finality and thus avoid such attacks.
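
    This probabilistic character can be quantified: the Bitcoin whitepaper computes the probability that an attacker holding a fraction q of the hash power catches up with the honest chain after a merchant has waited for z confirmation blocks. A direct OCaml transcription of that calculation (a sketch, not production code) shows why exchanges impose long confirmation delays:

        (* Probability of a successful double-spend, per Nakamoto's paper:
           q = attacker's share of hash power, z = confirmations waited. *)
        let attacker_success ~q ~z =
          let p = 1.0 -. q in
          let lambda = float_of_int z *. q /. p in
          let poisson = ref (exp (-.lambda)) in  (* Poisson term for k = 0 *)
          let sum = ref 0.0 in
          for k = 0 to z do
            if k > 0 then poisson := !poisson *. lambda /. float_of_int k;
            sum := !sum +. !poisson *. (1.0 -. ((q /. p) ** float_of_int (z - k)))
          done;
          1.0 -. !sum

        let () =
          (* With 10% of the hash power, 6 confirmations make success
             very unlikely (about 0.0002), but never impossible. *)
          Printf.printf "q=0.1, z=6: %.6f\n" (attacker_success ~q:0.1 ~z:6)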

    Ethereum And Smart Contracts

    Vitalik Buterin and Gavin Wood seem to have been the first to fully realize the contingent nature of the coincidence, in Bitcoin, between “transaction” in the database sense and “transaction” in the economic sense. They proposed that the “ledger” of the blockchain record not only the balances of users, but a more complex state, of which balances would be just one element among many, and which could be changed through transactions following the rules of a computer program called a smart contract. The smart contract itself is stored in the blockchain, so it cannot be changed over time. The idea was thus to use the decentralized data structure that is a blockchain to create a “world computer” accessible to anyone, with payment for computing and storage.
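
    To fix ideas, here is a minimal OCaml sketch of this generalization: the state is no longer just a balance, and a transaction is any change that the contract’s code, itself immutable, accepts. The counter contract below is purely illustrative.

        (* The on-chain state of our toy contract. *)
        type state = { owner : string; counter : int }

        type message = Increment | Reset

        (* The contract is a transition function: given the caller, a
           message and the current state, it returns the new state or
           rejects the transaction. Stored on-chain, it never changes. *)
        let contract ~caller msg state =
          match msg with
          | Increment -> Ok { state with counter = state.counter + 1 }
          | Reset when caller = state.owner -> Ok { state with counter = 0 }
          | Reset -> Error "only the owner may reset"

        let () =
          let s0 = { owner = "alice"; counter = 0 } in
          match contract ~caller:"bob" Increment s0 with
          | Ok s1 -> Printf.printf "counter = %d\n" s1.counter
          | Error e -> print_endline e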

    The first version of Ethereum, released in 2015, replicates the basic components of Bitcoin: proof-of-work and uniform storage of all transactions on all nodes. Proof-of-stake was soon proposed as a replacement for proof-of-work, although it wasn’t until March 2021 that its implementation in Ethereum began. Proof-of-stake transforms the adversarial block production (validation) race, based on mining power, into an allocation of validation work according to the funds locked up by each validator in the blockchain. As with proof-of-work, participation is encouraged by rewards; however, in the case of “unfair” behavior (such as validating two chains in parallel), these funds may be confiscated.

    Ethereum has been very successful and has given rise to hundreds of decentralized application (DApp) projects. But, as is often the case on a first try, many pitfalls were discovered, including a lack of security in programs that sometimes manipulate hundreds of millions of dollars. At the same time, Bitcoin’s scaling problems, in which the number of transactions skyrockets and, by extension, so does the average transaction fee, led to a governance crisis symbolically centered on the question of block size. Ethereum is no exception: applications such as CryptoKitties [4] clog the network, showing the need to design blockchains that scale to a higher level; these problems are still relevant today.

    Blockchains And Performance

    For an application, blockchain performance is expressed according to two main criteria: its bandwidth, the number of transactions per second it can process, and its latency, the time the user must wait for a transaction to be confirmed. Bandwidth limits not only the number of users and the number of operations, but also the complexity of the processing those operations require. Since the blockchain is shared between all its applications, this demand accumulates: the more applications on the blockchain, the less bandwidth remains for each of them. Latency limits the usability of the application: while some applications can accommodate a latency of several minutes or hours, many require confirmation within a few seconds or minutes at most, under penalty of losing customers and revenue.

    If we look at the bandwidth of Bitcoin and Ethereum, the most widely used blockchains today, we see that Bitcoin is limited to about 5 transactions per second, while Ethereum allows just over 16 transactions per second, or 1.4 million per day. Regarding latency, the easiest way is to study the delays imposed by exchanges, such as Binance, on deposits and withdrawals for most cryptocurrencies: quite often you have to wait an hour for a Bitcoin transaction and half an hour for an Ethereum transaction.
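
    A quick back-of-envelope check of these figures, as a two-line OCaml sketch: a sustained rate of 16 transactions per second over the 86,400 seconds of a day gives roughly 1.4 million transactions.

        let () =
          let per_day tps = tps *. 86_400. in
          Printf.printf "Bitcoin : %.2e tx/day\n" (per_day 5.);   (* ~4.3e5 *)
          Printf.printf "Ethereum: %.2e tx/day\n" (per_day 16.)   (* ~1.4e6 *)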

    It is important to note at this point that latency here is indeed the time for an operation to be definitively confirmed, with no possibility that the network later decides to modify the blockchain and delete that operation. This is a very different timeframe from the more usual measure, the gap between blocks, which only measures the time it takes for an operation to appear on the blockchain. So, while the gap between blocks is only one minute for Tezos, Binance requires waiting for 30 blocks after a transaction for it to be sufficiently confirmed, i.e. a latency of 30 minutes, again for Tezos.

    Limited bandwidth has a further adverse effect on latency: by delaying the inclusion of transactions in the blockchain, it creates competition between transactions, which forces users to pay more fuel, “gas”, in tokens, so that validators will prioritize their transactions over others. At a time when many relatively unused blockchains highlight the low cost of their fuel (often a combination of a low token price and low fees temporarily set by validators), we need to understand that once a blockchain is used by real applications, this argument will collapse, except for the few blockchains whose immediate finality and sharding mechanism, described below, overcome these limitations.
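
    The fee auction can be sketched in a few lines of OCaml: with block space limited, a validator simply fills the next block with the pending transactions that pay the most, and everyone else waits. The mempool shown is, of course, made up.

        type pending = { id : string; gas_price : int }

        let rec take n = function
          | x :: rest when n > 0 -> x :: take (n - 1) rest
          | _ -> []

        (* Keep the [capacity] highest-paying transactions for the next
           block; the rest wait, which users experience as extra latency. *)
        let select_for_block ~capacity mempool =
          take capacity
            (List.sort (fun a b -> compare b.gas_price a.gas_price) mempool)

        let () =
          let mempool =
            [ { id = "t1"; gas_price = 20 };
              { id = "t2"; gas_price = 80 };
              { id = "t3"; gas_price = 50 } ]
          in
          List.iter (fun t -> print_endline t.id)
            (select_for_block ~capacity:2 mempool)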

    Latency And Finality

    So let’s look at existing solutions to the latency problem. The first, when the blockchain does not solve it on its own, is to solve it at the application level. In everyday life, latency is also a problem: transfers between banks or payments by bank card do not happen immediately, but clearing mechanisms give merchants an illusion of instantaneity. Such mechanisms can be adapted to blockchain applications, allowing them to wait for confirmation even though user feedback is instantaneous. This entails a risk that must be economically compensated. Some protocols offer identical guarantees but without the risk: these “layer-2 protocols” allow transactions outside the blockchain, which is much faster. The cryptographically secured chain of these transactions is then inserted into the blockchain, often in a compressed format (a single operation on the blockchain can represent multiple exchanges). Nevertheless, all these mechanisms (clearing and layer-2 protocols) are still limited to relatively simple operations, often simple token transfers. They can hardly be adapted to more complex processing.

    Fortunately, there are blockchain protocols that significantly reduce latency by providing an immediate guarantee that a block, once accepted, will never be rolled back. These protocols provide immediate finality. To do so, these blockchains use consensus algorithms such as PBFT (Practical Byzantine Fault Tolerance), which, at the cost of higher power requirements for validator nodes, make it possible to guarantee that a validated block will never be invalidated. The Cosmos [5] and Free TON [3] blockchains use protocols of this kind, such as Tendermint [6]. Thus, a deposit or withdrawal of tokens on an exchange is confirmed in just a few seconds for these blockchains, while most others will make the user wait an hour. Other blockchains have kept a more traditional protocol while extending it with a “gadget” that separately provides a form of finality, not necessarily immediate, but almost. This is the case for the future Ethereum 2.0, which will use a gadget called Casper FFG [8], and for Polkadot [9], which already uses the GRANDPA gadget. In this case, only the output of the gadget is guaranteed never to be invalidated, and it may lag slightly behind the head of the blockchain itself.

    These protocols or gadgets work along the same lines as the PBFT algorithm [10], which is executed repeatedly, for each block or group of blocks. A subset of validators is selected to take part in each iteration, since the cost of including all validators would be prohibitive; this selection is made according to the deposits (stakes) put up by the validators and changes regularly. During an iteration, the validators alternate phases: proposing blocks, often selecting a leader, and voting for blocks; when a two-thirds majority is reached, the vote is considered over. The literature on these algorithms is abundant, some of it quite old yet still very relevant, and the variations are often too subtle to be described here.
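
    The decisive vote-counting step common to these protocols can be sketched as follows (a deliberately simplified OCaml fragment, with illustrative stake figures): a block is final once validators holding strictly more than two-thirds of the selected stake have voted for it, which tolerates up to one third of faulty or malicious stake.

        (* Each vote carries the stake deposited by its validator. *)
        let quorum_reached ~total_stake votes =
          let voted =
            List.fold_left (fun acc (_validator, stake) -> acc + stake) 0 votes
          in
          3 * voted > 2 * total_stake  (* strictly more than 2/3 *)

        let () =
          let votes = [ ("v1", 40); ("v2", 25); ("v3", 10) ] in
          (* 75 of 100 stake units voted: the block is final. *)
          Printf.printf "final: %b\n" (quorum_reached ~total_stake:100 votes)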

    It should also be noted that immediate finality does not completely solve the latency problem: a blockchain whose bandwidth is limited, whether by the gap between blocks, the number of transactions per block, or block processing capacity, will impose another form of latency, caused by the competition between operations to be included in future blocks. Thus, some Bitcoin and Ethereum transactions can wait several dozen minutes before being included, or are never included at all.

    Therefore, solving the finality problem without solving the bandwidth problem does not solve the more general latency problem.

    Bandwidth And Sharding

    Since the latency issue cannot be completely solved without solving the bandwidth problem, let us consider the solutions proposed for the latter.

    Generally speaking, the bandwidth of a blockchain — the number of transactions per second it can process and store — is limited by three main factors:

    1. The transmission time over the network of transaction-related data between validators. For all validators to accept a block, they must verify the transactions that compose it, and therefore must possess not only the transaction data itself but also the entire database state on which the transaction operates, often hundreds of gigabytes of data. Thus, a few years ago, Bitcoin was at the center of a real war between users and validators, among other things because the latter, mostly based in China, preferred to keep bandwidth limited to compensate for their low connection speeds to the rest of the world.
    2. The time it takes to reach a consensus between validators. Depending on the chosen consensus protocol and the power required of the validators, this time differs: 10 minutes for Bitcoin and about 15 seconds for Ethereum, both operating in proof-of-work (PoW). As for proof-of-stake (PoS), the Emmy+ protocol of Tezos provides at best one block per minute, while Tendermint for Cosmos [5] provides one block every 5 seconds, and Free TON reaches one block every 3 seconds with immediate finality.
    3. Transaction processing time. This time varies considerably from blockchain to blockchain: Bitcoin only supports extremely simple transactions, allowing nodes to process them very fast, while most other blockchains promote very rich and expressive smart-contract languages (Solidity, Michelson, Wasm, etc.), which are often expensive and slow to execute. For this reason, some blockchains highlight the performance of their nodes, such as Solana [11], which offloads computation to graphics processing units (GPUs), or Ethereum and Free TON, which deploy nodes written in Rust, which makes much more efficient use of multicore processors.

    A priori, most of these factors can only be moderately reduced. However, this is not the first time in the history of computer science that execution time has seemed incompressible: concurrent execution made it possible to break this barrier, thanks to distributed computing and multicore parallelism. In the case of blockchains, as of distributed databases, this is called sharding, a technique that all major websites use to manage their millions of users.

    In the case of a classic database, sharding consists of splitting each table of the database so that each shard (segment) stores only a portion of it. Each shard consists of a small number of machines, most often read-only replicas, with a main server handling writes. When a user logs in to a website, they are immediately and seamlessly redirected to the shard containing their data. Website growth is then simple and adaptive: new shards are added each time the previous ones fill up.
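
    A minimal sketch of this routing, assuming the simplest hash-modulo scheme (production systems often prefer consistent hashing, to limit data movement when the number of shards changes):

        (* Route a user to the shard holding their data by hashing their
           identifier; growth is handled by adding shards. Hashtbl.hash
           is OCaml's generic hash function. *)
        let shard_of ~shard_count user_id =
          Hashtbl.hash user_id mod shard_count

        let () =
          List.iter
            (fun u -> Printf.printf "%s -> shard %d\n" u (shard_of ~shard_count:4 u))
            [ "alice"; "bob"; "carol" ]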

    The blockchain case is a little more complicated. On the one hand, even if we split the blockchain into many smaller blockchains, we must maintain the appearance of a single blockchain, in particular concerning the guarantee of immutability of previous blocks (the guarantee that history will not be falsified). On the other hand, most transactions involve more than one user (which is rarely the case on websites). Indeed, the simplest transaction involves a credited account and a debited account, while a smart-contract transaction can trigger a chain of calls across multiple smart contracts. This makes it much more difficult to assign a transaction to a single shard.

    There are several levels of sharding, which we present below. The first level is simply the absence of sharding, the most common case today, with a single blockchain: Bitcoin, Ethereum, Cardano, Solana, Tezos, Algorand, and all their forks currently offer no sharding mechanism. Ethereum 2.0 [7] is the only one to include sharding in its roadmap, planning to provide such a mechanism in 2021-2022, but in an extremely limited form, with a very small number of shards, and initially restricted to simple payments. Tezos, on the contrary, has officially spoken against the use of sharding [15].

    The next level is the static sharding proposed by Cosmos and Polkadot: these two blockchains allow the creation of multiple blockchains in their ecosystem, all linked by a main blockchain that records the progress of each of the secondary blockchains (the Cosmos Hub for Cosmos, the Relay Chain for Polkadot). In both cases, however, the creation of a new blockchain is a very expensive event; the new blockchain manages an independent token and must be validated by its own validators. The idea is that an application, to obtain guaranteed throughput, must create its own secondary blockchain or share one with a reasonable number of other applications. In such a system, it becomes very complicated to make applications interact with each other if they live on different secondary blockchains. It is also difficult for developers to know in advance where to place their application without knowing how much bandwidth it will require. In the end, this form of sharding improves bandwidth, but remains limited and not always really usable.

    Zilliqa [12] provides a more dynamic form of sharding, called transaction sharding. Each node continues to store the entire state of the blockchain, but only processes a subset of transactions. The shard that will process a transaction is selected according to the address of the debited account, but only for simple transactions (a payment from that account to another account or a smart contract). All other transactions, in particular those involving multiple accounts or smart contracts, are executed by specific nodes, which are also responsible for collecting the results of all shards. In the end, this system offers significant bandwidth for simple transactions, but cannot host applications that use smart contracts without saturating the collector nodes. Moreover, the need for each node to store the entire blockchain is also a limiting factor, as it imposes high storage and communication costs, which will sooner or later limit the growth of the blockchain.

    Finally, the last type of sharding is called state sharding. It consists of splitting the blockchain state between shards: a shard node stores only a small part of the blockchain state and processes only the transactions that act on the part it stores. This sharding can be static, where the number of shards is fixed and does not change over time, or dynamic, where the blockchain adapts the number of shards to usage, dividing a shard into several when necessary or merging several shards when they are under-utilized.
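
    The split/merge behaviour can be sketched in OCaml as a binary tree of address prefixes (the thresholds and the load model are illustrative, not those of any particular blockchain): an overloaded shard splits in two, each child taking the accounts whose next address bit is 0 or 1, and two under-used siblings merge back.

        type info = { prefix : string; load : int }  (* load in tx/s *)

        type shard =
          | Leaf of info           (* a shard handling one address prefix *)
          | Node of shard * shard  (* prefix ^ "0" on the left, ^ "1" on the right *)

        let split_threshold = 100
        let merge_threshold = 20

        let rec rebalance = function
          | Leaf s when s.load > split_threshold ->
              (* Mitosis: each child inherits roughly half the traffic. *)
              Node
                (Leaf { prefix = s.prefix ^ "0"; load = s.load / 2 },
                 Leaf { prefix = s.prefix ^ "1"; load = s.load - (s.load / 2) })
          | Node (Leaf a, Leaf b) when a.load + b.load < merge_threshold ->
              (* Two under-used sibling shards merge back into their parent. *)
              Leaf { prefix = String.sub a.prefix 0 (String.length a.prefix - 1);
                     load = a.load + b.load }
          | Node (l, r) -> Node (rebalance l, rebalance r)
          | leaf -> leaf

        let () =
          match rebalance (Leaf { prefix = ""; load = 150 }) with
          | Node _ -> print_endline "overloaded shard split in two"
          | Leaf _ -> print_endline "unchanged"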

    Few blockchains implement state sharding. We were mainly interested in Free TON, to which we devote a longer presentation at the end of the article, and in Elrond [13], both of which provide dynamic sharding, while NEAR [14] offers only a static version of state sharding. In essence, the Free TON approach takes sharding to the highest level by treating smart contracts as real agents exchanging asynchronous messages, as in a distributed system, which yields speeds that its competitors find difficult to achieve.

    Figure: in dynamic state sharding, a shard can divide to handle the load, much like cell mitosis.

    Choose A Blockchain Without Sharding And Wait?

    While many companies wonder which blockchain to choose for developing their application, it can be tempting to set aside performance criteria and rely on simpler ones, such as fame (Ethereum worldwide, Tezos in France), gas fees, or developer availability (Solidity), hoping that the performance problem will solve itself in future versions of the blockchain.

    We think it’s risky.

    First, consider this example: will software designed for a desktop computer automatically speed up if it runs on a network of multi-core computers with data distributed across them? The answer is no; the software has to be designed from the ground up to take advantage of multiple cores and distribution. The same is true of sharding: to benefit from it, a set of smart contracts must be designed for asynchronous communication and must access only data local to its shard, so as to avoid expensive communication and synchronization mechanisms between shards.
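
    Here is a minimal OCaml sketch of such an asynchronous, actor-style contract model (all names are hypothetical): contracts never read each other’s state directly, they only exchange messages, so each shard can process the queues of the contracts it hosts independently.

        type msg = { dest : string; payload : int }

        type contract = {
          address : string;
          mutable state : int;
          (* Handle one message locally, possibly emitting messages for
             contracts that may live on other shards. *)
          handle : contract -> int -> msg list;
        }

        (* One scheduling step of a shard: deliver the next queued message
           to the local contract it addresses. *)
        let step queue (contracts : (string, contract) Hashtbl.t) =
          match Queue.take_opt queue with
          | None -> ()
          | Some m ->
              let c = Hashtbl.find contracts m.dest in
              List.iter (fun out -> Queue.add out queue) (c.handle c m.payload)

        let () =
          let contracts = Hashtbl.create 7 in
          let counter =
            { address = "counter"; state = 0;
              handle = (fun c n -> c.state <- c.state + n; []) }
          in
          Hashtbl.add contracts counter.address counter;
          let queue = Queue.create () in
          Queue.add { dest = "counter"; payload = 5 } queue;
          step queue contracts;
          Printf.printf "state = %d\n" counter.state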

    Moreover, the ability to retrofit instant finality and dynamic sharding onto an existing blockchain and its applications is far from proven. Ethereum’s ambitious project to move to sharding in version 2.0 is long (at least 4 years), risky (no smart-contract sharding solution has yet been proposed, and we are already halfway through), and already-deployed applications are excluded from its plans (the Ethereum 1.0 chain will be included as a single shard in the new system). If the most active ecosystem is so cautious, it is likely that most of the other blockchains that have not taken sharding into account will be unable to pass this milestone.

    About OCamlPro

    OCamlPro SAS is a spin-off of Inria created in 2011. Its team of research engineers designs, builds, and implements custom software for its customers in a wide variety of areas, with a focus on reliability, security, and performance. It draws on its extensive experience with the OCaml and Rust languages, as well as formal methods, and disseminates this experience through professional training. In finance, OCamlPro developed and implemented the Tezos blockchain prototype from 2014 to 2017 and works with companies such as Jane Street (HFT), Bloomberg, and LexiFi.

    For more information, visit:
    http://www.ocamlpro.com
    https://timeline.ocamlpro.com

    Source — DÉBORAH LE BOVIC (FINANCIAL INNOVATIONS)

    Links:

    [1] Tezos. https://tezos.com/
    [2] Dune Network. https://dune.network
    [3] Free TON Community. https://freeton.org
    [4] CryptoKitties craze slows down transactions on Ethereum. https://www.bbc.com/news/technology-42237162
    [5] Cosmos, the Internet of Blockchains. https://cosmos.network/
    [6] Tendermint. https://tendermint.com/core/
    [7] Ethereum 2.0. https://ethereum.org/fr/learn/#eth-2-0
    [8] Casper FFG. https://arxiv.org/abs/1710.09437
    [9] Polkadot Network. https://polkadot.network/
    [10] Practical Byzantine Fault Tolerance (PBFT). https://en.bitcoinwiki.org/wiki/PBFT
    [11] Solana. https://solana.com/
    [12] Zilliqa. https://www.zilliqa.com/
    [13] Elrond. https://elrond.com/
    [14] NEAR. https://near.org/
    [15] Scaling Tezos. https://medium.com/hackernoon/scaling-tezo-8de241dd91bd
