Vitalik Buterin: All Research Breakthroughs Needed for Ethereum 2.0 Have Been Figured Out

Vitalik Buterin, ethereum’s inventor, says all the research breakthroughs needed for ethereum 2.0 have been figured out, with only implementation now left.

“We’ve actually already had all the research breakthroughs we need for a full implementation of eth2. This has been the case for about a year now,” he said.

The bold claim comes as about three testnets are now running for phase zero of ethereum 2.0, with a cross-client testnet as the next stage, which may begin this summer.

That phase zero is just staking, a sort of dummy mainnet that’s halfway between a testnet and a fully featured blockchain.

Some features, such as storage sharding, are to be added in phase 1, expected next year. The actual, full launch is then phase 2, expected in two years.

The full design is a somewhat complex system that, as we understand it, connects different node groupings, with individuals running only the nodes of the shards they’re interested in.

A shard is basically the current ethereum network, but let’s say with 1,000 nodes. Then we have network B, or shard B, with its own 1,000 nodes, all running on the same base code, though each shard keeps its own chain. There are hundreds of shards.

Since these are basically different universes, getting them to talk is a breakthrough that goes beyond sharding to connecting private and public blockchains, sidechains and all the rest.
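
As a mental model, the topology might look something like this in a few lines of Python (hypothetical structures for illustration, not eth2 client code):

from dataclasses import dataclass, field

@dataclass
class Shard:
    shard_id: int
    blocks: list = field(default_factory=list)  # this shard's own history
    nodes: set = field(default_factory=set)     # the nodes that follow it

SHARD_COUNT = 1024  # illustrative count; the article says hundreds

# Hundreds of parallel networks, each with its own ~1,000 nodes,
# all sharing one codebase but each keeping its own chain:
shards = [Shard(shard_id=i) for i in range(SHARD_COUNT)]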

The Ethereum 2.0 Design

Buterin first explains how this works at a technical level; we’ll then give a far simpler reading of it. He says:

“The general process for a cross-shard transaction (for an example, we’ll use transferring 5 ETH) is:

On shard A, destroy 5 ETH, creating a receipt (ie. a Merkle branch with a root committed into the state root of that block) containing (i) the destination shard, (ii) the destination address, (iii) the value (5 ETH), (iv) a unique ID.

Once shard B becomes aware of the state roots of shard A up until that point, submit a Merkle branch proving that receipt into shard B. If the Merkle branch verifies and that receipt has not already been spent, generate the 5 ETH and give it to the recipient.

To prevent double-spends, we need to keep track in storage which receipts have already been claimed. To make this efficient, receipts need to be assigned sequential IDs. Specifically, inside each source shard, we store a next-sequence-number for each destination shard, and when a new receipt is created with source shard A and destination shard B, its sequence number is the next-sequence-number for shard B in shard A (this next-sequence-number gets incremented so it does not get reused). This means that in each destination shard we only need to keep track of SHARD_COUNT bitfields, one for each source shard, to prevent double spends, which means a cost of only one bit of storage per cross-shard tx.”

In other words, you basically destroy (or in effect lock) eth on shard A, show proof at shard B that you did so, and you get the eth at shard B.

To prevent double spending they’re basically using what sounds like a nonce: each receipt is given a sequential number, which is incremented so it is never reused.
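
Here is a minimal sketch of that receipt scheme in Python, with hypothetical names; plain dicts stand in for Merkle receipts and Python sets for the bitfields, so this shows the accounting logic only:

SHARD_COUNT = 4  # kept tiny here; the article says hundreds

class Shard:
    def __init__(self, shard_id):
        self.shard_id = shard_id
        self.balances = {}
        self.next_seq = [0] * SHARD_COUNT            # per destination shard
        self.claimed = [set() for _ in range(SHARD_COUNT)]  # per source shard

    def send(self, sender, dest_shard, dest_addr, value):
        # Destroy the eth on this shard and emit a receipt.
        assert self.balances.get(sender, 0) >= value, "insufficient balance"
        self.balances[sender] -= value
        seq = self.next_seq[dest_shard]
        self.next_seq[dest_shard] += 1               # incremented, never reused
        return {"source_shard": self.shard_id, "dest_shard": dest_shard,
                "dest_addr": dest_addr, "value": value, "seq": seq}

    def claim(self, receipt):
        # Mint the eth here if the receipt targets this shard and is unspent.
        assert receipt["dest_shard"] == self.shard_id
        src, seq = receipt["source_shard"], receipt["seq"]
        if seq in self.claimed[src]:
            raise ValueError("receipt already spent")
        self.claimed[src].add(seq)                   # stands in for the bitfield
        addr = receipt["dest_addr"]
        self.balances[addr] = self.balances.get(addr, 0) + receipt["value"]

shard_a, shard_b = Shard(0), Shard(1)
shard_a.balances["alice"] = 10
receipt = shard_a.send("alice", dest_shard=1, dest_addr="bob", value=5)
shard_b.claim(receipt)     # bob now holds 5 eth on shard B
# shard_b.claim(receipt)   # would raise: double spend prevented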

This is at the protocol level, so you know no one is cheating because you run the nodes of both shard A and shard B. Your node validates the rules, sees the state roots and the receipts, as Buterin calls them, and if there’s anything wrong your nodes tell you.
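
The “Merkle branch verifies” step from Buterin’s description can be sketched like this, with hashing details such as ordering and encoding being simplified assumptions on our part:

import hashlib

def parent(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def verify_branch(leaf: bytes, branch: list, index: int, root: bytes) -> bool:
    # Hash from the receipt (leaf) up the tree; the low bit of index says
    # whether the current node is a left or right child at each level.
    node = hashlib.sha256(leaf).digest()
    for sibling in branch:
        node = parent(node, sibling) if index % 2 == 0 else parent(sibling, node)
        index //= 2
    # The receipt is only trusted if this reproduces a state root of
    # shard A that the verifying node already knows.
    return node == root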

Here we used nodes in the plural, but to run both nodes it might be just one client, although those are implementation details, which are at a very early stage for phase 2.

As you can imagine, it would be very hard here to get a smart contract on shard A to “talk” to a smart contract on shard B which holds, say, the Cryptokitties DNA.

We’re not sure how you’d transport that DNA to shard A and get Cryptokitties to race at shard A while they’re generated at shard B.

One way to do that would be to go through a central coordinator, but that would have its own problems. Moving eth, by contrast, is peer to peer, as detailed above.
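
Purely as our own speculation, one could imagine generalising the receipt above to carry arbitrary payloads, a kitty’s DNA included, with the destination contract consuming each message exactly once; everything below is hypothetical:

def emit_message(state, dest_shard, dest_contract, payload):
    # Hypothetical: like the 5 ETH receipt, but the payload is arbitrary
    # data (e.g. a kitty's DNA) committed into the source shard's state.
    seq = state["next_seq"][dest_shard]
    state["next_seq"][dest_shard] += 1    # unique ID, prevents replay
    return {"dest_shard": dest_shard, "dest_contract": dest_contract,
            "payload": payload, "seq": seq}

state = {"next_seq": [0] * 1024}
msg = emit_message(state, dest_shard=7, dest_contract="kitty_race",
                   payload=b"kitty-dna-bytes")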

A Breakthrough or Just Limit Lifting?

Getting to this stage was perhaps the hardest part, although some say coding is the hardest part.

If you are running the nodes of different shards at the same time, moreover, it’s kind of the same as increasing the block size.

The difference here would be that you don’t have to run different shards simultaneously, or you can have a full node for one shard and a light one for another.

Here, however, is where you could get into endless arguments with a bitcoiner, who would probably argue that a full node is one that runs all shards, and few can afford to do so.

Here is where eth 1x comes in. That’s a complicated design on its own, its premise being the deletion of data by removing stale smart contracts and by outright pruning.

The first might be easier than the second and bitcoiners don’t know much about eth smart contracts, so we don’t know what they’d say about that.

For pruning, you’d probably have to set checkpoints, which are kind of like a new genesis block. Your node then doesn’t have to start all the way back in 2015, in the case of eth; it can start in, say, 2017. The older data is then discarded, perhaps uploaded somewhere as an archive.
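
In pseudocode-ish Python, the idea (our illustration, not the actual eth 1x design) is simply:

def prune_at_checkpoint(chain, state_snapshots, checkpoint_height):
    # The checkpoint acts like a new genesis: keep the state snapshot at
    # that height instead of replaying all history from the real genesis.
    archive = chain[:checkpoint_height]        # perhaps uploaded elsewhere
    pruned_chain = chain[checkpoint_height:]   # the node starts from here
    new_genesis_state = state_snapshots[checkpoint_height]
    return pruned_chain, new_genesis_state, archive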

The difficulty here would be who sets the checkpoint. If it’s one person or a group, that can have many problems, but with staking there could potentially be ways of doing it in a decentralized manner.

The Race to Scalability

A bitcoin dev once commented that ethereum is pursuing all the ideas that were rejected in Bitcoin Core.

Whether that’s a good thing or a bad thing depends on your views, and on how it is actually implemented, because Bitcoin Core did not quite pursue those ideas to a proper conclusion.

Moreover, bitcoin in its current state can’t keep operating over a long enough time frame, because although 1MB every 10 minutes is not much, over, say, ten years it all adds up.

It does so slowly, and in some ways it’s all fine for now. The current blockchain size, for example, is 220 GB. In ten years, about 520 GB will be added to it, and considering bitcoin is running at a bit above 1MB per block, the total would approach 1 terabyte.
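
Worked through (the 220 GB starting point and the 1MB block size are the article’s figures; the ~1.3MB average is our assumption):

MB_PER_BLOCK = 1.0
BLOCKS_PER_YEAR = 6 * 24 * 365          # one block every ten minutes

added_gb = MB_PER_BLOCK * BLOCKS_PER_YEAR * 10 / 1000
print(round(added_gb))                  # ~526 GB added over ten years

# At a bit above 1MB per block (say ~1.3MB) on top of today's ~220 GB:
total_gb = 220 + 1.3 * BLOCKS_PER_YEAR * 10 / 1000
print(round(total_gb))                  # ~903 GB, approaching a terabyte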

One terabyte might be fine, but it doesn’t reduce, it only increases. So eventually it goes to 10, maybe 100 terabytes.

You can obviously say eventually we’re all dead, but if you solve that, then you’ve kind of solved scalability.

The Lightning Network might buy some time, and ethereum has Plasma, state channels and all the rest, but they don’t quite address the fundamental problem of ever-growing history.

For now, in bitcoin they’re trying to just compress the data as much as they can, and there’s time in a way, but time can be to a competitor’s advantage, as a lot more capacity while remaining decentralized is obviously a very useful thing.

Whether ethereum will be able to provide it remains to be seen. The plans have now been laid, all apparently has been figured out, with the foundations to go out later this year, then the bricks and all the rest next year, then the windows and the roof in 2021, and then the nice decorations, which mum can deal with, to make a nice ethereum house built by the circa 100 protocol devs.

Copyright Trustnodes.com



via https://www.ohnocrypto.com/ Trustnodes, Khareem Sudlow