Vitalik’s latest long article: the next chapter in Ethereum’s evolution, and four key improvements for L2

2024-03-29 11:05
Original author: Vitalik Buterin
Original translation: jk, Odaily Planet Daily


On March 13, the Dencun hard fork activated, enabling one of Ethereum's long-awaited features: proto-danksharding (aka EIP-4844, aka blobs). Initially, the fork reduced transaction fees for rollups by more than 100x, as blobs were almost free. In the past day, we finally saw a surge in blob volume, and the fee market activated as the Blobscriptions protocol began to use them. Blobs are not free, but they remain much cheaper than calldata.


Left: thanks to Blobscriptions, blob usage has finally reached the target of 3 per block. Right: with that, blob fees have “entered price discovery mode.”
Source: https://dune.com/0xRob/blobs


This milestone represents a critical shift in Ethereum’s long-term roadmap: with blobs, scaling Ethereum is no longer a “zero to one” problem, but a “one to many” problem. From here, significant scaling work, both in terms of increasing the number of blobs and improving the ability of rollups to utilize each blob, will continue, but it will be more gradual. Scaling changes associated with fundamental changes to how Ethereum operates as an ecosystem are increasingly behind us. Furthermore, the focus has slowly shifted, and will continue to slowly shift, away from L1 issues like PoS and scaling, to issues closer to the application layer. The key question that this post will explore is: where is Ethereum going next?


The Future of Ethereum Scaling


Over the past few years, we have witnessed Ethereum gradually transforming into an L2-centric ecosystem. Major applications began to move from L1 to L2, payments began to default to L2, and wallets began to build their user experience around the new multi-L2 environment.


A key part of the rollup-centric roadmap from the beginning has been the concept of a separate data availability space: a dedicated portion of space within a block that the EVM cannot access, which can store data for layer-2 projects such as rollups. Because this data space is not accessible to the EVM, it can be broadcast separately from the block and verified separately. Ultimately, it can be verified with a technique called data availability sampling, which lets each node verify that the data was published correctly by randomly checking a few small samples. Once this is implemented, blob space can be expanded significantly; the eventual goal is 16 MB per slot (~1.33 MB per second).


Data availability sampling: each node only needs to download a small portion of the data to verify that all of it is available
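
To make the sampling argument concrete, here is a minimal sketch (in Python, with illustrative parameters) of why a handful of random samples suffices: with 2x erasure coding, an attacker must withhold more than half of the chunks for the data to be unrecoverable, so each sample independently has at least a 1/2 chance of exposing the attack.

```python
# A minimal sketch of the statistical argument behind data availability sampling.
# Assumption: blobs are 2x erasure-coded, so data is unrecoverable only if an
# attacker withholds more than half of the chunks; each uniformly random sample
# then hits a withheld chunk with probability at least 1/2.

def miss_probability(num_samples: int, withheld_fraction: float = 0.5) -> float:
    """Chance that every sample lands on an available chunk, i.e. the
    withholding attack goes undetected by this node."""
    return (1 - withheld_fraction) ** num_samples

for k in (10, 20, 30):
    print(f"{k} samples: undetected-withholding probability {miss_probability(k):.2e}")
# 30 samples already push the per-node failure probability below 1e-9.
```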


EIP-4844 (i.e. blobs) does not provide us with data availability sampling. But it does set up the basic framework in such a way that from here, data availability sampling can be introduced and blob counts can be increased behind the scenes, all without any involvement from the user or application. In fact, the only "hard fork" required is a simple parameter change.


From here, development needs to continue in two directions:


1. Gradually increase blob capacity, eventually achieving full data availability sampling and providing 16 MB of data space per slot;

2. Improve L2 to better utilize the data space we have.


Bringing DAS into reality


The next stage is likely to be a simplified version of DAS called PeerDAS. In PeerDAS, each node stores a significant fraction (e.g. 1/8) of all blob data and maintains connections to many peers in the p2p network. When a node needs to sample a particular piece of data, it asks one of the peers known to be responsible for storing it.



If each node needs to download and store 1/8 of all data, then in theory PeerDAS lets us scale blobs by 8x (in practice 4x, since we lose a factor of 2 to the redundancy of erasure coding). PeerDAS can be rolled out over time: we could have a phase where professional stakers continue to download full blobs while solo stakers download only 1/8 of the data.
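
The scaling factors in this paragraph compose as follows (a back-of-the-envelope sketch; the 1/8 figure and the post-Dencun target of 3 blobs per block come from the text, the rest is arithmetic):

```python
# Illustrative arithmetic for PeerDAS scaling (numbers from the text, not a spec).
# If each node stores 1/8 of all blob data, raw capacity could grow 8x, but 2x
# erasure-coding redundancy must be stored too, leaving a net ~4x increase.

storage_fraction = 1 / 8       # portion of blob data each node keeps
erasure_overhead = 2           # 2x redundancy from erasure coding
current_target_blobs = 3       # post-Dencun target per block

net_scaling = (1 / storage_fraction) / erasure_overhead   # 8 / 2 = 4
print(f"Net blob scaling: {net_scaling:.0f}x "
      f"-> target could rise to ~{current_target_blobs * net_scaling:.0f} blobs/block")
```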


In addition, EIP-7623 (or an alternative such as 2D pricing) can be used to set a tighter limit on the maximum size of the execution block (i.e. the "regular transactions" in a block), which makes it safer to raise the blob target and the L1 gas limit at the same time. In the long run, more sophisticated 2D DAS protocols will let us improve on all fronts and increase blob space further.
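
As a hedged illustration of how a calldata floor price in the spirit of EIP-7623 could bound worst-case block size, consider the sketch below; the constants and the max-of-two-bills structure are illustrative assumptions, not the EIP's final specification.

```python
# Hedged sketch of calldata floor pricing in the spirit of EIP-7623.
# Constants below are illustrative assumptions, not the EIP's parameters.

FLOOR_GAS_PER_BYTE = 10      # hypothetical higher floor for data-heavy txs
STANDARD_GAS_PER_BYTE = 4    # hypothetical standard calldata cost
BASE_TX_GAS = 21_000

def tx_gas(calldata_bytes: int, execution_gas: int) -> int:
    """Charge the max of the normal bill and a calldata-only floor, so
    data-heavy transactions pay more without raising costs for ordinary ones."""
    standard = BASE_TX_GAS + STANDARD_GAS_PER_BYTE * calldata_bytes + execution_gas
    floor = BASE_TX_GAS + FLOOR_GAS_PER_BYTE * calldata_bytes
    return max(standard, floor)

print(tx_gas(calldata_bytes=100, execution_gas=50_000))   # ordinary tx: standard bill wins
print(tx_gas(calldata_bytes=100_000, execution_gas=0))    # data-heavy tx: floor wins
```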


Improving L2 performance


Today, layer-2 (L2) protocols can improve along four key dimensions.


1. Use bytes more efficiently through data compression


My overview diagram of data compression can still be viewed here.


Naively, a transaction takes up about 180 bytes of data. But a series of compression techniques can shrink this in several stages; with optimal compression, we may eventually get each transaction down to less than 25 bytes.
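
To see how such stages could compose, here is an illustrative sketch; the technique names and per-stage byte savings are assumptions chosen only to connect the 180-byte and sub-25-byte endpoints the text cites, not measured figures.

```python
# Illustrative sketch of staged rollup transaction compression. The stages and
# byte savings below are assumptions, chosen only to match the endpoints in the
# text (~180 naive bytes down to <25 bytes with optimal compression).

naive_tx_bytes = 180
stages = [
    ("aggregate ECDSA signatures into one BLS signature", 64),
    ("replace 20-byte addresses with short index pointers", 16),
    ("compact encoding of value, nonce and gas fields", 26),
    ("zero-byte elimination and field packing", 50),
]

remaining = naive_tx_bytes
for name, saved in stages:
    remaining -= saved
    print(f"after {name}: ~{remaining} bytes")
# Ends at ~24 bytes, in line with the "<25 bytes" target.
```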


2. Secure L2 with optimistic data techniques that use L1 data only in exceptional circumstances



Plasma is a class of techniques that lets you keep data on L2 under normal circumstances while providing rollup-equivalent security for some applications. For the EVM in general, Plasma cannot protect all coins. But Plasma-inspired constructions can protect most coins. And constructions much simpler than Plasma could greatly improve today's validiums. L2s unwilling to put all their data on-chain should explore such techniques.


3. Continue to improve execution-related restrictions


When the Dencun hard fork activated, rollups configured to use the blobs it introduced saw their costs drop by 100x. Base immediately saw a surge in usage:



This in turn caused Base to hit its internal gas limit, causing fees to surge unexpectedly. This led to a broader realization that Ethereum’s data space wasn’t the only thing that needed to scale: Rollups needed to scale internally, too.


Part of this is parallelization; rollups could implement something like EIP-648. But just as important are storage, and the interplay between compute and storage. This is an important engineering challenge for rollups.
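
As a sketch of the parallelization idea in the spirit of EIP-648's declared access lists (the data structures and scheduling policy here are illustrative, not the EIP's specification), transactions whose declared state-access sets don't overlap can execute concurrently:

```python
# Minimal sketch of access-list-based parallel scheduling: txs that declare
# disjoint sets of touched state keys can safely run in the same parallel batch.

from typing import NamedTuple

class Tx(NamedTuple):
    txid: str
    touches: frozenset  # state keys (accounts/slots) the tx declares it accesses

def schedule(txs: list[Tx]) -> list[list[Tx]]:
    """Greedily pack txs into batches with pairwise-disjoint access sets."""
    batches: list[tuple[set, list[Tx]]] = []
    for tx in txs:
        for touched, batch in batches:
            if touched.isdisjoint(tx.touches):
                touched |= tx.touches
                batch.append(tx)
                break
        else:
            batches.append((set(tx.touches), [tx]))
    return [batch for _, batch in batches]

txs = [
    Tx("a", frozenset({"alice", "dex"})),
    Tx("b", frozenset({"bob", "carol"})),   # disjoint from "a": same batch
    Tx("c", frozenset({"dex", "dave"})),    # conflicts with "a": next batch
]
for i, batch in enumerate(schedule(txs)):
    print(f"batch {i}: {[t.txid for t in batch]}")
```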


4. Continue to Improve Security


We’re still a long way from a world where rollups are truly protected by code. In fact, according to L2Beat, only one of the major rollups, Arbitrum, is fully EVM-compatible and has reached even what I call “stage one.”



This needs to be addressed head-on. While we can’t yet be confident enough in the code of a complex optimistic or SNARK-based EVM validator, we can definitely get halfway there: safety committees that can override the code’s behavior only at high vote thresholds (e.g., I propose 6-of-8; Arbitrum uses 9-of-12).


The ecosystem’s standards need to become more stringent: so far, we’ve been permissive and accepting of any project that claims to be “on the path to decentralization.” By the end of the year, I think our standards should be raised, and we should only consider as rollups those projects that have reached at least stage 1.


After this, we can move cautiously toward stage 2: a world where rollups are truly backed by code, and where safety committees can only intervene if the code “clearly contradicts itself” (e.g., accepts two incompatible state roots, or two different implementations give different answers). One path to get there safely is to use multiple prover implementations.
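
A minimal sketch of the multi-prover pattern described here (all names, thresholds, and control flow are illustrative assumptions): a state root finalizes only when independent provers agree, and the safety committee may act only when the implementations visibly contradict each other, at a high vote threshold.

```python
# Hedged sketch of a stage-2-style multi-prover check. Names and control flow
# are illustrative; the 6-of-8 and 9-of-12 figures are the ones cited above.

def finalize(claimed_root: str, prover_roots: list[str],
             council_votes: int, council_threshold: int = 6) -> str:
    if all(root == claimed_root for root in prover_roots):
        return "finalized by code"            # normal path: all provers agree
    if len(set(prover_roots)) > 1:
        # Implementations contradict each other: the one case where the safety
        # committee may intervene, and only at a high threshold.
        return ("resolved by security council" if council_votes >= council_threshold
                else "halted pending council")
    return "rejected by code"                 # provers agree on a different root

print(finalize("0xabc", ["0xabc", "0xabc"], council_votes=0))   # finalized by code
print(finalize("0xabc", ["0xabc", "0xdef"], council_votes=7))   # resolved by council
```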


What does this mean for Ethereum going forward?


At ETHCC in the summer of 2022, I gave a presentation describing the current state of Ethereum development as an S-curve: we are entering a period of very rapid transition, after which development will slow down again as L1 solidifies and development refocuses on the user and application layers.



Today, I would say that we are definitively on the decelerating, right-hand side of this S-curve. As of two weeks ago, the two biggest changes to the Ethereum blockchain - the switch to proof-of-stake and the refactoring to blobs - have already been completed. Future changes are still important (e.g. Verkle trees, single-slot finality, in-protocol account abstraction), but they are not as drastic as proof-of-stake and sharding. In 2022, Ethereum was like a plane changing engines in flight. In 2023, it changed its wings. The Verkle tree transition is the main remaining truly important change (we already have a testnet); the others are more like changing tail fins.


The goal of EIP-4844 was to make one big, one-time change that sets rollups up for long-term stability. Now that blobs have shipped, future upgrades to full danksharding with 16 MB blobs, or even a switch of the cryptography to STARKs over a 64-bit Goldilocks field, can happen without any further action from rollups or users. It also reinforces an important precedent: Ethereum’s development process executes according to a long-standing, well-known roadmap, and applications built with the “new Ethereum” in mind (including L2s) get a long-term stable environment.


What does this mean for applications and users?


The first decade of Ethereum was largely a training phase: the goal was to get Ethereum L1 off the ground, and applications were used primarily by a small group of enthusiasts. Many have argued that the lack of mass adoption over the past decade proves that cryptocurrencies are useless. I’ve always argued against this view: nearly every crypto application that isn’t financial speculation depends on low fees, so while fees were high, we shouldn’t have been surprised to see mostly financial speculation.


Now that we have blobs, the key limitation that has been holding us back is starting to melt away. Fees are finally significantly lower; my seven-year-old statement that the internet of money should cost no more than five cents per transaction has finally come true. We’re not out of the woods yet: fees may still increase if usage grows too fast, and we’ll need to keep working on scaling blobs (and rollups separately) for the next few years. But we see light at the end of the tunnel… er… dark forest.



For developers, this means one simple thing: we no longer have any excuses. Until a few years ago, we set a low bar for ourselves, building applications that clearly couldn’t be used at scale, as long as they worked as prototypes and were reasonably decentralized. Today, we have all the tools we need, and indeed most of the tools we will ever have, to build applications that are both cypherpunk and user-friendly. So we should get out there and do it.


Many are rising to this challenge. The Daimo wallet explicitly describes itself as Venmo on Ethereum, aiming to combine the convenience of Venmo with the decentralization of Ethereum. In the decentralized social space, Farcaster does a great job of combining true decentralization (for example, check out this guide to learn how to build your own alternative client) with an excellent user experience. Unlike previous “social finance” crazes, the average Farcaster user is not here to gamble—passing a key test for truly sustainable crypto applications.



This post was sent from Warpcast, a major Farcaster client; the screenshot is from Firefly, an alternative Farcaster + Lens client.


These successes are what we need to build on and extend to other application areas, including identity, reputation and governance.


Applications built or maintained today should be based on the Ethereum of the 2020s


The Ethereum ecosystem still has a large number of applications built around what is fundamentally a “2010s Ethereum” workflow. Most ENS activity still happens on L1. Most token issuance also happens on L1, with little serious thought given to making bridged tokens available on L2 (for example, see this ZELENSKYY memecoin fan applauding the coin’s ongoing donations to Ukraine, but complaining that L1 fees make it too expensive). Beyond scalability, we’re also behind on privacy: POAPs are all fully on-chain, which may be the right choice for some use cases but is very suboptimal for others. Most DAOs, and Gitcoin Grants, still use fully transparent on-chain voting, making them highly susceptible to bribery (including retroactive airdrops), which has been shown to severely distort contribution patterns. ZK-SNARKs have existed for many years now, yet many applications still haven’t started using them properly.


These are hard-working teams dealing with large existing user bases, so I don’t blame them for not upgrading to the latest wave of technology all at once. But this upgrade will need to happen soon. Here are some key differences between a fundamentally “2010s Ethereum workflow” and a fundamentally “2020s Ethereum workflow”:



Basically, Ethereum is no longer just a financial ecosystem. It is a full-stack alternative to "centralized technology" in most areas, and even offers some things that centralized technology cannot (e.g., governance-related applications). We need to build with this broader ecosystem in mind.


Conclusion


Ethereum is going through a decisive transition, from an era of “fast L1 progress” to one where L1 progress will still be significant, but slightly more modest and less disruptive to applications.


We still need to finish scaling. This work will happen more behind the scenes, but it will still be important.


Application developers are no longer just building prototypes; we are building tools used by millions of people. Across the ecosystem, we need to adjust our mindsets completely accordingly.


Ethereum has upgraded from “just” a financial ecosystem to a more thoroughly independent decentralized technology stack. Across the ecosystem, we need to adjust our mindsets completely accordingly as well.


Original link


