
EigenLayer: Bringing Ethereum-level trust to middleware

2022-11-15 15:00
Original title: "EigenLayer: Introducing Ethereum-level trust into middleware"
Original author: Jiawei, IOSG Ventures


Introduction


Source: EigenLayer, IOSG Ventures


The current Ethereum ecosystem contains a wide variety of middleware.


The left side shows the application perspective: some dApps depend on middleware to operate. DeFi derivatives, for example, rely on oracle price feeds, and cross-chain asset transfers rely on bridges acting as third-party relays.


The right side shows the modular perspective: Rollup sequencing requires building a sequencer network, and off-chain data availability is provided by DACs or by DA-purpose Layer 1s such as Polygon Avail and Celestia.


All of this middleware, large and small, exists independently of Ethereum itself and runs its own validator network: participants stake tokens and commit hardware in order to provide the middleware's services.


Our trust in middleware comes from economic security: honest work is rewarded, while misbehavior leads to slashing of staked tokens. The degree of trust is therefore bounded by the value of the staked assets.


If we picture all the protocols and middleware in the Ethereum ecosystem that rely on economic security as a pie, it looks like this: the staked funds are split into slices of varying size according to the scale of each network.


Source: IOSG Ventures


However, the current economic-security model has several problems:


For middleware: validators must commit capital to secure the network, which carries a marginal cost. And to give the native token value capture, validators are usually required to stake the middleware's own token, leaving them with uncertain risk exposure from its price fluctuations.


Moreover, a middleware's security depends on the total value of its staked tokens: if that value falls, the network becomes cheaper to attack, creating a potential security incident. This problem is especially evident in protocols whose tokens have relatively small market caps.


For dApps: some need no middleware at all (imagine a pure-swap DEX) and only have to trust Ethereum; but a dApp that does rely on middleware (such as a derivatives protocol that needs oracle price feeds) in fact rests on the trust assumptions of both Ethereum and the middleware simultaneously.


The middleware's trust assumption is essentially derived from trust in its distributed validator network, and we have already seen plenty of incidents where incorrect oracle price feeds led to loss of assets.


This further gives rise to a barrel effect, where the system holds only as much as its shortest stave:


Suppose a highly composable DeFi application A involves TVL in the billions, while the trust in oracle B that it depends on is backed by staked assets of only a few hundred million. Once something goes wrong, the risk transmission and nesting created by inter-protocol links mean the damage originating from the oracle can be magnified almost without bound;


Suppose a modular blockchain C adopts data availability scheme D, execution-layer scheme F, and so on. If any one component misbehaves or is attacked, the blast radius is the whole of chain C, even though the other components are fine.


In short, a system's security is set by its weakest link, and a seemingly insignificant weak point can create systemic risk.
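To make the barrel effect concrete, here is a back-of-the-envelope sketch in Python; the dollar figures are illustrative stand-ins for "billions of TVL" and "hundreds of millions staked", not data from the article:

```python
# Illustrative figures only: a highly composable DeFi app whose oracle
# is secured by far less stake than the value the app protects.
app_tvl = 2_000_000_000        # value at risk in application A ($)
oracle_stake = 300_000_000     # total stake securing oracle B ($)

# Worst case: corrupting the oracle forfeits at most the stake, while
# the attacker can extract value from everything built on top of it.
cost_of_corruption = oracle_stake
attacker_upside = app_tvl - cost_of_corruption

print(f"cost of corruption: ${cost_of_corruption:,}")
print(f"attacker's upside:  ${attacker_upside:,}")
# Security must be judged against the cheapest-to-corrupt component,
# not the sum of all stakes in the system.
```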


What does EigenLayer do?


The idea of EigenLayer is not complicated:


In the spirit of shared security, it tries to raise middleware's economic security to the same level as Ethereum's.


Source: EigenLayer, IOSG Ventures


This is done through "Restaking".


Restaking means pledging the ETH stake of the Ethereum validator network a second time:


Originally, validators stake on the Ethereum network to earn rewards, and misbehavior results in their staked assets being slashed. Likewise, after restaking, a validator earns staking rewards on the middleware network, but misbehavior there is punished by slashing the original ETH stake.


Concretely, restaking is implemented by having the staker set their withdrawal address on the Ethereum network to the EigenLayer smart contract, which grants the contract the power to slash.
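A minimal sketch of this mechanism, with invented names and a deliberately simplified contract model (this is not EigenLayer's actual contract code):

```python
# Minimal model of the mechanism described above; the class names and
# numbers are invented, not EigenLayer's actual contracts.
class Validator:
    def __init__(self, stake_eth):
        self.stake_eth = stake_eth
        self.withdrawal_address = None   # where the stake exits to

class RestakingContract:
    """Stands in for the EigenLayer contract that receives withdrawals."""
    def __init__(self):
        self.restaked = {}               # validator id -> Validator

    def opt_in(self, vid, validator):
        # Pointing the withdrawal address at this contract is what grants
        # it slashing power: the stake can only ever exit through here.
        validator.withdrawal_address = self
        self.restaked[vid] = validator

    def slash(self, vid, fraction):
        v = self.restaked[vid]
        v.stake_eth *= (1.0 - fraction)  # burn part of the exiting stake

v = Validator(stake_eth=32.0)
contract = RestakingContract()
contract.opt_in(vid=1, validator=v)
contract.slash(vid=1, fraction=0.5)      # misbehavior on the middleware
print(v.stake_eth)                       # 16.0 ETH remains claimable
```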


Source: Messari, IOSG Ventures


Besides directly restaking $ETH, EigenLayer offers two further options to expand its total addressable market: staking the WETH/USDC LP token and staking the stETH/USDC LP token.


Additionally, to preserve value capture for its native token, a middleware can adopt EigenLayer while keeping its native-token staking requirement, so that its economic security is drawn from both its native token and Ethereum, avoiding the "death spiral" that a price crash in a single token could trigger.
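A rough sketch of how dual-sourced economic security behaves under a native-token crash; the figures and the simple additive model are assumptions for illustration only:

```python
# Illustrative sketch of the dual-stake idea: economic security drawn
# from both the native token and restaked ETH. All figures are made up.
def economic_security(native_stake, native_price, restaked_eth, eth_price):
    return native_stake * native_price + restaked_eth * eth_price

before = economic_security(10_000_000, 5.0, 50_000, 1_500.0)
after  = economic_security(10_000_000, 0.5, 50_000, 1_500.0)  # token -90%
print(f"before crash: ${before:,.0f}")   # $125,000,000
print(f"after crash:  ${after:,.0f}")    # $80,000,000 -- the ETH leg holds
```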


Feasibility


Broadly, participating in EigenLayer's restaking places two kinds of demands on validators: capital and hardware.


The capital requirement for Ethereum validation is 32 ETH, and restaking does not change it; however, each new middleware taken on adds potential new risk exposure, such as inactivity penalties and slashing.
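As a rough illustration of this trade-off, the following sketch weighs the extra yield against the added slashing exposure; every rate and probability here is hypothetical, since the article gives no figures:

```python
# Back-of-the-envelope expected return for a restaker. All rates below
# are hypothetical placeholders, not figures from the article.
stake = 32.0                 # ETH, unchanged by restaking
base_apr = 0.04              # assumed Ethereum staking yield
middleware_apr = 0.02        # assumed extra yield from restaked services
p_slash = 0.005              # assumed annual chance of a middleware slash
slash_fraction = 0.5         # assumed portion of stake lost if slashed

expected_yield = stake * (base_apr + middleware_apr)
expected_slash_cost = p_slash * slash_fraction * stake
print(f"expected yield:      {expected_yield:.3f} ETH/yr")
print(f"expected slash cost: {expected_slash_cost:.3f} ETH/yr")
# Restaking is rational only while the marginal yield outweighs the
# marginal slashing and inactivity risk taken on.
```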


Source: Ethereum, IOSG Ventures


On the hardware side, post-Merge Ethereum validation deliberately keeps requirements very low in order to lower the barrier to participation and achieve sufficient decentralization; a reasonably good home computer meets the recommended configuration, leaving spare hardware capacity. By analogy with miners mining multiple coins when they have surplus hashpower, restaking, in hardware terms alone, amounts to using that spare capacity to support multiple pieces of middleware.


Sounds a lot like Cosmos's Interchain Security, and nothing more? In fact, EigenLayer's impact on the post-Merge Ethereum ecosystem may go further than that. In this article, we take EigenDA as the example to elaborate.


Source: EigenLayer, IOSG Ventures


EigenDA


Note: data availability (DA), erasure coding, and KZG commitments are only briefly covered here. The data availability layer is a component carved out from the modular perspective, responsible for providing data availability for Rollups. Erasure coding and KZG commitments are components of Data Availability Sampling (DAS): erasure coding allows a node to verify the availability of all the data by randomly downloading only a portion of it, and to rebuild all the data if necessary, while the KZG commitment ensures the erasure code was encoded correctly. To stay on topic, this section omits some details, definitions, and background. If the context here is unclear, see IOSG's earlier articles "Combining Soon: Detailed Explanation of the Latest Technical Route of Ethereum" and "Dismantling the Data Availability Layer: The neglected LEGO brick of the modular future".


As a brief review, current DA schemes can be divided into two categories: on-chain and off-chain.


On-chain, "pure Rollup" refers to simply posting DA on-chain, paying 16 gas per byte of calldata; this accounts for as much as 80%-95% of a Rollup's cost. Once Danksharding is introduced, the cost of on-chain DA will drop substantially.
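Using the 16 gas/byte figure above, a quick worked example of what posting a batch as calldata costs (the gas price and ETH price are assumed for illustration):

```python
# Rough pre-Danksharding cost of posting rollup data as calldata,
# using the 16 gas/byte figure above. Prices are assumed.
batch_bytes = 100_000            # size of one rollup batch
gas_per_byte = 16                # calldata cost per (non-zero) byte
gas_price_gwei = 20              # assumed gas price
eth_price = 1_500                # assumed ETH price ($)

gas = batch_bytes * gas_per_byte
cost_eth = gas * gas_price_gwei * 1e-9
print(f"{gas:,} gas = {cost_eth:.4f} ETH = ${cost_eth * eth_price:.2f}")
# 1,600,000 gas = 0.0320 ETH = $48.00 per batch -- and this DA overhead
# is the 80%-95% share of rollup cost that Danksharding aims to cut.
```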


Among off-chain DA schemes, there is a clear progression in both security and overhead from one to the next.


Pure Validium means keeping DA entirely off-chain with no guarantee, bearing the risk that the off-chain data hosting provider may shut down and go offline at any time. Rollup-specific solutions include StarkEx, zkPorter, and Arbitrum Nova, where a small group of well-known third parties forms a DAC to guarantee DA.


EigenDA is a general-purpose DA solution in the same category as Celestia and Polygon Avail, but it differs from the other two in several respects.

For comparison, let's first ignore EigenDA and see how Celestia's DA works.

Source: Celestia


Take Celestia's Quantum Gravity Bridge as an example:


The L2 contract on the Ethereum mainnet verifies validity proofs or fraud proofs as usual; the difference is that DA is provided by Celestia. The Celestia chain has no smart contracts and performs no computation on the data; it only guarantees that the data is available.


The L2 operator publishes transaction data to the Celestia chain; Celestia's validators sign the Merkle root of the DA attestation and send it to the DA Bridge contract on the Ethereum mainnet for verification and storage.


In effect, the Merkle root of the DA attestation stands in as proof for all of the DA: the DA Bridge contract on the Ethereum mainnet only needs to verify and store this Merkle root. Compared with storing the data itself on-chain, this greatly reduces the overhead of ensuring DA, while the Celestia chain provides the security guarantee.
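A minimal sketch of the idea: the bridge contract stores one 32-byte Merkle root rather than the data itself. This toy code is illustrative, not Celestia's implementation:

```python
import hashlib

# Toy Merkle root: validators sign this single 32-byte digest, and the
# bridge contract verifies and stores it instead of the whole batch.
def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [h(x) for x in leaves]
    while len(layer) > 1:
        if len(layer) % 2:           # duplicate the last node on odd layers
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

batch = [b"tx1", b"tx2", b"tx3", b"tx4"]   # arbitrarily large batch...
root = merkle_root(batch)                  # ...attested by 32 bytes
print(root.hex())
```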


What happens on the Celestia chain itself? The data blob is propagated through the P2P network, and consensus on it is reached via Tendermint. Every Celestia full node must download the entire data blob. (Only full nodes are discussed here; Celestia's light nodes can use DAS to verify data availability, which we won't expand on.)

Since Celestia is itself a Layer 1 and must broadcast and reach consensus on data blobs, the requirements on its full nodes are actually quite high (128 MB/s download and 12.5 MB/s upload), yet the resulting throughput may still be modest (1.4 MB/s).


EigenDA adopts a different architecture: it requires neither its own consensus nor a P2P network.


How is this achieved?


Source: EigenLayer


First, EigenDA nodes must restake their ETH in the EigenLayer contract to participate; EigenDA nodes are thus a subset of Ethereum stakers.


Second, upon obtaining the data blob, the party demanding data availability (e.g. a Rollup, here acting as the Disperser) encodes the blob with an erasure code and computes a KZG commitment (the encoded size depends on the erasure code's redundancy ratio), then publishes the KZG commitment to the EigenDA smart contract.
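To make the encoding step concrete, here is a toy Reed-Solomon-style erasure code over a small prime field: any k of the n shares reconstruct the blob. This is an educational sketch, not how a production DA layer encodes data, and the KZG commitment that proves correct encoding is omitted:

```python
# Toy erasure code: the blob's symbols become the coefficients of a
# polynomial, and n > k evaluations are handed out as shares, so any
# k shares rebuild the blob. Educational sketch only.
P = 2**31 - 1  # small prime modulus, for illustration

def encode(data, n):
    # Share i is the point (i, poly(i)) of the degree-(k-1) polynomial.
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def _poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def decode(shares, k):
    # Lagrange interpolation from any k shares recovers the coefficients.
    pts = shares[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(pts):
            if j != i:
                basis = _poly_mul(basis, [-xj % P, 1])   # times (x - xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, -1, P) % P
        for d, c in enumerate(basis):
            coeffs[d] = (coeffs[d] + scale * c) % P
    return coeffs

blob = [5, 7, 11]                       # k = 3 symbols of the data blob
shares = encode(blob, n=6)              # 2x redundancy ratio
assert decode(shares[3:], k=3) == blob  # any 3 of the 6 shares suffice
```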


The Disperser then distributes the encoded shares along with the KZG commitment to the EigenDA nodes. Each node compares the commitment it receives with the one on the EigenDA smart contract and, after confirming they match, signs an attestation. The Disperser collects these signatures one by one, produces an aggregate signature, and publishes it to the EigenDA smart contract, which verifies it.
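A toy walk-through of this dispersal-and-attestation sequence; the node names are invented and the BLS-style aggregate signature is reduced to a simple quorum check, so this shows only the order of steps, not real cryptography:

```python
import hashlib

# Toy model of the flow above; not real signature aggregation.
COMMITMENT = b"kzg-commitment-of-encoded-blob"   # on the EigenDA contract

def attest(node_secret, commitment):
    return hashlib.sha256(node_secret + commitment).digest()

# Each node checks the commitment received from the Disperser against
# the one on the contract, and signs an attestation if they match.
nodes = {f"node{i}": f"secret{i}".encode() for i in range(4)}
received = COMMITMENT                            # what the Disperser sent
attestations = {nid: attest(sec, received)
                for nid, sec in nodes.items() if received == COMMITMENT}

# The Disperser aggregates the signatures and posts the aggregate to
# the contract, which verifies that it meets the required quorum.
QUORUM = 3
assert len(attestations) >= QUORUM, "not enough DA attestations"
print(f"{len(attestations)}/{len(nodes)} nodes attested; blob accepted")
```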


In this workflow, an EigenDA node merely signs an attestation claiming to have stored its share of the encoded data blob, and the EigenDA smart contract merely verifies the aggregate signature. So how do we ensure that EigenDA nodes actually keep the data available?


EigenDA uses the proof-of-custody approach. The situation it targets is lazy validators: nodes that skip the work they are supposed to do (such as keeping the data available) yet pretend to have done it and sign off on the result (for instance, attesting that the data is available when in fact it is not).


Proof of custody works much like a fraud proof: if a lazy validator exists, anyone can submit a proof to the EigenDA smart contract, which verifies it; if verification passes, the lazy validator is slashed. (For more on proof of custody, see Dankrad's article, not expanded on here: https://dankradfeist.de/ethereum/2021/09/30/proofs-of-custody.html)
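A heavily simplified illustration of the custody idea, assuming a toy challenge in which a node must produce a value derivable only from the data it claimed to store (Dankrad's actual scheme, linked above, is far more involved):

```python
import hashlib

# Toy custody challenge: a lazy signer that discarded the data cannot
# answer, so the challenge exposes it and it gets slashed.
def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

share = b"encoded-data-share-42"
expected = h(share)                  # what a correct response must equal

def answer_challenge(stored_share):
    # A lazy validator that signed without storing has nothing to show.
    return h(stored_share) if stored_share is not None else None

assert answer_challenge(share) == expected       # honest node passes
if answer_challenge(None) != expected:           # lazy node fails
    print("custody challenge failed: lazy validator slashed")
```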


Summary


After the above discussion and comparison, we can see that:


Celestia's approach follows traditional Layer 1 thinking: everybody-talks-to-everybody (consensus) and everybody-sends-everyone-else-everything (broadcast). The difference is that Celestia's consensus and broadcast operate on data blobs and guarantee only that the data is available.


What EigenDA does is everybody-talks-to-the-disperser (step [3], where the Disperser collects attestations) and disperser-sends-each-node-a-unique-share (step [2], where the Disperser distributes data to the EigenDA nodes), decoupling data availability from consensus.


The reason EigenDA needs neither its own consensus nor a P2P network is that it essentially takes a "free ride" on Ethereum: the EigenDA smart contract is deployed on Ethereum, the Disperser publishes commitments and aggregated attestations there, and the contract's verification of the aggregate signature also happens on Ethereum. Consensus is thus guaranteed by Ethereum, so EigenDA is not constrained by a consensus-protocol bottleneck or the low throughput of a P2P network.


This difference is reflected directly in the two systems' node requirements and throughput.


Original link


