
ABCDE: Detailed analysis of co-processors and various solutions

2023-12-01 17:30
Original Title: "ABCDE: An In-Depth Discussion of Co-Processors and Solutions from Various Companies"
Original Source: Kris & Laobai (ABCDE), Mo Dong (Celer Network)

With the surge of interest in co-processors over the past few months, this new ZK use case has been receiving more and more attention.


However, we have found that most people are still relatively unfamiliar with the concept of a co-processor, and especially vague about its precise positioning: what a co-processor is, and what it is not. As for a comparison of the technical solutions on the co-processor track, no one has yet organized one systematically. This article hopes to give the market and users a clearer picture of the co-processor track.


One. What is a Co-Processor and What is it Not?


If you were asked to explain a co-processor to a non-technical person or a non-developer in just one sentence, how would you do it?


Dr. Mo Dong's formulation may be the closest to a standard answer: a co-processor simply "gives Dune Analytics the capabilities of smart contracts".


How should this sentence be unpacked?


Imagine a scenario where we use Dune: you want to provide liquidity on Uniswap V3 to earn some fees, so you open Dune and look up the trading volume of various trading pairs on Uniswap, the fee APRs over the past 7 days, the volatility ranges of mainstream trading pairs, and so on...


Or when StepN was popular, you started reselling shoes but were unsure when to sell them. So you monitored StepN's data on Dune every day, including daily trading volume, new user numbers, and shoe floor prices. You planned to sell quickly once growth slowed down or a downward trend appeared.


Of course, you are probably not the only one watching this data; the development teams of Uniswap and StepN are likely paying attention to it as well.


This data is very meaningful: it can not only help identify changes in trends, but also be used to build more products, much like the "big data" playbook commonly used by Internet giants.


For example, based on the style and price of shoes a user frequently buys and sells, similar shoes could be recommended to them.


For example, based on how long users have held their Genesis shoes, a "user loyalty reward program" could be launched, giving loyal users more airdrops or benefits.


For example, based on the TVL or trading volume that an LP or trader contributes on Uniswap, a VIP program similar to a CEX's could be established, offering traders reduced trading fees or LPs a larger share of fees.


……


Now the problem arises: in the Internet industry, big data and AI are basically a black box. Companies can do whatever they want; users can neither see inside nor have any say.


In the Web3 industry, however, transparency and trustlessness are our natural political correctness: black boxes are rejected!


So when you want to implement the scenarios above, you face a dilemma: either you do it by centralized means, "manually" collecting the index data with Dune in the backend and then deploying and using it, or you write a set of smart contracts that automatically fetch the data on-chain, perform the computation, and deploy the results.


The former leaves you with the "politically incorrect" trust problem.


The latter will generate astronomical gas costs on-chain, which your (project team's) wallet cannot afford.


Now it is time for the co-processor to take the stage. It combines the two approaches above: the "backend indexing + computation" step is performed off-chain, while ZK technology is used to "prove innocence" for that step, and the result is then fed to the smart contract. This solves the trust problem while eliminating the massive gas fees. Perfect!


Why is it called a "co-processor"? The term comes from the history of the GPU in Web 2.0. The GPU was introduced as a separate piece of computing hardware, independent of the CPU, because its architecture could handle workloads that the CPU is fundamentally ill-suited for, such as massively parallel repetitive computation and graphics computation. It is precisely this "co-processor" architecture that gave us today's spectacular CG movies, games, AI models, and so on; the co-processor architecture was a leap in computing system design.

Now, various co-processor teams hope to bring this architecture into Web 3.0. Here, the blockchain plays the role of the Web 3.0 CPU: whether L1 or L2, it is inherently unsuited to tasks involving "heavy data" and "complex computation logic". Introducing a blockchain co-processor to handle such computation can therefore greatly expand the space of possible blockchain applications.


To summarize, a co-processor does two things:


1. Retrieve data from the blockchain, and use ZK to prove that this data is authentic and untampered with;


2. Perform the required computation on that data, and use ZK to prove that the computed result is likewise authentic and untampered with. The result can then be consumed by smart contracts in a "low cost + trustless" way.
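To make these two steps concrete, here is a minimal, hypothetical Python sketch of the off-chain side of such a flow. Everything in it is invented for illustration: `prove_storage` and `prove_computation` are placeholders for a real ZK proving system, not the API of any actual co-processor.

```python
# A hypothetical sketch of a co-processor's off-chain flow. The two prove_*
# functions stand in for a real ZK proving system; they are NOT a real API,
# just an illustration of the two-step pattern described above.
from dataclasses import dataclass

@dataclass
class Proof:
    claim: str
    proof_bytes: bytes  # in a real system, a succinct ZK proof

def prove_storage(block_number: int, query: str) -> tuple[list[int], Proof]:
    """Step 1: fetch historical chain data and prove it is authentic."""
    data = [120, 95, 87]  # placeholder: e.g. a user's daily trade volumes
    return data, Proof(f"data at block {block_number} matches {query}", b"...")

def prove_computation(data: list[int]) -> tuple[int, Proof]:
    """Step 2: run arbitrary logic over the data and prove the result."""
    result = sum(data)  # e.g. cumulative volume for a VIP fee-discount program
    return result, Proof(f"sum({data}) == {result}", b"...")

data, p1 = prove_storage(18_000_000, "trades of a placeholder address")
result, p2 = prove_computation(data)
# Both proofs plus the result are then submitted on-chain, where a verifier
# contract checks them before the consuming smart contract uses `result`.
```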


Recently, the Starkware community has been buzzing about a concept called Storage Proof (also called State Proof), which essentially performs step 1. It is represented by Herodotus, Lagrange, and others, and it is also the technical basis of many ZK cross-chain bridges.


A co-processor is nothing more than step 1 with a step 2 added after it: after extracting data trustlessly, it performs a trustless computation on that data.


So, to describe it in relatively technical terms, a co-processor should be a superset of Storage Proof/State Proof and a subset of Verifiable Computation.


One thing to note: a co-processor is not a Rollup.


Technically speaking, a Rollup's ZK proof is similar to step 2 above, while step 1, "getting the data", is implemented directly by the Sequencer. Even a decentralized Sequencer obtains the data through some competition or consensus mechanism, not through a ZK Storage Proof. More importantly, in addition to the computation layer, a ZK Rollup must also implement a permanent storage layer similar to that of an L1 blockchain. A ZK co-processor, by contrast, is "stateless": after a computation is completed, it does not need to retain any state.







1. Brevis


















2. Herodotus



Herodotus is data-access middleware that aims to give smart contracts synchronous, on-chain access to current and historical states across layers:

· L1 states from L2s

· L2 states from both L1s and other L2s

· L3/App-Chain states to L2s and L1s


Herodotus proposed the concept of the storage proof, which combines inclusion proofs (confirming the existence of data) with computation proofs (verifying the execution of a multi-step workflow) to prove the validity of one or more elements of a large dataset, such as the entire Ethereum blockchain or a rollup.


At its core, a blockchain is a database in which data is secured using cryptographic data structures such as Merkle trees and Merkle Patricia tries. What makes these structures special is that once data has been committed to them, a proof can be generated to confirm that the data is contained within the structure.


The use of Merkle trees and Merkle Patricia tries strengthens the security of the Ethereum blockchain. Because data is hashed at every level of the tree, it is almost impossible to alter data without detection: any change to a data point requires changing the corresponding hashes all the way up to the root hash, which is publicly visible in the block header. This fundamental property of blockchains provides a high level of data integrity and immutability.


Second, these trees allow for efficient verification through inclusion proofs. For example, to verify that a transaction is included, or to verify a contract's state, there is no need to search the entire Ethereum blockchain; it suffices to verify the relevant path within the Merkle tree.
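As an illustration of "verifying only the relevant path", here is a minimal sketch of inclusion-proof verification on a toy binary Merkle tree. It uses SHA-256 for brevity; Ethereum itself uses Keccak-256 and the more involved Merkle Patricia Trie, so this shows the principle rather than Ethereum's exact scheme. Note that the proof carries only log2(n) sibling hashes, not the dataset itself.

```python
# A minimal sketch of Merkle inclusion-proof verification on a toy binary
# tree (SHA-256 for simplicity; Ethereum uses Keccak-256 and a Patricia trie).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """Walk from the leaf up to the root, hashing with each sibling.
    `proof` is a list of (sibling_hash, side) pairs, side in {"left", "right"}."""
    node = h(leaf)
    for sibling, side in proof:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

# Tiny 4-leaf tree: the proof for one leaf needs only 2 sibling hashes.
leaves = [h(x) for x in (b"tx0", b"tx1", b"tx2", b"tx3")]
l01, l23 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(l01 + l23)
proof_for_tx2 = [(leaves[3], "right"), (l01, "left")]
assert verify_inclusion(b"tx2", proof_for_tx2, root)
```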


Herodotus defined storage proof as the fusion of the following:


· Proof of inclusion: these proofs confirm the existence of specific data in a cryptographic data structure (such as a Merkle tree or Merkle Patricia trie), ensuring that the data in question actually exists in the dataset.


· Computational proof: verifies the execution of a multi-step workflow, proving the validity of one or more elements in a large dataset, such as the entire Ethereum blockchain or a rollup. Beyond indicating that the data exists, it also verifies the transformations or operations applied to that data.


· Zero-knowledge proof: reduces the amount of data a smart contract needs to interact with. A zero-knowledge proof allows a smart contract to confirm the validity of a claim without processing all of the underlying data.


Workflow:


1. Obtain block hash


Every piece of data on the blockchain belongs to a specific block, and the block hash serves as that block's unique identifier, summarizing all of its contents via the block header. The first step of the storage proof workflow is therefore to determine and verify the block hash of the block containing the data of interest.


2. Obtain block header


Once the relevant block hash is obtained, the next step is to access the block header. To do so, the full block header associated with that hash must be provided and hashed, and the resulting digest compared against the block hash obtained in the previous step:


There are two ways to obtain the block hash:


(1) Use the BLOCKHASH opcode to retrieve



This step ensures that the block header being processed is authentic. Once this step is completed, the smart contract can access any value in the block header.
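Here is a minimal sketch of steps 1 and 2 together, assuming web3.py and the eth-hash package, with a placeholder RPC endpoint. An Ethereum block hash is by definition keccak256 of the RLP-encoded header, which is what makes this single comparison sufficient to authenticate every header field:

```python
# Sketch: authenticate a supplied RLP-encoded block header against a trusted
# block hash. RPC URL is a placeholder; header_rlp is assumed to come from
# elsewhere (e.g. an archive node).
from web3 import Web3
from eth_hash.auto import keccak  # pip install "eth-hash[pycryptodome]"

w3 = Web3(Web3.HTTPProvider("https://eth.example.com"))  # placeholder RPC URL

def verify_header(header_rlp: bytes, block_number: int) -> bool:
    # Step 1: obtain the trusted block hash (here simply from a node; on-chain
    # it would come from the BLOCKHASH opcode or a cache of historical hashes).
    trusted_hash = w3.eth.get_block(block_number).hash
    # Step 2: an Ethereum block hash is keccak256(rlp(header)), so a matching
    # digest authenticates every field inside the supplied header.
    return keccak(header_rlp) == bytes(trusted_hash)
```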


3. Determine the required roots (optional).



With the block header, we can delve into its contents, especially:



stateRoot: the cryptographic digest of the entire Ethereum state at the time of the block.

transactionsRoot: the cryptographic digest of all transactions in the block.

receiptsRoot: the cryptographic digest of all transaction results (receipts) in the block.



Using these roots, it can be verified whether specific accounts, transactions, or receipts are included in the block.


4. Verify data based on the selected root (optional).



With the root we have selected, and given that Ethereum uses the Merkle Patricia Trie structure, we can use a Merkle inclusion proof to verify that the data exists in the tree. The verification steps vary with the data and with its depth within the block.
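For a feel of what the raw material for such a proof looks like in practice, here is a sketch using the standard eth_getProof RPC via web3.py (v6 API). The endpoint and address are placeholders, not real deployments:

```python
# Sketch: fetch Merkle Patricia proofs for an account and one storage slot
# against a block's stateRoot, via the standard eth_getProof RPC.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.example.com"))  # placeholder RPC URL
account = Web3.to_checksum_address("0x" + "11" * 20)      # placeholder address

proof = w3.eth.get_proof(account, [0], "latest")  # prove storage slot 0
# proof.accountProof and proof.storageProof hold the trie nodes a verifier
# hashes step by step; the walk must terminate exactly at the block header's
# stateRoot (and, within the account, at its storageRoot).
```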



Herodotus currently supports:

From Ethereum to Starknet

From Ethereum Goerli* to Starknet Goerli*

From Ethereum Goerli* to zkSync Era Goerli*


3. Axiom


Axiom provides a way for developers to query block headers, accounts, or storage values from the entire history of Ethereum. Axiom introduces a new method of cryptographic linking: all results it returns are verified on-chain through zero-knowledge proofs, which means smart contracts can use them without any additional trust assumptions.





Axiom's on-chain system consists mainly of two smart contracts:

AxiomV1: caches Ethereum block hashes.

AxiomV1Query: the smart contract that executes queries against AxiomV1.


(1) Cache block hash values in AxiomV1:


The AxiomV1 smart contract caches Ethereum block hashes since the genesis block in two forms:



First, it caches the Keccak Merkle roots of batches of 1024 consecutive block hashes. These Merkle roots are kept up to date via ZK proofs verifying that the block hashes chain correctly to a trusted recent block hash.

Second, Axiom stores the Merkle Mountain Range of these Merkle roots, starting from the genesis block. The Merkle Mountain Range is built on-chain and updated with the Keccak Merkle roots cached in the first form.
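To illustrate why a Merkle Mountain Range is cheap to maintain on-chain, here is a toy Python sketch of the append operation (using SHA-256 rather than Keccak, and ignoring proof generation entirely). Each append touches only O(log n) "peak" hashes, much like incrementing a binary counter:

```python
# Toy Merkle Mountain Range: peaks[i] holds the root of a perfect subtree of
# 2**i leaves, or None. Appending a leaf merges equal-sized subtrees upward,
# exactly like carry propagation in binary addition.
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

class ToyMMR:
    def __init__(self):
        self.peaks: list = []  # index i: root of a 2**i-leaf subtree, or None

    def append(self, leaf_hash: bytes) -> None:
        carry = leaf_hash
        for i in range(len(self.peaks)):
            if self.peaks[i] is None:
                self.peaks[i] = carry
                return
            # Two subtrees of equal size merge into one twice as big.
            carry = h(self.peaks[i], carry)
            self.peaks[i] = None
        self.peaks.append(carry)
```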


(2) Perform the query in AxiomV1Query:



These ZK proofs verify whether the queried on-chain data is located directly in the block header, or in the block's account or storage trie, by checking Merkle Patricia Trie inclusion (or non-inclusion) proofs.


4. Nexus


Nexus is attempting to build a universal platform for verifiable cloud computing using zero-knowledge proofs. It is currently machine-architecture agnostic, supporting RISC-V, WebAssembly, and the EVM. Nexus uses the SuperNova proof system; the team has measured that generating proofs currently requires 6 GB of memory, and it plans to optimize this so that ordinary user devices can generate proofs.


Strictly speaking, Nexus's architecture is divided into two parts:








