
Security vs Trustlessness: How Should ZK Protocols Make Mechanism Trade-offs?

2023-03-29 16:21
Original title: "IOSG Weekly Brief | Security vs Trustlessness: How Should ZK Protocols Make Mechanism Trade-offs?"
Original source: IOSG Ventures


zkSync and Polygon have both launched zkEVMs, creating a buzz in the industry. At the same time, there has been much discussion in the community about the security and decentralization of zkEVMs. IOSG recently held a Stay SAFU (Security Day) event during ETHDenver and was fortunate to host a discussion with leading projects in the zero-knowledge proof space. They shared the security principles and novel solutions of zero-knowledge proof protocols in mechanism design and engineering, as well as the various trade-offs made during the design process.


The following are the insights shared by the participants:


Queenie Wu, Partner at IOSG Ventures (Host);

Alex Gluchowski, co-founder of zkSync;

Ye Zhang, co-founder and Head of Research at Scroll;

Matt Finestone, Chief Operating Officer, Taiko;

Mikhail Komarov, founder of Nil Foundation;

Brian R, founder of RISC Zero;



Q1: How do zero-knowledge proofs enhance the security of the system you are building? On the other hand, what security issues come with deploying zero-knowledge proofs?


[Brian R]


I'm Brian Retford, CEO of RISC Zero. We are the developers of the RISC Zero zkVM, which is built on the RISC-V microarchitecture and can execute arbitrary code inside a ZK system. We are also deploying a Layer2 network called Bonsai that can execute any code in a ZK setting; you can think of it as a ZK accelerator. As for how ZK enhances security, I think it depends on the specific application scenario. Being able to perform a computation and generate a proof that can be verified anywhere in the world completely changes the paradigm for blockchain security: you no longer have to redo the same computation over and over and then rely on complex mechanisms (such as economics) to keep the whole system safe.
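To make that cost argument concrete, here is a minimal sketch with made-up numbers (none of them are RISC Zero's actual figures): re-execution costs grow with every node, while verification of a single proof stays cheap no matter how many nodes check it.

```python
# Toy cost model comparing "every node re-executes" with "one prover proves, every node verifies".
# All numbers are hypothetical units, purely illustrative.

def replicated_cost(n_nodes: int, exec_cost: float) -> float:
    """Every node re-runs the full computation."""
    return n_nodes * exec_cost

def zk_cost(n_nodes: int, prove_cost: float, verify_cost: float) -> float:
    """One prover pays the (large) proving cost; verification is cheap and constant per node."""
    return prove_cost + n_nodes * verify_cost

if __name__ == "__main__":
    # Assumed ratios: proving ~1000x execution, verifying ~1/1000 of execution.
    exec_cost, prove_cost, verify_cost = 1.0, 1000.0, 0.001
    for n in (10, 10_000, 1_000_000):
        print(n, replicated_cost(n, exec_cost), zk_cost(n, prove_cost, verify_cost))
```

With these assumed ratios, the ZK approach loses at small node counts but wins decisively as the number of verifying nodes grows, which is the paradigm shift described above.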


[Mikhail Komarov]


I'm Mikhail Komarov, founder of Nil Foundation. We provide infrastructure for ZK projects under development, such as a zkEVM compiler. This compiler compiles a high-level language into a circuit, so that every computation defined in the high-level language can be proven from the resulting circuits without any extra manual work. In addition, we introduced the concept of a "Proof Market," a decentralized bidding market for projects that need zkSNARK/STARK proofs generated. Developers can submit a bid for the zero-knowledge proof they need and then simply consume that proof in their application (for example, a zkRollup can use the Proof Market).
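As a rough sketch of the Proof Market idea described here, the snippet below matches proof requests (bids) against prover offers (asks). Every name and field is hypothetical; it is not Nil Foundation's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Bid:                 # a request to have a statement proven
    circuit_id: str
    max_price: float

@dataclass
class Ask:                 # a prover's offer to generate proofs for a given circuit
    circuit_id: str
    min_price: float

def match(bids: list[Bid], asks: list[Ask]) -> list[tuple[Bid, Ask, float]]:
    """Pair each bid with the cheapest compatible ask it can afford (toy matching rule)."""
    matches = []
    for bid in bids:
        candidates = [a for a in asks
                      if a.circuit_id == bid.circuit_id and a.min_price <= bid.max_price]
        if candidates:
            ask = min(candidates, key=lambda a: a.min_price)
            asks.remove(ask)                                  # ask is filled
            matches.append((bid, ask, (bid.max_price + ask.min_price) / 2))
    return matches

print(match([Bid("zkrollup-block", 10.0)], [Ask("zkrollup-block", 6.0)]))
```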


Basically, we are the infrastructure that developers need. On its own it doesn't enhance security, but as a whole it does. As Brian said, it enhances security by removing trust assumptions from protocols that should run in a trustless environment. The key is to further reduce trust assumptions; that is how it enhances security. I believe some of the security incidents that happened last year could have been avoided if zero-knowledge proofs had been used.


[Matt Finestone]


I'm Matt from Taiko, and we're an Ethereum-compatible ZK Rollup. We pursue maximum Ethereum and EVM compatibility. In terms of security, we are unique in that we rely heavily on well-tested, proven Ethereum building blocks, clients, and smart contract patterns. As Mikhail said, ZK reduces trust assumptions, or moves them to the protocol/proof level. It is no longer a few motivated people who "need to be trusted," but mathematics and the protocols and applications built around mathematical proofs. There are plenty of security concerns with a ZK Rollup beyond the ZK itself.


I think we try to reuse the secure parts of Ethereum as much as possible to stay secure. With time and field testing, ZK will become a very powerful system.


[Ye Zhang]


Hello, everyone. My name is Ye, and I work at Scroll. Let me briefly introduce Scroll. Scroll is a scaling solution for Ethereum. It is highly compatible with Ethereum: users can interact with apps, and developers can deploy smart contracts by simply copy-pasting code and migrating it to Scroll. It is faster and cheaper than Ethereum while offering higher throughput and strong security. We are decentralizing our proving system (a decentralized Prover network) to prevent single points of failure. This is our first step toward decentralization, because other parts of a ZK Rollup will remain centralized for quite some time to come. Even if you have great faith in math and in cryptography, you can still have a single point of failure because you are relying on a prover.


The first step in decentralization is to decentralize the proving system to make it more reliable. As for the security of ZK, as other panelists have mentioned, ZK gives you a very strong public verifiability property. Basically, anyone can do the computation and produce the proof, and then anyone can verify that proof and get the same assurance. If you have millions of nodes, each one only needs to re-run the verification algorithm instead of the original computation, which also makes the system scalable. That is where the power of ZK comes from. As for the problems it can have: if our system depends entirely on mathematics and there are errors, such as missing constraints, then it can be dangerous. That is why we take a variety of approaches to improve our security. For example, we take a community-driven approach to development. From day one we have open-sourced our entire development process and had it reviewed by the Ethereum community and our own community, which holds us to a higher standard. That is how we minimize trust and improve security.


[Alex Gluchowski]


My name is Alex Gluchowski, and I'm the CEO of Matter Labs, the company behind the zkSync protocol. We are building the zkSync Era network, a ZK Rollup with an EVM-compatible zkEVM. We took a slightly different approach from EVM-equivalent rollups. We believe in taking a pragmatic approach and starting with something compatible, so that developers can easily plug in, port existing applications, and start from existing tools. However, the final ZK environment is different: if you tie yourself to the original technology, it becomes difficult to reach the maximum capacity of a ZK system. This matters because our mission is to extend blockchain to real-world scale, bring the next billion users onto the blockchain, and create a new Internet of value. If you are thinking about millions or even billions of users, you really want to keep costs down, because when billions of transactions add up, costs become very important.


How does this affect the way we enhance security? This is a very interesting question. When you ask how to improve security or any other property, you want to compare alternatives, right? What is my benchmark? What are the alternatives to using ZK? The alternatives are the other scaling techniques that existed before ZK Rollups, such as Optimistic Rollups, sidechains, Plasma, and so on. These schemes introduce new trust assumptions. If our goal is to scale to a billion users, and our mission is not just to scale throughput but to scale value while maintaining self-sovereignty, self-custody, permissionlessness, and the completely trustless nature of the system, that can only be done with ZK.


Q2: When we compare different types of zkEVM, we usually focus on their scalability and compatibility (Vitalik has made a detailed comparison: https://vitalik.ca/general/2022/08/04/zkevm.html). If we add a security dimension, how would zkSync Era, Scroll, and Taiko compare the potential security risks that their different mechanism designs might introduce?


[Alex Gluchowski]


As previous speakers have mentioned, for these complex systems to be secure you have to implicitly trust many components. For example, you trust the code produced by the compiler and assume it faithfully executes the logic you wrote. Why would you believe that of Solidity? There is no formal definition, so you simply trust the compiler to behave correctly across versions. We think this is something that has to be addressed. That is why we started building a compiler based on the LLVM framework, with Solidity as one of the front ends, relying on this mature framework with its many tools for static analysis, security checking, and so on, and with a back end targeting our zkVM. We can also support other, more mature languages that have already been used in security-critical environments, such as Rust, or newer languages designed with security in mind, such as Move, which avoids issues like double spending. All in all, although it is complicated, we have to address it at multiple levels.
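A minimal sketch of the layering described here, assuming a front end that lowers source to a shared intermediate representation, IR-level checks, and a back end targeting the zkVM. The stage names and string-based "IR" are illustrative stand-ins, not Matter Labs' actual LLVM toolchain.

```python
def solidity_frontend(source: str) -> list[str]:
    # Hypothetical front end: lower each source statement to an IR instruction.
    return [f"ir::{line.strip()}" for line in source.splitlines() if line.strip()]

def static_checks(ir: list[str]) -> list[str]:
    # IR-level analyses -- the benefit of reusing a mature framework is that many
    # existing tools (static analysis, security checks) operate at this layer.
    assert all(instr.startswith("ir::") for instr in ir)
    return ir

def zkvm_backend(ir: list[str]) -> list[str]:
    # Hypothetical back end: lower IR to zkVM instructions.
    return [instr.replace("ir::", "zkvm::") for instr in ir]

program = zkvm_backend(static_checks(solidity_frontend("a = 1\nb = a + 1")))
print(program)   # ['zkvm::a = 1', 'zkvm::b = a + 1']
```

The point of the design is that other front ends (Rust, Move, and so on) can reuse the same IR checks and the same zkVM back end.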


[Ye Zhang]


I want to talk about some of the different approaches and their background. We are building an EVM-bytecode-level compatibility scheme: basically, we are compatible with EVM bytecode. This is different from the way zkSync is built. We also believe that a new compiler should not have to be trusted. That is why we stick with the Solidity compiler, which is not perfect but is relatively mature in the context of blockchain; nobody has used Solidity with LLVM before. We believe that is the better standard because smart contracts and DeFi have already been battle-tested by Solidity developers. This is why we believe that following this compiler's standards, the Solidity compiler's standards, and the definitions in the EVM Yellow Paper is the best way to ensure system security. From the circuit side, we don't have to worry about the compiler side: we don't have to build our own compiler, we just take the existing infrastructure and prove that it executes correctly.
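To illustrate what "bytecode-level compatibility" means in practice, the toy below executes deployed EVM bytecode as-is and records an execution trace of the kind a zkEVM circuit would constrain. Only PUSH1/ADD/STOP are handled, and the trace format is invented for illustration; it is not Scroll's actual design.

```python
PUSH1, ADD, STOP = 0x60, 0x01, 0x00

def trace_bytecode(code: bytes) -> list[dict]:
    """Interpret a tiny EVM subset and record one trace row per executed opcode."""
    stack, pc, rows = [], 0, []
    while pc < len(code):
        op = code[pc]
        rows.append({"pc": pc, "op": hex(op), "stack": list(stack)})
        if op == PUSH1:
            stack.append(code[pc + 1]); pc += 2
        elif op == ADD:
            a, b = stack.pop(), stack.pop(); stack.append((a + b) % 2**256); pc += 1
        elif op == STOP:
            break
        else:
            raise NotImplementedError(hex(op))
    return rows

# PUSH1 2, PUSH1 3, ADD, STOP -- the same kind of bytecode the Solidity compiler emits.
for row in trace_bytecode(bytes([PUSH1, 2, PUSH1, 3, ADD, STOP])):
    print(row)
```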


We would rather put the system's complexity into solving zkEVM compatibility at the bytecode level than into building a compiler and an LLVM-supporting back end; we did not want to build a compiler in addition to building a zkEVM. Another consideration is that we definitely care about the developer experience. Layer2 exists to scale the existing EVM, which has become crowded given the sheer amount of Solidity code and applications. We want developers to migrate seamlessly to our system while staying secure. That is why we do not plan to add any fancy new features to the EVM at this time.


Following this standard makes Ethereum truly scalable while ensuring optimal performance and timely delivery of the systems built on top of it. At the same time, we are also pushing various open-source implementations within the Ethereum community, including Type 1 and Type 2 zkEVMs, covering both privacy and scaling. We have been open source from day one. We have been deeply involved in the development and evolution of Ethereum's zkEVM: we have led about half of the development and we are part of that team, so we know how long it will take for the whole system to really be ready. That is why we took this approach of preparing the product and reaching out to the community first, and then thinking about how to advance Ethereum's ultimate goal.


[Matt Finestone]


Those were two good answers. Taiko's and Scroll's approaches are closer to each other, and we have not introduced a new compiler either. I like what Alex said: what is the alternative, in the context of blockchain security? I think we can all agree that Ethereum is probably the gold standard. We follow the Yellow Paper and reuse Ethereum rather than tweaking its components, even the Ethereum components outside the EVM, such as data storage structures, which have been proven in practice.


Of course, there are always trade-offs. Alex talked about a billion users, low cost, and scaling while preserving value. We may sacrifice more in proving cost, but we stick with the battle-tested EVM and Ethereum standards. We have also weighed the considerations of pragmatism and fast time-to-market that Ye Zhang mentioned.

In a ZK context, some things are not easy to implement, such as certain hash functions or data storage structures. We do not change these things because we are not sure how well the alternatives would work, like replacing the Merkle Patricia Trie with a Verkle Tree, even though that is on Ethereum's own roadmap. We are more confident in the tried and battle-tested components. The complexity of our system is not in trying to reinvent Ethereum's EVM and other components, but in proving a fully EVM-equivalent system in ZK. That takes longer to complete, and it will take us longer than Scroll, which makes some trade-offs to reach usability sooner, but we believe our implementation path is more secure.


[Mikhail Komarov]


Ethereum is battle-tested; reusing all these systems reduces new assumptions. But there are several other security issues that few people really think about, and our goal is to solve them. First, you have to trust the compiler. Another problem is that if you want full EVM compatibility, such as Type 1 EVM compatibility, you need to manually re-implement every EVM opcode as a circuit, working out what it should look like as an expression over a particular field. It is a manual process, and it is very complex and error-prone. We have done this ourselves and messed up the circuits, so we know how bad it is.


To avoid repeating these problems, and to keep anyone else from making the same mistakes, we are working to eliminate this security assumption by letting people build the circuits from EVM implementations that have already been battle-tested, rather than re-implementing every opcode by hand. The goal is to compile it with the LLVM compiler instead of re-implementing it manually, with minimal security assumptions. This is another security assumption that needs to be removed, and we will address it for zkEVM.
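As a toy illustration of why hand-written opcode circuits are error-prone, the snippet below encodes an 8-bit ADD as a field constraint. The arithmetic constraint alone accepts a bogus witness; only the easily-forgotten range checks reject it. The field, constraint shape, and values are illustrative, not Nil Foundation's circuit format.

```python
P = 2**61 - 1   # an arbitrary prime standing in for the proof system's field modulus

def add_constraint_holds(a: int, b: int, c: int, carry: int) -> bool:
    """Arithmetic constraint for 8-bit ADD: a + b = c + 256*carry (mod P)."""
    return (a + b - c - 256 * carry) % P == 0

def range_checks_hold(c: int, carry: int) -> bool:
    """The extra constraints that are easy to forget when hand-writing the circuit."""
    return 0 <= c < 256 and carry in (0, 1)

# Honest witness for 200 + 100: result 44 with carry 1.
print(add_constraint_holds(200, 100, 44, 1), range_checks_hold(44, 1))      # True True
# Malicious witness claiming 200 + 100 = 300 with carry 0: the arithmetic
# constraint alone is satisfied; only the range check rejects it.
print(add_constraint_holds(200, 100, 300, 0), range_checks_hold(300, 0))    # True False
```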


[Brian R]


You can run geth on a RISC-V system to solve the problem Mikhail describes. We actually just added Go support. We built and designed the RISC Zero zkVM, and we chose the RISC-V instruction set in part because it is formalized and lightweight. The security boundaries of RISC-V circuits are well defined, and considerable work has gone into formal verification methods to show that an implementation conforms to the RISC-V specification. We focused on making sure the cryptography in this simple system was correct, and then running an EVM on top of it actually works. Of course, there is a performance penalty with this approach; for example, proving an ERC-20 token transfer takes about a minute.


Q3: As Alex just mentioned, any part of the system may need to be upgraded or swapped for another solution. So how do you make sure your system is upgradable, and in a very secure way?


[Brian R]


Yes, I think upgradability is a very important topic in ZK. From our perspective, we spent a lot of time making sure we were building the right abstractions in the technology stack before deploying a network with a lot of economic value behind it. We can switch hash functions, switch finite fields and proof systems, or add new techniques, such as PLONK, to the stack. This is another reason we chose RISC-V as the main instruction set to support: it is a very clean, abstract system in its own right, so you can replace almost anything at will. LLVM obviously has very similar characteristics.
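A minimal sketch of the kind of abstraction described here: the stack is written against an interface (here, a hash function) so a component can be swapped without touching the rest. The interface and names are hypothetical, not RISC Zero's actual code.

```python
from typing import Protocol
import hashlib

class Hasher(Protocol):
    def digest(self, data: bytes) -> bytes: ...

class Sha256Hasher:
    def digest(self, data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

class Blake2Hasher:                         # a drop-in replacement
    def digest(self, data: bytes) -> bytes:
        return hashlib.blake2b(data, digest_size=32).digest()

def commit_trace(trace: list[bytes], hasher: Hasher) -> bytes:
    """Commit to an execution trace; nothing here names a concrete hash function."""
    acc = b"\x00" * 32
    for step in trace:
        acc = hasher.digest(acc + step)
    return acc

print(commit_trace([b"step1", b"step2"], Sha256Hasher()).hex())
print(commit_trace([b"step1", b"step2"], Blake2Hasher()).hex())
```

The same pattern applies to the proof system or the finite field: as long as callers depend only on the interface, the component behind it can be upgraded.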


[Matt Finestone]


Yes, upgradability is a big topic, and we can think of it as a problem of trust-minimizing systems. The deployed implementation of the system may be flawed, putting users at risk, or we may be trusting that the people who built the system, or certain privileged actors, will not swoop in, and so on. Upgradability is about finding a balance, at some level, between security and trustlessness. As confidence in the system grows, you can remove some of these trusted actors. We should be very wary of trusted actors here, because eliminating them is the whole point of what we are doing. For these very complex systems, it is best to have the ability to intervene early on. I think Alex and the Matter Labs team have set some good examples in this regard; they have a good Security Council and time-delay mechanism.


So what is the right pace for upgrading? This is a very important question, and I don't know whether more users would feel comfortable with a completely trustless system, which is often very complex and introduces a lot of new things, or with trusting these well-meaning actors. It is a very human issue, though there are certainly technical solutions, such as multi-proofs, that might be a good option. I think it is possible to reuse some designs from components similar to Optimism's. If our validity proof is in question, then reusing Optimistic Rollups' implementation makes it easier to build a fraud-proof system suited to an Ethereum-equivalent environment. You can mix and match fraud proofs and validity proofs, and if there is any disagreement, upgradability or some type of governance scheme can override it.
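A toy state machine for the "mix and match" idea above: a block finalizes on a validity proof, but a well-formed challenge during a dispute window pauses the rollup for governance to resolve instead of letting either proof system silently win. Purely illustrative, not any project's contract logic.

```python
from enum import Enum

class Status(Enum):
    PENDING = 1
    FINALIZED = 2
    PAUSED = 3

class Block:
    def __init__(self, number: int):
        self.number, self.status = number, Status.PENDING

    def submit_validity_proof(self, proof_ok: bool) -> None:
        if self.status is Status.PENDING and proof_ok:
            self.status = Status.FINALIZED

    def submit_challenge(self) -> None:
        # Any objection inside the dispute window freezes the block for escalation.
        if self.status is not Status.PAUSED:
            self.status = Status.PAUSED

b = Block(42)
b.submit_validity_proof(proof_ok=True)
b.submit_challenge()
print(b.status)   # Status.PAUSED -> escalate to the upgrade/governance path
```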


[Mikhail Komarov]


Let me think; I've just spent some time thinking about this. I'm worried that I don't understand the problem, because I want to ask: where is the upgrade problem? We just need to rebuild the circuit. So what are the upgrade issues?


[Ye Zhang]


From our point of view, first of all, you definitely can't just compile a new circuit, because it affects your proving key, verification key, and many on-chain smart contracts, so you definitely can't do it very often. We are considering multi-proofs, adding mechanisms such as double verification. There are a number of ways to approach this. Unlike what Matt mentioned, we won't consider pairing it directly with Optimistic fraud proofs, because that would make the final confirmation time much longer. We are exploring a number of other approaches and will soon have some proposals on Ethereum research forums about how to add extra guarantees.


For example, Justin Drake has proposed using Intel SGX (a TEE environment) as an additional, strictly-additive security guarantee. Beyond that, there may be other forms of governance; we think security councils and time delays are good ones, and we have been thinking about those too. This is a trade-off, and I believe most Rollups will still take quite a while to really move past this upgradability problem, because upgrading a system is a long-term matter. We are paying careful attention to and studying this issue.


[Alex Gluchowski]


I can give some background on why upgradability is an important issue. For any program running on your desktop, you just download the new version and install it, right? So what is wrong with upgrading? The problem is that in the context of blockchain we are trying to build trustless systems, and in some cases the need to upgrade can undermine that trustlessness. For Layer1 there is no such problem: if you want to upgrade Ethereum, you just download the new client, install it, and everyone coordinates the fork.


Then we schedule a fork: set a date, fork at a certain block number, so anyone who doesn't like the upgrade can stay on the old branch running the old version. This upgrade path is completely trustless; it does not make you dependent on any honest majority or any trusted participant.


The problem arises in the Layer2 context. If we build a Rollup, the Rollup relies on a smart contract on Layer1. That smart contract may be immutable, with fixed functions and the verification keys for certain circuits baked in, and the problem is that if there are bugs, there is nothing you can do about it. So what do you do when you face a bug, or when you want to fix one?


We disclosed a bug in zkSync 1.0 (zkSync Lite), which has an upgradable time lock: the team can propose a new version as an upgrade, and if users don't like the new version, they all have a few weeks to exit their assets to Layer1. We have a trustless mechanism to implement that exit. But because we were bound by this time lock, we couldn't fix the bug quickly, so we came up with a compromise and introduced what we call the Security Council, an independent committee. We invited 15 well-known members of the Ethereum community, from different communities and different projects, to join it.


The team does not control the contract and can only propose an upgrade plan; the Security Council makes the decision and can choose to accelerate the upgrade. But this is still not the best option, since there is still a group of people who could, in theory, install a malicious version in the meantime. Maybe they don't want to, but maybe they could be coerced by certain actors, and we can't rule that out. Therefore, if we want to take full advantage of zero-knowledge proofs and rely only on mathematics and open source rather than on any trusted party or verifier, we need to end up with a fully trustless mechanism.
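A minimal sketch of the upgrade flow described above: the team can only propose behind a time lock, users get an exit window, and the Security Council can fast-track. It is simplified far beyond the real zkSync contracts, and the window length and quorum are hypothetical.

```python
import time

EXIT_WINDOW = 14 * 24 * 3600          # assumed: two weeks for users to exit to Layer1

class UpgradeGate:
    def __init__(self, council_quorum: int):
        self.council_quorum = council_quorum
        self.pending: tuple[str, float] | None = None   # (new_version, earliest_execution_time)
        self.approvals = 0

    def propose(self, new_version: str, now: float) -> None:
        # The team can only schedule an upgrade; it cannot apply it immediately.
        self.pending = (new_version, now + EXIT_WINDOW)
        self.approvals = 0

    def council_approve(self) -> None:
        self.approvals += 1

    def execute(self, now: float) -> str | None:
        if self.pending is None:
            return None
        version, eta = self.pending
        accelerated = self.approvals >= self.council_quorum
        if accelerated or now >= eta:   # either the delay elapsed or the council fast-tracked
            self.pending = None
            return version
        return None

gate = UpgradeGate(council_quorum=9)    # e.g. 9 of 15 council members (illustrative threshold)
gate.propose("v2.0-bugfix", now=time.time())
for _ in range(9):
    gate.council_approve()
print(gate.execute(now=time.time()))    # the council accelerates the emergency fix
```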


We are currently thinking about a better solution, where the team makes a time-locked upgrade proposal and the Security Council can step in and propose freezing the smart contract, followed by a soft fork on Layer1. This needs to be coordinated with Layer1, which means the Layer2 protocol must have enough scale to be significant enough for the community to actually fork, install new versions, and so on. Layer1 can't do this for every small protocol; it has to be a protocol as important as system-level things on Ethereum.


This is the best mechanism we currently have for strengthening a trustless, scalable scheme and protecting against serious vulnerabilities at the first line of defense. But it still introduces a liveness issue: if this happens, the protocol will be suspended for a while. Imagine we have switched from Visa and PayPal to using these big Rollups for blockchain payments, and suddenly user assets are frozen, no one can make payments, and we need a few days to coordinate the upgrade. That is a huge problem. We don't have a better solution right now, and we don't see one. If you have an idea, please contact us and let's explore it further.


Q4: One key word that has been mentioned a lot is "trustless." As we know, the most important components of current systems are still centralized. What security challenges will we face as we evolve from centralization to decentralization?


[Alex Gluchowski]


I think this (trustlessness) will enhance security; it gives us an additional layer of protection. First, a ZK Rollup must provide a validity proof for each block, but that can have problems; for example, maybe we forgot some constraints. On top of that, we also require signatures through the proof-of-stake consensus mechanism, which is an additional layer of protection, because to compromise the system a malicious attacker must first find the vulnerability and then collude with the majority of these validators to do evil.


This is relatively unlikely, because the attacker would either already control this blockchain or would have to purchase a large number of tokens, which buys us plenty of time; in the meantime, someone else might find the same vulnerability and submit it to Immunefi or somewhere else where the team can fix it. Or maybe we will run some honeypots at the same time, which are completely open so that anyone can hack them and get rewarded for it. So, in general, this gives the whole system two factors of protection, and we can add more factors on top of that.
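The two-factor rule described here can be written down in a few lines: a block is accepted only if the validity proof verifies and a supermajority of stake has signed it, so an attacker must defeat both factors. The threshold and function names are illustrative.

```python
def accept_block(proof_valid: bool, signed_stake: float, total_stake: float,
                 threshold: float = 2 / 3) -> bool:
    """Toy acceptance rule: validity proof AND a stake supermajority are both required."""
    return proof_valid and signed_stake >= threshold * total_stake

print(accept_block(True, 70, 100))    # True: proof verifies and 70% of stake signed
print(accept_block(True, 50, 100))    # False: a proof alone is not enough without the stake factor
print(accept_block(False, 90, 100))   # False: stake alone cannot override a failing proof
```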


So far, I'm not convinced that a ZK Rollup claiming to be completely trustless is safe. To me, that would be extremely risky. I would not put more assets on such a ZK Rollup than I can afford to lose.


My favorite example is the Boeing 737 MAX. The cause of that accident was not the software problem they tried to use to divert public attention; it was that they relied on a single sensor on the plane, which was totally irresponsible. The aviation industry has a long history and many iterations of technology, and there is a consensus that you cannot rely on a single system. But because they sacrificed safety in the system design during the production of the Boeing 737 MAX for various reasons (such as cost and delivery time), the accident happened. Therefore, we always want at least two completely independent safety factors to reduce the probability of failure.


[Ye Zhang]


We think about ZK Rollup decentralization with a long-term roadmap in mind. Whether to decentralize the Sequencer or the Prover first, and even how to define decentralization for a ZK Rollup, everyone has their own ideas. I think ultimately we will decentralize both the Sequencer and the Prover, but our priorities are slightly different: we want to decentralize the Prover first. Security is definitely one of the big reasons. If the Sequencer were decentralized first, then before the zkEVM becomes very mature and robust, if someone really found a loophole and submitted a false proof, it could be accepted by a Sequencer and included in a block, causing damage to the system.


Therefore, we will keep a centralized Sequencer at first. A zkEVM is vulnerable because it is a very complex system, so, at least in the early stages, we want to control sequencing centrally, at minimum to ensure correct and efficient block production.


Another reason to decentralize the Prover first is that many hardware companies are looking for ways to make zkEVMs more efficient. If we commit to decentralizing the Prover, they will get involved in optimizing the system's code. We all know that ZK ASICs may take more than a year to arrive, and if we decentralize the Prover first, they will be more motivated to build on our system and make it more efficient. Decentralizing the Sequencer is something we plan to do later.


A more complicated factor has to be taken into account here: if Provers and Sequencers are split into two different groups, the incentive scheme must be designed very carefully; for example, the proportion of rewards distributed to the two sides should be reasonable enough to balance their incentives.
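As a toy version of that incentive question, the snippet below splits a block's fee between the Prover and the Sequencer. The split ratio is a made-up parameter, not Scroll's design; the point is only that it is a tunable knob that has to be balanced.

```python
def split_reward(block_fee: float, prover_share: float = 0.6) -> tuple[float, float]:
    """Return (prover_reward, sequencer_reward) for one block; the ratio is the design choice."""
    assert 0.0 <= prover_share <= 1.0
    prover_reward = block_fee * prover_share
    return prover_reward, block_fee - prover_reward

print(split_reward(1.0))          # (0.6, 0.4)
print(split_reward(1.0, 0.8))     # push more toward provers if proving hardware is the scarce resource
```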


In addition, we have other security measures. For example, we build in the open: everything is open source, and we do internal security audits, not just external ones. We have a very strong security team. We provide funding to encourage more people to participate in building security tooling, such as formal verification. Our team has also found vulnerabilities in the Consensys zkEVM and in Aztec's circuits. We are trying to improve the security of the whole ecosystem.


[Matt Finestone]


Taiko may face this challenge sooner. Although everyone decentralizes to some degree, we are actually planning to stay aligned with Ethereum, keeping the EVM, the gas schedule, the state trees, and the ethos, and to have decentralized (what we call) Proposers, i.e. Sequencers, as well as Provers, in mind from the start. In our first testnet a few months ago, about 2,000 individuals or addresses proposed blocks permissionlessly. Some of those blocks may have been malicious, but that is also the promise of decentralization. We are not pursuing incremental decentralization, perhaps only incremental efficiency gains, because you do have to give up some efficiency: Proposers are likely to build the same blocks, resulting in some transaction redundancy, while also paying ETH for valuable Layer1 block space. Some will get a refund; others will be skipped.


It is not realistic for us to implement full decentralization immediately in our next upcoming testnet. Permissionless Provers are harder in a testnet environment because of Sybil attacks: people can fill proposed blocks with spam, and Provers would have to spend real computing resources to prove them with no actual revenue.


So it is important that, for now, we use a permissioned Proposer while letting any decentralized Prover prove blocks and get rewarded accordingly. In addition, if the system fails and a Prover submits a validity proof while an inconsistent proof is submitted at the same time, the smart contract can detect it and pause. It will ask: why are there two "correct" validity proofs for different blocks? That situation is immediately paused, causing a time delay. As Alex said, we are not comfortable with a completely permissionless, trust-free implementation right now, and we are trying to strike a balance.
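A toy version of that halt rule: if two valid-looking proofs assert different state roots for the same block, the contract cannot tell which prover is honest, so it pauses rather than pick one. Illustrative only, not Taiko's actual contract logic.

```python
class RollupContract:
    def __init__(self):
        self.proven: dict[int, str] = {}   # block number -> claimed state root
        self.paused = False

    def submit_proof(self, block_number: int, state_root: str, proof_verifies: bool) -> None:
        if self.paused or not proof_verifies:
            return
        seen = self.proven.get(block_number)
        if seen is not None and seen != state_root:
            # Two "correct" validity proofs for conflicting states: a circuit bug is suspected.
            self.paused = True
            return
        self.proven[block_number] = state_root

c = RollupContract()
c.submit_proof(1, "0xaaa", proof_verifies=True)
c.submit_proof(1, "0xbbb", proof_verifies=True)
print(c.paused)   # True -> the time-delayed intervention described above
```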


[Mikhail Komarov]


We have considered this problem from the very beginning. Some projects take a top-down approach: decide to build a Rollup first, then think about decentralized sequencing, decentralizing the Sequencer first and the Prover after that, layer by layer. We take the opposite approach and solve the problem from the bottom up.

We started by building a decentralized Prover network to pool computing power permissionlessly. Then we try to add a Sequencer on top of the Prover network, because the Sequencer must be tightly integrated with the Prover network, especially a mature decentralized one. There are issues involved, such as the extra cost of proving and the complexity of communication, so the Sequencer must work closely with the Prover network to be effective. The system we are developing can serve as underlying infrastructure for ZK Rollups.


To ensure that all proof generation has incentives to be fast, high-quality, and secure, we introduced a Proof Market to manage the generation and ordering of all proofs, while keeping the system decentralized and permissionless. This approach addresses the problem from the bottom up rather than from the top down.


[Brian R]


I think the approach we take is very different from other networks. It is similar to the Proof Market the Nil team is building, but we do it in a way that requires much less trust. We are not focused on sequencing at the moment but on the proving system, making it more robust across a wide range of operations. This simplifies a lot of complexity and helps bring as much computation to market as quickly as possible.


We want to lower the barrier for developers to build any application they want on Ethereum or any other system, with this decentralized base layer of computation secured by zero-knowledge proofs to ensure the computations are correct.




Q5 (Audience): Algorand has a technique called State Proofs. The basic idea is to take state from one consensus blockchain and prove it onto another consensus blockchain. This technique is more like a cross-chain scheme, and it also uses zero-knowledge proofs. In Layer2, the system's consensus actually depends on the consensus of Layer1. Does this make Layer2 less secure?


[Alex Gluchowski]


In ZK Rollup implementations, asset flows between Layer1 and Layer2 are completely trustless, and Layer2 fully inherits the security of Layer1. As for transfers of assets between Layer2s: if you are using a native bridge through Ethereum Layer1, that is also completely trustless. However, if you do not go through Layer1, the security depends on how the cross-chain bridge is implemented.


In zkSync, we are implementing a solution called Hyperchain. Specifically, we will build multiple chains driven by the same circuits, all still bridged through Ethereum. Hyperchain will offer frictionless, completely trustless, very cheap transactions from any chain to any other chain. This is very important when we talk about bringing hundreds of millions or even billions of users onto the blockchain.


In the future, we won't be able to run trillions of transactions on a single system or a single consensus. They will have to run on many different consensus systems, such as shards, separate application chains, and so on. But at the same time we need to keep these different chains connected and communicating at low cost.


For example, just as we use different e-mail systems today yet users can easily communicate across them, that is what we hope to achieve with Hyperchain. Hyperchain fully inherits Layer1's security while enabling efficient, trustless cross-chain communication, and thanks to recursive proofs it can be used at very low cost.
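A toy picture of the recursive-proof idea behind this kind of bridging: each chain produces its own block proof, the proofs are aggregated into one recursive proof, and Layer1 verifies only that single proof. Hashing stands in for real SNARK recursion here; the chain names and functions are invented for illustration.

```python
import hashlib

def prove_block(chain_id: str, block_root: str) -> str:
    # Stand-in for a per-chain validity proof.
    return hashlib.sha256(f"{chain_id}:{block_root}".encode()).hexdigest()

def aggregate(proofs: list[str]) -> str:
    # Stand-in for a recursive proof attesting "all inner proofs verify".
    acc = hashlib.sha256()
    for p in sorted(proofs):
        acc.update(bytes.fromhex(p))
    return acc.hexdigest()

chain_proofs = [prove_block("hyperchain-A", "0x01"), prove_block("hyperchain-B", "0x02")]
l1_submission = aggregate(chain_proofs)
print(l1_submission)   # Layer1 verifies one proof regardless of how many chains contributed
```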


Original link


