Dedicated ZK or general ZK, which one is the future?

24-08-04 20:00
Original author: mo
Original translation: Luffy, Foresight News


Specialization or generalization: which is the future of ZK? Let me try to answer this question with a picture:



[Figure: the trade-off between performance and generality for specialized vs. general-purpose ZK, with a Pareto frontier drawn as a green dashed line]

As the figure shows, could we eventually converge on some magical optimal point in this trade-off space?


No. The future of off-chain verifiable computation is a continuous curve that blurs the line between specialized and general-purpose ZK. Let me explain how these terms have evolved historically and how they will converge going forward.


Two years ago, "dedicated" ZK infrastructure meant low-level circuit frameworks such as circom, Halo2, and arkworks. ZK applications built with these frameworks are essentially handwritten ZK circuits: fast and cheap for their specific task, but generally difficult to develop and maintain. They are similar to the application-specific integrated circuits (physical silicon chips) in today's IC (integrated circuit) industry, such as NAND chips and controller chips.


However, over the past two years, dedicated ZK infrastructure has gradually become more "general purpose".


We now have ZKML, ZK coprocessor, and ZKSQL frameworks that provide easy-to-use, highly programmable SDKs for building different categories of ZK applications without writing a single line of circuit code. For example, a ZK coprocessor allows smart contracts to trustlessly access historical blockchain states, events, and transactions, and to run arbitrary computations over that data. ZKML allows smart contracts to trustlessly consume AI inference results from a wide range of machine learning models.
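To make that concrete, here is a minimal sketch of what calling a ZK coprocessor might look like from the application side. The `HistoricalQuery`, `Coprocessor`, and `Proof` types are hypothetical stand-ins rather than any real SDK's API; the point is only that the developer writes ordinary business logic while the framework handles data access, circuits, and proving.

```rust
// Hypothetical sketch of a ZK-coprocessor SDK call; none of these types
// correspond to a real framework's API.

#[allow(dead_code)]
struct HistoricalQuery {
    contract: String, // address of the contract whose events we want to read
    from_block: u64,
    to_block: u64,
}

struct Proof {
    result: u128,   // the claimed computation result
    bytes: Vec<u8>, // opaque proof data that a verifier contract would check
}

struct Coprocessor;

impl Coprocessor {
    // Run a user-supplied computation over (mocked) historical event values and
    // return the result together with a placeholder "proof".
    fn prove<F: Fn(&[u128]) -> u128>(&self, _q: &HistoricalQuery, compute: F) -> Proof {
        let events: Vec<u128> = vec![10, 25, 7]; // stand-in for fetched on-chain data
        let result = compute(&events[..]);
        Proof { result, bytes: vec![0u8; 32] } // placeholder proof bytes
    }
}

fn main() {
    let query = HistoricalQuery {
        contract: "0xPool".to_string(),
        from_block: 19_000_000,
        to_block: 19_100_000,
    };
    // The business logic stays ordinary code, e.g. a sum over historical swap volumes.
    let coprocessor = Coprocessor;
    let proof = coprocessor.prove(&query, |events| events.iter().sum());
    println!("claimed result: {}, proof size: {} bytes", proof.result, proof.bytes.len());
}
```

The appeal of this model is that the abstraction stays thin: the SDK maps a narrow class of queries and computations onto hand-optimized circuits, which is where the "programmable domain expert" performance comes from.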


These evolving frameworks have significantly improved programmability in their target domains while maintaining high performance and low cost, thanks to a thin abstraction layer (SDK/API) over circuits that stay close to bare metal.


They are analogous to GPUs, TPUs, and FPGAs in the IC market: they are programmable domain experts.


ZKVMs have also made great progress in the past two years. Notably, all general-purpose ZKVMs are built on top of these low-level, specialized ZK frameworks. The idea is that you write ZK applications in a high-level language (even more user-friendly than an SDK/API), and they are compiled down to a combination of specialized circuits and an instruction set (RISC-V, or something WASM-like). They are like the CPU chips of the IC industry.
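As a toy illustration of that idea, the sketch below uses a made-up three-instruction stack machine: the high-level program is lowered to instructions, the prover records an execution trace, and in a real ZKVM a proof system would then constrain every row of that trace. Neither the instruction set nor the trace layout corresponds to any actual ZKVM.

```rust
// Toy zkVM sketch: execute a tiny stack-machine program and record the trace
// that a real proof system would constrain. Purely illustrative.

enum Instr {
    Push(u64),
    Add,
    Mul,
}

// One row of the execution trace: (program counter, stack snapshot).
// A real zkVM commits to each row and proves consecutive rows follow the ISA rules.
type TraceRow = (usize, Vec<u64>);

fn execute(program: &[Instr]) -> (u64, Vec<TraceRow>) {
    let mut stack: Vec<u64> = Vec::new();
    let mut trace: Vec<TraceRow> = Vec::new();
    for (pc, instr) in program.iter().enumerate() {
        match instr {
            Instr::Push(v) => stack.push(*v),
            Instr::Add => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a + b);
            }
            Instr::Mul => {
                let (b, a) = (stack.pop().unwrap(), stack.pop().unwrap());
                stack.push(a * b);
            }
        }
        trace.push((pc, stack.clone()));
    }
    (*stack.last().unwrap(), trace)
}

fn main() {
    // "(3 + 4) * 5" as it might look after compilation to the VM's instruction set.
    let program = [Instr::Push(3), Instr::Push(4), Instr::Add, Instr::Push(5), Instr::Mul];
    let (result, trace) = execute(&program);
    println!("result = {result}, trace rows to be proven: {}", trace.len());
}
```

The generality comes from the fact that any program expressible in the instruction set can be proven; the cost is that every single instruction becomes rows in the trace, which is exactly the performance-versus-generality trade-off discussed next.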


Like the ZK coprocessor and its peers, a ZKVM is an abstraction layer on top of the low-level ZK frameworks.


As a wise man once said, any problem in computer science can be solved with another layer of abstraction, but every new layer also creates a problem of its own. Trade-offs are the key: fundamentally, with a ZKVM we are trading performance for generality.


Two years ago, the “bare metal” performance of ZKVM was really bad. However, in just two years, ZKVM’s performance has improved dramatically.


Why?


Because these "general-purpose" ZKVMs have become more "specialized." A key driver of the performance gains is precompiles: specialized ZK circuits for commonly used, heavyweight operations such as SHA2 and various signature verifications, which run much faster than the default path of breaking those operations down into fragments of instruction circuits.
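A back-of-the-envelope sketch of why this matters appears below. The numbers are invented placeholders and do not reflect any real ZKVM's costs; they only capture the shape of the argument: routing a hash or signature check to a dedicated circuit produces orders of magnitude fewer trace rows than emulating it instruction by instruction.

```rust
// Illustrative cost model: the same operation can be lowered to generic instruction
// rows or handled by a dedicated precompile circuit. All numbers are made up.

enum Syscall {
    Sha256 { input_len: usize },
    EcdsaVerify,
    Other { instr_count: u64 },
}

// Rough "rows in the execution trace" estimate, with and without precompiles.
fn cost(call: &Syscall, precompiles_enabled: bool) -> u64 {
    match call {
        Syscall::Sha256 { input_len } if precompiles_enabled => {
            // One specialized SHA-256 circuit invocation per 64-byte block.
            (*input_len as u64 / 64 + 1) * 300
        }
        Syscall::Sha256 { input_len } => {
            // Decomposed into generic RISC-style instruction rows.
            (*input_len as u64 / 64 + 1) * 30_000
        }
        Syscall::EcdsaVerify if precompiles_enabled => 5_000,
        Syscall::EcdsaVerify => 500_000,
        Syscall::Other { instr_count } => *instr_count,
    }
}

fn main() {
    let workload = [
        Syscall::Sha256 { input_len: 1024 },
        Syscall::EcdsaVerify,
        Syscall::Other { instr_count: 10_000 },
    ];
    let without: u64 = workload.iter().map(|c| cost(c, false)).sum();
    let with_precompiles: u64 = workload.iter().map(|c| cost(c, true)).sum();
    println!("trace rows without precompiles: {without}, with precompiles: {with_precompiles}");
}
```

In other words, a "general-purpose" ZKVM gets fast precisely by bolting specialized circuits onto its hot paths.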


So the trend is now very clear.


Dedicated ZK infrastructure is becoming more general purpose, while general purpose ZKVM is becoming more specialized.


Over the past few years, optimizations on both sides have achieved better trade-offs than before: improving along one axis without sacrificing the other. This is why both camps feel that "we are definitely the future."


However, computer science wisdom tells us that at some point we will hit a Pareto-optimal wall (the green dashed line in the figure above), where we can no longer improve one dimension without sacrificing the other.


So the million-dollar question arises: will one technology completely replace the other in due course?


To put this in context, consider the IC industry: CPUs are a $126 billion market, while the entire IC industry, including all the "specialized" ICs, is worth $515 billion. I am fairly sure history will repeat itself here at the micro level: neither will replace the other.


That said, nobody today says, "Hey, I'm using a computer powered entirely by a general-purpose CPU," or "Hey, this is a fancy robot powered by specialized ICs."


So yes, we should look at this from a macro perspective: what lies ahead is a trade-off curve that gives developers the flexibility to choose based on their needs.


In the future, dedicated ZK infrastructure and general-purpose ZKVMs can work together, and this can take many forms. The simplest form is already possible today. For example, you might use a ZK coprocessor to generate results over blockchain transaction history, while the business logic on top of that data is too complex to express purely through the SDK/API.


What you can do is obtain high-performance, low-cost ZK proofs of the data and the intermediate results, and then fold them into a general-purpose VM through proof recursion, as the sketch below illustrates.
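Here is a minimal sketch of that composition, using purely hypothetical types: specialized provers emit cheap proofs whose public outputs are handed to a general-purpose VM program, which (recursively) verifies the inner proofs and then runs the complex business logic over their outputs. The `verify` stub below stands in for real recursive proof verification.

```rust
// Sketch of composing specialized proofs with a general-purpose VM via recursion.
// All types are illustrative; `verify` is a placeholder for real proof verification.

struct InnerProof {
    public_output: u64, // e.g. an aggregate computed by a ZK coprocessor
    bytes: Vec<u8>,     // opaque proof data
}

// Stand-in for recursively verifying an inner proof inside the VM guest program.
fn verify(proof: &InnerProof) -> bool {
    !proof.bytes.is_empty() // placeholder check only
}

// The part that is awkward to express in a coprocessor SDK: arbitrary business
// logic over the already-proven intermediate results, run inside the general VM.
fn business_logic(outputs: &[u64]) -> u64 {
    outputs.iter().map(|v| v.saturating_mul(3) / 2).max().unwrap_or(0)
}

fn main() {
    let inner_proofs = vec![
        InnerProof { public_output: 42, bytes: vec![1; 64] },
        InnerProof { public_output: 17, bytes: vec![2; 64] },
    ];
    // Inside the VM: reject if any specialized proof fails, then compute the result.
    assert!(inner_proofs.iter().all(verify));
    let outputs: Vec<u64> = inner_proofs.iter().map(|p| p.public_output).collect();
    let final_result = business_logic(&outputs);
    println!("final result to post on-chain: {final_result}");
}
```

One proof then attests to both the specialized data computation and the general-purpose logic, so the on-chain verifier only needs to check a single succinct proof.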



While I find this debate interesting, I know we are all building an asynchronous computation future for blockchains, driven by off-chain verifiable computation. As use cases with large-scale user adoption emerge over the next few years, I believe this debate will finally be settled.

