Galaxy: Projects at the intersection of crypto and artificial intelligence

2024-02-18 13:59
Understanding artificial intelligence from a cryptographic perspective.
Original Title: "Understanding the Intersection of Crypto and AI"
Author: Lucas Tcheyan
Translation: Lüdong Xiaogong, BlockBeats

Table of Contents


Introduction

Core Viewpoints
Terminology explanation


Artificial Intelligence + Cryptocurrency Panorama


Decentralized Computing

Overview
Decentralized Computing in Vertical Fields
General Computing
Secondary Market
Decentralized Machine Learning Training
Decentralized General Artificial Intelligence
Building a Decentralized Computing Stack for AI Models
Other Decentralized Products
Outlook

Smart Contracts and Zero-Knowledge Machine Learning (zkML)

Zero-Knowledge Machine Learning (zkML)
Infrastructure and Tools
Co-processor
Applications
Prospects

Artificial Intelligence Agent

Agent Providers
Bitcoin and AI agents
Outlook

Conclusion



Introduction


The emergence of blockchain can be said to be one of the most important advances in the history of computer science. At the same time, the development of artificial intelligence will, and has already had a profound impact on our world. If blockchain technology provides a new template for transaction settlement, data storage, and system design, artificial intelligence is a revolution in computing, analysis, and content production. The innovation of these two industries is unlocking new use cases that may accelerate the application of both in the coming years. This report explores the integration of cryptocurrency and artificial intelligence, focusing on new use cases that attempt to bridge the gap between the two and leverage their strengths. Specifically, this report examines projects related to decentralized computing protocols, zero-knowledge machine learning (zkML) infrastructure, and artificial intelligence agents.


Cryptocurrency provides a permissionless, trustless, and composable settlement layer for artificial intelligence. This unlocks use cases such as making hardware more accessible through decentralized computing systems, building AI agents that can perform complex tasks requiring value exchange, and developing identity and provenance solutions to combat Sybil attacks and deepfakes. AI brings many of the same benefits to cryptocurrency that we have seen in Web 2. This includes enhancing the user and developer experience (UX) through large language models (such as specially trained versions of ChatGPT and Copilot), as well as significantly expanding the potential for smart contract functionality and automation. Blockchains provide the transparent, data-rich environment that AI requires. However, blockchains' limited computing power is the main obstacle to integrating AI models directly.


The intersection of cryptocurrency and artificial intelligence is driving the experiments and ultimate adoption behind many of the most promising use cases for cryptocurrency - a permissionless, trustless coordination layer that facilitates value transfer. Given its immense potential, participants in this space need to understand the fundamental ways in which these two technologies intersect.


Core Viewpoints:


In the near term (six months to a year), the integration of cryptocurrency and artificial intelligence will be dominated by AI applications that improve developer efficiency, the auditability and security of smart contracts, and usability for users. These integrations are not specific to cryptocurrency, but they enhance the experience of on-chain developers and users.


With high-performance GPUs in severe short supply, decentralized computing projects are building GPU offerings tailored to AI workloads, which strengthens the case for their adoption.


User experience and regulation remain obstacles for decentralized computing clients. However, recent developments at OpenAI and ongoing regulatory scrutiny in the United States highlight the value proposition of permissionless, censorship-resistant, decentralized AI networks.


On-chain AI integration, especially smart contracts that can use AI models, requires improvements in zkML technology and other methods for verifying off-chain computation. A lack of comprehensive tooling and developer talent, as well as high costs, are obstacles to adoption.


Artificial intelligence agents are well suited to cryptocurrencies: users (or the agents themselves) can create wallets to transact with other services, agents, or individuals, which is currently impossible through traditional financial rails. Wider adoption will require additional integration with non-cryptocurrency products.


Terminology Explanation:


Artificial Intelligence (AI) refers to the use of computing and machines to imitate human reasoning and problem-solving abilities.


Neural Networks are a training method used for AI models. They process input data through a series of algorithmic layers, continuously optimizing until the desired output is produced. Neural networks are composed of equations with modifiable weights, which can be adjusted to change the output. They may require large amounts of data and computation for training to ensure accurate output. This is one of the most common ways to develop AI models (for example, ChatGPT uses a neural network process based on Transformers).
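
The training loop described above — forward pass, compare the output, adjust the weights, repeat — can be sketched end to end on a toy problem. The network below learns XOR; the architecture, learning rate, and iteration count are arbitrary choices for illustration only.

```python
import numpy as np

# Toy one-hidden-layer network trained on XOR: forward pass, measure error,
# nudge the weights, repeat -- the loop described in the text.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                    # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)         # gradient of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

preds = (out > 0.5).astype(int).ravel()
print(preds.tolist())
```

Production models like ChatGPT run this same loop at vastly larger scale, with billions of weights instead of the few dozen here.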


Training is the process of developing neural networks and other AI models. It requires large amounts of data to teach the model to correctly interpret inputs and produce accurate outputs. During training, the weights of the model's equations are continuously modified until the outputs are satisfactory. Training can be very expensive; ChatGPT, for example, uses tens of thousands of GPUs to process its data. Teams with limited resources often rely on dedicated computing providers such as Amazon Web Services, Azure, and Google Cloud.


Inference is the process of actually using an AI model to obtain an output or result (for example, using ChatGPT to write an outline for a paper on the intersection of cryptocurrency and artificial intelligence). Inference is used both during training and in the final product. Because of computational costs, a model can remain expensive to run even after training is complete, although its computational intensity is lower than during training.


Zero Knowledge Proofs (ZKP) allow for the verification of statements without revealing underlying information. This has two main uses in cryptocurrency: 1) privacy and 2) scalability. For privacy, it enables users to conduct transactions without revealing sensitive information such as how much ETH is in their wallet. For scalability, it allows for faster verification of off-chain computations on the blockchain without having to re-execute the computation. This enables blockchain and applications to run computations off-chain and then verify them on-chain.
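
The idea of proving a statement without revealing the underlying information can be illustrated with a toy Schnorr proof of knowledge, one of the simplest zero-knowledge protocols: the prover shows it knows a secret exponent x with y = g^x mod p, without revealing x. The parameters below are deliberately tiny and insecure; real systems use large, carefully chosen groups.

```python
import hashlib
import random

# Toy Schnorr proof of knowledge (Fiat-Shamir, non-interactive).
# Tiny toy parameters -- NOT secure, for illustration only.
p, g = 1019, 2           # small prime modulus and base
x = 357                  # the prover's secret
y = pow(g, x, p)         # public value the statement is about

# Commit: prover picks a random nonce k and publishes r = g^k mod p.
k = random.randrange(1, p - 1)
r = pow(g, k, p)

# Challenge: derived by hashing the transcript (Fiat-Shamir heuristic).
c = int(hashlib.sha256(f"{g}{y}{r}".encode()).hexdigest(), 16) % (p - 1)

# Response: s = k + c*x mod (p-1); reveals nothing about x on its own.
s = (k + c * x) % (p - 1)

# Verify: g^s must equal r * y^c (mod p), which holds iff the prover knew x.
assert pow(g, s, p) == (r * pow(y, c, p)) % p
print("proof verified without revealing x")
```

The scalability use of ZKPs works the same way at a higher level: instead of knowledge of an exponent, the proof attests that an entire off-chain computation was executed correctly.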


Artificial Intelligence + Cryptocurrency Panorama



In projects where artificial intelligence and cryptocurrency intersect, the necessary infrastructure is still being built to support large-scale on-chain artificial intelligence interactions.


For centralized providers like Google Cloud and Coreweave, computation is expensive while communication between computations (bandwidth and latency) is cheap. These systems are designed to facilitate communication between hardware as quickly as possible. Gensyn disrupts this framework by lowering the cost of computation through enabling anyone in the world to provide GPUs, but increases communication costs as the network must now coordinate computation jobs between decentralized, heterogeneous hardware located in remote locations. Gensyn has not yet been released, but it is a proof-of-concept for building a decentralized machine learning training protocol.


Decentralized General Artificial Intelligence


Decentralized computing platforms have opened up new possibilities for how artificial intelligence is designed. Bittensor is a decentralized computing protocol built on Substrate that seeks to answer the question, "How do we turn artificial intelligence into a collaborative effort?" Bittensor aims to decentralize and commoditize the generation of artificial intelligence. Launched in 2021, the protocol seeks to harness collaborative machine learning models to continuously iterate and produce better AI.


Bittensor draws inspiration from Bitcoin, and its native currency TAO has a supply of 21 million with a four-year halving cycle (the first halving will occur in 2025). Unlike using proof of work to generate correct random numbers and obtain block rewards, Bittensor relies on "Proof of Intelligence", which requires miners to run models that can produce output for inference requests.
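
A 21-million cap with periodic halvings implies a geometric emission schedule whose arithmetic is easy to check: the first epoch must emit half the cap, since a series that halves each epoch sums to twice its first term. The sketch below is an illustration of Bitcoin-style halving arithmetic derived from the cap alone, not Bittensor's actual emission code.

```python
# Geometric emission implied by a 21M cap with a halving every epoch.
# Illustration only -- not the protocol's real emission schedule.
MAX_SUPPLY = 21_000_000
emission = MAX_SUPPLY / 2          # first 4-year epoch emits half the cap

supply = 0.0
for epoch in range(1, 6):
    supply += emission
    print(f"epoch {epoch}: +{emission:>12,.0f} TAO  cumulative {supply:>12,.0f}")
    emission /= 2                  # halving
```

After five 4-year epochs roughly 97% of the cap has been emitted, mirroring Bitcoin's long asymptotic tail.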


Incentive Model


Bittensor initially relied on the Mixture of Experts (MoE) model to generate output. When an inference request is submitted, the MoE model does not rely on a universal model, but instead passes the inference request to the model that is most accurate for the specific input type. This can be likened to hiring various experts to handle different aspects of the construction process (such as architects, engineers, painters, builders, etc.) when building a house. MoE applies this to machine learning models, attempting to utilize the outputs of different models based on the input. As Bittensor founder Ala Shaabana explains, this is like "talking to a group of smart people to get the best answer, rather than talking to one person." Due to the challenges of ensuring proper routing, synchronizing messages to the correct model, and incentivization, this approach has been put on hold until the project is more mature.


In the Bittensor network, there are two main roles: validators and miners. Validators are responsible for sending inference requests to miners, reviewing their outputs, and ranking them based on the quality of their responses. To ensure the reliability of their rankings, validators are assigned a "vtrust" score based on the consistency of their rankings with those of other validators. The higher a validator's vtrust score, the more TAO they can earn. This is designed to encourage validators to reach a consensus on model rankings over time, as the more validators that reach a consensus on model rankings, the higher their individual vtrust scores will be.
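
One way to picture a vtrust-style score is as closeness to a robust consensus of all validators' rankings: validators whose scores track the consensus earn more, and outliers earn less. The snippet below is purely illustrative and is not the actual Yuma consensus formula.

```python
import numpy as np

# Illustrative only -- NOT the real Yuma/vtrust formula. Each validator
# scores each miner; the network takes a robust (median) consensus, and a
# validator's "vtrust" is how closely its scores track that consensus.
scores = np.array([
    [0.9, 0.5, 0.1],   # validator 0
    [0.8, 0.6, 0.2],   # validator 1
    [0.1, 0.2, 0.9],   # validator 2: ranks dishonestly
])
consensus = np.median(scores, axis=0)                  # per-miner consensus
vtrust = 1 - np.abs(scores - consensus).mean(axis=1)   # agreement score
print("consensus:", consensus, "vtrust:", vtrust.round(3))
```

The dishonest validator ends up with the lowest vtrust, so its future rankings carry less weight and it earns less TAO — which is the incentive the text describes.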


Miners, also known as servers, are network participants who run actual machine learning models. Miners compete with each other to provide the most accurate output for a given query, and the more accurate the output, the more TAO issuance they receive. Miners can generate these outputs in any way they want. For example, in a future scenario, a Bittensor miner could train the model in advance on Gensyn and then use it to earn TAO.


Today, most interactions occur directly between validators and miners. Validators submit inputs to miners and request outputs (i.e., model inference). Once validators have queried the miners in the network and received their responses, they rank the miners and submit their rankings to the network.


The interaction between validators (relying on PoS) and miners (relying on Proof of Model, a form of PoW) is called Yuma consensus. It aims to encourage miners to produce the best output to earn TAO issuance, and to encourage validators to accurately rank miner outputs to earn higher vtrust scores and increase their TAO rewards, forming a consensus mechanism for the network.


Subnet and Application


As mentioned earlier, the interaction on Bittensor mainly involves validators submitting requests to miners and evaluating their outputs. However, as the quality of contributing miners improves and the overall growth of artificial intelligence in the network increases, Bittensor will create an application layer on top of its existing stack, allowing developers to build applications that query the Bittensor network.


In October 2023, Bittensor took a significant step towards achieving its goal with the introduction of subnets through its Revolution upgrade. Subnets are independent networks on Bittensor that incentivize specific behaviors. Revolution opens up the network to anyone interested in creating a subnet. In the months since its release, over 32 subnets have been launched, including those for text prompts, data scraping, image generation, and storage. As subnets mature and products become ready, subnet creators will also create application integrations that allow teams to build applications that query specific subnets. Some applications, such as chatbots, image generators, Twitter reply bots, and prediction markets, already exist today, but validators have no formal incentive to accept and forward these queries other than funding from the Bittensor Foundation.


To illustrate more clearly, the image below is an example of what an application integrated with Bittensor might look like in operation.



Subnets earn TAO based on performance as evaluated by the root network. The root network sits on top of all subnets, essentially acting as a special kind of subnet, and is managed by the 64 largest subnet validators. Root network validators rank subnets based on their performance and periodically allocate TAO to them. In this way, each subnet effectively acts as a miner for the root network.
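
Pro-rata emission of the kind described here can be sketched in a few lines: root validators assign weights to subnets and the epoch's TAO is split proportionally. The subnet names, weights, and per-epoch emission below are all hypothetical.

```python
# Pro-rata TAO split across subnets from root-validator weights.
# Subnet names, weights, and epoch emission are hypothetical.
weights = {"text-prompting": 0.45, "image-generation": 0.30, "storage": 0.25}
epoch_emission = 7_200   # hypothetical TAO emitted this epoch

total = sum(weights.values())
allocation = {name: w / total * epoch_emission for name, w in weights.items()}
for name, amount in allocation.items():
    print(f"{name}: {amount:,.1f} TAO")
```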


Bittensor Outlook


Bittensor is still experiencing growing pains as it expands protocol functionality to incentivize intelligent generation across multiple subnets. Miners continue to devise new ways to attack the network in order to earn more TAO rewards, such as submitting multiple variants by slightly modifying the output of their high-rated inference models. Governance proposals that affect the entire network can only be submitted and implemented by the Triumvirate, which is composed entirely of stakeholders of the Opentensor Foundation (note that proposals require approval from Bittensor validators before implementation). The token economics of the project are being improved to enhance incentives for TAO usage in subnets. The project has also gained rapid recognition for its unique approach, with the CEO of the popular artificial intelligence website HuggingFace suggesting that Bittensor should add its resources to the site.


In a recent article titled "Bittensor Paradigm" published by core developers, the team outlined their vision for Bittensor to ultimately evolve into being "agnostic to what is being measured." In theory, this could allow Bittensor to develop subnets incentivizing any type of behavior, all supported by TAO. However, there are still significant practical constraints - most notably, proving that these networks can scale to handle such diverse processes and that the underlying incentive mechanisms drive progress beyond what centralization provides.


Building a Decentralized Computing Stack for AI Models


The above section provides a high-level overview of various types of decentralized artificial intelligence computing protocols that are currently under development. Although they are still in the early stages of development and adoption, they provide a foundation for an ecosystem that could eventually facilitate the creation of "AI building blocks" similar to the concept of "DeFi Lego". The composability of permissionless blockchains opens up the possibility for each protocol to be built upon by others, providing a more comprehensive decentralized artificial intelligence ecosystem.


For example, here is one way that Akash, Gensyn, and Bittensor may interact with each other to respond to inference requests.



It should be noted that this is just an example of what could happen in the future, not a statement about the current ecosystem, existing partners, or potential outcomes. Today, interoperability limitations and other considerations described below greatly restrict integration possibilities. In addition, fragmentation of liquidity and the need to use multiple tokens may have a negative impact on user experience, as pointed out by the founders of Akash and Bittensor.


Other Decentralized Products



Besides computing, there are several other decentralized infrastructure services to support the emerging cryptocurrency AI ecosystem. Listing all of them is beyond the scope of this report, but some interesting and representative examples include:


Ocean: A decentralized data marketplace. Users can create data NFTs that represent their data and use data tokens to purchase them. Users can both monetize their data and have greater sovereignty, while also providing a means for teams engaged in AI development and model training to access the necessary data.


Grass: A decentralized bandwidth marketplace. Users can sell their excess bandwidth to AI companies, which use it to scrape data from the internet. Built on the Wynd Network, the marketplace not only lets individuals monetize their bandwidth but also gives buyers a more diverse view of what individual users see online (because individuals typically access the internet from their own unique IP addresses).


HiveMapper: Building a decentralized mapping product that incorporates information collected from drivers. HiveMapper relies on artificial intelligence to interpret images collected from users' dashboard cameras, and rewards users for helping to refine the AI model through Reinforcement Learning from Human Feedback (RLHF).


Overall, these all point to exploring decentralized market models that support artificial intelligence models, or the almost endless opportunities to support the peripheral infrastructure needed to develop these models. Currently, most of these projects are in the concept verification stage and require further research and development to prove that they can provide comprehensive artificial intelligence services at the required scale.


Outlook


Decentralized computing products are still in the early stages of development, and have only just begun to harness state-of-the-art compute to train powerful AI models in production. To gain meaningful market share, they will need to demonstrate real advantages over centralized alternatives. Potential triggers for wider adoption include:


Other developing solutions, called "co-processors," include RiscZero, Axiom, and Ritual. The term "co-processor" is mostly semantic in nature - these networks serve many different roles, including verifying off-chain calculations on-chain. Like EZKL, Giza, and Modulus, their goal is to fully abstract the zero-knowledge proof generation process, creating a zero-knowledge virtual machine that can essentially execute programs off-chain and generate proofs for on-chain verification. RiscZero and Axiom can handle simple AI models as they are more general-purpose co-processors, while Ritual is specifically built to work with AI models.


Infernet is the first instance of Ritual and includes an Infernet SDK that allows developers to submit inference requests to the network and receive output and proof (optional) upon return. Infernet nodes receive these requests and process computations off-chain before returning output. For example, a DAO can create a process to ensure that all new governance proposals meet certain prerequisites before submission. Each time a new proposal is submitted, the governance contract triggers an inference request through Infernet, calling a governance-trained AI model specific to the DAO. The model reviews the proposal to ensure all necessary conditions are met and returns output and proof to either approve or reject the proposal submission.
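
The DAO workflow just described can be mocked up as a simple request/response flow. Every name here (`Proposal`, `check_proposal`, `submit_inference`, the placeholder proof string) is invented for illustration and is not the actual Infernet SDK API.

```python
from dataclasses import dataclass

# Mock of the DAO flow in the text: a governance contract triggers an
# off-chain inference request and acts on the returned output + proof.
# All names are invented; the "proof" is a stand-in string.
@dataclass
class Proposal:
    title: str
    quorum_reached: bool
    budget_within_cap: bool

def check_proposal(p: Proposal) -> bool:
    """Stand-in for the DAO's governance-trained model."""
    return p.quorum_reached and p.budget_within_cap

def submit_inference(p: Proposal) -> dict:
    """Stand-in for an off-chain inference node: runs the model off-chain,
    returns the output plus a (placeholder) verifiable proof."""
    verdict = check_proposal(p)
    return {"output": verdict, "proof": f"placeholder-proof:{p.title}"}

result = submit_inference(Proposal("Fund grants round", True, True))
print("approved" if result["output"] else "rejected")
```

In the real system the proof, not the raw output, is what the on-chain contract would verify before accepting the verdict.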


Within the next year, the Ritual team plans to launch additional features that make up the infrastructure layer, called Ritual Superchain. Many of the previously discussed projects can be inserted into Ritual as service providers. Currently, the Ritual team has integrated with EZKL for proof generation and may soon add functionality from other leading providers. Infernet nodes on Ritual can also use Akash or io.net GPUs and query models trained on the Bittensor subnet. Their ultimate goal is to become the preferred provider of open AI infrastructure, capable of providing machine learning and other AI-related task services for any workload on any network.


Application


zkML helps reconcile the conflict between blockchain and artificial intelligence. The former is inherently resource-constrained, while the latter requires a lot of computation and data. As one of the founders of Giza said, "The use cases are so rich... It's a bit like asking what are the use cases for smart contracts in the early days of Ethereum... We're expanding the use cases for smart contracts." However, as emphasized earlier, today's development mainly focuses on the tool and infrastructure level. Applications are still in the exploration stage, and teams face the challenge of proving that the value of using zkML to implement models outweighs the complexity and cost of doing so.


Some of today's applications include:


DeFi. zkML expands smart contract functionality and extends DeFi. DeFi protocols give machine learning models a wealth of verifiable, immutable data that can be used for yield or trading strategies, risk analysis, UX, and more. For example, Giza collaborated with Yearn Finance to build a proof-of-concept automated risk assessment engine for Yearn's new v3 vaults. Modulus Labs worked with Lyra Finance to incorporate machine learning into its AMMs, worked with Ion Protocol to validate validator risk models, and helped Upshot verify its AI-based NFT price feeds. Protocols like NOYA (using EZKL) and Mozaic offer proprietary models that let users deposit into automated yield "machine gun pools" while verifying the data inputs and proofs on-chain. Spectral Finance is building an on-chain credit scoring engine to predict the likelihood of default by Compound or Aave borrowers. Thanks to zkML, these so-called "De-AI-Fi" products are likely to become more common in the coming years.
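
As a toy illustration of the on-chain credit scoring idea attributed to Spectral above, a hand-set logistic model can map a borrower's wallet features to a default probability. The features and weights below are invented for illustration, not a real scoring model.

```python
import math

# Invented logistic scoring model: maps on-chain borrower features to a
# default probability. Features and weights are illustrative only.
def default_probability(ltv: float, liquidations: int, wallet_age_days: int) -> float:
    # Higher loan-to-value and past liquidations raise risk; account age lowers it.
    z = 4.0 * ltv + 0.8 * liquidations - 0.002 * wallet_age_days - 2.5
    return 1 / (1 + math.exp(-z))      # squash to a probability in (0, 1)

safe = default_probability(ltv=0.30, liquidations=0, wallet_age_days=900)
risky = default_probability(ltv=0.85, liquidations=3, wallet_age_days=30)
print(f"safe borrower: {safe:.3f}  risky borrower: {risky:.3f}")
```

The zkML piece would then prove on-chain that this exact model, with these exact weights, produced the score — without re-running it inside the contract.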


Gaming. Blockchain has long been seen as poised to disrupt and enhance gaming, and zkML makes on-chain AI gaming possible. Modulus Labs has implemented a proof of concept for a simple on-chain game: "Leela vs the World" is a chess game pitting users against an AI chess model, where zkML verifies that every move Leela makes is produced by the model the game claims to run. Similarly, teams have used the EZKL framework to build simple singing competitions and on-chain tic-tac-toe. Cartridge is using Giza to let teams deploy fully on-chain games, and recently launched a simple AI driving game in which users compete to build better models to steer a car around obstacles. Although simple, these proofs of concept point to a future in which more complex on-chain verification is possible, such as the economic interactions with advanced NPCs in AI Arena, a Super-Mario-like game where players train warriors and then deploy them as AI models to battle.


Identity, provenance, and privacy. Cryptography is already used to verify authenticity and to combat increasingly sophisticated AI-generated or AI-manipulated content and deepfakes, and zkML can advance these efforts. WorldCoin is a proof-of-personhood solution in which users scan their irises to generate a unique ID. In the future, biometric IDs could be self-custodied on personal devices using encryption, with the models needed to verify the biometrics run locally. Users could then present proof of their biometrics without revealing their identity, resisting Sybil attacks while preserving privacy. The same approach applies to other inference needs, such as using models to analyze medical data and images for disease detection, verifying identity and building matching algorithms in dating apps, or for insurers and lenders that need to verify financial information.


Outlook


zkML is still in the experimental stage, and most projects are focused on building infrastructure prototypes and concept verification. The current challenges include computing costs, memory limitations, model complexity, limited tools and infrastructure, and development talent. In short, there is still a lot of work to be done before zkML can achieve the scale required for consumer products.


However, as the field matures and these limitations are addressed, zkML will become a key component of the integration of artificial intelligence and cryptography. At its core, zkML promises to bring off-chain computation of any scale on-chain while maintaining the same or similar security guarantees as on-chain computation. Until that vision is realized, though, early adopters will have to keep weighing zkML's privacy and security against the efficiency of alternative solutions.


Artificial Intelligence Agent


One of the most exciting integrations between artificial intelligence and cryptocurrency is the ongoing experiment with AI agents. Agents are autonomous robots that can receive, interpret, and execute tasks using AI models. Agents can be anything from a personal assistant that is always available and optimized according to your preferences, to a financial agent that manages and adjusts investment portfolios based on user risk preferences.


Agents and crypto pair naturally because crypto provides permissionless, trustless payment infrastructure. Once trained, an agent can be given a wallet, allowing it to transact with smart contracts on its own. For example, today a simple agent can search the internet for information and then trade on prediction markets based on a model.
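
That prediction-market example can be sketched as a minimal agent loop: form a model probability, compare it to the market price, and spend from the agent's wallet when there is an edge. All classes, thresholds, and the stand-in model here are hypothetical.

```python
# Hypothetical agent loop: model probability vs. market price, trade when
# the edge exceeds a threshold. Wallet, model, and numbers are invented.
class Wallet:
    def __init__(self, balance: float):
        self.balance = balance
    def pay(self, amount: float) -> bool:
        if amount > self.balance:
            return False
        self.balance -= amount
        return True

def model_probability(headline: str) -> float:
    """Stand-in for an AI model turning scraped news into a probability."""
    return 0.70 if "approved" in headline.lower() else 0.40

def maybe_trade(wallet: Wallet, market_price: float, headline: str,
                edge: float = 0.10) -> str:
    p = model_probability(headline)
    if p - market_price > edge and wallet.pay(10.0):   # stake 10 units on YES
        return f"bought YES at {market_price:.2f} (model: {p:.2f})"
    return "no trade"

agent_wallet = Wallet(balance=100.0)
trade = maybe_trade(agent_wallet, market_price=0.55, headline="ETF approved")
print(trade)
```

The crypto-specific part is the `Wallet`: on-chain, the agent holds keys and settles trades itself, with no bank account or human sign-off in the loop.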


Agent Providers


Morpheus is one of the newest open-source agent projects, launching on Ethereum and Arbitrum in 2024. Its whitepaper was published anonymously in September 2023, providing a foundation for a community to form and build (including prominent figures such as Erik Voorhees). The whitepaper includes a downloadable smart agent protocol: an open-source LLM that can run locally, is managed by the user's wallet, and interacts with smart contracts. It uses a smart contract ranking to help the agent determine which contracts are safe to interact with, based on criteria such as the number of transactions processed.


The whitepaper also provides a framework for building out the Morpheus network, including the incentive structures and infrastructure needed to make the smart agent protocol operational. This includes incentivizing contributors to build front-end interfaces for interacting with agents, APIs for developers to build applications that plug into agents so they can interact with one another, and cloud solutions that give users the compute and storage required to run an agent on an edge device. Initial funding for the project launched in the first quarter of 2024, with the full protocol expected to launch at that time.


Decentralized Autonomous Infrastructure Network (DAIN) is a new agent infrastructure protocol building an agent-to-agent economy on Solana. DAIN aims to let agents from different enterprises interact seamlessly through a common API, greatly opening up the design space for AI agents, with a focus on agents that can interact with both web2 and web3 products. In January, DAIN announced its first partnership with Asset Shield, allowing users to add "agent signers" to their multisig; these agents can interpret transactions and approve or reject them based on rules the user sets.


Fetch.AI was one of the earliest deployed AI agent protocols and has developed an ecosystem for building, deploying, and using agents on-chain with its FET token and Fetch.AI wallet. The protocol provides a comprehensive set of tools and applications for working with agents, including wallet functionality for interacting with and issuing commands to them.


Autonolas, whose founders include former members of the Fetch team, is an open marketplace for creating and using decentralized AI agents. Autonolas also provides a toolkit for developers to build off-chain-hosted AI agents that can connect to multiple blockchains, including Polygon, Ethereum, Gnosis Chain, and Solana. It currently has several live proof-of-concept agent products, including ones for prediction markets and DAO governance.


SingularityNet is building a decentralized marketplace for AI agents where people can deploy narrowly focused AI agents that other people or agents can hire to perform complex tasks. Others, such as AlteredStateMachine, are building AI-agent integrations with NFTs. Users mint NFTs with random attributes that confer strengths and weaknesses for different tasks. These agents can then be trained to enhance certain attributes for uses such as gaming, DeFi, or as virtual assistants, and traded with other users.


Overall, these projects envision a future ecosystem of agents that can work together to not only perform tasks, but also help build artificial general intelligence. Truly complex agents will be able to autonomously perform any user task. For example, it will not be necessary to ensure that the agent has integrated with external APIs (such as travel booking websites) before using it, as fully autonomous agents will have the ability to figure out how to hire another agent to integrate the API and then perform the task. From the user's perspective, there is no need to check if the agent is capable of performing the task, as the agent can determine this on its own.


Bitcoin and Artificial Intelligence Agents


In July 2023, Lightning Labs launched a proof-of-concept toolkit for using agents on the Lightning Network, the LangChain Bitcoin Suite. This product is particularly interesting because it aims to solve a growing problem in the Web 2 world: restricted access to web applications and expensive API keys.


LangChain solves this by giving developers a toolkit that lets agents buy, sell, and hold Bitcoin, as well as query API keys and send micropayments. On traditional payment rails, micropayments are cost-prohibitive, whereas on the Lightning Network agents can send unlimited micropayments every day for minimal fees. Paired with the L402 payment-metered API framework, this lets companies adjust charges for their API as usage rises and falls, rather than setting a single, cost-prohibitive price.


In a future where on-chain activity is dominated by agents interacting with one another, capabilities like these will be necessary for agents to transact with each other cost-effectively. This is an early example of how agents can operate on permissionless, economically efficient payment rails, opening possibilities for new markets and economic interactions.


Outlook


The agent field is still in its early stages.


Projects have only just launched functioning agents that can handle simple tasks using their infrastructure, and access is typically limited to experienced developers and users.








