
Unveiling the AI + Crypto dark horse Bittensor (TAO): who exactly are they?

2023-11-20 17:20
Original Title: "A short report on Bittensor and AI"
Author: Knower
Translated by: Luffy, Foresight News

Editor's Note: Bittensor, a Crypto + AI concept project, has seen its token TAO rise 400% in price within a month. According to Coingecko, TAO has a circulating market cap of over $1.5 billion, placing it among the top 50 cryptocurrencies. Crypto KOL Knower is very optimistic about the future of the combination of artificial intelligence and cryptocurrency, so he wrote this article to help readers understand the Bittensor project's background, current state, and token economics in detail.


Hello friends, long time no see. I hope you are all enjoying the recent positive price action in crypto. In the interest of substantial returns, I have decided to write a proper report on the AI crypto project Bittensor. I am a crypto person, not an AI expert, so you might assume I am unfamiliar with artificial intelligence. In fact, I have spent a lot of my free time researching AI outside of crypto, and over the past 3-4 months I have been keeping up with the important updates, advancements, and existing infrastructure in the field.


There have been some inaccurate, non-analytical tweets going around, and I want to set the record straight. After reading this article, your understanding of Bittensor should far exceed your expectations. This is a somewhat lengthy report, not because I pad my prose, but because of the large number of images and screenshots. Please don't feed the article into ChatGPT for a summary; I have put a lot of time into this material, and you won't get the whole story that way.


When I was writing this report, a friend (who happens to be a Crypto Twitter KOL) told me that "Artificial Intelligence + Cryptocurrency = The Future of Finance". Please keep this in mind as you read this article.



Is artificial intelligence the final puzzle piece on crypto's path to world domination, or just one small step toward our goals? That answer is yours to find; I'm just here to provide food for thought.


Background Information


As Bittensor itself puts it, Bittensor is essentially a language for writing numerous decentralized commodity markets, or "subnets", under a unified token system, with the goal of directing the power of digital markets toward society's most important digital commodity: artificial intelligence.


Bittensor's mission is to build, through unique incentive mechanisms and an advanced subnet architecture, a decentralized network that can compete with models previously achievable only by giants like OpenAI. It is best to picture Bittensor as a complete system of interoperable parts: a machine built on a blockchain to better bring AI capabilities on-chain.


Two key participants run the Bittensor network: miners and validators. Miners submit pre-trained models to the network in exchange for a share of rewards; validators confirm the validity and accuracy of those models' outputs and select the most accurate output to return to the user. For example, if a Bittensor user asks an AI chatbot a simple question about derivatives or historical facts, the question gets answered regardless of how many nodes are currently running on the network.


Briefly, user interaction with the Bittensor network works as follows: a user sends a query to a validator, the validator propagates it to miners, and the validator then ranks the miners' outputs. The output of the highest-ranked miner is sent back to the user, as the sketch below illustrates.
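To make the flow concrete, here is a minimal Python sketch of that round trip. The Miner and Validator classes and the length-based scoring stand-in are hypothetical illustrations for this article, not Bittensor's actual API:

```python
# Hypothetical sketch of the query flow: a validator fans a query out to
# miners, ranks their outputs, and returns the top-ranked one to the user.
from dataclasses import dataclass

@dataclass
class Miner:
    uid: int

    def respond(self, query: str) -> str:
        # A real miner would run its model here.
        return f"miner {self.uid}'s answer to: {query}"

class Validator:
    def __init__(self, miners: list[Miner]):
        self.miners = miners

    def score(self, output: str) -> float:
        # Stand-in for a real quality metric (e.g., loss vs. a reference).
        return float(len(output))

    def handle(self, query: str) -> str:
        outputs = [m.respond(query) for m in self.miners]       # propagate
        ranked = sorted(outputs, key=self.score, reverse=True)  # rank
        return ranked[0]                           # best output back to user

validator = Validator([Miner(uid=i) for i in range(3)])
print(validator.handle("What is a perpetual future?"))
```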


This is all very simple.  


Incentives push models toward the best possible output. Bittensor has created a positive feedback loop in which miners compete with one another, bringing more sophisticated, accurate, and performant models to the network to capture a larger share of TAO (Bittensor's ecosystem token) and, in turn, driving a better user experience.


To become a validator, a user must be among the top 64 TAO holders and must have registered a UID on one of Bittensor's subnets (which serve as independent economic markets for various forms of artificial intelligence). For example, subnet 1 focuses on text prompting (predicting completions from text prompts), while subnet 5 focuses on image generation. These two subnets will likely use different models, since their tasks are completely different and may demand different parameters, accuracy targets, and other task-specific features; the toy sketch below illustrates the idea.
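As a toy illustration of how subnets partition the network into independent task markets (the subnet IDs come from the article; the fields and the registration helper are invented for illustration, not Bittensor's API):

```python
# Invented illustration of subnets as independent task markets.
from dataclasses import dataclass

@dataclass(frozen=True)
class Subnet:
    netuid: int
    task: str

SUBNETS = {
    1: Subnet(netuid=1, task="text prompting"),
    5: Subnet(netuid=5, task="image generation"),
}

def register_uid(netuid: int, hotkey: str) -> str:
    # A miner or validator registers a UID on one specific subnet,
    # gaining access to that subnet's market (and only that one).
    subnet = SUBNETS[netuid]
    return f"{hotkey} registered on subnet {subnet.netuid} ({subnet.task})"

print(register_uid(1, "example-hotkey"))
```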


Another key piece of the Bittensor architecture is the Yuma Consensus mechanism, which works something like an allocator of Bittensor's available compute across the whole network of subnets. Yuma is described as a hybrid of PoW and PoS, with additional features for transmitting and facilitating intelligence off-chain. While Yuma underpins most of Bittensor's network, subnets can opt in or choose not to rely on Yuma consensus. The specific details are complex and murky, and each subnet has its own GitHub repositories, so if you just want a rough understanding, knowing that Yuma consensus plays this top-down role is enough.




But what about the model?


Contrary to popular belief, Bittensor does not train its own models. Training is an extremely expensive process that only the larger AI labs and research organizations can afford, and it can take a long time. I tried to pin down a definitive answer on whether Bittensor involves model training, but my findings were inconclusive.



The decentralized training mechanism may sound complicated, but it is not hard to understand. A Bittensor validator's task is to "continuously evaluate the models produced by miners against the unlabeled Falcon RefinedWeb 6T-token dataset" in an ongoing game, scoring each miner on two criteria: timestamp, and loss relative to the other models. "Loss function" is a machine learning term for the difference between a model's predicted values and the actual values; it represents how wrong or inaccurate the model's output is for the given input data.
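A toy sketch of that scoring rule, with made-up numbers: miners are compared by loss (lower is better), and among equal losses the earlier timestamp wins, mirroring the two criteria quoted above:

```python
# Made-up miner results: lower loss wins; earlier timestamp breaks ties.
miners = [
    {"uid": 0, "loss": 2.41, "timestamp": 1_700_000_000},
    {"uid": 1, "loss": 2.38, "timestamp": 1_700_000_500},
    {"uid": 2, "loss": 2.38, "timestamp": 1_700_000_100},  # same loss, earlier
]

ranked = sorted(miners, key=lambda m: (m["loss"], m["timestamp"]))
best = ranked[0]
print(f"best miner: uid={best['uid']}, loss={best['loss']}")  # uid=2

# Note: the lowest loss across miners is not the same as the average loss.
losses = [m["loss"] for m in miners]
print(f"min loss = {min(losses):.2f}, mean loss = {sum(losses)/len(losses):.2f}")
```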



Speaking of loss, here is the latest performance of sn9 (the relevant subnet), which I pulled from Discord yesterday. Keep in mind that the lowest loss is not the same as the average loss:



If Bittensor doesn't train models itself, what does it actually do?!


In fact, the "creation" process of a large language model (LLM) can be broken into three key stages: training, fine-tuning, and in-context learning (with some inference sprinkled in).


Before we continue with some basic definitions, let's look at Sequoia Capital's June 2023 report on LLMs. Their survey found that "in addition to using LLM APIs, 15% of companies build custom language models from scratch or from open-source bases. Custom model training is up significantly from a few months ago. This requires its own compute stack, model hub, hosting, training frameworks, experiment tracking, and so on, from popular vendors like Hugging Face, Replicate, Foundry, Tecton, Weights & Biases, PyTorch, and Scale."



Building a model from scratch is a daunting task, and 85% of the surveyed founders and teams were unwilling to take it on. When most startups and independent developers simply want to use large language models inside external applications or software services, the workload of self-hosting, tracking results, creating or importing complex training scenarios, and everything else is just too much. For 99% of the AI industry, creating something comparable to GPT-4 or Llama 2 is not feasible.


This is why platforms like Hugging Face are so popular: you can download pre-trained models straight from the site, a process that is familiar and routine for anyone in the AI industry.
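For instance, pulling a pre-trained model down from Hugging Face takes only a few lines (GPT-2 is used here purely as a small, well-known example):

```python
# Download a pre-trained model from Hugging Face and generate text with it.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Bittensor is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```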


Fine-tuning is harder, but it suits anyone who wants to offer an LLM-based application or service in a specific niche. That might be a legal-services startup building a chatbot whose model is fine-tuned on lawyer-specific data and examples, or a biotech startup building a model fine-tuned specifically on whatever biotech-related information is available.



Whatever the purpose, fine-tuning aims to bake more personality or domain expertise into your model, making it better suited to, and more accurate at, its tasks. While undeniably useful and highly customizable, it is widely considered difficult; even a16z thinks so.
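For a sense of the mechanics, here is a minimal fine-tuning sketch using the Hugging Face Trainer API, assuming a tiny list of niche-domain texts along the lines of the legal example above. This illustrates the general technique, not Bittensor's or any miner's actual pipeline:

```python
# Minimal causal-LM fine-tuning sketch with invented placeholder data.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

texts = [  # placeholder niche-domain corpus
    "A tort is a civil wrong that causes a claimant to suffer loss.",
    "Consideration is a required element of a valid contract.",
]
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class NicheDataset(torch.utils.data.Dataset):
    def __len__(self):
        return enc["input_ids"].size(0)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = item["input_ids"].clone()  # causal LM objective
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=NicheDataset(),
)
trainer.train()  # nudges GPT-2 toward the niche domain
```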



Although Bittensor does not train models itself, miners who submit their own models to the network claim to have fine-tuned them in some form, though this information is not public (or is at least hard to verify). Miners keep their model architectures and capabilities confidential to protect their competitive edge, although some are openly accessible.



A simple example: if you were in a competition with a $1 million prize, where everyone is racing to field the best-performing LLM, would you reveal that you're using GPT-4 while all your competitors are on GPT-2? The real situation is more complicated than this example suggests, but not by much. Miners gain an edge from the accuracy of their outputs relative to miners whose models are less fine-tuned or perform worse on average.


I mentioned in-context learning earlier; this may be the last piece of non-Bittensor background I cover. In-context learning is a loosely defined process for steering a language model toward more desirable outputs. Inference, by contrast, is what a model does every time it evaluates an input, and the quality of its training affects the accuracy of its outputs. Training is expensive and happens only when a team decides a model is ready for the training run specified during the model-creation process; inference is happening constantly, and all sorts of auxiliary services exist to facilitate it.
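In-context learning at its simplest is just putting a few worked examples into the prompt, steering the model at inference time with no weight updates. The task and examples below are invented for illustration:

```python
# Build a few-shot prompt; the in-prompt examples define the task.
def build_few_shot_prompt(examples, query):
    lines = ["Classify the sentiment of each sentence."]
    for text, label in examples:
        lines.append(f'Sentence: "{text}" -> {label}')
    lines.append(f'Sentence: "{query}" ->')
    return "\n".join(lines)

examples = [
    ("TAO is up 400% this month.", "positive"),
    ("My validator got deregistered.", "negative"),
]
prompt = build_few_shot_prompt(examples, "The subnet launch went smoothly.")
print(prompt)  # send this to any LLM at inference time
```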


Bittensor Status


With the background covered, I'll dig into some details on Bittensor subnet performance, current features, and future plans. To be honest, it is hard to find high-quality writing on this topic. Fortunately, Bittensor community members sent me some material, but even so, forming an opinion took a lot of work. I lurked in their Discord looking for answers, and in the process realized I had been a member for about a month without opening a single channel (I don't use Discord much; Telegram and Slack, more often).
As the machine learning community keeps iterating and shipping new capabilities, definitions of AGI vary, but the basic idea is that an AGI can reason, think, and learn the way humans do. The core difficulty is that scientists class humans as conscious beings with free will, properties that are hard to quantify in humans, let alone in powerful neural-network systems.



It is worth noting that Bittensor has also been productive in machine learning circles outside of crypto. Opentensor and Cerebras released the open-source LLM BTLM-3b-8k back in July of this year. Since then, BTLM has been downloaded over 16,000 times on Hugging Face and has received very positive reviews.


One commenter noted that, thanks to its lightweight architecture, BTLM-3b ranks high alongside the likes of Mistral-7b and MPT-30b as the "best model per unit of VRAM". Below is a chart from the same tweet listing the models and their data-accessibility ratings; BTLM-3b scored well:
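If you want to try BTLM yourself, it loads like any other Hugging Face model. The repository id below is believed to be cerebras/btlm-3b-8k-base, but verify it on Hugging Face; the model's custom architecture requires trust_remote_code=True:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cerebras/btlm-3b-8k-base"  # assumed repo id; verify before use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Decentralized AI is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```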




TAO Usage


Don't worry, I haven't forgotten about the tokens.



TAO is both the reward token and the access token of the Bittensor network. Holders can stake TAO, participate in governance, or use it to build applications on top of Bittensor. One TAO is minted every 12 seconds, and newly minted tokens are split evenly between miners and validators.
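Some back-of-the-envelope math on those figures (the halving discussed next simply cuts the same issuance in half):

```python
# Emission arithmetic from the figures above: 1 TAO every 12 seconds,
# split evenly between miners and validators.
SECONDS_PER_DAY = 86_400
MINT_INTERVAL = 12  # seconds per newly minted TAO

daily_emission = SECONDS_PER_DAY / MINT_INTERVAL    # 7200 TAO per day
miner_share = validator_share = daily_emission / 2  # 3600 TAO each

print(f"daily emission: {daily_emission:.0f} TAO")
print(f"miners: {miner_share:.0f} TAO, validators: {validator_share:.0f} TAO")
print(f"post-halving daily emission: {daily_emission / 2:.0f} TAO")
```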


Given TAO's token economics, it is easy to imagine a world where a halving of emissions intensifies competition among miners, which naturally leads to higher-quality models and a better overall user experience. But there is a catch: smaller rewards could just as easily have the opposite effect, dampening competition and causing the number of deployed models, and of competing miners, to stagnate.


I could go on about TAO's token utility, price prospects, and growth drivers, but the report mentioned earlier already does a good job of that, and most of Crypto Twitter has latched onto a very credible narrative behind Bittensor and TAO; anything I add at this point would be icing on the cake. From the outside, I'd call this fairly sensible token economics with nothing out of the ordinary. I should mention, though, that buying TAO is currently quite difficult, since it is not yet listed on most exchanges. That may change within a month or two; I would be very surprised if Binance doesn't list TAO soon.


Outlook


I am definitely a fan of Bittensor and hope they pull off their bold mission. As the team wrote in their Bittensor Paradigm article, Bitcoin and Ethereum were revolutionary because they democratized access to finance and made the idea of a fully permissionless digital marketplace a reality. Bittensor is no different, aiming to democratize AI models within a vast network of intelligence. For all my support, they are clearly still far from their goal, as is true of most projects built with crypto. This is a marathon, not a sprint.


If Bittensor wants to stay in the lead, it needs to keep fostering friendly competition and innovation among miners while pushing the limits of sparse model architectures, mixture-of-experts (MoE) concepts, and decentralized composite intelligence. That would be hard enough on its own; layering crypto on top makes it harder still.


Bittensor still has a long road ahead. Although chatter around TAO has picked up in recent weeks, I believe most of the crypto community doesn't fully understand how Bittensor currently works. There are obvious open problems with no easy answers, among them: a) whether high-quality inference can be achieved at scale, b) how to attract users, and c) whether the pursuit of composite large language models is even worthwhile.


Whether you believe it or not, supporting a decentralized currency is a significant challenge in itself, despite rumors of ETFs making it easier.


Building a decentralized network of intelligent models that can iterate and learn from one another sounds incredibly promising, partly because it genuinely is. Given the current limitations of context windows and large language models, no single model can continuously self-improve all the way to AGI; even the best models remain limited. Still, I believe positioning Bittensor as a decentralized LLM hosting platform with novel economic incentives and built-in composability is not just positive, it is genuinely one of the coolest experiments in crypto right now.


Integrating economic incentives into AI systems has its challenges. Bittensor says that if miners or validators try to game the system in any way, the incentive mechanism will be adjusted case by case. Here is an example from June of this year, when token emissions were cut by 90%:



Growing pains like this are entirely to be expected in blockchain systems, so let's not pretend Bitcoin or Ethereum have been 100% flawless over their entire lifetimes.


For outsiders, crypto adoption has always been a hard pill to swallow, and artificial intelligence is at least as controversial. Combining the two will challenge anyone trying to sustain user growth and engagement, and it will take time. If Bittensor ultimately achieves its goal of a composite large language model, it would be a remarkable achievement.


Original article link


Welcome to join the official BlockBeats community:

Telegram Subscription Group: https://t.me/theblockbeats

Telegram Discussion Group: https://t.me/BlockBeats_App

Official Twitter Account: https://twitter.com/BlockBeatsAsia
