NEAR breaks above $8: how did it catch the AI wave?

24-03-13 12:30
Original author: Haotian, crypto researcher

Editor's note: Recently, perhaps driven by news around NVIDIA's GTC24 conference, AI concept tokens have rallied broadly. NEAR Protocol co-founder Illia Polosukhin will attend the "Transforming AI" keynote and panel discussion at GTC24, an event NVIDIA founder Jensen Huang will also join. Today, NEAR broke above $8 and was trading at $8.305 at press time. Crypto researcher Haotian analyzed on X why NEAR, which has been focused on chain abstraction, has suddenly become a leading AI public chain. BlockBeats reprints the full text as follows:


Recently, news that NEAR founder @ilblackdragon will appear at NVIDIA's AI conference has earned the NEAR public chain plenty of attention, and its price action has been gratifying as well. Many friends are wondering: isn't NEAR all-in on chain abstraction? How did it inexplicably become a leading AI public chain? Below I share my observations, along with some background on how AI models are trained:


1) NEAR founder Illia Polosukhin has a long-standing AI background and is a co-creator of the Transformer architecture. The Transformer is the foundational architecture behind today's large language model (LLM) training, including ChatGPT, which is enough to show that NEAR's founder had experience building and leading large AI model systems before founding NEAR.


2) NEAR launched NEAR Tasks at NEARCON 2023, with the goal of training and improving AI models. Simply put, vendors that need model training can post task requests on the platform and upload basic data materials; users (Taskers) can take on tasks and perform manual work such as text annotation and image recognition on that data. After a task is completed, the platform rewards the user with NEAR tokens, and the manually labeled data is used to train the corresponding AI model.


For example: an AI model needs to improve its ability to identify objects in pictures. A vendor can upload a large number of raw images containing different objects to the Tasks platform; users then manually annotate the object locations in the images, generating a large amount of "image-object location" data that the AI can use to learn on its own and improve its image recognition capabilities.
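
To make the vendor/Tasker flow above concrete, here is a minimal sketch of what such an annotation task and its reward payout could look like. This is purely illustrative: the type names, fields, and the per-task reward are assumptions, not NEAR Tasks' actual API or data model.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a NEAR Tasks-style annotation flow.
# Types, field names, and reward logic are illustrative assumptions.

@dataclass
class BoundingBox:
    label: str          # object name, e.g. "cat"
    x: int              # top-left corner
    y: int
    width: int
    height: int

@dataclass
class AnnotationTask:
    task_id: str
    image_url: str                                   # raw image uploaded by the vendor
    annotations: list[BoundingBox] = field(default_factory=list)
    reward_near: float = 0.1                         # assumed per-task payout in NEAR

def submit_annotation(task: AnnotationTask, boxes: list[BoundingBox]) -> float:
    """Tasker submits bounding boxes; returns the NEAR reward earned."""
    task.annotations.extend(boxes)
    return task.reward_near

# A vendor posts a task and a Tasker labels one object in the image.
task = AnnotationTask(task_id="t-001", image_url="ipfs://example-image")
earned = submit_annotation(task, [BoundingBox("cat", 32, 40, 128, 96)])
print(f"Tasker earned {earned} NEAR; labeled data: {task.annotations}")
```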


At first glance, isn't NEAR Tasks just crowdsourcing manual labor to provide basic services for AI models? Is that really so important? Let me add a bit of background knowledge about AI models here.


Normally, a complete AI model training process includes data collection, data preprocessing and annotation, model design and training, model tuning and fine-tuning, model validation and testing, model deployment, and model monitoring and updating. Data annotation and preprocessing are the human-driven part, while model training and optimization are the machine-driven part.
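
As a rough sketch of the machine-driven stages, the snippet below trains and validates a small classifier, assuming the human-driven stages have already produced labeled data. It uses scikit-learn's built-in digits dataset as a stand-in for manually annotated data; the split ratio and model choice are arbitrary assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# "Annotated data": images (X) paired with human-assigned labels (y).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Model design and training.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# Model validation/testing before deployment.
print("validation accuracy:", accuracy_score(y_test, model.predict(X_test)))
```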


Obviously, most people assume the machine part matters far more than the manual part; after all, it looks more high-tech. In reality, though, manual annotation is crucial to the entire model training process.


Manual annotation can label the objects (people, places, things) in images so that computers can better learn visual models; it can transcribe speech into text and mark specific syllables and phrases to help computers train speech recognition models; and it can attach emotional tags such as happiness, sadness, or anger to text, letting AI improve its sentiment analysis capabilities.
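
For illustration, here is roughly what human-annotated records for those three modalities might look like. The field names and values are hypothetical, chosen only to show the shape of the data a model would learn from.

```python
# Hypothetical examples of human-annotated records; field names are assumptions.
image_annotation = {
    "image": "street_001.jpg",
    "objects": [{"label": "person", "bbox": [120, 45, 60, 180]},
                {"label": "car",    "bbox": [300, 90, 220, 140]}],
}

speech_annotation = {
    "audio": "clip_017.wav",
    "transcript": "turn left at the next intersection",
    "phoneme_marks": [("t", 0.00), ("er", 0.08), ("n", 0.15)],  # (syllable, start sec)
}

sentiment_annotation = {
    "text": "The new update completely broke my workflow.",
    "label": "anger",
}

# Each record pairs raw data with a human judgment the model will learn from.
for record in (image_annotation, speech_annotation, sentiment_annotation):
    print(record)
```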


It is not hard to see that manual annotation is the foundation of machine deep learning. Without high-quality annotated data, a model cannot learn efficiently; and if the volume of annotated data is too small, model performance will also be limited.


At present, in many vertical AI niches, teams do secondary fine-tuning or specialized training on top of large models like ChatGPT: essentially, they start from OpenAI's foundation and add new data sources, especially manually labeled data, to train the model further.


For example, if a medical company wants to train a medical imaging AI model and offer an online AI consultation service to hospitals, it only needs to upload a large volume of raw medical imaging data to the Tasks platform and let users annotate it through tasks, generating manually labeled data. That data is then used to fine-tune and optimize a large model like ChatGPT, turning a general-purpose AI tool into a specialist in that vertical field.
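
As a hedged sketch of that fine-tuning step, the snippet below uses OpenAI's text fine-tuning API as a stand-in. The file name and its contents are hypothetical exports from an annotation platform, and the base model is an assumption; an actual medical-imaging workflow would need a vision-capable pipeline rather than this text-only example.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the manually annotated data (hypothetical chat-format JSONL
# exported from the annotation platform).
training_file = client.files.create(
    file=open("annotated_findings.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on top of a general-purpose base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print("fine-tuning job started:", job.id)
```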


However, the Tasks platform alone is obviously not enough for NEAR to become the leading AI public chain. NEAR is also building AI Agent services within its ecosystem to automatically execute users' on-chain behaviors and operations: with a single authorization, users can freely buy and sell assets in the market. This is somewhat similar to the intent-centric approach, using AI-automated execution to improve the on-chain interaction experience. In addition, NEAR's strong DA (data availability) capabilities let it play a role in tracing AI data sources and tracking the validity and authenticity of AI model training data.
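
To illustrate the intent-centric idea, here is a purely hypothetical sketch of an AI agent that turns a natural-language intent into an on-chain action, constrained by a user-granted authorization. None of these types or functions correspond to an actual NEAR API; the planner is a trivial stand-in for an AI model.

```python
from dataclasses import dataclass

# Purely hypothetical sketch of an intent-centric AI agent flow.

@dataclass
class Authorization:
    account_id: str
    allowed_actions: set[str]     # what the user has permitted, e.g. {"swap"}
    max_spend_near: float         # spending cap granted to the agent

@dataclass
class Action:
    kind: str
    params: dict

def plan_action(intent: str) -> Action:
    """Stand-in for an AI planner that turns a natural-language intent
    into a concrete on-chain action (here: a trivial keyword match)."""
    if "swap" in intent.lower():
        return Action("swap", {"sell": "NEAR", "buy": "USDC", "amount": 5.0})
    return Action("noop", {})

def execute(action: Action, auth: Authorization) -> str:
    """The agent executes only what the authorization allows."""
    if action.kind not in auth.allowed_actions:
        return "rejected: not authorized"
    if action.params.get("amount", 0) > auth.max_spend_near:
        return "rejected: over spending cap"
    return f"submitted {action.kind} for {auth.account_id}: {action.params}"

auth = Authorization("alice.near", {"swap"}, max_spend_near=10.0)
print(execute(plan_action("swap 5 NEAR into USDC at the best price"), auth))
```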


In short, backed by a high-performance chain, NEAR's technical extension and narrative pull in the AI direction look far stronger than chain abstraction alone.


When I analyzed NEAR's chain abstraction half a month ago, I already saw the advantages of NEAR's chain performance plus the team's strong Web2 resource integration capabilities. I never expected that before chain abstraction could even take off and bear fruit, this wave of AI empowerment would amplify the imagination once again.


Note: in the long term, the focus should still be on NEAR's layout and product progress in "chain abstraction"; AI will be a nice bonus and a bull market catalyst! #NEAR



