Original title: Dialogue with io.net COO: Hope to compete with AWS cloud services and provide more convenient decentralized GPU (with airdrop interactive tutorial)
Original author: AYLO
Original source: Shenchao TechFlow
Editor's note: On February 29, the Solana-based DePIN protocol io.net announced that the first phase of its points reward program will launch on March 1 and run until April 28. Yesterday, io.net also announced the completion of a $30 million Series A financing round, led by Hack VC with participation from Multicoin Capital and others. The funds will be used to build the world's largest decentralized GPU network and address the shortage of AI compute. AYLO spoke with io.net's COO about io.net's token, market size, and development prospects. At the end, we also explain how to qualify for the io.net airdrop.
Today I bring you an interview with another project that I am very optimistic about.
This project covers several currently popular verticals: AI + DePIN + Solana. io.net Cloud is an advanced decentralized computing network that lets machine learning engineers access distributed cloud clusters at a much lower cost than centralized services. I spoke with COO Tory Green to learn more. The IO token will be launching on Solana soon, and I highly recommend you read this article. I will also include information on how to participate in the airdrop (at the end of the article). I am a private investor in io.net and strongly believe in their platform, as their GPU cluster solution is truly unique.
· Decentralized AWS for ML (machine learning) training on GPUs
· Instant, permissionless access to a global GPU and CPU network, currently online
· They have 25,000 nodes
· Revolutionary technology to cluster GPU clouds together
· Can save 90% of computing costs for large-scale AI startups
· Integrated Render and Filecoin
· Based on Solana
They just announced a $30 million funding round, attracting some of the largest backers in the field.
We not only have to compete with other crypto projects, but also with centralized cloud computing. One of the main advantages we offer customers is significantly lower prices, up to 90% cheaper. What we really offer is consumer choice, and that's where it gets interesting. Yes, you can get a GPU for 90% less on our platform, and I highly recommend you give it a try: access to cheap, fully decentralized consumer-grade GPUs at heavily discounted prices. If you need high performance, however, you can recreate an AWS-like experience using top-of-the-line hardware like the A100, perhaps only 30% cheaper, but still cheaper than AWS. In some cases we even provide better performance than AWS, which can be critical for certain industries, such as hedge funds.
For one of our major customers, we offer terms that are 70% better than AWS and 40% better than what they would get elsewhere. Our platform is user-friendly and permissionless, unlike AWS, which may require detailed information such as a business plan. Anyone can join and spin up a cluster instantly, whereas with AWS it can take days or weeks.
Compare us with our decentralized competitors: if you try to acquire a cluster on a platform like Akash, you will find that it is not instantaneous. They operate more like a travel agency, calling their data centers to find available GPUs, which can take weeks. With us, it's instant, cheaper, and permissionless. We want to embody the spirit of Web3 while beating AWS and GCP.
Our roadmap is divided into a business roadmap and a technology roadmap. From a business perspective, the TGE is coming. We are planning a summit this year where we will announce a lot of product-related news. Beyond that, our focus is on continuing to build the network, because despite all the excitement around the TGE, we consider ourselves a real business and a legitimate competitor to AWS.
We will continue to vigorously develop our sales team. We want to follow the example of companies like Chainlink and Polygon and focus on recruiting senior sales executives from companies like Amazon and Google to build a world-class sales team. This will help us attract AI customers and build partnerships with entities like Hugging Face and Prettybase.
Our initial customer base is large AI startups facing huge AI compute costs. I'm part of a group of tech CFOs in the Bay Area, and one of the biggest issues they face is the high cost of AI compute. One Series A SaaS startup was spending $700,000 a month on AI compute, which was unsustainable. Our goal is to significantly reduce costs for businesses like theirs.
Once we prove the concept works with these initial customers, we will look into adjacent markets. With SOC 2-compliant GPUs on our network, we can target large tech companies or businesses like JP Morgan or Procter & Gamble, which certainly have their own internal AI departments. Our technology can support clusters of up to 500,000 GPUs, potentially allowing us to surpass AWS or GCP in capacity, since they cannot physically deploy that many GPUs in one location. This could attract important AI projects like OpenAI for future versions of GPT. However, building a marketplace requires balancing supply and demand; we currently have 25,000 GPUs in the network and 200,000 on the waiting list. Over time, our goal is to expand the network to meet growing demand. That is the business roadmap.
From a technical perspective, there is obviously a lot to do. Currently we support Ray, and we are actively developing Kubernetes support. But as I mentioned, we're looking at expanding our offerings. Think about how AI works: when you use ChatGPT, that's the application; ChatGPT is built on a model, GPT-3; and GPT-3 runs all its inference on GPUs. So we can start with the GPU and build out the entire stack.
We are also working with Filecoin, and many of these partner data centers already have large amounts of CPU and storage, so we can start storing models as well. This will allow us to provide computation, model storage, and SDKs for building applications, creating a fully decentralized AI ecosystem, almost like a decentralized app store.
At a high level, this is a utility token that will be used to pay for computation on the network. This is the simplest explanation. I also recommend you check out the website bc8.ai.
This is a proof of concept we have built, a Stable Diffusion clone that I believe is the only AI dApp currently fully on-chain. Users can make micro-transactions on Solana and create images via crypto payments. Each transaction compensates the four key stakeholders behind the image: the application creator, the model creator, us, and the owner of the GPU that did the work. Currently we let people use it for free because we are both the application owner and the model owner, but this is more of a proof of concept than an actual business.
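As a rough illustration of the four-way payout described above, here is a minimal sketch. The share percentages are invented for this example; io.net's actual token economics had not been announced at the time of writing, and the real settlement happens on-chain rather than in application code.

```python
# Hypothetical sketch of the four-way revenue split per image generation.
# The share percentages are assumptions for illustration only.
SHARES = {
    "app_creator":   0.25,  # built the dApp (e.g. bc8.ai)
    "model_creator": 0.25,  # trained and hosts the model
    "network":       0.20,  # protocol fee
    "gpu_owner":     0.30,  # supplied the compute
}

def split_payment(amount_lamports: int) -> dict:
    """Split one micro-payment among the four stakeholders.

    Works in integer lamports (Solana's smallest unit) and gives any
    rounding remainder to the GPU owner so the totals always balance.
    """
    payouts = {k: int(amount_lamports * share) for k, share in SHARES.items()}
    payouts["gpu_owner"] += amount_lamports - sum(payouts.values())
    return payouts

print(split_payment(10_000))  # e.g. a 10,000-lamport image generation
```

The integer-remainder trick matters for micro-payments: with amounts this small, naive float splitting would leak or create lamports.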
We plan to expand the network to enable others to host models and build fully decentralized AI applications. IO tokens will power not only our model, but any model created. Token economics are still being finalized and may not be announced until around April.
I think there are two reasons. First, we really like the community, and second, frankly, this is the only blockchain that can support us. If you look at our cost analysis, every time someone runs an inference there are about five transactions: the inference itself, and then the payments to all the stakeholders. So when we model 60,000, 70,000, or 100,000 transactions, every one of them has to cost a hundredth or a tenth of a cent. Given our volume, Solana was really our only option. Plus, they offer a lot of help as a partner, and the community is very strong. It was almost a no-brainer.
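The volume argument above is easy to check with back-of-the-envelope numbers. The fee figures below are illustrative assumptions, not live chain data: roughly a hundredth of a cent per transaction on Solana, contrasted with a hypothetical $1-per-transaction chain.

```python
# Rough fee comparison for the ~5 transfers per inference mentioned above.
# Both per-transaction fees are assumptions for illustration.
transfers_per_inference = 5
daily_inferences = 100_000

fee_low = 0.0001   # ~1/100 of a cent per tx (assumed Solana-level fee)
fee_high = 1.00    # assumed $1 per tx on a higher-fee chain, for contrast

daily_tx = transfers_per_inference * daily_inferences
print(f"{daily_tx:,} transactions per day")
print(f"Low-fee chain:  ${daily_tx * fee_low:,.2f} per day")
print(f"High-fee chain: ${daily_tx * fee_high:,.2f} per day")
```

At 500,000 daily transactions, the difference is tens of dollars per day versus hundreds of thousands, which is the whole business model gap the COO is pointing at.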
I think this is impossible to predict, you know. We throw out numbers like a trillion, but even then it's difficult to appreciate the full scope. For example, forecasts from firms like Gartner suggest that model training could account for 1% of GDP by 2030, equivalent to about $300 billion; that statistic is relatively easy to find. However, Nvidia's CEO has said that only 10% of the AI compute market is dedicated to training, and that changes the perspective: if training alone is a $300 billion market, then the entire AI GPU market, compute services alone, could be a $3 trillion market. Then there is Cathie Wood's prediction that the entire AI market could reach $80 trillion. The potential size of this market is almost beyond comprehension.
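The arithmetic behind that estimate can be written out explicitly. The inputs are the figures quoted in the interview, not verified forecasts:

```python
# Back-of-the-envelope market sizing using the figures quoted above.
# Inputs are the interview's claims, not verified forecasts.
training_market_2030 = 300e9   # ~1% of GDP per the cited Gartner-style estimate
training_share_of_ai = 0.10    # Nvidia CEO: training is ~10% of AI compute

total_ai_compute_market = training_market_2030 / training_share_of_ai
print(f"Implied total AI compute market: ${total_ai_compute_market / 1e12:.1f} trillion")
```

Dividing the training segment by its assumed 10% share yields the $3 trillion figure the COO cites for compute services as a whole.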
Building a marketplace is difficult, and while it may be easier in the crypto space, it still has its challenges. For example, most of our customers request the A100, a top-tier, enterprise-grade GPU that costs about $30,000 each. They are in very short supply and hard to find right now, so our sales team is working hard to source them.
We still have a lot of 3090s; these are more of a consumer product, and demand for them is not as high. This means we have to adapt our strategy and find customers who are specifically looking for these types of GPUs. You will find this in any marketplace, though, and we address it by hiring the right people and implementing an effective marketing strategy.
From a strategic perspective, as far as I know, we are the only platform currently able to build decentralized clusters across different geographies. This is our moat. In the short term we have a significant competitive advantage, and I think that extends to the medium term as well. For partners like Render, there's no point in trying to copy our model when they can leverage our network and retain 95% of the value.
It took the team about two years to develop this functionality. So it's not an easy thing to do. However, there's always the possibility that someone else will figure it out in the future. By then, we hope to have established a sufficient moat. We are already the largest decentralized GPU network by orders of magnitude, with 25,000 GPUs, compared to only 300 for Akash and a few thousand for Render.
Our goal is to reach 100,000 GPUs and 500 customers, creating a network effect similar to Facebook's, where the question becomes "Where else would you go?" We aim to become the platform of choice for anyone who needs GPU compute, just as Uber dominates ride-sharing and Airbnb dominates accommodation. The key is to move quickly, secure our place in the market, and become synonymous with decentralized GPU computing.
There are two main ways to become eligible for the IO airdrop:
Complete the Galxe campaign called "Ignition". You just have to finish the tasks, which include proving you are human by minting a Galxe Passport; that's a good thing, as it cannot be counterfeited.
Provide your GPU/CPU to io.net by following the instructions in the documentation. Even if you're not tech-savvy, it takes about 10-15 minutes and is fairly simple.