Original Title: "Apple Vision Pro Release: Reflections on the Future of XR, RNDR, and Spatial Computing after One Month"
Original Author: Scarlett Wu, Mint Ventures
On the early morning of June 6th, during WWDC (Apple's Worldwide Developers Conference), which was also the fifth day since I tested positive for COVID-19, I chatted with a friend over herbal tea. An hour passed, and I began to wonder: would this year's "One More Thing" be postponed again?
So when Cook appeared at 2 a.m., waved, and said "One More Thing", my friends and I cheered on our side of the screen.
Macintosh ushered in the era of personal computing, iPhone ushered in the era of mobile computing, and Apple Vision Pro will usher in the era of spatial computing.
As a cutting-edge technology enthusiast, I am cheering for the new toy arriving next year. But as a Web3 investor focused on gaming, the metaverse, and AI, what makes me tremble is the mark of a new era.
You may be skeptical of what MR hardware upgrades have to do with Web3. So let's start with Mint Ventures' thesis on the metaverse sector.
Where the premium on assets in the blockchain world comes from:
1. Trusted underlying transactions reduce transaction costs: ownership of physical goods is protected by the coercive power of the state, while ownership in the virtual world rests on trust in data that cannot (or should not) be tampered with under consensus, and on recognition of the assets themselves once that ownership is protected. A BAYC NFT can be copied with a right-click, yet it still trades for the price of a house in a third-tier Chinese city. This is not because the copied picture differs from the picture in the NFT's metadata, but because an asset can only be securitized once the market reaches consensus on its non-replicability.
2. The high securitization of assets brings about a liquidity premium.
3. The "permissionless premium" brought by the non-permissioned transactions corresponding to the decentralized consensus mechanism.
Virtual world goods are easier to securitize than physical goods:
· The history of paying for digital assets shows that the public's habit of paying for virtual content was not formed overnight, but it has undeniably penetrated everyday life. In April 2003, the launch of the iTunes Store showed people that besides downloading songs onto their portable players from the internet's rampant piracy, they could buy legitimate digital music and support their favorite creators; in 2008 the App Store launched, one-time app purchases caught on worldwide, and in-app purchases have contributed to Apple's digital asset revenue ever since.
· There is also a quieter trend in how game monetization has changed. The industry began with arcade games, where the model was paying for the experience (like movies). In the console era it was paying for cartridges/discs (like movies and music albums); late in that era purely digital games went on sale, Steam's digital marketplace appeared, and in-game purchases turned some games into revenue legends. The history of game monetization is also a history of falling distribution costs: from arcades, to consoles, to PCs and mobile storefronts anyone can log into, and finally to the games players are already immersed in. The broad trend is that the technical cost of distribution keeps falling and the audience keeps widening, while game assets have shifted from "part of the experience" to "purchasable goods". (The counter-trend of the past decade, rising distribution costs for digital assets, is mainly due to the internet's low growth and fierce competition, and to the monopoly that traffic gateways hold over attention.)
So, what's next? Tradable virtual world assets will be a theme we always look forward to.
As the virtual world experience improves, people will spend more and more time immersed in it, and attention will shift accordingly. That shift will also move the premium in valuations from assets strongly attached to the physical world toward virtual assets. The release of Apple Vision Pro will thoroughly change how humans interact with the virtual world, increasing both time spent immersed and the quality of the immersion.
Source: @FEhrsam
Note: this is our own definition of a variant of pricing strategy. Under premium pricing, brands set prices well above cost and fill the gap between price and cost with brand stories and experiences. Cost-based pricing, competitive pricing, and supply and demand are also factors in product pricing; this section focuses only on premium pricing.
The exploration of XR (Extended Reality, including VR and AR) in modern society began more than a decade ago:
· In 2010, Magic Leap was founded. In 2015, its gym whale demo video caused a sensation across the tech industry, but when the product finally shipped in 2018, it was widely criticized for an extremely poor experience. In 2021, the company raised $500 million at a reported post-money valuation of about $2 billion, barely three-fifths of the $3.5 billion it had raised in total. In January 2022, the Saudi sovereign wealth fund was reported to have taken a majority stake through a $450 million equity-and-debt deal, putting the company's effective valuation below $1 billion.
· In 2010, Microsoft began developing HoloLens, releasing its first AR device in 2016 and a second in 2019, priced at $3,000; the actual experience fell short.
· In 2011, the Google Glass prototype appeared, and the first product launched in 2013. It was hugely hyped, but camera privacy concerns and a poor user experience left it with dismal sales of only tens of thousands of units. An enterprise edition followed in 2019, and a new prototype was demoed in 2022 to a lukewarm response. In 2014, Google released the Cardboard VR platform and SDK, and in 2016 Daydream VR, currently the most widely adopted VR platform for Android.
· In 2011, Sony PlayStation began developing its VR platform, and PSVR debuted in 2016. Users initially bought it enthusiastically out of trust in PlayStation, but the follow-up reception was tepid.
· In 2012, Oculus was founded; Facebook acquired it in 2014. Oculus Rift launched in 2016, and the line has since spanned four models emphasizing portability and lower prices, giving it a relatively high share of the market.
· In 2014, Snap acquired Vergence Labs, an AR glasses company founded in 2011, which became the prototype for Snap Spectacles. The first version shipped in 2016, followed by three updates. Like many products above, Spectacles initially drew crowds lining up outside stores, but users dwindled over time; in 2022, Snap shut its hardware division and refocused on smartphone-based AR.
· In 2017, Amazon began developing Alexa-based smart glasses, releasing the first Echo Frames in 2019 and a second version in 2021.
Looking back across XR's history, the industry's growth and maturation have taken far longer than anyone in the market expected, whether resource-rich tech giants with armies of scientists or capable XR-focused startups with billions in funding. Since the consumer-grade Oculus Rift shipped in 2016, cumulative shipments across all VR brands (Samsung's Gear, ByteDance's Pico, Valve's Index, Sony's PlayStation VR, HTC's Vive, and so on) total less than 45 million units. Gaming remains the dominant use of VR devices, and before Vision Pro no AR device had appeared that people were willing to use even occasionally. From SteamVR's data, one can roughly infer that monthly active users of VR devices may number only a few million.
Why aren't XR devices popular? The countless failures of startups and the post-mortems of investment institutions offer some answers:
Visually, VR devices have a wider field of view and sit closer to the eyes, so even on top devices the individual pixels are hard to ignore; full immersion is generally held to require 4K per eye (8K across both eyes). Refresh rate is another core element of the visual experience: it is generally believed that XR devices need 120 Hz, or even 240 Hz, to avoid motion sickness and approximate the real world. Refresh rate also has to be balanced against rendering quality under a fixed compute budget: Fortnite supports 4K at a 60 Hz refresh rate, but only 1440p at 120 Hz.
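To make that trade-off concrete, here is a back-of-the-envelope sketch in Swift, with round numbers of our own; actual rendering pipelines are more complicated than pixels times frames:

```swift
// Pixels pushed per second ≈ resolution × refresh rate.
// Under a fixed rendering budget, doubling the refresh rate forces
// roughly half the resolution, which matches the Fortnite example.
let px4K   = 3_840 * 2_160         // 8,294,400 px per frame
let px1440 = 2_560 * 1_440         // 3,686,400 px per frame

let budget4K60     = px4K   * 60   // ≈ 498M px/s at 4K, 60 Hz
let budget1440p120 = px1440 * 120  // ≈ 442M px/s at 1440p, 120 Hz

print(budget4K60, budget1440p120)  // similar totals: same compute, different split
```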
Compared with visuals, audio may seem insignificant in the short term, and most VR devices have not put much effort into it. But imagine a space in which every voice, whether the speaker stands to your left or your right, always arrives from above; immersion collapses. Likewise, if a digital avatar is anchored in the living room of an AR space yet sounds just as loud from the bedroom as from the living room, the sense of a real space quietly erodes.
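For intuition, here is a minimal sketch of the missing effect, using a simple inverse-distance attenuation model of our own choosing (real spatial audio engines also model direction, reverb, and occlusion):

```swift
import Foundation

// Inverse-distance gain: a sound source should get quieter as the
// listener walks away from it, instead of staying at constant volume.
func gain(atDistance d: Double, referenceDistance ref: Double = 1.0) -> Double {
    ref / max(d, ref) // clamp so gain never exceeds 1 at close range
}

print(gain(atDistance: 1.0)) // 1.0  : standing next to the avatar
print(gain(atDistance: 4.0)) // 0.25 : heard from the bedroom, noticeably quieter
```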
In interaction, traditional VR devices ship with controllers, and devices like the HTC Vive require cameras installed around the room to track the player's position. The Quest Pro does have eye tracking, but latency is high and sensitivity mediocre, so it is mainly used to enhance local rendering; actual interaction still relies on controllers. Meanwhile, Oculus mounts 4 to 12 cameras on the headset to track the user's surroundings, enabling a degree of gesture interaction (for example, picking up a virtual phone with the left hand and tapping a mid-air confirmation button with the right index finger).
In weight, a device should feel comfortable at 400-700 g (still heavy next to ordinary glasses at around 20 g). But achieving the required clarity, refresh rate, interaction, matching compute (chip performance, size, and count), and several hours of basic battery life makes the weight of an XR device a difficult trade-off.
Overall, for XR to become the next generation of popular hardware after the smartphone, a device needs 8K or higher resolution and a refresh rate above 120 Hz to avoid motion sickness, a dozen or more cameras, 4 hours or more of battery life (so it only comes off at meal breaks), minimal heat, a weight under 500 g, and a price as low as $500-$1,000. Despite the technological advances since the 2015-2019 XR craze, these standards remain a challenge.
Even so, users who try existing MR (VR + AR) devices find that the current experience, while imperfect, offers an immersion no 2D screen can match. But there is still considerable room for improvement. Taking the Oculus Quest 2 as an example: most available VR video is only 1440p, short of even the Quest 2's 4K ceiling, with refresh rates far below 90 Hz; and existing VR games are crudely modeled, with few titles worth trying.
Source: VRChat
The "unreleased" Killer App has its historical reasons for being trapped by hardware - even if Meta tries to compress profit margins, the hundreds of dollars of MR headsets and relatively simple ecology are still not attractive compared to existing game consoles with rich ecology and a large user base. The number of VR devices is between 25-30 million, while the terminal devices (PS5, Xbox, Switch, PC) for 3A games have a total of 350 million. Therefore, most manufacturers have given up supporting VR, and the few games that support VR devices are "incidentally layout VR platforms" rather than "only supporting VR devices". In addition, due to the issues mentioned in the first point, such as pixelation, dizziness, poor battery life, and excessive weight, the VR experience is not better than traditional 3A game terminals. The "immersive" advantage emphasized by VR supporters is difficult to achieve ideal experience because of the insufficient number of devices, and developers who "incidentally layout VR devices" rarely design experiences and interaction modes specifically for VR.
So the current situation is that when players pick a VR game over a non-VR one, they are not just "choosing a new game" but "giving up socializing with most of their friends", and such games tend to prioritize gameplay and immersion over socializing. You may bring up VRChat, but dig deeper and you will find that 90% of its users are not on VR headsets at all; they are players socializing through avatars on ordinary screens. It is no surprise, then, that the most popular VR titles are rhythm games like "Beat Saber".
Therefore, we believe that the emergence of a Killer App requires the following elements:
· Significant, all-round improvement in hardware. As discussed under "hardware not ready", this is not a simple matter of "better screen, better chip, better speakers..." but of coordinating chips, components, interaction design, and the operating system, which is exactly what Apple is good at: compared with the iPod and iPhone of more than a decade ago, Apple now has decades of accumulated experience coordinating operating systems across multiple devices.
· The eve of an explosion in device ownership. As the analysis of developer and user attitudes above suggests, the chicken-and-egg problem makes it hard for a killer app to emerge while XR MAU sits at a few million. At its peak, "The Legend of Zelda: Breath of the Wild" sold more cartridges in the United States than there were Switch consoles, an excellent case study in how new hardware reaches mass adoption. People who buy a headset just to "experience XR" gradually grow disappointed with the limited content and joke about their device gathering dust; players drawn in by Zelda largely stay, because they discover other games within the Switch ecosystem.
Source: The Verge
· And unified interaction habits, plus device compatibility that stays stable across updates. The former is easy to understand: with or without controllers makes for two different human-machine interaction habits and experiences, which is precisely the difference between Apple Vision Pro and other VR devices on the market. The latter shows up in Oculus's hardware iteration, where a large performance jump within the same generation can actually limit the user experience. The Meta Quest Pro (2022) far outperforms the Oculus Quest 2 (2020, later renamed Meta Quest 2): resolution up from 4K to 5.25K, color contrast up 75%, refresh rate up from 90 Hz to 120 Hz. On top of the Quest 2's 4 cameras for sensing the outside environment, it adds 8 more, turning black-and-white passthrough into color, significantly improving hand tracking, and adding face and eye tracking. Quest Pro also uses "gaze rendering" (foveated rendering), concentrating compute on where the eyes are looking and reducing fidelity elsewhere to save computing power and energy. Yet for all that extra capability, Quest Pro's user base may be under 5% of Quest 2's, so developers build for both devices at once, which squanders Quest Pro's advantages and in turn weakens its appeal to users. History rhymes: the same story has played out repeatedly in consoles, which is why console makers refresh hardware and software only every 6-8 years. Buyers of the first Switch need not worry that hardware like the Switch OLED will break game compatibility, but Wii owners cannot play Switch-ecosystem games. For console developers, the target is not a product with a phone-scale user base (350 million versus billions) and all-day dependence (home leisure versus carried everywhere); they need stable hardware across several development cycles to avoid splintering their audience, or, like today's VR developers, they must stay backward-compatible to preserve a sufficient user base.
So, can Vision Pro solve the above problems? And what kind of changes will it bring to the industry?
At the keynote in the early hours of June 6th, Apple Vision Pro was released. Against the framework above for the challenges MR faces in hardware and software, Vision Pro maps out as follows:
· Visuals: Vision Pro uses two 4K displays, for a combined resolution of roughly 6K, putting it among the top MR devices currently available. The refresh rate goes up to 96 Hz, and HDR video playback is supported. According to tech bloggers' hands-on reports, clarity is very high and there is almost no dizziness.
· Hearing: Apple has offered spatial audio on AirPods since 2020, letting users hear sound from different directions for a 3D audio experience. Vision Pro is expected to go further: using audio beamforming and the device's built-in LiDAR scanning, it analyzes the acoustic characteristics of the room (physical materials and so on) to create a spatial audio effect matched to the room, with direction and depth.
· Interaction: with no controllers at all, gesture and eye tracking make interaction extremely smooth (per hands-on reports from tech media, latency is almost imperceptible, thanks not only to sensor precision and computation speed but also to prediction of eye movement, introduced further below).
· Battery life: about 2 hours, similar to the Meta Quest Pro (unimpressive, and already a common criticism). But since Vision Pro runs off an external battery pack (early reports cited roughly 5,000 mAh), there is presumably room to swap packs and extend usage time.
· Weight: per tech media hands-on reports, about one pound (454 g), comparable to the Pico and Oculus Quest 2 and likely lighter than the Meta Quest Pro, decent for an MR device (excluding the battery pack worn at the waist). Against pure AR glasses at around 80 g (Nreal, Rokid, etc.) it is still heavy and stuffy; then again, most pure AR glasses must tether to another device and serve only as an extended screen, while an MR headset with its own chip and real immersion may be a different experience entirely.
· In addition, on raw hardware, Vision Pro carries not only the current top-of-the-line M2-series chip for the system and applications, but also a new R1 chip developed specifically for MR, handling the displays, environment monitoring, and eye and gesture tracking.
On software, Apple can not only migrate a certain share of its millions of developers; it has in fact been laying out this ecosystem since the release of ARKit:
As early as 2017, Apple released ARKit, an augmented reality development framework for iOS devices that lets developers build AR applications on top of the hardware and software capabilities of the iPhone and iPad. ARKit maps the surroundings using the device camera and uses CoreMotion data to detect desks, floors, and other objects in physical space, so digital assets can interact with the real world through the camera: think of Pokemon in Pokemon Go half-buried in the ground or perched in trees, rather than simply pasted on the screen and moving with the camera. Users don't need to calibrate anything; it is a seamless AR experience.
Source: See here
· In 2017, ARKit was released; it could automatically detect position and topology and capture the user's facial expressions for modeling and expression tracking.
· In 2018, ARKit 2 brought an improved CoreMotion experience, multiplayer AR games, 2D image tracking, and detection of known 3D objects such as sculptures, toys, and furniture.
· In 2019, ARKit 3 added further capabilities: People Occlusion displays AR content in front of or behind people and tracks up to three faces; collaborative sessions enable new shared AR gaming experiences; motion capture understands body position and movement, tracking joints and bones, enabling AR experiences that involve people rather than just objects.
· In 2020, ARKit 4 used the LiDAR sensor in that year's iPhone and iPad to improve tracking and object detection, and added Location Anchors, which place AR experiences at specific geographic coordinates using Apple Maps data.
· In 2021, ARKit 5 let developers build custom shaders, generate meshes programmatically, capture objects, and control characters. Developers can scan an object with the LiDAR and cameras of an iOS 15 device and immediately convert it to a USDZ file for import into Xcode as a 3D model in an ARKit scene or application, greatly improving the efficiency of 3D model production.
· In 2022, ARKit 6 shipped with the "MotionCapture" feature: it tracks people in video frames and gives developers a predicted "skeleton" of head and limb positions, so applications can overlay AR content onto a person or hide it behind them, integrating more seamlessly with the scene.
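For a sense of how little code ARKit asks of developers, here is a minimal sketch of starting a world-tracking session with plane detection and, on LiDAR devices, scene meshing; the app scaffolding around it is assumed:

```swift
import ARKit
import RealityKit

// Minimal ARKit session setup: world tracking plus plane detection,
// the capability line traced above from ARKit 1 through ARKit 4's LiDAR.
func startARSession(on arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = [.horizontal, .vertical] // floors, desks, walls

    // LiDAR-equipped devices can also reconstruct a mesh of the room.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }
    arView.session.run(config)
}
```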
Looking back at the ARKit roadmap that began in 2017, Apple's accumulation in AR was not achieved overnight; it quietly folded AR experiences into devices that were already everywhere. By the time Vision Pro launched, Apple had already accumulated a base of content and developers. And because ARKit development is cross-compatible, the resulting products serve not only Vision Pro users but, to a degree, iPhone and iPad users too: developers need not be capped by a ceiling of 3 million monthly active users, but can target hundreds of millions of iPhone and iPad users for testing and feedback.
In addition, Vision Pro's 3D video capture partially addresses MR's biggest shortage today: content production. Existing VR video is mostly 1440p, which looks pixelated on a headset's wraparound screen; Vision Pro's capture combines high-resolution spatial video with decent spatial audio, which may significantly improve the MR content consumption experience.
Despite the impressive configuration above, Apple's ambitions for MR do not stop there. On launch day, @sterlingcrispin, a developer who says he worked on Apple's neurotechnology research, wrote:
Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences.
So, a user is in a mixed reality or virtual reality experience, and AI models are trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state. And these may be inferred through measurements like eye tracking, electrical activity in the brain, heart beats and rhythms, muscle activity, blood density in the brain, blood pressure, skin conductance etc.
There were a lot of tricks involved to make specific predictions possible, which the handful of patents I'm named on go into detail about. One of the coolest results involved predicting a user was going to click on something before they actually did. That was a ton of work and something I'm proud of. Your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user's brain by monitoring their eye behavior, and redesigning the UI in real time to create more of this anticipatory pupil response. It's a crude brain computer interface via the eyes, but very cool. And I'd take that over invasive brain surgery any day.
Other tricks to infer cognitive state involved quickly flashing visuals or sounds to a user in ways they may not perceive, and then measuring their reaction to it.
Another patent goes into details about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you're seeing and hearing in the background.
These neuroscience-adjacent technologies may mark a new way for machines to synchronize with human intent.
Of course, Vision Pro is not without flaws, above all its sky-high $3,499 price, more than twice the Meta Quest Pro and more than seven times the Oculus Quest 2. On this, Runway CEO Siqi Chen said:
it might be useful to remember that in inflation adjusted dollars, the apple vision pro is priced at less than half the original 1984 macintosh at launch (over $7K in today's dollars)
By that analogy, Vision Pro's pricing doesn't seem so outrageous... But the first-generation Macintosh sold only 372,000 units, and it is hard to imagine Apple, after pouring so much into MR, accepting a similarly awkward result. Reality may not change much over the next few years: AR may not ultimately require glasses, and in the short term Vision Pro will be hard to popularize at scale, serving mainly as a test device for developers, a production tool for creators, and an expensive toy for gadget enthusiasts.
Source: Google Trends
Still, Apple's MR device has already begun to stir the market, redirecting ordinary users' attention from other gadgets toward MR and showing the public that MR is now a fairly mature product rather than a slideware or demo-video product. It tells users there is an option beyond tablets, TVs, and phones: a head-mounted immersive display. It tells developers that MR may genuinely become the next generation of hardware. And it tells VCs that this may be a high-ceiling field to invest in.
RNDR Introduction
Over the past six months, RNDR has become a meme spanning the metaverse, AI, and MR narratives, and has led the market on several occasions.
The project behind RNDR is Render Network, a protocol for distributed rendering over a decentralized network. OTOY Inc., the company behind it, was founded in 2009 and has long optimized its renderer, OctaneRender, for GPU rendering. For ordinary creators, local rendering is resource-intensive, creating demand for cloud rendering; but renting servers from vendors such as AWS and Azure is costly. Render Network connects creators with ordinary users who have spare GPUs, enabling cheap, fast, efficient rendering without hardware limits, while node operators earn some extra money from their idle GPUs.
For Render Network, there are two types of participants:
· Creator: a user who submits rendering jobs to the network and pays for them up front, with the fee locked in a smart contract until the work is accepted.
· Node Provider (idle GPU owner): owners of idle GPUs can apply to become node providers, and their track record on previous tasks determines whether they receive priority matching. After a node finishes rendering, the creator inspects and downloads the rendered file; once it is downloaded, the fee locked in the smart contract is paid to the node provider's wallet.
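The flow between the two sides can be pictured as a simple escrow state machine. This is an illustrative sketch of our own, not Render Network's actual contract code, and all names are hypothetical:

```swift
// Illustrative escrow flow for a rendering job: the fee is locked when
// the job is submitted and only released once the creator downloads
// the inspected output.
enum JobState { case submitted, rendering, delivered, paid }

struct RenderJob {
    let feeLocked: Double      // $RNDR locked in the smart contract
    var state: JobState = .submitted

    // Reputation from previous tasks gates which nodes get matched.
    mutating func assign(toNodeWithReputation rep: Double, minimum: Double = 0.5) -> Bool {
        guard rep >= minimum else { return false }
        state = .rendering
        return true
    }

    mutating func deliver() { state = .delivered }

    // The creator's download doubles as acceptance and triggers payment.
    mutating func download() -> Double? {
        guard state == .delivered else { return nil }
        state = .paid
        return feeLocked // released to the node provider's wallet
    }
}
```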
RNDR's tokenomics were also overhauled in February this year, one of the drivers of its sharp price rise (although as of this article's publication, Render Network has neither deployed the new tokenomics on the network nor given a launch date):
Previously, $RNDR and credits had identical purchasing power in the network, with 1 credit pegged to 1 euro. When $RNDR traded below 1 euro, buying $RNDR was more cost-effective than buying credits with fiat; but once $RNDR rose above 1 euro, it risked losing its use case, since people would rather pay in fiat. (Protocol revenue might buy back $RNDR, but no other market participant had an incentive to buy it.)
The updated economic model adopts the "BME" (Burn-Mint-Emission) mechanism pioneered by Helium. When creators buy rendering services, whether in fiat or in $RNDR, 95% of the fiat value is burned in $RNDR, and the remaining 5% flows to the foundation as engine revenue. Nodes no longer directly receive creators' payments; instead they receive newly minted token rewards based not only on task-completion metrics but also on other factors such as customer satisfaction.
It is worth noting that each new epoch (a specific time period whose duration has not been specified) mints new $RNDR, and the amount minted is strictly capped and decreases over time, regardless of how many tokens are burned (see the emission schedule in the official whitepaper). This changes the distribution of benefits for the following stakeholders:
· Creator / Network service user: Every epoch, a portion of the RNDR consumed by the creator will be returned, with the proportion gradually decreasing over time.
· Node Operator: The node operator will receive rewards based on factors such as completed workload and real-time online activity.
· Liquidity Providers: liquidity providers on DEXs will also receive rewards, to ensure enough $RNDR is available for burning.
Source: See here
Compared with the previous model of (irregular) buybacks funded by revenue, under the new model miners earn more than before when rendering demand is insufficient, and earn less than under the old model when the total price of demanded rendering tasks exceeds the total $RNDR rewards emitted (burned tokens > newly minted tokens), at which point $RNDR also turns deflationary.
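Here is a minimal sketch of the BME accounting described above; the parameter values are hypothetical, and the real emission schedule is fixed by the whitepaper:

```swift
// Burn-Mint-Emission per epoch: emission follows a fixed, decaying schedule
// regardless of burn, so supply deflates only when burn exceeds emission.
struct Epoch {
    let emission: Double          // new $RNDR minted this epoch (fixed schedule)
    let jobFiatValue: Double      // fiat value of rendering services purchased
    let priceEUR: Double          // $RNDR market price in euros

    var burned: Double { jobFiatValue * 0.95 / priceEUR }  // 95% burned in $RNDR
    var foundationCut: Double { jobFiatValue * 0.05 }      // 5% engine revenue
    var netSupplyChange: Double { emission - burned }      // negative = deflationary
}

let quiet = Epoch(emission: 1_000_000, jobFiatValue: 200_000, priceEUR: 1.8)
let busy  = Epoch(emission: 1_000_000, jobFiatValue: 4_000_000, priceEUR: 1.8)
print(quiet.netSupplyChange) // positive: emission exceeds burn, miners earn more
print(busy.netSupplyChange)  // negative: burn exceeds emission, supply deflates
```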
Although $RNDR has risen sharply over the past six months, Render Network's business has not grown anywhere near as much as the token price. Node count has barely moved in two years, and the monthly $RNDR allocated to nodes has not increased significantly. The number of rendering tasks has risen, though, suggesting creators are shifting from submitting a few large jobs to submitting many smaller ones.
Source: See here
Though nowhere near the token's five-fold rise in a year, Render Network's GMV (Gross Merchandise Value) has indeed grown significantly, up 70% in 2022 year over year. Based on the total $RNDR allocated to nodes on the Dune dashboard, GMV for the first half of 2023 was about $1.19M, roughly flat versus the same period in 2022. That GMV is clearly not enough to support a $700 million market capitalization.
Source: See here
The potential impact of Vision Pro on RNDR
In a Medium post on June 10th, Render Network claimed that Octane's rendering performance on M1 and M2 is unparalleled; since Vision Pro also uses the M2 chip, rendering on Vision Pro is no different from regular desktop rendering.
But the question remains: why dispatch rendering tasks to a device with two hours of battery life that is used mainly for entertainment rather than productivity? If Vision Pro's price falls, battery life improves, and weight drops, achieving true mass adoption, then perhaps Octane's moment will come...
What can be confirmed is that migrating digital assets from flat devices to MR devices will indeed increase demand on infrastructure. When Unity announced a partnership with Apple to make its game engine work better with Vision Pro, its stock rose 17% that day, a sign of market optimism. With Disney also collaborating with Apple, the 3D transformation of traditional film and TV content may see similar demand growth. Render Network, which specializes in film and TV rendering, launched AI-assisted 3D rendering with NeRFs in February this year, combining AI computation and 3D rendering to create real-time immersive 3D assets viewable on MR devices. With ARKit's support, anyone with a high-end iPhone can photoscan an object into a 3D asset, and NeRF technology uses AI-enhanced rendering to turn that rough photoscan into an immersive 3D asset that refracts light differently from different angles. This kind of spatial rendering will be an important tool for MR content production, and a source of potential demand for Render Network.
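As a taste of the photoscan step, here is a minimal sketch using Apple's Object Capture API in RealityKit (a macOS API; the paths are hypothetical, and the NeRF enhancement itself is Render Network's own pipeline, not shown here):

```swift
import RealityKit

// Sketch: turning a folder of photos of an object into a USDZ 3D model
// with RealityKit's Object Capture (photogrammetry) API.
let inputFolder = URL(fileURLWithPath: "/tmp/scans/chair", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/tmp/scans/chair.usdz")

let session = try PhotogrammetrySession(input: inputFolder)
try session.process(requests: [.modelFile(url: outputModel, detail: .reduced)])
// Progress and the finished model arrive asynchronously on session.outputs.
```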
But will that demand accrue to RNDR? Its 2022 GMV of roughly $2 million is a drop in the bucket next to film and TV production budgets. So while RNDR may keep thriving on the "metaverse, XR, AI" meme while the narrative is hot, generating revenue that matches its valuation remains a serious challenge.
Although I believe the fundamental changes are limited, MR-related topics do seem to revolve around several large metaverse projects: Yuga Labs' Otherside, Animoca's The Sandbox, the oldest blockchain metaverse Decentraland, and Highstreet, which aims to build a Shopify-like VR world. (For a detailed analysis of the metaverse sector, see section 4, Business Analysis - Industry Analysis and Potential, at https://research.mintventures.fund/2022/10/14/zh-apecoin-values-revisited-with-regulations-overhang-and-staking-rollout/)
But as analyzed above in "the killer app has not yet appeared", most existing VR-supporting developers are not VR-only (and even topping a sub-market of a few million MAU is not a crushing competitive advantage), and existing products have not carefully adapted to the habits and interactions of MR users. Projects that have not yet launched are effectively on the same starting line as every big company and startup that sees Vision Pro's potential: once Unity integrates better with Vision Pro, the learning cost of MR game development should fall, and experience accumulated in yesterday's narrow market will be hard to reuse in a product heading toward mass adoption.
Of course, in terms of first-mover advantage, projects that have already laid out VR plans may hold a slight edge in development progress, technology, and talent.
If you haven't watched the video below, it may be your most intuitive taste of the MR world: convenient, immersive, but also chaotic and disordered. The fusion of virtual and real is so seamless that people spoiled by it treat "losing their identity in the device" as an apocalypse-level event. The details in the video still feel a bit sci-fi and hard to parse today, but this is likely what we will face in the coming years.
The video can be found at this link.
This reminds me of another video. In 2011, twelve years ago, Microsoft released Windows Phone 7 (as a Gen Z-er with little memory of that era, it is hard to recall that Microsoft once poured effort into phones) and ran a satirical smartphone ad called "Really?": people clutching their phones at all times, staring at them while biking, while sunbathing on the beach, holding them in the shower, tumbling down the stairs at a party because they were looking at their phones, even dropping them into the urinal while distracted... Microsoft's message was that its new phone would save us from phone addiction. The attempt failed, of course, and the ad might as well have been retitled "Reality": the "presence" and intuitive interaction of smartphones proved more addictive than a user-hostile "mobile Windows PC", just as the fusion of reality and virtuality is more addictive than pure reality.
The video can be found at this link.
How to grasp such a future? We have several directions that we are exploring:
· Immersive experience and narrative creation: first, video. With Vision Pro, shooting video with 3D depth has never been easier, and this will change how people consume digital content, from appreciating it at a distance to experiencing it immersively. Beyond video, "3D spaces with content experiences" may be another track worth watching. This does not mean stamping out thousands of scenes from a template library, or carving a few nominally explorable spaces out of a game, but spaces that are interactive, content-native, and 3D-friendly: a handsome piano coach who sits on the bench beside you, highlights the right keys, and gently encourages you when you are frustrated; a little elf hiding in a corner of your room holding the key to the next level; a virtual girlfriend who understands you and keeps you company on a walk... The creator economy that grows here can use blockchain rails for trusted, automatically settled, digitized, low-friction transactions. Creators can engage fans without registering a company and wiring up Stripe for payments, without a platform taking 10% (Substack) to 70% (Roblox) of revenue, and without fearing the platform folds and takes their work with it: a wallet, a composable content platform, and decentralized storage suffice. Similar upgrades will come to gaming and social spaces; indeed, the boundaries between games, film, and social spaces will blur. When the experience is no longer a big screen floating a few meters away but something at arm's length, with depth, distance, and spatial audio, players are no longer viewers but participants in the scene, whose actions can even affect the virtual environment (raise a hand in the jungle and a butterfly lands on your fingertip).
· 3D digital asset infrastructure and community: Vision Pro's 3D capture will greatly lower the difficulty of creating 3D video, opening a new market for content production and consumption. The corresponding upstream and downstream infrastructure, such as asset marketplaces and editing tools, may stay dominated by incumbents, or may be cracked open by startups, as is happening in AIGC.
· Hardware/software upgrades that deepen immersion: whether Apple's research into observing the human body more closely to build adaptive environments, or adding touch, taste, and other senses to the experience, these remain promising areas with significant potential.
Of course, entrepreneurs in this field likely have deeper understanding, more considered thinking, and more creative exploration than we do. Welcome to DM @0xscarlettw to discuss the possibilities of the spatial computing era.
Thanks to Mint Ventures partner @fanyayun and research partner @xuxiaopengmint for their suggestions, review, and proofreading during the writing of this article. The XR analysis framework draws on @ballmatthew's article series, Apple's WWDC and developer sessions, and the author's hands-on experience with various XR devices on the market.
This article is from a submission and does not represent the views of BlockBeats.