Original Title: The State of Consumer Tech in the Age of AI
Original Source: a16z
Over the past decade, every breakout consumer product has been accompanied by a reframing of the social paradigm: from Facebook's friend updates to TikTok's algorithmic recommendations, we have gradually learned to define ourselves and express our identities through products.
Back then, products were about human expression, with the product playing a supporting role; whereas today, AI is quietly undergoing a role reversal—it is no longer a tool of humans, but is starting to become the subject of expression, the intermediary of connection, and even the bearer of emotions. From ChatGPT to Veo3, from 11 Labs to Character.AI, we are witnessing a profound transformation that is mistakenly thought of as "efficiency improvement," but is actually "outsourcing of human roles."
In this discussion hosted by Erik Torenberg, Justine Moore, Bryan Kim, Anish Acharya, and Olivia Moore collectively put forward an unprecedented judgment: Today's AI products are no longer just "tool-like tools," but are becoming "tools like humans," and are even on the path to becoming products that "replace humans themselves."
Users are now willing to pay a $200 monthly subscription for AI, not because it is more powerful, but because it can "do for you," or even "be for you." Veo3 can generate customized 8-second videos, ChatGPT can write business plans, provide psychological counseling, and stand in for emotional expression, while 11 Labs can create a unique vocal personality for you. All of this no longer requires your direct involvement, and sometimes doesn't even need you to be the "you" that you are.
The rise of consumer AI is an extremely dangerous signal: expression is being formatted, social interaction is being simulated, and identity is being restructured.
Today, we are still using Reddit, Instagram, and Snapchat to share the AI-generated "me," but these platforms are just old wine in new bottles. A true AI-native social network has yet to emerge, because while AI can generate "status," it cannot create "emotional tension"; it can provide the illusion of companionship, but it cannot replace the uncontrollable struggles and vulnerabilities of real connection.
All of this leads to three staggering judgments:
First, the essence of AI products is not to uplift the user, but to redefine "who the user is";
Second, the rise of AI companions is not the beginning of socialization, but the end of socialization;
Third, the proliferation of AI avatars is not the extension of expression, but the erosion of personality boundaries.
In the foreseeable future, the most successful AI products will not just be tool-like products, but personality-like products. They will be able to understand you, mimic you, represent you, guide you, and ultimately—replace you. This is not a victory of efficiency; this is a qualitative change in existence.
Erik Torenberg: Thank you all for participating in this podcast on the consumer space. It seems that every few years, there are breakthrough products, from Facebook, Twitter, Instagram, Snap, WhatsApp, Tinder to TikTok. Every few years, a new paradigm, a new breakthrough seems to emerge. But it feels like a few years ago, this trend suddenly stalled. Why did it stall? Or did it really stall? How would you redefine this problem? How do you perceive the current situation? Where is the future headed?
Justine Moore: I think ChatGPT may be the most significant consumer success story of the past few years. We have also seen breakout products in other AI modalities, such as Midjourney, 11 Labs, and Black Forest Labs in the image, video, and audio domains. While products like Veo are emerging now, interestingly, many of these products lack the social attributes or traditional consumer product features you mentioned. This may be because AI is still at a relatively early stage, and most new products and innovations are currently driven by research teams; they are very good at model training, but historically not so good at building consumer-facing product layers around the models. Optimistically, now that the models are mature enough, developers can build more traditional consumer products on top of them through open source or APIs.
Bryan Kim: This question is interesting because it makes me look back over the past 15 to 20 years. The giants you mentioned, Google, Facebook, Uber, and others, came out of the combination of the internet, mobile, cloud computing, and so on, and many amazing companies did emerge from that. Mobile and cloud have now matured; those platforms have been around for 10 to 15 years, and most niches have been explored to some extent. In the past, people had to adapt to the new features Apple introduced; now they have to adapt to the continuous iteration of the underlying models. That is the first point of difference.
The second difference, as you mentioned, is that historical winners were concentrated in the information field (such as Google), and ChatGPT is clearly continuing in that direction. In the realm of utility tools, we missed products like Box and Dropbox in the past, but now we see more consumer applications emerging, with many companies vying for those use cases. The same is true for creative expression, where new creative tools keep appearing. What I believe is still missing is social connection: AI has not yet reconstructed the social graph, and that may be the blank space worth watching.
Erik Torenberg: This is very interesting, because Facebook has been around for nearly 20 years. Besides OpenAI, can the companies Justine mentioned earlier sustain themselves for 10 to 20 years? What kind of defensibility do the companies we're discussing actually have? And in 10 years, will emerging players have taken over all the scenarios these companies currently serve, or will the incumbents continue to dominate the mainstream?
Anish Acharya: ChatGPT's business model is of far higher quality than those of comparable consumer companies in past product cycles. Its highest pricing tier is $200 per month, and Google's top consumer tier is $250 per month. Of course there are questions about defensibility and network effects, but perhaps those were precisely the answer to the flaws of earlier business models: if the business model itself is weak, you need those elements all the more. Charging users high fees directly, as these companies now do, may indicate that we overcomplicated this question in the past.
Erik Torenberg: So perhaps a weaker business model actually fostered stronger retention or longer product-market durability?
Anish Acharya: Indeed. In the past, you had to concoct a story about how you would accumulate enterprise value without monetizing right away; now these model companies monetize directly. Justine's earlier point is also worth noting: all the foundation models are evolving in different directions. Are horizontal models like Claude and ChatGPT substitutable with Gemini? Does that imply price competition? In practice, different users have different use cases, and what we actually observe is prices going up rather than down. So the closer you look, the more you find that some interesting forms of defensibility are already in place.
Bryan Kim: The fact that prices are rising rather than falling is interesting, because from the traditional era to the AI era the monetization model of consumer companies has fundamentally changed; they can now monetize immediately. I have been thinking about retention metrics, and Olivia can correct me here: when we discussed consumer subscription models before the AI era, did we ever truly distinguish user retention from revenue retention? Back then pricing structures were stable and users rarely upgraded their plans. Now we must clearly distinguish the two, because users actively upgrade: they buy credits, routinely exceed usage limits, and end up spending more and more. As a result, revenue retention is significantly higher than user retention, which is something we have never seen before.
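To make that distinction concrete, here is a toy calculation; the cohort sizes and prices below are invented for illustration and are not figures from the conversation. It shows how the same cohort can post modest user retention while revenue retention exceeds 100% once plan upgrades are counted.

```python
# Toy cohort: illustrative numbers only.
cohort_month_0 = {"users": 1000, "revenue": 1000 * 20}            # everyone starts on a $20/month plan
cohort_month_6 = {"users": 600, "revenue": 400 * 20 + 200 * 80}   # 600 remain; 200 of them upgraded to $80

user_retention = cohort_month_6["users"] / cohort_month_0["users"]
revenue_retention = cohort_month_6["revenue"] / cohort_month_0["revenue"]

print(f"user retention:    {user_retention:.0%}")     # 60%
print(f"revenue retention: {revenue_retention:.0%}")  # 120%
```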
Olivia Moore: The highest-tier consumer subscription products in the past had an average annual fee of around $50, which was considered high. Now, users are willing to pay $200 per month and in some cases even consider the pricing to be low, showing a willingness to pay more.
Erik Torenberg: How do you explain this phenomenon? What value are users actually receiving that they are willing to pay such a high fee?
Olivia Moore: I believe these products are essentially doing the work for users. Previous consumer subscription products were focused on areas like personal finance, fitness, health, and entertainment; they might help with self-improvement or entertainment, but they required users to invest significant time to get value out of them. Today, a product like Deep Research can replace 10 hours of work compiling a market report. For many people that efficiency gain is evidently worth $200 a month, even if they use it only once or twice.
Justine Moore: Take Veo3, for example. Users happily pay $250 per month because it's like a magic Swiss Army knife: you open it and get the video you wanted, only 8 seconds long, but with astounding results. Characters can speak, so users can create stunning content to share with friends, such as personalized videos that include a friend's name, or even whole stories to post on Twitter and other platforms. This ability to create personalized content and distribute it across platforms far exceeds the empowerment any previous product offered consumers.
Anish Acharya: It seems that software is poised to replace most consumer domains.
Erik Torenberg: Can you provide specific examples?
Anish Acharya: As Olivia mentioned, the entertainment sector has been reshaped by creative-expression software; what used to require offline creativity is now done entirely in software. Categories like matchmaking and relationship intermediation, which used to absorb disposable income, are also being taken over by software. Every aspect of life will be mediated by models, and people will be willing to pay for it.
Erik Torenberg: Bryan, you mentioned the lack of social connection in the new AI era; people still rely on traditional social networks like Instagram and Twitter. Where will the breakthrough come from?
Bryan Kim: On social, the track that excites me most: when you think about it carefully, its core is status updates. Facebook, Twitter, Snap, none of them escapes this; they all showcase "what I am doing." Through status updates, people build connections. The medium of those updates keeps evolving: from text statuses to real photos, then to short video. Right now people form connections through short-video formats like Reels. The question is: how will AI transform that connection? How can AI enable deeper interpersonal connection and deeper insight into each other's lives? If we stick to existing media forms like photos, videos, and audio, their possibilities have already been thoroughly explored on mobile.
Interestingly, even though I have used Google for over a decade, ChatGPT might understand me better than Google—because I input more content, providing more context. When this "digital self" can be shared, what kind of new interpersonal relationships will emerge? Perhaps this will become the next generation of social interaction, particularly appealing to the younger generation tired of superficial social interactions.
Justine Moore: We have already seen similar cases, for instance viral prompts like "Have ChatGPT summarize five pros and cons based on my data," or "Generate a portrait that represents my essence," or even "Illustrate my life as a cartoon." Users share this content across the web; within minutes of my posting one, dozens of people had shared their own versions. Interestingly, the social behavior driven by AI creation tools currently happens mostly on traditional social platforms rather than on emerging AI platforms. Facebook, for example, is now filled with a huge amount of AI-generated content.
Bryan Kim: Some user groups might not have realized this yet.
Justine Moore: Facebook has become the AI content hub for middle-aged users, while Reddit and Reels are carrying the AI-created content of the younger generation.
Olivia Moore: I wholeheartedly agree. What form the first AI-native social network will take has always puzzled me. We've seen attempts like AI-generated personal photos, but the issue is that social networks require genuine emotional investment: if all content can be generated to preference (a perfect image, a happy state, a cool background), the emotional tension of authentic interaction is lost. So I believe a truly AI-native social network has not yet emerged.
Bryan Kim: "Skeuomorphic" is the right word. Many AI social products simply use bots to mimic the Instagram or Twitter feed, and that kind of skeuomorphic innovation is essentially replicating old forms with AI. A true breakthrough may require breaking out of the mobile paradigm. Excellent AI products still need to work well on mobile, but frontier models also need breakthroughs in edge computing and on-device deployment, which may give rise to new forms. I am excited about what the future makes possible.
Erik Torenberg: Interpersonal recommendations are obviously a key application scenario—seeking business partners, making friends, dating, etc. Existing platforms have accumulated a large amount of user data.
Anish Acharya: Watching AI-native attempts at something like LinkedIn is inspiring. Traditional LinkedIn only gives you directional information, along the lines of "I know about this," while the new technology can turn a profile into a true reservoir of knowledge, for example by letting you converse with a "digital version of Erik" to access everything he knows. The future of social media may look like this: once the model deeply understands a user, it can deploy a "digital avatar" to interact on their behalf.
Erik Torenberg: You mentioned that enterprises are adopting some AI products before they reach mainstream consumers, which is different from previous technology cycles. What does that indicate?
Justine Moore: This is indeed very interesting. Take 11 Labs, where BK and I invested early: shortly after the first round of funding, about a month into the Series A, we saw early consumer users flock in, making fun videos and audio, cloning their own voices, and building game mods. But in most cases the product had not yet reached mainstream consumers; after all, not everyone in the U.S. had the 11 Labs app on their phone or subscribed to the service. Yet the company had already secured a large number of enterprise contracts and had many heavyweight clients in conversational AI, entertainment, and other areas.
This pattern shows up across many AI products: first there is viral spread on the consumer side, which then converts into an enterprise sales motion, which is quite different from the previous generation of products. Enterprise buyers now face a mandate to adopt AI (they need an AI strategy, they need to use AI tools), and they closely monitor Twitter, Reddit, and AI news. When they discover a consumer product, they think about how to apply the same innovation to business scenarios, so consumer virality ends up driving enterprise AI strategy.
Bryan Kim: I've heard of a similar case: a company went viral on the consumer side, then fed anonymized Stripe payment records into an AI tool to identify which companies its users worked for. When it found a company above a threshold, say 40 or more users, it reached out proactively: "More than 40 of your employees are already using our product; would you consider an enterprise partnership?"
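The mechanics of that motion are simple enough to sketch. The snippet below is a simplified, hypothetical version, not the company's actual pipeline: instead of running an AI tool over Stripe records, it just groups self-serve subscribers by work-email domain and flags domains that cross a seat threshold. The function name `enterprise_leads`, the free-domain list, and the threshold of 40 are all illustrative assumptions.

```python
from collections import Counter

FREE_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "icloud.com"}
SEAT_THRESHOLD = 40  # the "40+ employees" bar mentioned in the anecdote

def enterprise_leads(subscriber_emails: list[str]) -> dict[str, int]:
    """Return {company_domain: seat_count} for work domains above the threshold."""
    domains = Counter(
        email.split("@")[-1].lower()
        for email in subscriber_emails
        if "@" in email
    )
    return {
        domain: count
        for domain, count in domains.items()
        if domain not in FREE_DOMAINS and count >= SEAT_THRESHOLD
    }

# Example usage with a hypothetical loader:
# leads = enterprise_leads(load_anonymized_subscriber_emails())
```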
Erik Torenberg: You started by listing many company and product examples. I'm curious, are these part of the 'MySpace era' early explorers? Or do they have lasting value? Will we still be discussing these companies 20 years from now?
Justine Moore: Of course we hope all of today's major consumer AI companies keep thriving, but reality may not be that rosy. The key difference between the AI era and past consumer product cycles is that the model layer and the underlying capabilities are still evolving rapidly; in many cases we have not even touched their potential. For example, after the release of Veo3 we could suddenly do multi-character dialogue, native audio, and other multimodal things, and even though text LLMs are relatively mature, there is still room for improvement everywhere. What we observe is that as long as companies stay at the technology and quality frontier, with the most advanced models or the best integrations, they won't follow the MySpace/Friendster path. If they temporarily fall behind on iteration, catching up with the next update can put them back on top.
What's more interesting now is the emergence of niche markets: no longer is there a single best model in the image field. Designers, photographers, different paying tiers ($10/month vs. $50-100/month) all have their own optimal solutions. Since users in each vertical are highly engaged, as long as there is continued innovation, multiple winners can coexist in the long term.
Bryan Kim: I completely agree. The same is true in the video field—advertising videos, embedded advertising videos, etc., are all segmented. Yesterday, I read an article pointing out that different models excel in different scenarios like product demonstrations, portrait shots, and so on. Each niche market has immense potential.
Erik Torenberg: How has the discussion around corporate moats and competitive barriers changed in the AI era? How should we view this issue?
Bryan Kim: I've been reflecting on this a lot recently. Traditional moats (network effects, workflow embedding, data accumulation) still matter, but what we observe is that companies fixated on "building a moat first" are often not the winners. In the areas we focus on, the victors are usually unconventional, fast-iterating players who ship new versions and new products at astonishing speed. At this early stage of AI, speed is the moat: the speed to cut through the noise in distribution channels and the speed of product iteration are what win, because moving fast captures user mindshare, converts it into actual revenue, and creates a sustainable, self-reinforcing cycle of growth.
Erik Torenberg: This is very interesting. Ben Thompson wrote a blog post about ten years ago titled "Snapchat's Gingerbread House Strategy," with the core idea being: anything Snap can do, Facebook can do better, but Snap keeps launching new creative ideas, and if it maintains that pace of innovation, perhaps the pace itself becomes its moat. He referred to it as the Gingerbread House Strategy.
Bryan Kim: I think ultimately what matters is user engagement and network effects. Snap also has an advantage in this regard — it occupies a core communication platform position for Gen Z and young users.
Erik Torenberg: How do you view the construction of network effects for new products?
Bryan Kim: Currently most products are still at the creative-tool stage and have not yet formed a creation-consumption-network-effect loop. True network effects have not appeared yet, but we do see new kinds of moats, like 11 Labs: entering the enterprise market with extremely fast iteration and outstanding product capability, and embedding deeply into workflows. That model is taking shape, while network effects in the traditional sense remain to be seen.
Olivia Moore: 11 Labs is a typical case. A few days ago I needed AI-generated voiceovers for videos, and thanks to their first-mover advantage, strong models, and a large user base driving a data flywheel, they have built up a voice library: users have uploaded a huge number of custom voices and characters. When you compare voice providers and need a specific type, say an old wizard voice, 11 Labs can offer 25 options while other platforms may have only 2 or 3. It is still early, but this looks more like a traditional platform network effect than a completely new form.
Erik Torenberg: We started paying attention to voice interaction very early on. Which parts of the initial vision have been realized? What are the future trends? Anish, why were you so optimistic about voice from the start?
Anish Acharya: What initially inspired us was that voice has been a foundational medium throughout the history of human interaction, yet it never became the core carrier of technology applications. The technology simply was never mature: from VoiceXML to speech applications to products like Dragon NaturallySpeaking in the 90s, fun, but never a real technological foundation. Generative models have made voice a native technological element, and this crucial part of life still has enormous unexplored space, which will inevitably give birth to a large number of AI-native applications.
Olivia Moore: I believe our initial excitement about voice mostly came from the consumer side, envisioning an always-on pocket coach, therapist, or companion. Those ideas have started to materialize, and many products now offer such functionality. What surprises me is that as the models advance, enterprise adoption is moving faster: highly regulated sectors like financial institutions are quickly adopting voice technology to replace or augment human customer service, tackling compliance requirements, staff turnover rates as high as 300%, and the challenges of managing offshore call centers.
A true breakthrough in the consumer voice experience is still in the making. Early examples are emerging, such as users pushing ChatGPT's advanced voice mode into novel uses, or products like Granola that create value from round-the-clock voice data. The allure of the consumer market lies in its unpredictability: the best products often come out of nowhere, or they would have been built long ago. The innovation in consumer voice over the next year is something to look forward to.
Anish Acharya: Indeed, voice is becoming the gateway for AI to enter the enterprise market. Most people currently have a cognitive blind spot, assuming that AI voice is only suitable for low-risk scenarios like customer service. However, our view is that the most critical dialogues in a company's day-to-day/weekly/yearly operations, such as business negotiations, sales pitches, client persuasion, and relationship maintenance, will all be AI-driven, as AI excels in these areas.
Erik Torenberg: When will people start engaging in sustained effective interactions with AI-generated "digital clones"? For example, scenarios involving conversations with AI Justine, AI Anish, or AI Erik.
Justine Moore: We have already seen some prototypes. For instance, companies like Delphi can create AI clones from a knowledge base, so users can ask for advice or feedback. As Bryan mentioned earlier, the key question is: what if, instead of limiting interactive AI avatars to celebrities, we opened them up to everyone? In the consumer space we often think about the many people with unique skills or insights, like your high school friend with a great sense of humor who could have had a comedy cooking show but never broke through, or a mentor with valuable life advice. How can AI clones and personas extend their impact in a way that was never possible before?
Current applications mainly focus on celebrities and experts, or on the other extreme, fictional characters users already know (such as Character.ai in its early form, after adding a voice mode). When trying out a new technology, users tend to interact with familiar characters, like beloved anime characters. But in the future we will fill in the middle ground: neither purely fictional characters nor celebrities, but AI avatars of all real individuals.
Olivia Moore: I believe people have different learning styles, and AI voice products can cater to that diversity well. Masterclass recently launched an interesting beta: turning the platform's existing course instructors into voice agents so users can ask personalized questions. As I understand it, the system uses RAG over all of the instructor's course content to give highly customized, precise answers. This has piqued my interest: I am a fan of the company, but I have never had the patience or time to watch a 12-hour course end to end, yet I have gained useful insights from 2-5 minute conversations with the Masterclass voice agent. It is a typical case of a real personality being turned into a practical AI clone.
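The system Olivia describes is not public, but the general RAG shape is easy to sketch. The snippet below is a hypothetical, minimal version: it retrieves the most relevant transcript chunks with naive keyword overlap (a production system would use embeddings, a hosted LLM, and a voice model to speak the answer), and then builds a grounded prompt. The function names `retrieve` and `build_prompt` and the sample lesson chunks are invented for illustration.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve(question: str, transcript_chunks: list[str], k: int = 3) -> list[str]:
    """Rank transcript chunks by naive keyword overlap with the question."""
    q = tokens(question)
    ranked = sorted(transcript_chunks, key=lambda c: len(q & tokens(c)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, transcript_chunks: list[str]) -> str:
    """Compose the grounded prompt an LLM (and then a voice model) would answer from."""
    context = "\n---\n".join(retrieve(question, transcript_chunks))
    return (
        "Answer in the instructor's voice, using only this course material:\n"
        f"{context}\n\nQuestion: {question}"
    )

# Invented sample transcript chunks, purely for illustration.
chunks = [
    "Lesson 3: start every negotiation by naming the other side's constraint.",
    "Lesson 7: rehearse your opening line out loud at least ten times.",
]
print(build_prompt("How should I open a negotiation?", chunks))
```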
Anish Acharya: A deeper question is: would users rather converse with the clone of an interesting real person, or interact with a completely fictional, "perfect ideal" synthetic being? The latter may be the more interesting thing to explore: this "perfect match" may exist somewhere in reality but has never been encountered, yet technology can materialize it. What would that form of existence look like? That is the direction more worth contemplating.
Erik Torenberg: The question worth pondering is: in which scenarios do we still need humans to perform tasks, and in which scenarios would AI replacements be more acceptable? How will this boundary be delineated?
Anish Acharya: The essence of the Masterclass example Olivia mentioned is an extension of a one-directional, parasocial connection. The value of conversing with the clone of a specific person lies in meeting the user's need to talk to a concrete, embodied figure, rather than to the abstract notion of the "most idealized stranger."
Bryan Kim: This reminds me of viral tweets related to ChatGPT — someone on the New York subway conversed with ChatGPT via voice throughout the entire journey, as if chatting with a girlfriend.
Justine Moore: There is another case: a parent, overwhelmed by a child persistently asking Thomas the Tank Engine questions for 45 minutes, activated voice mode and handed the phone to the child. Coming back two hours later, the parent found the child still deeply engrossed in discussing Thomas the Tank Engine with ChatGPT — the child did not care who the conversational partner was, only cared that this "person" could endlessly satisfy their curiosity and exploration.
Erik Torenberg: Imagine getting psychological counseling or career advice today: rather than ChatGPT or Claude, I might lean toward a dedicated AI therapist or coach. In the future, perhaps counseling sessions will be recorded to accumulate data, or a therapist's or coach's existing body of online content will be used directly to reconstruct their digital double.
Returning to the core of your question: 5-10 years from now, will the top artists be a new generation of AI-generated figures like Lil Miquela? Or will it be Taylor Swift and her AI legion? Similarly, will the next Kim Kardashian of social media be a real human or an AI creation? What are your predictions?
Justine Moore: I have been thinking about this for several years. We watched the rise of Lil Miquela and also followed the K-pop groups that were the first to introduce AI holographic members. The phenomenon is closely tied to the progress of hyper-realistic image and video technology: AI-generated influencers now attract significant attention with lifelike images, and their authenticity often sparks debate. I believe creators and celebrities will split into two categories. One is the Taylor Swift-style "human experiential" kind, whose artistic appeal comes not only from the work but is deeply tied to life experience, live performance, and other things AI cannot replicate. The other is the "interest-driven" kind, much like the ChatGPT conversation about Thomas the Tank Engine: no real-life backstory needed, just the ability to consistently produce high-quality content in a specific field. The two may coexist for a long time.
Olivia Moore: This reminds me of the ongoing controversy over AI art. Although the barrier to entry for generative art has dropped, creating excellent AI work still takes a great deal of time. At our AI artist event last summer, we found that many creators' workflows for AI films were as time-consuming as traditional shoots; the difference is that they may lack traditional filmmaking skills, which kept them from creating in the past. The number of AI-generated influencers has surged, but those who stand out like Lil Miquela are rare. I expect two major camps, AI talent and human talent, with the top performers of each occupying the top positions and the success rate in both remaining extremely low. That is probably the reasonable steady state.
Justine Moore: Or perhaps "non-human talent." An interesting phenomenon has emerged with Veo3: in street-interview formats, the interviewees might be elves, wizards, ghosts, or the fluffy creatures Gen Z loves. These can be entirely AI-generated virtual beings, a highly promising new form.
Anish Acharya: The same phenomenon exists in music. AI-generated music today is generally mediocre, a product of cultural homogenization, whereas real culture lives at the frontier. The core issue is the quality of the work, not the type of creator; we tend to treat AI itself as the problem when we should be focused on whether the work is good.
Erik Torenberg: Assuming the quality of the work is equal, do you think people would still prefer human creators?
Anish Acharya: Absolutely possible. This leads to a deeper, almost philosophical question: if we trained a music model only on everything that existed before hip-hop, could it generate hip-hop? I believe not, because music is a product of historical accumulation and cultural context. Truly innovative music requires breaking out of the boundaries of the training data, and current models lack that kind of breakthrough.
Erik Torenberg: I know several highly talented friends who are developing a same-sex AI companion app. In 2015, I would have been shocked to hear about such a concept. But according to them, among the top 50 apps on the charts, there are surprisingly 11 companion apps. This raises the question: Are we at the beginning of this trend? Will various vertical companion apps emerge in the future? How will the ultimate form of these apps evolve? How should we understand this development trend?
Justine Moore: We have conducted extensive research in various companion scenarios—from psychotherapy, life coaching, friend socialization, to workplace assistants, virtual lovers, and more, covering almost all dimensions. Interestingly, this may be the first mainstream application scenario for LLMs. We often joke that whether it's a car dealership customer service or other chatbots, users always try to turn them into a psychotherapist or a girlfriend. Looking at chat logs, it is evident that many users fundamentally desire a listening ear.
Today, a computer can respond in real time, around the clock, and in a personified way, which is a revolutionary breakthrough for the many people who previously had no one to listen to them, or felt they were "screaming into the void." I believe this is just the beginning, especially since current products are mostly general-purpose and rely mainly on the base model providers (for example, people using ChatGPT for scenarios it was never designed for). There are already cases showing what a focused company can do, building personalities for characters and driving high engagement through digital avatars and narrative engineering, such as Tolan, which targets teenagers. Another type of "companion" app lets users photograph their food and provides health advice and emotional support based on nutritional analysis, because for many people dietary issues are intertwined with psychological ones and traditionally required professional treatment.
Most excitingly, the definition of "companion" has rapidly expanded from friend/lover to encompass any advice, entertainment, or consultation service that originally required human intervention. In the future, we will witness the emergence of companion applications in more vertically segmented areas.
Bryan Kim: I noticed a significant trend during my time at a social company: the number of friends people feel they can confide in has been steadily decreasing, and for the younger generation the average is only slightly above one. This suggests long-term demand for companion-type applications, which matter enormously to many people. As Justine said, these applications will take many forms, but the core need, establishing a deep emotional connection, will not change. Perhaps, as we are discussing, human connection was an unmet need, and AI companions are filling that void, with the focus on creating a sense of connection, not necessarily on having a human being at the other end.
Erik Torenberg: Many people, when hearing this discussion, express concerns: that real friendships are declining, romantic relationships are disappearing, depression rates are rising, suicide rates are climbing, and birth rates continue to decline.
Justine Moore: I disagree with that view. It reminds me of the best post I saw on an AI character subreddit (I should note that I spend a lot of time researching these communities). Many high school and college students who spent their adolescence during the pandemic lack real-world social experience and social skills. One college student had been continuously sharing his interactions with an AI girlfriend, and one day he posted that he had found a "3D girlfriend" in real life and was taking a break from the community. He specifically thanked the AI character for teaching him how to communicate with people, especially how to flirt, ask questions, discuss interests, and other social skills. This shows the highest value of AI: fostering higher-quality human connections.
Erik Torenberg: Are community users happy for him? Or do they see him as a traitor?
Justine Moore: The vast majority genuinely wish him well. While there are a few "sour grapes" comments from those who have not yet found a real-life partner, I believe they will eventually find what they seek.
Olivia Moore: There is real-world evidence to support this. Take Replika as an example: actual studies have shown reductions in users' depression, anxiety, and suicidal ideation. Many people today lack the sense of being understood and of feeling safe, which makes real-world socializing difficult. If AI can help people who cannot afford the time or cost of traditional therapy to work on themselves, they will ultimately be more capable of acting in the real world.
Erik Torenberg: The event that truly made me realize the impact of companion apps was the response to my first interview with the founder of Replika. After the interview, the founder closed the related discussion forum, but the video's comment section was flooded with real-life confessions from users, comments like "This is like my wife after we stopped having sex." Only then did I realize how significant a role the app plays in users' lives.
Justine Moore: This actually follows a social pattern that has existed in human society for a long time. Gen Z is forming online romantic relationships through Discord, similar to how we used to deeply connect with strangers through anonymous postcard websites back in the day—you never knew the other person's true identity, yet deep emotional bonds could be formed. AI simply makes this experience more immersive and profound.
Anish Acharya: I believe the key point is that AI should not be overly compliant. Genuine human relationships require compromise, and an AI that always agrees may hinder the development of that skill. So there needs to be a balance: enough friction to help users build their social skills, without the over-compliance that would let those skills atrophy.
Erik Torenberg: Finally, let's look ahead to future possibilities. Perhaps we can envision new platforms or hardware form factors that could change the game, such as OpenAI's recent acquisition of Jony Ive's company. Bryan, you have often mentioned your expectations for smart glasses; feel free to elaborate, but we also want to hear everyone's thoughts on mobile devices.
Bryan Kim: There are currently 7 billion mobile phones in the world, but very few devices truly meet the standard we want. My view is that the future may continue to build on mobile, with several possibilities: establishing a privacy firewall, for example, or closing the data loop on-device through local LLMs and local models. So I am still excited about model development, which is actually the area I value most. As Olivia mentioned, smartphones are always on, but other devices have that property too. When new devices, or "digital prosthetics," smart devices attached to personal items, emerge, what possibilities will they bring?
Erik Torenberg: Do any of you have specific ideas? For example, wearable devices, personal items, whether smartphone accessories or standalone devices, what hardware forms could realize these visions?
Olivia Moore: AI has already become remarkably prevalent on the consumer side, though mostly through web-based text-box interactions. I am particularly optimistic about AI forms that can truly accompany users and perceive their environment. Interestingly, many people under 20 now wear smart badges at tech parties that record what they do and say, and they get practical value from it. Products of this kind are on the rise, such as AI assistants that can see your screen and proactively help. The advance of agentic models is also exciting, moving from making suggestions to actually performing tasks such as handling email on your behalf.
Justine Moore: The human side is equally important. Right now we lack an objective way to assess ourselves. If AI could analyze all of our conversations and online behavior and offer advice like "spend five extra hours a week and you could become an expert in this field," or recommend potential partners, co-founders, or even dates from a vast interpersonal graph, that sci-fi level of application is what excites me the most.
Olivia Moore: This stems from AI's round-the-clock companionship, not just the ChatGPT-style text box interaction mode.
Anish Acharya: The second most ubiquitous device after the smartphone is actually AirPods. This seemingly ordinary form factor may conceal real opportunities, though it raises questions of social etiquette; wearing AirPods at dinner, for instance, does seem odd. But AI may find ways to integrate with existing social etiquette, which would be very interesting.
Erik Torenberg: The phenomenon you mentioned of young people recording gatherings is worth examining. Will all conversations be recorded in the future? Do you think the new generation has accepted this as the new norm?
Olivia Moore: Yes, new social norms will emerge around this behavior. Although many people feel uneasy about it, this trend has already formed and is irreversible because its real value is becoming apparent. This is precisely why new cultural norms emerge. Just as when phones were first popularized, people gradually developed etiquette such as "avoiding loud conversations in public places," similar new social guidelines will form around recording devices.