Original Title: "The AI Revolution in Blockchain Gaming (Part 4): How Is AI Actually Used in Games?"
Source: Wlabs
At the Game Developers Conference (GDC) 2023, which wrapped up in March, "the application of AI technology in games" was the most discussed and most closely watched topic; almost everyone was talking about it. OpenAI CEO Sam Altman said in a recent interview that AI is one of the few things that has been heavily hyped and is still heavily underestimated. That rings even truer in the gaming industry, where the rapid development of AI technology is expected to transform the entire sector.
This is a revolution in productivity.
In fact, the application of AI in games moved beyond the discussion stage long ago. The gaming industry has always been a major driving force behind AI development: from intelligent opponents to virtual worlds, from intelligent customer service to AI-based cheat detection, AI technology is already widely used in games.
And now, driven by the wave of AIGC, the gaming industry is entering a new round of AI transformation. The main trend can be summed up in one phrase: cut costs and boost efficiency.
All kinds of new AIGC-powered tools that reduce game development costs and improve production and operations are appearing at a frantic pace. Let's take a look at the cutting-edge AI tools available right now!
Epic Games showcased its latest MetaHuman Animator technology at GDC 2023. It lets developers create high-quality animation in just a few minutes, generating a facial rig from as little as three frames of footage, and it also supports generating animation from video and audio. Users need no animation experience, just an iPhone, to quickly map facial expressions onto a MetaHuman virtual character and produce highly realistic animation. In the live demonstration, the presenter recreated the facial details of a live performance in the digital space in about two minutes.
Opus.ai has developed a text-driven 3D world generation method: users create dynamic lighting, camera control, terrain, trees and animals, buildings, roads, and animated characters through text input alone. In simple terms, you just type, and the AI builds a 3D game scene along with its components and dynamic effects (such as leaves swaying in the wind).
The software is currently in closed testing, and you can register for early access on the official website. Only English is supported for now, but don't worry if your English isn't good: ChatGPT can help you describe a scene in detail.
GAEA is a technical system capable of building a complete NPC ecosystem. At a stage when everyone is still exploring how to use AI to improve R&D efficiency, GAEA may be the first solution that deeply integrates with gameplay and changes the content ecosystem of games; it may even inspire the product form of the next generation of games. Recently, the domestic company Hyperparameter Technology used the GAEA system to build a technical demo called "Living Chang'an City". The NPCs inside have no pre-set scripts and are driven entirely by AI. They interact with one another, "remember" what they have seen and done, and let those memories shape their future actions. Each AI NPC has its own goals and reasons for acting, together forming a small, evolving "society", rather like Westworld.
There was also a recent buzz in the tech community about the AI NPC experiment conducted by Stanford and Google. They created 25 AI NPCs and placed them in a simulated town, where the NPCs not only held conversations but also carried out complex behaviors, including organizing a Valentine's Day party. Each NPC was unique and independent.
Some social media comments even suggest these NPCs behave more "human" than actual human role-players. It is hard not to be amazed and to wonder whether a truly open-world game is just around the corner, and whether one day Guy, the AI NPC from the movie "Free Guy" who realizes he exists inside a game world, might become reality.
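To make the idea concrete, here is a minimal, heavily simplified sketch of the "memory stream" concept behind such agents: each NPC stores timestamped observations and retrieves them by a weighted mix of recency, importance, and relevance. The Stanford paper scores importance with an LLM and measures relevance with embeddings; this toy version substitutes a hand-set importance score and keyword overlap, and every name in it is illustrative.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float  # the paper has an LLM score this 1-10; here it is hand-set
    timestamp: float = field(default_factory=time.time)

class MemoryStream:
    """Toy generative-agent memory: store observations, retrieve by a
    weighted mix of recency, importance, and (keyword-overlap) relevance."""

    def __init__(self) -> None:
        self.memories: list[Memory] = []

    def observe(self, text: str, importance: float) -> None:
        self.memories.append(Memory(text, importance))

    def retrieve(self, query: str, k: int = 3) -> list[Memory]:
        now = time.time()
        query_words = set(query.lower().split())

        def score(m: Memory) -> float:
            recency = 1.0 / (1.0 + (now - m.timestamp) / 3600.0)  # decays over hours
            relevance = len(query_words & set(m.text.lower().split()))  # paper uses embeddings
            return recency + m.importance / 10.0 + relevance

        return sorted(self.memories, key=score, reverse=True)[:k]

# Example: an NPC recalls what matters for an upcoming conversation.
npc = MemoryStream()
npc.observe("Isabella is planning a Valentine's Day party at the cafe", importance=8)
npc.observe("It rained this morning", importance=2)
print([m.text for m in npc.retrieve("party at the cafe")])
```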
As the scale of games keeps growing, writers face the challenge of making NPCs (non-player characters) distinctive and believable. If a game has hundreds of NPCs, how can every interaction with them feel unique? This is the problem addressed by Ghostwriter, an internal AI tool developed by Ubisoft's La Forge R&D department.
The tool helps game writers generate first-draft dialogue for NPCs at trigger events, such as conversations between NPCs, enemy lines during combat, or dialogue triggered when players enter a certain area. These lines used to be written by the game writers themselves and took a great deal of time. With Ghostwriter, the AI automatically generates candidate lines from an NPC's basic settings, which writers can then select from and edit, saving a lot of time and letting them focus on the core plot.
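Ghostwriter itself is an internal Ubisoft tool and its workings are not public, but the general pattern (prompt an LLM with an NPC's profile and a trigger event, then let a writer pick and edit from several candidates) can be sketched with any chat-style LLM API. The sketch below assumes the OpenAI Python SDK; the model name and prompts are illustrative, not Ubisoft's.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_npc_barks(npc_profile: str, trigger: str, n: int = 5) -> list[str]:
    """Ask an LLM for n candidate one-liners for a writer to pick from and edit."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You write short, in-character video game NPC barks. "
                        "Return one line per bark, no numbering."},
            {"role": "user",
             "content": f"NPC profile: {npc_profile}\nTrigger: {trigger}\nWrite {n} variants."},
        ],
        temperature=0.9,  # high temperature for varied candidates
    )
    return [line.strip() for line in resp.choices[0].message.content.splitlines() if line.strip()]

print(draft_npc_barks("a tired dockworker in a rainy port town", "player walks past at night"))
```

The key design point is that the output is raw material for a human writer, not final dialogue: the tool's value is in generating many throwaway variations cheaply.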
Scenario is a startup that has developed a platform using AI to quickly create game assets. On this platform, you select a set of visual materials and upload your own training data, such as characters, props, vehicles, weapons, skins, buildings, concept art, pixel art, and sketches. Then, users can create their own generative AI engine in just a few clicks, without any additional technical skills. Finally, users can turn their ideas into reality with just a few words. Asset creation has never been so simple.
The AIGC productivity revolution has swept across the whole industry. As mentioned at the beginning, domestic game giants have already taken action.
(The picture is part of the screenshots from "AIGC Landing Project Application Analysis" compiled by Netease Cfun Design Center)
In the development stage, almost all well-known domestic companies are actively exploring how to integrate AI into the game production pipeline, for example assisting with character design and generating all kinds of game assets (icons, effects, maps, models, and so on). It is even rumored that one large company requires employees in certain departments to be proficient in common AI tools, with proficiency used as a criterion in layoff decisions.
(Image source: from the internet)
In the live-operations stage, AIGC is used not only by official teams to produce all kinds of promotional materials (including but not limited to images, copy, and videos), but also to lower the threshold for UGC creation, making it easier for players to produce high-quality derivative content, which in turn makes them more motivated to share it. Here are a few examples:
<Nishuihan (Justice Online)>
A recent update of the Nishuihan MMO added a UGC feature that uses AI to generate fitting poetic lines for in-game screenshots, and the effect is quite stunning. The project team set up more than 40 AI poetry check-in points at popular scenic spots in the game; at each one the AI can generate a Song-dynasty-style verse that matches the scenery, is well crafted, reflects the chosen keyword, and is signed with the player's name. With one click, players can share these poems to their social media accounts and collect countless likes.
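The project team has not published how the feature works, but a plausible minimal version is to feed the screenshot's scene keywords to an LLM and ask for a short classical-style verse. The sketch below assumes the OpenAI Python SDK; the model name, prompts, and scene tags are all illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def poem_for_screenshot(scene_tags: list[str], player_name: str) -> str:
    """Turn scene keywords (e.g. from the screenshot's location metadata)
    into a short classical-style verse signed with the player's name."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a classical Chinese poet. Write a short four-line "
                        "verse in Song-dynasty style that matches the given scenery."},
            {"role": "user", "content": "Scenery keywords: " + ", ".join(scene_tags)},
        ],
        temperature=0.8,
    )
    return resp.choices[0].message.content.strip() + "\n" + player_name

print(poem_for_screenshot(["misty rain", "willow trees", "lone boat on a lake"], "Player123"))
```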
<Genshin Impact>
High-quality coser photos that are hard to tell from real ones are being made with Stable Diffusion + LoRA. With the efficiency and quality of AI, which coser can compete? Following the principle of "if you can't beat them, join them", some cosers have started training LoRA models on their own faces and, with the help of ControlNet, quickly generate high-quality images of various characters and poses featuring themselves.
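For reference, generating such images with open tools roughly follows the sketch below: a Stable Diffusion pipeline with a LoRA loaded on top, here via the diffusers library. The base checkpoint is a common public one; the LoRA path and prompt are hypothetical stand-ins for a model someone trained on their own photos.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base SD 1.5 checkpoint; an anime or photoreal checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA trained on the coser's own face photos.
pipe.load_lora_weights("./coser_face_lora")

image = pipe(
    "cosplay photo, elaborate game character costume, studio lighting, highly detailed",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("ai_coser.png")
```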
The AI-generated fan art of anime-style characters is beautiful beyond words.
<A certain game company expanding overseas>
Using ChatGPT to translate the multilingual versions is fast and natural, and far cheaper than hiring a specialized translation company as before. Using Midjourney to quickly generate hundreds of pieces of marketing art, paired with English copy drafted by ChatGPT, has greatly reduced the cost of overseas operations. Even more incredible: where a budget of tens of thousands of yuan used to go to commissioning ancient-style fan songs, AI tools can now produce an 80-out-of-100 finished track, vocals and accompaniment included, with a little training...
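A minimal version of that localization workflow might look like the sketch below, assuming the OpenAI Python SDK and a hypothetical UI string table. A production pipeline would batch requests, pass glossaries and surrounding context, and route the output through human review.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def localize(strings: dict[str, str], target_lang: str) -> dict[str, str]:
    """Translate a game's UI string table entry by entry (hypothetical workflow)."""
    system_prompt = (
        "Translate video game UI text into natural " + target_lang + ". "
        "Keep placeholders such as {player} unchanged and preserve the tone."
    )
    out = {}
    for key, text in strings.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": text},
            ],
            temperature=0.2,  # low temperature for consistent translations
        )
        out[key] = resp.choices[0].message.content.strip()
    return out

print(localize({"quest_done": "Quest complete, {player}!"}, "Japanese"))
```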
Overall, today's AIGC tools have already changed the production workflow at many major companies and are increasingly woven into every aspect of game production, operations, and distribution. Outsourcing budgets have naturally shrunk, and internal processes have become far more efficient. I still remember that when Red Dead Redemption 2 was released, its total cost reportedly reached an astonishing 5.6 billion RMB over 8.5 years of development, and most of that money went to labor: to make the open world feel real, unimaginable manpower was spent writing settings, backstories, and lines for every NPC... If it were developed today, how much shorter could the schedule be for an open-world game of Red Dead Redemption 2's quality, and how much could the cost be cut?
(The picture is an internal email from a certain company, requesting a comprehensive cessation of outsourcing expenditures related to creative design, proposal writing, and copywriting within the company.)
AIGC tools are mostly about "boosting efficiency" for large companies, while for small teams they are about "cutting costs": lowering the barrier to production and greatly reducing the startup capital required.
Take lowering the threshold for game development and quickly generating game assets. As shown in the figure below, an independent developer used Stable Diffusion + ControlNet with AI scripts to generate 2D character animations. Previously, such animations required drawing every frame by hand: either you could draw yourself, or you had to find or outsource to someone who could. Now, even if you can't draw, as long as you have an eye for aesthetics you can leave the character art to the AI.
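A rough sketch of that kind of pose-driven workflow with the diffusers library is shown below: one OpenPose-style pose map per animation frame is fed through a ControlNet-conditioned Stable Diffusion pipeline, with a fixed seed to keep frames visually similar (real projects still need extra work on temporal consistency). The file paths, prompt, and model choices are illustrative.

```python
import glob
import os

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Pose-conditioned ControlNet on top of a base SD 1.5 checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

os.makedirs("frames", exist_ok=True)
prompt = "2D side-view knight character, game sprite style, plain background"

# One pose map per animation frame, e.g. exported from a skeleton rig or OpenPose.
for i, pose_path in enumerate(sorted(glob.glob("poses/*.png"))):
    pose = load_image(pose_path)
    generator = torch.Generator(device="cuda").manual_seed(42)  # same seed each frame
    frame = pipe(prompt, image=pose, num_inference_steps=20, generator=generator).images[0]
    frame.save(f"frames/frame_{i:03d}.png")
```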
Currently, the AI technology that can be used stably in games is mainly focused on 2D/2.5D materials, including background textures for 2D adventure games, character illustrations for 2D games, 2.5D game models, and texture mapping for 3D models.
(Author's note: this article was written in April, only three months ago, and now even 3D assets are possible...)
Domestic game-AI company City From Naught Studio has partnered with the Web3 creator community Magipop DAO to develop a detective RPG based on LLMs and generative AI, code-named "DetectiveGPT". Players uncover the truth through free-form, in-depth interaction with AI NPCs, and the game is expected to launch on Steam in April this year. The 3D model textures in the game are all produced with a 3D texture generation tool built by City From Naught Studio. In a traditional mystery game, NPCs have preset dialogue and players obtain clues by picking from questioning options; in this AI-built mystery game, players can freely question witness NPCs by voice or text, and the witnesses respond to whatever is asked, giving a strong sense of immersion and involvement.
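City From Naught has not published its implementation, but the core loop of such a free-form witness NPC can be sketched as follows: the designer writes a "fact sheet" system prompt instead of a branching dialogue tree, and each player question, together with the running conversation history, is sent to an LLM. The SDK, model name, and witness facts below are assumptions for illustration only.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Hypothetical witness sheet a designer would author instead of a dialogue tree.
WITNESS_FACTS = (
    "You are Old Chen, a teahouse owner and witness in a murder case. "
    "Known facts: you heard a quarrel around 9 pm; you saw a man in a grey cloak "
    "leave by the back door; you never saw the victim's face. Stay in character, "
    "answer only from these facts, and say you don't know otherwise."
)

history = [{"role": "system", "content": WITNESS_FACTS}]

while True:
    question = input("You ask: ").strip()
    if not question:
        break
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep memory of the interview
    print("Old Chen:", answer)
```

Keeping the full history in the message list is what lets the witness stay consistent when the player circles back to earlier questions.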
The development of AIGC technology has indeed opened endless possibilities for independent developers. Don't know how to code? ChatGPT can help you. Can't draw? Choose between Midjourney and SD. Can't write copy? Let ChatGPT generate it. UI, voice acting, and sound effects can also be handled by AI. All you need is creativity and the ability to describe what you want to the AI. (Whisper: when you're out of ideas, you can even ask ChatGPT for inspiration!)
As game industrialization matures, the cost of making games keeps climbing, and the "big players" with abundant cash flow, talent, and standardized production pipelines have all but monopolized the market. But now I see a new possibility: perhaps in the near future we will see more creative small teams, or even individuals, bringing us surprises.
When the AIGC revolution was just beginning, many people questioned it from the sidelines, afraid of being replaced yet reluctant to learn new tools because they didn't want to leave their comfort zone. But the wheel of technology rolls forward regardless:
At that time, some said: AI can't draw hands. Not long after, Midjourney V5 was released, and hands were no longer a problem.
At that time, some said: the Asian women drawn by AI are too ugly to look at. Not long after, LoRA models trained on plenty of Japanese and Korean beauties arrived and could easily generate all kinds of gorgeous influencer-style girls.
At that time, some said: AI drawing can only provide inspiration and can't control character poses, so it's hard to use in practice. Not long after, Stable Diffusion got the ControlNet plugin, which allows control over the poses in AI-generated images.
At that time, many said: the icons and logos drawn by AI aren't vector graphics and aren't practical. Soon after, Vectorizer.AI was released, converting JPG, PNG, and other bitmaps into vector SVG format with one click.
At that time, some said: AI can't generate specific text; to AI, lettering is just gibberish! Now, newer versions of SD can render legible text in images.
……
AIGC is developing too fast. What level will AI reach a month from now? I dare not imagine... But either way, I know it's time to let go of bias and embrace AI technology with an open mind.
Welcome to join the official BlockBeats community:
Telegram Subscription Group: https://t.me/theblockbeats
Telegram Discussion Group: https://t.me/BlockBeats_App
Official Twitter Account: https://twitter.com/BlockBeatsAsia