Over the past year, the hot topic in tech has been AI: It’s seemingly everywhere you look now, from AI-generated artwork to essays and articles written by ChatGPT. It’s frequently talked about like a force of nature, an inevitable change sweeping across the planet, one that will either permanently alter the way we communicate or possibly even wipe us out completely. Whatever happens, it’s a safe bet that we’ll be dealing with some form of AI for the foreseeable future, so it’s important to separate the facts from the hype. What is AI, really? And what does it mean for games?
First things first: We are not facing an imminent Skynet-style robot uprising, or a scenario out of The Matrix where AI takes control of humanity and turns us all into meat batteries. The self-aware kind of AI from sci-fi stories, the kind that can really “think” the way humans do, is known as artificial general intelligence, or AGI. While companies like OpenAI and DeepMind are working on developing AGI, they’re still a long way from achieving it; for now, it remains the stuff of science fiction rather than science fact. Large language models (or LLMs) like ChatGPT and text-to-image models like Stable Diffusion and DALL-E are what are known as narrow artificial intelligences: programs trained to accomplish a single, specific task. While they rely on machine learning methods loosely modeled on the neural networks in human brains, they are not thinking, conscious machines. Nonetheless, ChatGPT and its ilk are potentially powerful tools that could reshape game development. There’s some concern, in fact, that they’re so powerful they could end up replacing developers by writing code, creating dialogue, and producing art assets for games, putting the people who used to do those things out of work.
That view probably overestimates the capabilities of most generative AI tools at this point, though. LLMs and text-to-image models still struggle with many basic tasks, including producing accurate citations and rendering human hands with the correct number of fingers. Much of the writing and imagery they create looks okay at a glance, but anyone who examines it for more than a few seconds will start finding serious errors. AI tools face another major problem: They potentially run afoul of copyright law, and several pending legal cases will likely have major impacts on the future of these models. Some of the big ones include a suit brought by Universal Music Group against Amazon-backed AI company Anthropic over its Claude 2 chatbot; the New York Times’ infringement lawsuit against Microsoft and OpenAI, alleging that the companies are using the Times’ copyrighted work to create AI products; a lawsuit against Midjourney and Stability AI from a group of visual artists who say the companies have misused their work; and another suit against OpenAI brought by a group of popular fiction writers that includes George R.R. Martin and John Grisham. Regardless of how these cases pan out, they should raise red flags for any company planning to use these products in its production pipeline. Even if you’re not trying to copy someone else’s protected work, there’s no good way to know that the output you get from ChatGPT or DALL-E isn’t infringing on an existing copyright. That means anyone publishing a game (or novel, or movie) with AI-generated content in it could face unexpected legal exposure from the original copyright owners of the images and text the model used as training data.
While laws in most countries still have a lot of catching up to do to account for AI, some companies have begun taking steps to shield themselves from the potential copyright issues raised by generative AI. Valve, notably, has said it will not approve games that use AI-generated art that infringes on existing copyrights, although it’s unclear for now how the company determines what constitutes infringement in AI artwork. Valve has also said it doesn’t want to discourage the use of generative AI in development. Whatever the case, the upshot of all this is that human developers remain crucial to the video game production process. Even work done by generative AI will need to be checked over by a human to make sure it’s high quality and clear of any legal issues.
That won’t stop major companies from trying, though. The promise of increased profitability through cutting labor costs is just too tempting to pass up. In his annual New Year’s letter, Square Enix president Takashi Kiryu promised “to be aggressive in applying AI” in content development, and said that he hopes “to leverage those technologies to create new forms of content for consumers.” Kiryu’s letter doesn’t specify what those “new forms of content” might look like, but one is reminded of the similarly pitched Final Fantasy VII NFT trading cards Square Enix was hyping last year...which have not gone over well with fans. Other major publishers are hopping on the AI hype train as well. Ubisoft has unveiled a ChatGPT-style tool that can supposedly write dialogue for background NPCs, and Microsoft has partnered with Inworld AI, whose generative software suite can write scripts and create characters on its own. Giving new life to artists’ fears that AI may be a cheap way to cut them out of their jobs, the audio designers of The Finals confirmed that the game uses AI text-to-speech instead of human voice-over for almost all of its dialogue. Meanwhile, game illustrators and graphic artists in China have reported that gigs have rapidly dried up as tools like DALL-E and Midjourney have become more widely available. There are plenty of good reasons to be concerned about the damage AI could do to the games industry in the short term, as executives chase technological trends in the hopes of cutting costs and boosting shareholder value. However, that’s not to say there’s no positive role for AI to play in game development.
Popular software like Adobe Creative Suite uses generative tools to make tasks like object selection and blending easier. Developers use similar tools to create convincingly random patterns of foliage and ground cover in big open-world games. If you’ve played roguelike games like Hades or The Binding of Isaac, you’re no doubt familiar with an earlier form of procedural generation, the type used to build “randomized” levels each time you begin a new run. This technique is a perfect illustration of the divide splitting generative tools: When used thoughtfully, they can produce uniquely compelling games, but when they’re leaned on to avoid the labor of design, the result is usually boring and drab, with levels that are all “different,” but in the exact same way every time.

Graphics hardware giant Nvidia has also found ways to use AI to boost game performance. Its Deep Learning Super Sampling technology, known as DLSS, uses machine learning to upscale game visuals in real time. DLSS can take a lower-resolution game image and upscale it to a user’s screen resolution or beyond, allowing players with lower-end hardware to play in 4K with only a minimal decrease in framerate. The latest version, DLSS 3.0, can even generate frames on its own, further boosting framerates and giving players a silky-smooth experience in graphically demanding games like Cyberpunk 2077.

There’s plenty of room for skepticism about the tech-fad version of AI that’s been making headlines and capturing the interest of corporate boardrooms over the past year. At the same time, we’re already benefiting from some of the subtler forms of AI that don’t evoke nightmares of a sci-fi apocalypse. Going forward, game publishers and developers keen to use generative AI will have to ask themselves whether anyone’s really going to be interested in playing games that nobody wanted to actually make.
Using AI in game development will dilute the vision of the artists behind the game. Advanced AI can definitely work in video games, like making NPCs in open-world games behave a bit more lifelike, but using AI for things like character models, animations, and voice just robs the game of any true life. I'm someone who's against ChatGPT-ing everything, so I'm critical of AI being used in video games, but only time will tell whether AI ruins games or not.
2024-01-10
I'm slightly disappointed we aren't facing a robot uprising!
2024-01-09
Tyrone Havens
2024-01-09
yes it will.
2024-01-10