Grey Goo Futures: Another Response to ‘Review Code’

Happy International Workers’ Day y’all! Dan Stapleton’s back with another edition of his column explaining in broad strokes the games media business – such as it is, decrepit, shambling along, etc. – and this time he’s got a real interesting topic to discuss: AI and its possible incorporation into criticism. I invite you to go check it out because – some goofy parts aside that we’ll no doubt get to – he’s broadly, extremely correct in this one. Spoiler alert: Dan doesn’t think AI can replace critics. Good news!

This time, Stapleton starts off by addressing a common Gamer Refrain every critic has heard at least once: “Don’t give me your opinion, just tell me if this game is good or not.” He rightly points out that this is impossible, as all reviews are ultimately opinions, and all art – commercial and otherwise – is made with the explicit goal of eliciting some kind of emotion from the consumer. If anything could be objective, it would be AI: “It has no concept of things like hype or disappointment; it can’t be swayed by genre preferences, brand loyalties, or personal grudges against a developer that ran over its dog; it can’t be intentionally contrarian; and it can’t be bribed,” he writes.

Before we get too far into this, we should be clear about what we’re talking about when we talk about “artificial intelligence,” as it’s (maybe erroneously) called. Most “AI” we interact with is some form of program built on a large language model – a chatbot or generative art tool trained on a fuckton of information scraped from the rest of the Internet. Large language models are impressive, and the things they generate are sometimes even convincingly constructed – but they’re functionally just offshoots of shit like your phone learning your basic routine and browsing preferences through machine learning algorithms. AI ethicists like Timnit Gebru and Emily Bender call LLMs “stochastic parrots,” and warn against “the tendency of human interlocutors to impute meaning where there is none.”[1] What AI isn’t is sentient; there’s probably no risk of it becoming sentient in our lifetimes; and maybe most importantly, many of the people most invested in AI becoming sentient fucking suck, either directly espousing white supremacist and eugenicist beliefs or circling the fascist drain in pursuit of a deus ex machina that will quite literally never exist.
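If “stochastic parrot” sounds abstract, here’s a toy sketch of my own – not anything from the Bender et al. paper or from Dan’s column – showing the basic mechanic in miniature: a bigram model that learns which word tends to follow which in a training text, then generates “new” sentences by sampling those patterns. Real LLMs are vastly larger neural networks, but the core job is the same: predicting plausible next tokens, not understanding anything.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot" (my illustration, not the paper's): a bigram
# model that picks each next word based purely on which words followed
# which in its training text.
corpus = (
    "the visual effects are top-notch and the action scenes are "
    "well-choreographed and the aerial sequences are thrilling"
).split()

# Record every observed word-to-next-word transition.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def parrot(seed: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a word that once followed the last one."""
    words = [seed]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # dead end: this word never appeared mid-sentence
            break
        words.append(random.choice(options))
    return " ".join(words)

print(parrot("the"))
# Prints something like "the action scenes are thrilling" - fluent-looking
# review copy from a program with zero idea what a movie is.
```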

Back to Dan. He points out that while AI could be objective purely on the basis that it isn’t sentient, a consequence of that objectivity is that it couldn’t even discern what’s important to talk about in a review:

It wouldn’t be able to tell you if a weapon is creatively designed or if an enemy was annoying to deal with. It wouldn’t know if managing your inventory was tedious or if upgrades were meaningful and rewarding. It couldn’t distinguish good graphics from bad ones beyond counting pixels and frame rates, and it would be incapable of telling you if the music got stuck in its head. The same is true of movies or TV shows – it can analyze the data on a screen and even identify people and objects present, but it simply doesn’t understand what makes writing and acting work.

And like, yeah – if you were to ask ChatGPT, for example, to “write a movie script,” it would probably try to replicate the form and patterns of a movie script based on the data it’s been fed; but the actual content of that script would likely be incomprehensible. Dan even goes on to prove this, first by asking ChatGPT to review Forspoken (it just gave him the Pythagorean Theorem lmao) and then by asking it to review Top Gun: Maverick. With the latter example, we see the chatbot produce convincing review copy: “The movie showcases some spectacular and thrilling aerial sequences that are sure to leave audiences on the edge of their seats,” the bot writes. “The visual effects are top-notch and the action scenes are well-choreographed.” But even here, the thing isn’t coming up with opinions of its own – it’s scouring the already-written reviews of the movie to produce an aggregate summary of what critics believe. To be honest, I wouldn’t be surprised if the “review” it came up with was wholly plagiarized from a variety of sources.
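(If you want to poke at this yourself, the experiment is trivial to reproduce against the API rather than the chat window. Here’s a minimal sketch assuming the official openai Python client; the model name and prompt are my stand-ins, not whatever Dan actually typed into ChatGPT.)

```python
# Minimal sketch of the "ask the bot for a review" experiment, assuming
# the official `openai` client (pip install openai) and an OPENAI_API_KEY
# in your environment. Model and prompt are stand-ins, not what Dan used.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works
    messages=[
        {"role": "user", "content": "Write a 150-word review of Top Gun: Maverick."},
    ],
)

print(response.choices[0].message.content)
# Whatever comes back reads like review copy because the training data is
# full of actual reviews - aggregation, not opinion.
```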

If we take “Review Code” to be a candid look inside the workings of a major games media company, you might wonder why this kind of silly thought experiment is being run in the first place. That’s easy, though: it’s because the whole media industry, at the executive level, is toying with the idea of inserting AI into daily operations. In some areas, AI is already there, doing (or pretending to do) reporters’ jobs in a way that should honestly be alarming – again, not because AI is sentient or a living replacement for anything, but because it’s stupid. Executives are stupid.

AI isn’t going to replace critics, not entirely. It can’t, as Stapleton shows – it needs critics to provide the information it generates its summaries from. But even this is an unacceptable position for it to occupy. As a friend pointed out to me, there’s a not-far-off future where grifty “AI-driven media orgs” poach IGN’s content – and everybody else’s too – to deliver empty, plagiarized shells of articles that suck up all the SEO oxygen in the room. And there’s a future where companies like IGN decide that AI is more profitable to use than actual people in the review process. Without strong protections against companies doing shit like this – without things like unions to organize with, to stop those bosses from implementing programs left and right that threaten employees’ lives and livelihoods – it won’t be Skynet we have to worry about.

References
[1] Bender, Emily M., et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, 2021, pp. 610–23. https://doi.org/10.1145/3442188.3445922.