Weighing in on AI Hype and Risk
Casey Newton wrote a piece last Thursday titled “The phony comforts of AI skepticism” — and wow did people have some thoughts about it:
- Gary Marcus pushed back against the mischaracterization of his position
- Edward Ongweso had quite a few more things to say
- Casey Newton responded while basically doubling down on his argument
- Dave Karpf waded in with his perspective and a recap of the debate
I’m sure there’s more out there but you get the idea. I almost didn’t write something myself, but one part of this conversation still feels like it’s missing to me. So let’s take a brief look at Newton’s argument before diving in.
Newton’s argument in a nutshell
To briefly summarize, the core thrust of Newton’s piece is that the entire range of AI discussion boils down to two camps: external critics, outside the firms and organizations directly working on or studying AI, who believe “AI is fake and sucks,” and internal critics, those building AI, who understand that “AI is real and dangerous” given current and future capabilities. Newton aligns himself with the latter and suggests that the former camp is not only incapable of recognizing genuine innovations in this field, but risks leaving us blind to threats enabled by those advances.
It’s a wildly reductive argument, and Marcus and Ongweso do a great job of breaking down the issues with it, in particular the straw man Newton builds for himself to attack. I’m not going to wade in there, but I recommend reading their posts for context.
I want to talk about how smart journalists like Newton can accidentally (or intentionally) fall for the hype, and in the process fail to correctly frame the conversation for their readers. And I want to be clear up front: I think this was a huge miss.
A false dichotomy
My particular issue with Newton’s article is the placement of what I’ll call skepticism (“AI is fake and sucks”) and hype (“AI is real and dangerous”) as polar opposites on a spectrum. That framing is so wildly off that I’m surprised Newton chose to defend it. As many people noted in comments on Mastodon and Bluesky, the skeptics Newton is attacking are more likely to fall into the “AI is real and dangerous and is also fake and sucks” category.
Understanding how current generative AI models, and large language models (LLMs) in particular, fail to deliver on their promise usually means you also understand how dangerous these models are now and will be in the future. Those shortcomings are especially dangerous when the models are released to a public that doesn’t understand what the shortcomings are, and into a media environment that is still grappling with the ramifications of social media.
Sorting out risk from hype
My bigger issue is that Newton seems to fall for the hype from the people building AI and most likely to benefit from advances in it. And make no mistake: making unsubstantiated claims about the future dangers of AI is a way of creating hype around the technology. It’s probably the oldest form of hype in computing; we’ve been talking about imagined AI risks since at least the mid-19th century.
The problem here is that hypothetical risks are treated the same as actual risks. We know that current GenAI models are accelerants for the propaganda and misinformation problems that were already widespread on social media and other communication channels before ChatGPT was released. GenAI models are also wrong quite often, despite attempts to improve them, leading to a new form of misinformation when users aren’t appropriately skeptical of the responses they receive. And that doesn’t even get into whether LLMs are a viable path to true general intelligence.
We should be talking about emerging risks, but the conversation needs a lot more grounding from journalists like Newton. Hypothetical risks should be labeled as such and treated with more skepticism, especially when the claims come from the people most likely to benefit from the hype.
Final thoughts
As Marcus outlines at the start of his essay, Newton does get a lot right, including that we should be preparing now for AI to get more dangerous. And we should be documenting the different attitudes that are emerging around AI. I agree with all of this.
But we need the media to do a better job of framing the emerging discussion and attitudes towards AI without serving as a hype machine for the individuals and firms building and benefiting from these models.