I’m a fan of Vox. But this defense of AI misses the mark
This essay is a reaction to Kelsey Piper’s recent defense of AI at Vox.com.
Let me start by saying that I’m a big fan of Vox and have been since they were founded. Their reporting is generally great and you should read them too!
But this particular article on AI is a bad take. It both repeats industry propaganda about what LLMs are and misrepresents what we critics mean when we complain that these companies are “pushing AI down our throats.”
Let’s start with the latter. Google has recently been putting LLM-generated “AI Overviews” at the top of its search results. This prime placement, often followed by ads, means that real, organic results end up “below the fold”: we users have to scroll down and parse a large block of text before we can find the link we were actually looking for. This was the main event that prompted complaints about AI being forced upon us, and it’s surprising that Kelsey’s article doesn’t even mention it.
This prominent example of AI-generated content literally displacing organic content is far from the only one. Every day, while casually browsing Facebook, Quora, etc., people are increasingly having to wade through AI-generated slop. Here, we’re asked to be patient. “I’m not too worried about there being lots of bad AI content,” she assures us. “We’re in the early stages of figuring out how to make this tool useful.” But useful for whom, exactly?
This is where the author’s ignorance of AI shows. Her argument, never fleshed out, rests on two assumptions: (1) that AI-powered spam-detection algorithms will eventually catch up and filter out all this slop, and/or (2) that, with better models, this AI slop will eventually get so good we won’t even mind it. In defense of the latter position, she claims that LLMs are already “ridiculously helpful to programmers.”
With respect, they are not. They are pretty good at writing simple functions in a few popular languages. But if you’re building anything even a little more complicated, or using an obscure library, the LLMs quickly become more of a hindrance than a help to real programmers. And, to a computer, programming is a game of complete information: companies training LLMs can spin up virtual coding environments for the models to play with and learn from, much as AIs learned Chess and Go by simulating millions of games.
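To see why code is such friendly terrain for these models, here’s a minimal sketch of the kind of verifiable feedback loop a coding environment can provide. To be clear, this is my illustration, not anyone’s actual training pipeline, and `generate_candidate` is a hypothetical stand-in for a model call:

```python
# Sketch: code is "complete information" because every candidate solution
# can be checked automatically against its tests. generate_candidate is a
# hypothetical stand-in for an LLM; the reward signal is just pass/fail.
import subprocess
import sys
import tempfile
import textwrap

def passes_tests(candidate_code: str, test_code: str) -> bool:
    """Run the candidate together with its unit tests and report pass/fail."""
    program = candidate_code + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, timeout=10)
    return result.returncode == 0  # exit code 0 means every assert passed

def generate_candidate(prompt: str) -> str:
    """Hypothetical model call; here it just returns a canned answer."""
    return "def add(a, b):\n    return a + b"

tests = textwrap.dedent("""
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
""")

candidate = generate_candidate("Write a function add(a, b) that returns a + b.")
reward = 1.0 if passes_tests(candidate, tests) else 0.0  # verifiable reward signal
print(f"reward = {reward}")
```

The environment itself can grade millions of such attempts with no human in the loop, which is exactly what makes code different from the messier domains below.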
But the real world is not a game. You can try training your AI on simulated realities, but it will only ever get as good as your simulator. And we have no idea how to “simulate” the things that really matter: history, politics, ethics, consciousness…
But what LLMs can do is simulate language: the stuff of thought and communication, deliberation and democracy. And, yes, if you throw a few thousand terabytes of data and a few thousand petaflops of compute at the problem, an LLM can do it well enough to fool many humans, at least. Congratulations, you’ve passed the Turing Test.
The problem is this: if we humans now struggle to tell whether a tweet came from a real person or from a Russian spambot, how is an AI filter supposed to tell? Based on the text alone, it can’t. Any statistical tell that an AI-detecting bot could use to separate generated text from human text can also be used by the LLM to write a more convincing tweet in the first place. No, you can’t just filter this stuff out. And Meta is giving away its LLM to every hacker and spam farm on the planet, totally free of charge.
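If that sounds abstract, here’s a toy sketch of the arms race. Both `detector_score` and `generate_text` are hypothetical stand-ins, not real tools; the point is only that whatever signal a detector exposes, a spammer can resample against it until something slips through:

```python
# Toy sketch of the detector-vs-generator arms race. detector_score and
# generate_text are hypothetical stand-ins: any score the detector can
# compute, the spammer can also compute and optimize against.
import random

def detector_score(text: str) -> float:
    """Pretend classifier: probability that the text is AI-generated."""
    return random.random()  # stand-in for a real detector model

def generate_text(prompt: str, seed: int) -> str:
    """Pretend LLM: each seed yields a different phrasing of the same reply."""
    random.seed(seed)
    return f"variant {seed} of a reply to: {prompt}"

def evade_detector(prompt: str, threshold: float = 0.1, tries: int = 1000):
    """Keep resampling until the detector scores the text as 'probably human'."""
    for seed in range(tries):
        candidate = generate_text(prompt, seed)
        if detector_score(candidate) < threshold:
            return candidate  # detector fooled; post it
    return None

print(evade_detector("What do you think about the election?"))
```

The filter and the spambot are drawing on the same information, so every improvement to the filter is, in effect, free training signal for the spam.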
This is what we critics mean when we say that tech companies are pushing AI down our throats: they are putting unhelpful AI prominently in their products and are recklessly giving away generative models to hackers to flood the internet with unfilterable spam. These things are already having a noticeable effect on the quality of our interpersonal communications. I agree with Kelsey Piper that we are still in the early days of AI. But that doesn’t assuage my fears. In fact, that’s why I’m worried.