Yes, Tech Companies are Pushing AI Content Down Our Throats

I’m a fan of Vox. But this particular piece misses the mark

This essay is a reaction to Kelsey Piper’s recent defense of AI at Vox.com.

Let me start by saying that I’m a big fan of Vox and have been since they were founded. Their reporting is generally great and you should read them too!

But this particular article on AI is a bad take. It both repeats industry propaganda about what LLMs are and misrepresents what we critics mean when we complain that these companies are “pushing AI down our throats.”

Let’s start with the latter. Google has recently been putting LLM-generated “AI Overviews” at the top of its search results. This prime placement, often followed by ads, pushes the real, organic results “below the fold.” We users thus have to scroll down and parse a large amount of text before we can find the link we were actually looking for. This was the main event that prompted complaints about AI being forced upon us, and it’s surprising that Kelsey’s article doesn’t even mention it.

This prominent example of AI-generated content literally displacing organic content is far from the only one. Every day, while casually browsing Facebook, Quora, etc., people are increasingly having to wade through AI-generated slop. Here, we’re asked to be patient. “I’m not too worried about there being lots of bad AI content,” she assures us. “We’re in the early stages of figuring out how to make this tool useful.” But useful for whom, exactly?

This is where the author’s ignorance of AI shows. Her argument, never fleshed out, relies on two assumptions: (1) that AI-powered spam-detection algorithms will eventually catch up and filter out all this slop, and/or (2) that, with better models, the slop will get so good we might not even mind it. In defense of the latter position, she claims that LLMs are already “ridiculously helpful to programmers.”

With respect, they are not. They are pretty good at writing simple functions in a few popular languages. But if you’re building anything even a little more complicated, or anything that uses an obscure library, the LLMs quickly become more of a hindrance than a help to real programmers. And programming, to a computer, is a game of complete information: companies training LLMs can spin up virtual coding environments for the models to play in and learn from, the same way AIs learned chess and Go by simulating millions of games.
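
To be concrete about what “simple” means here, the sketch below is my own illustrative example, not anything from Kelsey’s article: the kind of short, self-contained, standard-library function that today’s LLMs reliably get right.

```python
# A hypothetical example of the kind of "simple function in a popular
# language" that LLMs handle well: short, self-contained, and written
# a thousand times before in their training data.
from collections import Counter

def top_words(text: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most common lowercase words in text."""
    return Counter(text.lower().split()).most_common(n)

print(top_words("the cat sat on the mat the end", 2))
# [('the', 3), ('cat', 1)]
```

Ask instead for something that touches a large proprietary codebase, an obscure library, or a subtle concurrency bug, and the success rate drops off fast.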

But the real world is not a game. You can try training your AI on simulated realities, but it will only ever get as good as your simulator. And we have no idea how to “simulate” the things that really matter: history, politics, ethics, consciousness…

But what LLMs can do is simulate language: the stuff of thought and communication, deliberation and democracy. And, yes, if you throw a few thousand terabytes of data and a few thousand petaflops of compute at it, AI can simulate language. Well enough to confuse many humans at least. Congratulations, you’ve passed the Turing Test.

The problem here is that if we humans are now struggling to tell whether a Tweet came from a real person or from a Russian spambot, how is an AI filter supposed to tell? Based on the text alone, it can’t. Any statistical tell an AI-detecting bot could use to distinguish generated text from human text can also be used by the LLM to write a more convincing tweet in the first place. No, you can’t just filter this stuff out. And Meta is giving away its LLMs to every hacker and spam farm on the planet, totally free of charge.
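
To see why filtering is a losing game, here’s a toy sketch, entirely my own and deliberately simplistic: if a detector’s signal can be queried, the generator can just optimize against it until the detector stops firing.

```python
# A toy illustration (made-up "tells", not any real detector): any
# queryable spam filter becomes a training target for the generator.

def toy_detector(text: str) -> int:
    """Hypothetical detector: counts stock phrases as evidence of AI text."""
    tells = ["delve", "tapestry", "in conclusion"]
    return sum(text.lower().count(t) for t in tells)

def evade(text: str, swaps: dict[str, str]) -> str:
    """Greedy rewriter: keep a substitution only if it lowers the score."""
    for tell, replacement in swaps.items():
        candidate = text.replace(tell, replacement)
        if toy_detector(candidate) < toy_detector(text):
            text = candidate
    return text

spam = "Let us delve into this rich tapestry. In conclusion, buy now."
print(toy_detector(spam))  # 3 -> flagged as AI
evaded = evade(spam, {"delve": "dig", "tapestry": "mix", "In conclusion": "So"})
print(toy_detector(evaded), "->", evaded)  # 0 -> sails through the filter
```

Real spam operations would fine-tune the model rather than swap words, but the feedback loop is the same: the detector’s output becomes the generator’s training signal.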

This is what we critics mean when we say that tech companies are pushing AI down our throats: they are putting unhelpful AI prominently in their products and are recklessly giving away generative models to hackers to flood the internet with unfilterable spam. These things are already having a noticeable effect on the quality of our interpersonal communications. I agree with Kelsey Piper that we are still in the early days of AI. But that doesn’t assuage my fears. In fact, that’s why I’m worried.

7 Likes

At first, Google would give you an option to filter out the AI stuff (granted, you had to apply it every time you searched something). That disappeared about a month ago. I force myself to scroll past all the AI stuff every time I do a search.

It’s always a moment of temptation and mindful restraint. Sometimes I’m saying to myself, oh Beth, just look at it and be done with it. Who’ll know? Seriously, what difference does it make? It forces the regular person to give in, eventually. (I consider myself a regular person but still don’t give in when it comes to the Google search engine.)

Yes, down our throats. More often in a creepy kind of way where you don’t realize that’s what’s going on.

Also, she over-simplifies what the “bad AI content” is. Actually, she doesn’t really go into that. Fair enough. But the non-ethical part of the AI pie is the slippery slope that people simply won’t be able to navigate without really persistent mindfulness. It’s going to be too pervasive, and impossible for people to discern whether they’re interacting with human intelligence.

Hence my crude AI 15% Pie graphic below. The only thing I used computer intelligence for is the perfect circle. I didn’t delete the extra white space.

I was thinking 5% but I’ll go with 15%.

As a non-programmer who appreciates programmers, I totally agree. Thanks for fleshing this out a bit.

And I can’t stop thinking about the compute capacity part, with its downstream effects on data centers and energy infrastructure. What always gets overlooked by people like this author: just where are all the rare-earth metals coming from? How?

Because I use Facebook, I saw an ad this morning for the new Betwixt app.

I am naive. I did not know such things exist. I never played computer games.

Anyway, here we have an AI app for this – Betwixt (from the research paper they highlight, The Magic of the In-Between_SHarmon.pdf):

We created Betwixt as an experiment to determine if interactive narrative could help users strengthen mental resilience in an engaging, constructive, and collaborative way over the long term.

OK, so far so good. Note the subtle use of interactive.

Betwixt is experienced by completing a sequence of dreams, which are similar to interactive book chapters. During a dream, one can interact with their surroundings and, at times, with a chatbot. Conversing with the chatbot provides opportunities for understanding the fantasy world, listening to stories, and self-reflection. Between dreams, users unlock optional quests that allow them to explore journal prompts and a library of guided meditations, among other resources.

The article mentions the term chatbot five times. There is no mention of AI, or any hint of it, except through the term chatbot.

For effective self-reflection, one must be able to focus on, understand, and reinterpret their negative emotions and experiences. Additionally, this reinterpretation process is less likely to fail (i.e., lead to rumination and the escalation of negative affect) when done from a self-distanced perspective [2,3]. In this case, self-distancing means that the person in question is able to view themselves from the perspective of an observer during their analysis.

Well, that sure sounds like establishing mindfulness and moving into initial samatha. I mean, I realize we won’t ever land on a consensus EBT meditation manual. But at least most of us agree that, absent physical illness and various neurological situations, the average human being has the capacity to establish mindfulness and gain some basic insight.

Without a computer.

In so doing, Betwixt frames self-reflection and reframing as a collaborative conversation, and encourages self-expression. Users found Betwixt to support a healthy balance between guiding (just enough to prevent a lack of connection or “freezing up”) and “allowing your own mind and personal experiences to fill in the blanks to find meaning”. It is possible that these combined capacities of Betwixt encourage a human-computer therapeutic alliance. Traditionally, the term therapeutic alliance is applied in the context between a human patient and a human therapist.

Well, once that cat’s out of the bag, I don’t know how you ever get it back in.

Finally, to assuage any concerns about dependency on the chatbot, we have this:

Taken together, these findings are promising for future mental health apps that wish to secure regular user engagement while preventing addiction.

That’s the only reference to addiction. That said, it’s cleverly addressed in roundabout ways throughout the article.

It’s useful to review the paper’s references at the end, including:

Cowden, R.G., Meyer-Weitz, A.: Self-reflection and self-insight predict resilience and stress in competitive tennis. Social Behavior and Personality: an international journal 44(7), 1133–1149 (2016)

:elephant: :pray:

2 Likes

Thanks. There needs to be more informed pushback. There’s a huge shift, I believe, in public perception of AI, at least among those informed enough to understand it. There simply aren’t solutions for most of the problems.

You do the same thing you’d do if people were putting actual cats in bags: you make laws making it a crime, and you prosecute people for it.

2 Likes

Perhaps it’s similar to the way people forget about the power of vaccines once the worst diseases are eliminated. Or the importance of (physical) industrial regulation once there are no longer disasters in the news. History is full of massive, dangerous changes and of societies responding to them successfully through government regulation.

There is no reason to take a “Whelp, can’t do anything about it now” attitude.

5 Likes

I basically agree with the headline.

Nobody asked for AI.

Tech companies smelled fresh food in the buzzwords and a chance to make sales on AI, whether real AI or ordinary applications labeled as such.

Nobody else wants this.

It was common for early automobiles to break down on the side of roads and people in horse drawn carriages to laugh at them. Now look at cars.

AI is going to get better.

At the least, it will be the total death of personal privacy. Right now there is too much data to analyze every phone call the NSA has access to, or all the pictures from the many surveillance cameras. Improved AI has the potential to fill that gap.

Closer in the future are AI-generated posts on the Internet with lazy AI-generated replies. People flinging cyber poo at each other like apes in a cage.

1 Like

That’s really the now: all the big social media sites are full of AI bots.

Facebook in particular has been overrun with these revolting AI images of things like starving children next to a big birthday cake, with a caption saying “it’s my birthday!”. It’s so obviously AI, but there’s a stream of replies, dozens or hundreds, wishing them a happy birthday.

So how this particular con works, apparently, is that there’s a network of YouTubers and similar who publish how-tos on becoming a sponsored creator on Facebook. It’s targeted primarily at poor people in India, who are promised that Facebook will pay them for content. Then the scammers give them a basic tutorial, and of course you have to pay for the “secret” stuff for real success.

Basically, people are taught to enter a series of words into an AI image generator (typically Bing, I think), the idea being to create something as emotionally triggering as possible. Then you post the images on Facebook. Now, Facebook has a sponsored creator scheme. Not sure exactly how it works, but basically, if you post enough things with enough regularity that get enough views, Facebook will pay you per post. So this cycle of utter absurdity is a perverse incentive stemming from Facebook itself.

The reality, as with all cons, is that the people getting conned are the ones who think they are doing the conning. The payment from Facebook is rarely enough to justify the work, so the only ones making money are those selling the how-to courses.
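
Some back-of-envelope arithmetic shows the shape of it. Every number below is hypothetical (I don’t know Facebook’s actual payout rates or the course prices), but it illustrates why the math rarely works out for the creator:

```python
# Entirely hypothetical numbers, just to show the shape of the con.
payout_per_1k_views = 0.10   # assume $0.10 per thousand views
views_per_post = 5_000       # assume a decently viral slop image
posts_per_day = 10
course_price = 200.00        # assume the "secret" course costs $200

daily = posts_per_day * (views_per_post / 1_000) * payout_per_1k_views
print(f"creator earns ${daily:.2f}/day")                                     # $5.00/day
print(f"days of posting to recoup the course: {course_price / daily:.0f}")   # 40
```

Under those assumptions, the course seller pockets $200 up front while the “creator” grinds for over a month just to break even.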

When asked, Facebook just said they’re not doing anything against the rules.

We were promised computers that could work out the answer to life, the universe, and everything. And this is what we get. :face_vomiting:

2 Likes

Money + anything = :face_vomiting: .

It turns out that in an economy where people want control over each other based on property, this is what we get. Until society works out a way to overcome greed with Compassion, and hatred with Love, the world will plummet into war and famine. And the root of the problem, like I said, is lust for money.

2 Likes

It seems to be all about stock prices. The hype makes general-public investors turn any stock involving AI into a “meme stock,” and the stockholders get richer. So every company is throwing an AI “assistant” into their products. They are also laying off every white-collar worker they can find so their margins will look even better. Their profits are higher than ever, and they are laying people off to make even more money. It’s pretty insane here in the US at this point. (And the rest of the world is still trying to be like us … hint hint, we’re insane, don’t be like us!)

I mean, it’s so bad Youtubers who give career advice are making videos like this one:

1 Like

I was going to ask about X’s Grok, as X is not a publicly traded company any more and has no stock to pump… but it seems xAI is technically a separate company which has raised billions of dollars in “startup” capital from the Saudis, etc. So it seems @cdpatton is right again :sweat_smile:

1 Like

He does rather make a habit of it, I find.

2 Likes