AI-9: The “G” in AGI stands for “Monopoly”

AI is a marketing term. Once upon a time, we took AI to mean something that was genuinely intelligent and aware. But these things are not “intelligent”, nor do they even mimic intelligence, since mimicry assumes intention. They merely output streams of characters that human beings fool themselves into thinking represent intelligence.

The meaning of the term AI has been massively eroded. Compare Tesla’s “full self driving”, whose imminent arrival Musk has confidently predicted since 2016. In 2024, Musk is still claiming it’s just around the corner. Of course they market their software as “full self driving” when it is very far from that. Incidentally, when carmakers tried to make AI self driving work in Australia, they got beaten by the kangaroos. Onya, roos!

Current (early 2024) AI machines specialize in a particular domain, generating either text, or images, or video, etc. The next step is what they call Artificial General Intelligence (AGI), another marketing term. Altman has defined AGI as “AI systems that are generally smarter than humans.”

Soon, I expect, we’ll find this sense eroded to mean a machine that can do what several machines do today. There is hype, though no evidence, that the next OpenAI model, GPT-5, might achieve this. OpenAI is pushing the narrative that its products, such as the newly-demoed Sora video generator, are steps towards AGI.

But what can that mean? A video generator is no more “intelligent” than a word generator. Put the two together and it’s still no closer to being conscious. What’s happening is that AGI, as a marketing term, is eroding from Altman’s claimed capacity to exceed human intelligence across arbitrary domains, to something that generates output in a few circumscribed cases. AGI in this sense does not signify any progress towards actual awareness. It is merely a machine whose output is designed to fool human beings in a greater range of cases.

The real impact of AGI will not be in consciousness, but in market dominance. It will not only replace human workers, it will render obsolete most of the current small and medium AI projects.

The tech industry has an age-old pattern in which early diversity and experimentation are replaced by a monopoly or oligopoly. This happened with operating systems, word processors, browsers, social media, mobile phones, online marketplaces … you name it. You end up with one or at most two or three massive companies dominating the market while everyone else carves out a tiny niche for specialist needs. This situation arises because, in the absence of effective anti-monopolistic action by the US government, the leading player uses unethical or illegal means to destroy their competition.

This method was pioneered by Microsoft, who called it “embrace, extend, and extinguish”, and who partly because of it lost a huge anti-monopoly case in 2001. These days people think of Bill Gates as a philanthropist, but let’s not forget how he got so rich. From Wikipedia:

Bill Gates was called “evasive and nonresponsive” by a source present at his videotaped deposition. He argued over the definitions of words such as “compete”, “concerned”, “ask”, and “we”; certain portions of the proceeding would later provoke laughter from the judge when an excerpted version was shown in court. Businessweek reported that “early rounds of his deposition show him offering obfuscatory answers and saying ‘I don’t recall’ so many times that even the presiding judge had to chuckle. Many of Gates’s denials and pleas of ignorance were directly refuted by prosecutors with snippets of e-mails Gates both sent and received.”

We have seen similar behavior from various tech gurus over the years. Now Microsoft is OpenAI’s biggest partner. And it seems that the childish and evasive behavior of the tech oligarchs hasn’t changed.

As I write in March 2024, an interview with OpenAI’s CTO Mira Murati is doing the rounds. (Speaking of misunderstanding sci-fi, FastCompany reports that one of her favorite movies, 2001: A Space Odyssey, “centers on a rogue AI that kills everyone.”) When asked what the sources are for OpenAI’s video generator Sora, she said, pulling straight from the Gates playbook, “I’m actually not sure about that”. What’s chilling is not that she’s lying—she did after all learn from the best at Tesla and Goldman Sachs—but that she knows she can just get away with it.

Everyone in the industry is well aware that video generation capabilities such as Sora require massive amounts of training data, and that it is extremely likely that such data comes from legally dubious sources. By scraping and using the work of everyone else, OpenAI’s road leads not just to threatening the livelihoods of actors and screenwriters, but those of other AI companies as well. The current boom in AI startups, which is characterized by massive speculative investment in companies with hazy paths to profitability, is headed towards a massive bust. When the smoke clears, only a few monoliths will be standing.

The monopolizing process is well under way in AI, where OpenAI is dominating both mind share and market share. Given the chaotic nature of its governance, it is too early to call it for certain, but it sure looks like it will achieve a monopoly in the near future. That means we will all be force-fed its content, for good or for ill, and subject to the decisions made by its leaders.

We have evidence that Altman, probably the single leading figure of the AI hype, understands how power works in institutions. He was president of Y Combinator, probably the single most influential venture capital firm and startup incubator in tech. It is quite literally the venture capitalist’s job to deploy their capital to buy control of businesses such that they make a profit. More recently, Altman was fired by the board, only to outmaneuver them and return. He’s clearly canny, and he’s put together a compelling pitch for companies: Complete control. Capitalists want control over labor the way that kings wanted gold, and Sam Altman is an alchemist promising no more complaining workers, with their annoying, incessant demands for higher wages, family leave, and even bathroom breaks.

Critics have spilt much ink discussing the terminology here, trying to distinguish LLMs from AI from AGI, marketing speak from fields of science, etc., but the AI hype continues to evade critics’ attempts to use precise language because it is not a real event. Just because alchemists often did practice chemistry doesn’t make alchemy real. AI is an idea that began as a subfield of computer science, until it was so distorted that it popped, detaching itself from reality. Now, this orphaned concept has grown to a life of its own, as our discussion of AI eclipses any meaningful definition of it as a real, definable thing.

Consider LLMs, the beating heart of the AI hype. They are fluent but not knowledgeable. Though speaking fluently often coincides with speaking knowledgeably, neither guarantees the other. This conflation is at the heart of much of the flawed AI research that we’ve already discussed, but seen through Agre’s argument here, it takes on new significance. Companies are training LLMs on all the data that they can find, but this data is not the world, but discourse about the world. The rank-and-file developers at these companies, in their naivete, do not see that distinction. They instead see the first general purpose tool that comes with an “ontological grid” (as Agre calls it) that can coherently fit the entire world. This tool can interface with the world just as we developers do, since we too only ever output symbols, be it human language, computer language, diagrams, etc. So, as these LLMs become increasingly but asymptotically fluent, tantalizingly close to accuracy but ultimately incomplete, developers complain that they are short on data. They have their general purpose computer program, and if they only had the entire world in data form to shove into it, then it would be complete.

This will never happen. It will be forever just around the corner, because the AI hype is millenarian, even going so far as to contain literal apocalyptic prophecy. Goalposts will forever move — if only they had more data, or more energy, or more hardware. The meanings of words in previous predictions will be fudged, then squeezed until they’ve been drained of all sense, only then to be discarded and replaced with new words in a new media cycle to keep the story forever alive and constantly changing.

One of the signs of this enervation of meaning is how institutions are crushing dissent by expelling or silencing those who criticize AI. An early example was when AI ethicist Timnit Gebru was kicked out of Google for raising ethical issues. She recently shared an experience from submitting a grant application:

https://x.com/timnitGebru/status/1836492467287507243

We received feedback from a grant application that included “While your impact metrics & thoughtful approach to addressing systemic issues in AI are impressive, some reviewers noted the inherent risks of navigating this space without alignment with larger corporate players,”

Sad but not surprising.

This has also been going on in medical care in the US for decades – someone needs an MRI and in a number of cases it won’t get done until it is approved by someone on the insurance company payroll. Maybe the same for government-run care.

And sometimes tests or treatments are denied entirely. Danger trumped by profit.

Apparently, the same goes for AI research that is not in the interests of enhancing $$$.

Thanks for the reference. Now I’m on Mastodon and following her there. I don’t follow X. So happy to let others filter out the worthwhile stuff!

Yeah, that was kinda hilarious. It shows how serious they are about “ethics”.

Well, it does show that Google is still innovating as an industry leader in some ways. After all, they were followed by OpenAI, Microsoft, Twitter, and others. I’m not sure if that’s the kind of industry leadership we’re looking for, though.

Didn’t everyone get the memo? Google’s motto is no longer “don’t be evil”.

Seriously, they dropped “don’t be evil” from their code of conduct because it is no longer compatible with their values.

I once had an argument with Eric Schmidt - former chairman of Google - about Google’s lack of privacy, and his reply was “Anyone who uses any Google services should have zero expectations of privacy. Anyone who is concerned should stop using Google services.”

Since then (about 12 years ago) I have consciously avoided using any Google service, not search, not maps, and I stay away from using any version of Android, or Chrome, or Chromebook.

The insane energy consumption for AI by these giants deserves more attention. Even by very conservative and potentially biased estimates, a single GPT query consumes 10X the energy of a query to a regular search engine. Real-world numbers could be much higher.
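To see what that 10X ratio implies at scale, here is a back-of-envelope sketch. Both per-query figures are assumptions drawn from commonly cited public estimates (roughly 0.3 Wh per conventional web search, a figure Google published in 2009, and roughly 3 Wh per LLM chat query, a widely circulated 2023 estimate), not measurements, and the daily query volume is a hypothetical round number:

```python
# Back-of-envelope energy comparison. Both per-query figures are
# assumptions based on commonly cited public estimates, not measurements.
SEARCH_WH = 0.3  # assumed Wh per conventional web search (Google's 2009 figure)
LLM_WH = 3.0     # assumed Wh per LLM chat query (a widely cited 2023 estimate)

ratio = LLM_WH / SEARCH_WH  # the "10X" figure quoted above

# Scale the difference to a hypothetical 1 billion queries per day.
QUERIES_PER_DAY = 1_000_000_000
extra_mwh_per_day = (LLM_WH - SEARCH_WH) * QUERIES_PER_DAY / 1_000_000  # Wh -> MWh

print(f"per-query ratio: {ratio:.0f}x")                                 # 10x
print(f"extra energy at 1B queries/day: {extra_mwh_per_day:,.0f} MWh")  # 2,700 MWh
```

Under these assumptions the extra load works out to about 2,700 MWh per day, roughly the output of a ~110 MW power plant running around the clock, which is why cheap energy and lax regulation become such a draw.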

According to the World Economic Forum, Microsoft’s and Google’s CO2 emissions have increased by 30% and 50% respectively since 2019-2020. Their own reports are available for you to read.

Here’s Microsoft’s https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1lMjE
And Google’s https://www.gstatic.com/gumdrop/sustainability/google-2024-environmental-report.pdf

These companies are flush with cash, plunging deeper into AI, building bigger data centres and raising more money. Increasingly, a lot of cash-rich countries (e.g., in the Middle East) are getting into the game, with funding, facilities, tax breaks, and other incentives to attract businesses. Even if electricity is priced higher for AI use, these giants can afford to throw more money at it, or move to other jurisdictions with even cheaper energy or laxer regulations.

This is a scenario where I just don’t see a sustainable outcome by simply leaving it to market forces. So, what can be done?

Support legislation to regulate the tech industry and break up monopolies.

Ahem. A recent interview with Altman in Business Insider (subscription required) confirms that AGI is defined in market terms.

Thus “intelligence” means “money-making capacity”. Their understanding of what it means to be human is exactly what you always suspected it was, but could never quite bring yourself to believe.
