AI-9: The “G” in AGI stands for “Monopoly”

AI is a marketing term. Once upon a time, we took AI to mean something that was genuinely intelligent and aware. But these things are not “intelligent”, nor do they even mimic intelligence, since mimicry assumes intention. They merely output streams of characters that human beings fool themselves into thinking represent intelligence.

The meaning of the term AI has been massively eroded. Compare Tesla’s “full self driving”, whose imminent arrival Musk has confidently predicted since 2016. In 2024, Musk is still claiming it’s just around the corner. Of course they market their software as “full self driving”, when it is very far from that. Incidentally, when carmakers tried to make AI self driving work in Australia, they got beaten by the kangaroos. Onya, roos!

Current (early 2024) AI machines specialize in a particular domain, generating either text, or images, or video, etc. The next step is what they call Artificial General Intelligence (AGI), another marketing term. Altman has defined AGI as “AI systems that are generally smarter than humans.”

Soon, I expect, we’ll find this sense eroded to mean a machine that can do what several machines do today. There is hype, though no evidence, that the next OpenAI model, GPT-5, might achieve this. OpenAI is pushing the narrative that its products, such as the newly demoed Sora video generator, are steps towards AGI.

But what can that mean? A video generator is no more “intelligent” than a word generator. Put the two together and it’s still no closer to being conscious. What’s happening is that AGI, as a marketing term, is eroding from Altman’s claimed capacity to exceed human intelligence across arbitrary domains, to something that generates output in a few circumscribed cases. AGI in this sense does not signify any progress towards actual awareness. It is merely a machine whose output is designed to fool human beings in a greater range of cases.

The real impact of AGI will not be in consciousness, but in market dominance. It will not only replace human workers, it will render obsolete most of the current small and medium AI projects.

The tech industry has an age-old pattern where early diversity and experimentation are replaced by a monopoly or oligopoly. This happened with operating systems, word processors, browsers, social media, mobile phones, online marketplaces … you name it. You end up with one or at most two or three massive companies dominating the market while everyone else carves out a tiny niche for specialist needs. This situation arises because, in the absence of effective anti-monopolistic action by the US government, the leading player uses unethical or illegal means to destroy its competition.

This method was pioneered by Microsoft, who called it “embrace, extend, and extinguish”, and who because of it lost a huge anti-monopoly case in 2001. These days people think of Bill Gates as a philanthropist, but let’s not forget how he got so rich. From Wikipedia:

Bill Gates was called “evasive and nonresponsive” by a source present at his videotaped deposition. He argued over the definitions of words such as “compete”, “concerned”, “ask”, and “we”; certain portions of the proceeding would later provoke laughter from the judge when an excerpted version was shown in court. Businessweek reported that “early rounds of his deposition show him offering obfuscatory answers and saying ‘I don’t recall’ so many times that even the presiding judge had to chuckle. Many of Gates’s denials and pleas of ignorance were directly refuted by prosecutors with snippets of e-mails Gates both sent and received.”

We have seen similar behavior from various tech gurus over the years. Now Microsoft is OpenAI’s biggest partner. And it seems that the childish and evasive behavior of the tech oligarchs hasn’t changed.

As I write in March 2024, an interview with OpenAI’s CTO Mira Murati is doing the rounds. (Speaking of misunderstanding sci-fi, FastCompany reports that one of her favorite movies, 2001: A Space Odyssey, “centers on a rogue AI that kills everyone.”) When asked what the sources are for OpenAI’s video generator Sora, she said, pulling straight from the Gates playbook, “I’m actually not sure about that”. What’s chilling is not that she’s lying—she did after all learn from the best at Tesla and Goldman Sachs—but that she knows she can just get away with it.

Everyone in the industry is well aware that video generation capabilities such as Sora require massive amounts of training data, and that it is extremely likely that such data comes from legally dubious sources. By scraping and using the work of everyone else, OpenAI’s road leads not just to threatening the livelihoods of actors and screenwriters, but those of other AI companies as well. The current boom in AI startups, which is characterized by massive speculative investment in companies with hazy paths to profitability, is headed towards a massive bust. When the smoke clears, only a few monoliths will be standing.

The monopolizing process is well under way in AI, where OpenAI is dominating both mind share and market share. Given the chaotic nature of its governance, it is too early to call it for certain, but it sure looks like it will achieve a monopoly in the near future. That means we will all be force-fed its content, for good or for ill, and subject to the decisions made by its leaders.


We have evidence that Altman, probably the single leading figure of the AI hype, understands how power works in institutions. He was president of Y Combinator, probably the single most influential venture capital firm and startup incubator in tech. It is quite literally the venture capitalist’s job to deploy their capital to buy control of businesses such that they make a profit. More recently, Altman was fired by the board, only to outmaneuver them and return. He’s clearly canny, and he’s put together a compelling pitch for companies: Complete control. Capitalists want control over labor the way that kings wanted gold, and Sam Altman is an alchemist promising no more complaining workers, with their annoying, incessant demands for higher wages, family leave, and even bathroom breaks.

Critics have spilt much ink discussing the terminology here, trying to distinguish LLMs from AI from AGI, marketing speak from fields of science, etc., but the AI hype continues to evade critics’ attempts to use precise language because it is not a real event. Just because alchemists often did practice chemistry doesn’t make alchemy real. AI is an idea that began as a subfield of computer science, until it was so distorted that it popped, detaching itself from reality. Now, this orphaned concept has grown to a life of its own, as our discussion of AI eclipses any meaningful definition of it as a real, definable thing.

Consider LLMs, the beating heart of the AI hype. They are fluent but not knowledgeable. Though speaking fluently often coincides with speaking knowledgeably, neither guarantees the other. This conflation is at the heart of much of the flawed AI research that we’ve already discussed, but seen through Agre’s argument here, it takes on new significance. Companies are training LLMs on all the data that they can find, but this data is not the world, but discourse about the world. The rank-and-file developers at these companies, in their naivete, do not see that distinction. They instead see the first general purpose tool that comes with an “ontological grid” (as Agre calls it) that can coherently fit the entire world. This tool can interface with the world just as we developers do, since we too only ever output symbols, be it human language, computer language, diagrams, etc. So, as these LLMs become increasingly but asymptotically fluent, tantalizingly close to accuracy but ultimately incomplete, developers complain that they are short on data. They have their general purpose computer program, and if they only had the entire world in data form to shove into it, then it would be complete.

This will never happen. It will be forever just around the corner, because the AI hype is millenarian, even going so far as to contain literal apocalyptic prophecy. Goalposts will forever move — if only they had more data, or more energy, or more hardware. The meanings of words in previous predictions will be fudged, then squeezed until they’ve been drained of all sense, only then to be discarded and replaced with new words in a new media cycle to keep the story forever alive and constantly changing.
