AI-15: Regulating AI

Satya Nadella is the CEO of Microsoft and, as such, chief partner of OpenAI and one of the major players in the field. Microsoft is a legacy company now, known for business services rather than innovation. Nadella is seen as a voice of moderation, the effective manager who shepherded Microsoft to its greatest profitability; you won’t find him enthusing over sci-fi fantasies.

He strikes a note of reasonableness when he says:

We need some moral philosophers to guide us on how to think about this technology and deploying this technology.

When I read this, I thought to myself, “I bet they’ve sacked the actual ethics team”. I know, I’m way too cynical. Except, well, according to The Verge:

Microsoft laid off its entire ethics and society team within the artificial intelligence organization … the team has been working to identify risks posed by Microsoft’s adoption of OpenAI’s technology throughout its suite of products.

The layoffs were directly prompted by Nadella’s drive to incorporate OpenAI. They follow the precedent set by Google when they fired Timnit Gebru, an AI ethics researcher, for speaking out, which was literally her job. They’re still doing it, by the way: Google just fired an engineer for opposing their AI project with Israel.

In his conversation with Jack Kornfield, Sam Altman spoke, as he has so often, of the need for a governance model for the AI he is building. It is a curious thing to watch him stumbling over ideas, groping towards some conception of how there might be oversight for a major industry. He speaks of getting representation from diverse people, ensuring all have a stake, and admits the difficulty of this and their failure to achieve anything like it. In the end he reduces it to a “Platonic ideal”, revealing that in his heart he believes it will never exist, which is, after all, the defining characteristic of Platonic ideals. In his blithe way, he tells The New Yorker his reasoning:

We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board. Because if I weren’t in on this I’d be, like, “Why do these f***ers get to decide what happens to me?”

The idea he is so vaguely and hesitantly trying to form is well known to us all: it’s democracy. He’s describing democracy. But he just can’t seem to see it.

For centuries, people fought and bled and died so that we could have a democracy, a genuine say in how our countries are run. But in their own minds, our tech overlords don’t live in a democratic world; they live in a capitalist one. Altman says, “capitalism is the worst system except for all the others”, altering the famous Churchillian “democracy is the worst form of Government except for all those other forms”. I’m not sure if he’s aware that he is paraphrasing, but either way it betrays his worldview. He is swimming so deeply in the waters of modern neoliberalism that he can’t even see that he is underwater.

It seems we need to remind ourselves how things work. Capitalism is a way of running an economy that emphasizes economic growth through private enterprise, fuelled by the expansion of capital. The economy is one part of a nation. The nation as a whole is run by democracy, which is the will of the people as manifested through elected representatives. The elected representatives make the laws by which the economy operates. Not only can they make any laws they like, within various constraints, to regulate industries; they also have the means to enforce those rules: bureaucracies, and ultimately courts and prisons. Altman and the other tech oligarchs must abide by the law, but it is crucial to restrict their powers before they get too big.

What AIs need in terms of oversight is effective and dynamic Federal government regulation. This in itself does not seem controversial, as indeed Altman himself has been calling for regulation.

But how serious is he? Even Altman says, “you should be skeptical of any company calling for its own regulation”. Look at what they do, not what they say.

When the EU made a start on regulation, OpenAI lobbied to have the rules watered down and their products excluded. And when a group of American stakeholders asked Congress not to apply copyright to AI, they tried to hide the fact that it was OpenAI’s lawyer who drafted the letter, but he accidentally left his fingerprints in the metadata. I guess they’re just not that good with computers.
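For the curious, here is a minimal sketch of how such fingerprints surface, in Python with the python-docx library. The filename `letter.docx` is a hypothetical stand-in, not the actual document:

```python
# A minimal sketch of how authorship "fingerprints" persist in document
# metadata. Assumes the python-docx library (pip install python-docx);
# "letter.docx" is a hypothetical stand-in for the lobbying letter.
from docx import Document

doc = Document("letter.docx")
props = doc.core_properties

# Word files carry core properties recording who created the document
# and who last touched it -- exactly the trail described above.
print("author:       ", props.author)
print("last modified:", props.last_modified_by)
```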

AI excels at creating low-quality information to fool people who don’t look too closely. Imagine a world where CEOs can just roll out a business in minutes, its board made up entirely of non-existent humans, and flush their cash through it. Well, imagine no longer! A company called Vespers Inc. for a short time controlled OpenAI’s $175 million startup fund, but when questioned, OpenAI denied any knowledge. It seems likely the fictitious company was a product of AI. Altman now runs the fund.

The neoliberal assumption is that government is too slow and inefficient. But there are good reasons for that. Governments embody the complexity of centuries of real-world operations, with all the contradictions and mess that entails. The world is slow and inefficient. Evolution is slow and inefficient. Babies are slow and inefficient, which seems to be particularly troubling for Altman:

The thing people forget about human babies is that they take years to learn anything interesting. If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.

Well, I for one am glad that Altman is not in charge of the program for shutting down babies.

Suppose we were to accept Altman’s position and create some kind of new governance-without-government system. What reason do we have to think that his solution would be any better? Tech companies have a long history of thinking they can do everything better than everyone else, only to fail ignominiously: Apple’s failed car, Google’s failed sky-fi (to pick just one among many), Facebook’s failed metaverse, Amazon’s failed “AI” checkout, IBM’s failed Jeopardy-beating Watson, Microsoft’s failed attempt to monopolize the internet. Look, failure is fine. It’s how we learn. But we can’t bet the future of humanity on the assumption that these are specially gifted people who will not fail. They fail all the time.

Altman talks of getting an AI to produce a better governance model. But governance models aren’t theories, they are how people work. You can’t just create a model and implement it on top of a bunch of people. His own company, a vastly less complex organization than the Government, decided to kick him out, then go crazy in public for a while, then hire him back. These organizations are riddled with ideologues and extremists; they can’t even govern themselves.

The real problem is that he doesn’t want genuine government oversight, because a government could write a law and end his entire enterprise. What he wants is vibes. He wants something that looks and feels like democratic oversight, lending legitimacy to what they would have done anyway.

We wouldn’t put a car on the roads until it was proven to be safe. It’s up to the government to write laws that define the level of safety, and up to the companies to ensure they comply with those laws. We shouldn’t release generative AI until it can be proven safe. And since the makers of it have repeatedly told us that it is not safe, we should take them at their word.

Implementing effective legislation is not easy. Fortunately, AI acolytes have done some of the work, since they have repeatedly identified fundamental flaws with their own technology. These can form a minimum standard to which their products should be held.

Altman, for example, says “all generated content should have to be tagged as generated”. This is clearly correct. If I sell tinned peaches, I have to list on the label exactly what is in the can, and if I don’t I’ll be fined. Since the AI firms are, apparently, so incredibly advanced that they will be creating a cosmic superintelligence any day now, why should we not hold them to the same standard as a tin of peaches?

Of course, Altman knows very well that this will never happen, as metadata can always be stripped. To him, this “seems like a miss”. What it seems like to me is a reason to ban his product. Imagine a tinned-peach manufacturer saying, “We have no way of knowing which of our cans are actually peaches and which are cyanide. Oh well, missed opportunity, let’s ship it.”
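To see how low the technical bar for stripping is, here is a minimal sketch in Python, assuming the Pillow imaging library and a hypothetical `tagged.png` carrying a provenance tag. Rebuilding the image from its raw pixels silently discards whatever metadata the file held:

```python
# A minimal sketch of how trivially provenance metadata can be stripped.
# Assumes Pillow (pip install Pillow); "tagged.png" is a hypothetical
# AI-generated image carrying a provenance tag in its metadata.
from PIL import Image

img = Image.open("tagged.png").convert("RGB")

# Rebuild the image from raw pixel values only: any EXIF, XMP, or
# C2PA-style provenance chunks in the original file are simply not
# carried across to the new file.
clean = Image.new("RGB", img.size)
clean.putdata(list(img.getdata()))
clean.save("untagged.png")
```

A dozen lines, no special expertise required, and the “tag” is gone, which is why tagging alone cannot be the whole regulatory answer.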

Tech companies should be required to solve the problems of their product before release, at the bare minimum the problems that they have identified themselves. This is no hypothetical problem. AI generators routinely create harmful and destructive content, and their makers just … get away with it.

In the same Time interview, Altman is asked what signs should trigger a “slowdown” in AI development.

If the models are improving in ways that we don’t fully understand, that would be one. If there’s significant societal disruption, that would be another. If we don’t feel like we’re making sufficient progress on alignment technology for the projected capabilities of the next train run, that would be a third.

All three of these things are true right now. Yet despite many calls for a “slowdown” they just keep charging ahead. Asking the AI industry to slow down is like asking an addict to use in moderation. The problem isn’t the addict. It’s that the people around them don’t understand what is going on, so, thinking they are helping, they just enable.


:clap::clap::clap: exactly, and so did the rest of the companies, it was all horribly cynical

to be fair, most of “AI ethics and safety” was just people trying to make a name for themselves by coming up with sci-fi doomsday scenarios https://twitter.com/ESYudkowsky and because they cried wolf about LLMs so many times over the past few years, people started to care far less about their opinions, which I guess made these layoffs less offensive in the public eye

one of the few people I admire in the field with actual publications on AI safety is Robert Miles: https://www.youtube.com/watch?v=3TYT1QfdfsM

What early scriptures hint at the existence of mental and physical energies continuing to exist after the physical body ceases to function (such as when brain waves and heartbeat stop), and then continue beyond the body?

Hi and welcome!

While I’m not sure this question is pertinent to this thread, see DN28:

"They understand a person’s stream of consciousness, unbroken on both sides, established in both this world and the next.
Purisassa ca viññāṇasotaṁ pajānāti, ubhayato abbocchinnaṁ idha loke patiṭṭhitañca paraloke patiṭṭhitañca.

I do think that’s unfair. Sure, there is some percentage of such TESCREALists, but from what I’ve seen, most “ethicists”, as I mentioned in the essay on ethics, are mainly concerned with mitigating harm. In fact, Timnit Gebru, who was sacked from Google, strongly opposes such doomsayers.

My problem with ethicists is not that they are corrupt, but that they are required to work within a narrow and utilitarian window of what is possible.


Thank you for your response.