AI-16: What to do?

None of this is inevitable. It is a direct consequence of how we have chosen to structure our societies, and specifically of our conditioning to believe that private enterprise takes precedence. Fortunately, we still have governments, whose actions matter.

Here are a few things we can do.

  • Massively tax the private and corporate wealth of billionaires. They’re literally begging for it.
  • Break up big tech. Anti-monopolistic legislation is manifestly inadequate.
  • Regulate AI industries. Governments have been aware of these problems for decades and are now starting to act; see the overview at OECD.ai.
  • Develop a capacity for responsive regulation that anticipates developments. One way would be to restore government funding for in-house advanced R&D, so that governments are less dependent on tech companies.
  • Ban outright AI that is, or may be, or aims to be, conscious. It should be considered a moral abomination on the same order as attaching a live person’s head to a pig’s body.
  • Ban outright any claim by any earth citizen to ownership of any non-terrestrial body.
  • Ban outright the application of AI technology in wartime.

I would encourage all of you to find out what initiatives are underway in your area, and lend them your support if you can.

We should also revise our language around AI, including the word “AI” itself, and stop letting tech evangelists slant the argument by setting its terms. A few thoughts:

  • AI → maybe “human simulator”? or “fake mind”?
  • AI ethics → AI acceptability
  • AI safety → AI harm reduction
  • AGI → multi-modal AI

Be aware of how people are talking about things, and actively deconstruct anthropomorphic language. When the CEOs tell you what it is that they are doing, take them seriously. In your personal and professional life, limit use of AI products as best you can.

If you’re working in the field, try to be the one who has a level head and doesn’t buy into the hype. And ask yourself, “is AI the solution to the problem, or a solution looking for a problem?” If you can find a good way to use the tech, great. If you can find a way to do good things that is simpler and less fraught, even better.

I’m sure I am biased here, but if you believe in AI, please don’t see critics like myself as your enemy. The field is overheated and it needs a bucket of cold water. If you resent the fact that what you do is conflated with the likes of Altman and Musk, blame them. They’re the ones who are pushing hundreds of billions of dollars around and talking about replacing humanity. I’m just making a few posts, clarifying my own thoughts, and hopefully helping a few folks get a bit of perspective.

The winds are changing. SXSW, the progressive multimedia conference in Austin, Texas, draws exactly the kind of young, creative audience that used to get so excited about new tech. But recently, when presenters tried to sell them on how “AI makes us more human”, they were booed. There’s a huge disconnect between what the AI evangelists assume people want and what they actually want.

Adam Schiff currently has a bill before the US Congress aiming to mandate transparency in AI usage of copyrighted work. When it was criticized by Jess Miers, a lawyer and ex-Googler working in the field of AI law, hundreds of replies almost unanimously supported the bill. If you’ve ever been on Twitter, you know how hard it is to find unanimity on anything.

I feel like the ground has shifted.

If the oligarchs want my advice on how to win, and if I were Māra, I’d be happy to give it. It’s pretty simple. Follow the same playbook as with climate change. Sponsor protests; get people making memes and writing songs and stories and movies about the dangers of AI. And of course, writing long essays like this one! Keep ’em too distracted to do anything that matters. Like, say, voting for governments who will actually do something. That should do the trick.

I’m not saying people shouldn’t do these things. They should. I’m just saying that these things won’t stop it. The only way to stop the inhuman progress of AI corporations is with something more powerful than the corporations. And, like it or not, the only thing more powerful than the corporations is the government, which, for the time being at least, is still “we the people”.

The Buddha taught the Dhamma “for one who feels”. It is the beginning and ending of who we are, and what our path is. We are not a factor to be kept “in the loop”. We are the loop. Our world has become transfixed by the machines we have created, looking to them for our salvation and our hope. But we have been here all the time. Long before the age of the machines, and if we are lucky, long after. As spiritual practitioners, we should turn our eyes to ourselves, to human potential and human growth. We should be the example of those who are not swayed by the intoxication of the machine, who stand in our truth when the world falls for illusions.


Okay, I promise, that’s the last one! It’s been a wild ride, and thanks to everyone who has stuck with me.

If I have offended anyone, may you please forgive me! :pray: I’m genuinely grateful and thrilled to have all of you as fellow journeyers in this great river of life.


Number 13 will keep me especially engaged; it touches on very sensitive things.

Now that number 16 is out and is intended as the final one: thanks for all your consideration!


I’m thinking of putting all 16 parts together, perhaps translated into German, into a small brochure and showing a few copies around to some friends; let’s see…


Hey Nessie, thanks so much!

As for making a booklet or something, I’d love to do that, and in fact it was more or less a small book before I broke it into chapters to post here.

But I’m also painfully aware of how rough a lot of it is: just good enough for a blog post, but not really something I’d want immortalized without some serious work. If you or anyone else is interested in taking up the role of a “strong” editor, helping shape the material into a more professional form, I would be super grateful!


Meanwhile, just as I finished publishing these posts, there was a major publication in this area by Timnit Gebru and Émile P. Torres. You may have noticed that I have cited both of them multiple times, and a read of their article will show the many, many ways I am indebted to their work and insights. This paper collects their research in a single systematic, carefully presented form. Abeba Birhane called it “paper of the year”. If you want to learn about the philosophical background behind the modern AGI movement, it’s a must.

https://firstmonday.org/ojs/index.php/fm/article/view/13636


After reading all these, what can I say but…

I will follow you into the Butlerian Jihad, Bhante!

Thou shalt not make a machine in the likeness of a human mind - The Orange Catholic Bible


Hi Venerable -

Umm… I don’t know quite what you mean by “professional form”. I format articles that interest me just on a casual basis: layout, formatting, spellcheck. (A small example: a PDF, page size A5, for producing a booklet on a small office printer, Example_(dt)_a5_2401Jan.pdf (temporary, will be deleted tomorrow).)


Hm, on the other hand, yesterday I even thought it would be better to have your texts carved in stone: readable for our (hopefully) surviving offspring, possibly rediscovering a new stone age… :fearful:

:whale:


I’m talking about the writing. The book format is easy, but comes later.

Ah, I think I understand now; it seems I didn’t catch your meaning of “shaping the material” correctly. Unfortunately, I seem to have lost much of my mental capacity for projects due to a health incident two years ago, and I am not yet well recovered. So I think I can’t (currently?) be of much help in uplifting your material and texts… sorry, perhaps another time :smiling_face_with_tear:


Well, that was a fun project; congratulations on the effort and the research.

I thought of doing something similar a few years ago. I first became concerned about where Big Tech was heading around 2012; prior to that, it would be fair to describe me as a fangirl. I probably started being concerned about AI around 2016. The emergence of large-scale LLMs a few years later was initially a source of hope, which was quickly dashed when I realised what these models are trained on, and what the outputs could potentially be used for.

But I am sanguine about it now. It is what it is. We are now approaching Peak AI; eventually it too will pass, and we will enter the Valley of Disillusionment. I take refuge in the fact that many young people are gradually realising all this, perhaps faster than “we” boomers give them credit for. They are starting to reject being the “product”: the enshittification, being trapped by attention slavers and subscription overlords that harvest their private lives.

In the end, the Butlerian jihad will be started by the upcoming generation.

A beginning is the time for taking the most delicate care that the balances are correct. This every sister of the Bene Gesserit knows.

Signed,

Princess Irulan


:pray:


Thanks Bhante for all your work on this.

I wonder if you would consider a postscript of sorts to wrap back around to the first essay and share your current thoughts on the use of AI-related things on SuttaCentral.

And if you are going to have someone work on making the document a bit less impermanent in a digital form, it would be great to change the links to Archive.org versions, since I suspect link rot will heavily affect this topic.

Personally I feel like these two alternatives actually legitimize the field. I’d prefer something more like “digital lying machine.” :wink:

Thanks. I’m still unsure where to go from here; I’ll address it better when I get home.

Some good news today:

Bostrom is one of the longtermists cited by Gebru/Torres.


“There’s no guaranteed path to safety as artificial intelligence advances, Geoffrey Hinton, AI pioneer, warns. He shares his thoughts on AI’s benefits and dangers with Scott Pelley.”


He is featured in this six-part podcast from the Guardian, which is quite good. Both informative and entertaining. EDIT: and disturbing.
