AI-8: Artificial intelligence undermines human creativity

Please note: after valuable criticism I have modified this article, and some of the comments below refer to things that are no longer there.

There is a moment of purity in any creative’s process. The page is blank. The canvas is clean. The air is silent. There is an openness, an emptiness that cries out to be filled. Something inside the creative’s mind is pushing, some craving, some desire. To say something, paint something, sing something. They start, fumbling, awkward.

It’s bad, they know it is. It’s bad because they’d feel ashamed to show their friends, or because it’s not what they are supposed to do. But really, it’s bad because it destroys the emptiness. If any creative work is to be worthwhile, it must at least be better than the emptiness it ruins.

They wipe it out, try again. Wander around, get a coffee, waste time. Maybe they’ll get writer’s block. Maybe they make something mediocre. They know they can do better. So they wipe it out again.

They go deeper, until they access something that they’ve never seen before. Something new comes up, a spark, a phrase, color, a set of tones. When that is expressed, when it exists out there in the world, it has a presence of its own. It is a struggle, but it’s worth it. It is through the process that we become more fully human.

Every student knows this struggle. Or they did, up till now. Now they can just put a prompt into ChatGPT, get the output, and paste it in as an essay. Maybe do a bit of editing and tidying up, maybe not. AI bilge is filling students’ essays, academic papers, media, and social media. Per an article in Ars Technica, “Fake AI law firms are sending fake DMCA threats to generate fake SEO gains”. Google is indexing it into search results.

All this was predicted in the works of the visionaries of science fiction.

They say AI will unleash an age of creativity and discovery. I doubt it: what kind of person are we creating who has never had a moment of true creativity, never struggled to find an idea? It’s impossible to overstate the importance of this. AI doesn’t give, it takes. It takes the creative and intelligent output of humans, mashes it up, and spits it out. Those who advocate for it feel a sense of entitlement to the work of others, to appropriating literally all of human culture and making it into something they can sell.

It even threatens to destroy itself via “model collapse”: as AI models consume their own output, they become inbred, their output growing increasingly grotesque and distorted, a phenomenon dubbed “Habsburg AI”.
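
To see the dynamic in miniature, here is a toy sketch of my own (an illustration, not anyone’s actual training pipeline): a trivial “language model” that just memorizes word frequencies is retrained, generation after generation, on its own samples. Rare words drop out by chance and, once gone, never return.

```python
import collections
import random

# Toy sketch of model collapse: a unigram "language model" (bare word
# frequencies) retrained on its own output, generation after generation.
# The corpus is invented; the point is the dynamic, not the data.
random.seed(0)
corpus = ["the"] * 50 + ["cat"] * 25 + ["sat"] * 15 + ["quietly"] * 7 + ["iridescent"] * 3

for generation in range(10):
    counts = collections.Counter(corpus)
    words = list(counts)
    weights = [counts[w] for w in words]
    # The model's samples become the next generation's training data.
    corpus = random.choices(words, weights=weights, k=50)
    print(f"gen {generation}: surviving vocabulary = {sorted(set(corpus))}")

# Rare words vanish by sampling chance and can never come back; run long
# enough, the model converges on a single word repeated forever.
```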

AI devotees tell us that ChatGPT will be great for education. Ben Williamson of the Edinburgh Futures Institute at the University of Edinburgh offers 21 reasons to doubt these claims. Kids are already suffering: teachers are expected to use AI detectors to check whether students’ work is AI-generated, but the detectors are unreliable, and they falsely flag genuine work as machine-made.
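
The arithmetic of false accusations is worth spelling out. A minimal sketch with made-up but plausible rates (the numbers are my assumptions, not measured figures): when most students are honest, even a modest false-positive rate means most flagged essays are genuine.

```python
# Hypothetical rates for illustration only; real detector accuracy varies.
honest = 95                 # genuine essays per 100 submissions
ai_written = 5              # AI-written essays per 100 submissions
false_positive_rate = 0.10  # genuine work wrongly flagged
true_positive_rate = 0.80   # AI work correctly flagged

flagged_honest = honest * false_positive_rate   # 9.5 essays
flagged_ai = ai_written * true_positive_rate    # 4.0 essays
wrongly_accused = flagged_honest / (flagged_honest + flagged_ai)
print(f"{wrongly_accused:.0%} of flagged essays are genuine")  # about 70%
```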

It’s been a while since I was in school, so I’ve taken to asking students about it. So far, not a single one says it’s actually useful. Maybe the future is okay after all.

AI systematically undermines human consciousness, because that is its purpose. It is a product of faith in machines, built by people who have lost faith in humanity. The New Yorker quotes Sam Altman reflecting on the near future of humanity that he is creating:

“There’s absolutely no reason to believe that in about thirteen years we won’t have hardware capable of replicating my brain. Yes, certain things still feel particularly human—creativity, flashes of inspiration from nowhere, the ability to feel happy and sad at the same time—but computers will have their own desires and goal systems. When I realized that intelligence can be simulated, I let the idea of our uniqueness go, and it wasn’t as traumatic as I thought.” He stared off. “There are certain advantages to being a machine. We humans are limited by our input-output rate—we learn only two bits a second, so a ton is lost. To a machine, we must seem like slowed-down whale songs.”

Altman calls himself “team human”, which is one of those tech phrases that become weirder the more you think about it, as it implies there’s a “team inhuman”. Also notice how Altman, in this quote from 2016, casually specifies “thirteen years”, as if there were some scientific or evidential basis for making that kind of claim. Now we’re in 2024, so we’ve got five years left!

AI “safety” is created by the labor of thousands of workers, mostly in Africa, who are paid a pittance to stare at screens all day and filter out the rape, pedophilia, snuff, torture, and other horrors of the internet. They sit there, hour after hour, day after day, just staring at this stuff, slowly going mad. AI overlords grind human minds into paste to build their machine. And when they have no more use for them, they just get rid of them.

We are told of the breakthroughs and new insights that AI will offer. As my valued respondents have reminded me, we have seen a genuine advance in the use of AI to predict protein structures with AlphaFold. And there are many other processes that use this technology every day, most of which we are not aware of.

The technology itself has advantages and disadvantages, and if these were the only, or the main, uses, then I wouldn’t be writing these “diatribes”. I’m writing them to raise awareness of the costs, so that we stop being complacent about letting this technology invade every sphere of our life. And so that we take back the reins, and ensure that future development is guided by strong and responsible legislation in the public interest.

With due acknowledgment of the role that AI can play in tasks like predicting protein structures, it remains the case that virtually every significant scientific and technological achievement has been made by the human mind. If we want genuine scientific advances, the answer is not to place blind faith in a technology. It is to nurture humans. Stop creating economic structures that leave young people wanting to be influencers or billionaires. Improve STEM education, ensuring that scientists also get a well-rounded education in the humanities. Pay teachers. Feed children. Give decent budgets to scientists, with plenty of downtime for relaxation.

Insights arise from the human mind when it is confronted with emptiness and listens to the upwelling of what might fill it. Our machines are great, better than they have ever been. Our minds could do with some love.


With all due respect Bhante, “AI” successfully solved for the shape of almost all proteins, OCR’d the immense backlog of public domain books to make them genuinely accessible, and, particularly if you move beyond the “sexy” stuff people call “AI”, machine learning is used in an immense number of boring, behind-the-scenes industrial applications. The vast majority of productive activities (human or machine) are uninteresting to outsiders and don’t get articles written about them, but they still have meaningful results, because products are goods, even if production isn’t cool.

Whether it’s a human being or an “AI”, boring output is still often worthwhile. Remixing old data is a pretty good summary of most practical science. It’s how many people I know are working (with machine learning) to improve human health and well-being.

I agree with many of your other points: ChatGPT and algorithmic social media are the nemesis of every teacher I know, and I am seriously worried about the implications in my community, where, in addition to everything else you said, engagement-based strategies promote not just cyberbullying but real-life lethal violence.

But something having bad consequences doesn’t mean it doesn’t also have good results. And even for things that are net-negative, it’s important to acknowledge both the benefits and the detriments.


I assumed Bhante is specifically using “AI” in the above as a kind of “reverse synecdoche” for specifically these “AGI”-aspiring LLMs and their ilk. I don’t think (?) he’s including e.g. the Post Office’s letter sorting machines, or other such special-purpose algorithms in these diatribes of his… But yes, I invite Bhante @sujato to clarify this issue himself. :pray: What kinds of AI are you talking about here, Bhante? Does a chess AI “undermine human intelligence”? Cause I’d argue my email’s spam filter saves my intelligence (such as it is)! And my spell checker certainly saves my intelligibility :sweat_smile:

Yep, I wanted to mention using LLMs to predict proteins and help with the design of medicine.

Or really just approximating anything in physics that previously required simplifying the equations to the point where they are easily manageable, yet really inaccurate (I am not an expert in these fields).
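
Roughly, the idea is a “surrogate model”: run the expensive computation a limited number of times, fit a cheap approximation to the results, then query the approximation instead. A minimal sketch, with a made-up stand-in for the expensive function (real surrogates target things like fluid or quantum solvers):

```python
import numpy as np

def expensive_simulation(x):
    # Stand-in for a slow physics computation (pretend each call takes hours).
    return np.sin(3 * x) * np.exp(-x)

# Run the expensive code a limited number of times to get training data.
x_train = np.linspace(0.0, 2.0, 40)
y_train = expensive_simulation(x_train)

# Fit a cheap surrogate: a polynomial here, often a neural network in practice.
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=9))

x_query = 1.234
print("simulation:", expensive_simulation(x_query))
print("surrogate: ", surrogate(x_query))
```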

I still want to believe that https://www.khanmigo.ai/ will eventually work and help students by “patiently” tutoring them on all kinds of subjects. Even if it would only be partially LLM-based and would require a lot of work from people behind the scenes to generate e-learning material.

(Google) DeepMind, the group responsible for the freely available protein shape data, is explicitly targeting AGI or, as one of the founders sometimes calls it, ASI (“artificial superintelligence”). They tend to be less… for lack of a better term, wacky with their definitions, focusing clearly on input-output operations (“cognitive tasks”) rather than spooky “ghost in the machine” stuff, but they’re still trying to make AGI.

The issue is that there’s no such thing as a special-purpose algorithm, just special-purpose models. In the 1800s Gauss invented a bunch of algorithms to determine the properties of space rocks, and a few decades later they were generalized by Galton to apply to the weather, height, and “human greatness” (he was actually very anti-racist and anti-classist as an individual, but obviously there’s a straight line from his ideas to Hitler). Even before we had machines to do calculations, we had this problem with statistical inference.
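
A concrete way to see “general algorithm, special-purpose model”: the same least-squares fit Gauss used on orbital observations yields a completely different model when fed Galton-style heredity data. (Toy numbers, invented for illustration.)

```python
import numpy as np

def fit_line(x, y):
    # Ordinary least squares for y = a*x + b: one general algorithm.
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

# Fed orbit-like observations, it produces a model of motion...
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
position = np.array([0.1, 2.1, 3.9, 6.2, 7.9])
print("kinematics model:", fit_line(t, position))

# ...fed heredity data, it produces a model of inheritance (heights in cm).
parent = np.array([160.0, 165.0, 170.0, 175.0, 180.0])
child = np.array([163.0, 166.0, 169.0, 172.0, 175.0])
print("heredity model:", fit_line(parent, child))
```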

Now ML techniques are being developed to play games, recognize pin-up girls, and do other trivial tasks, and then applied to develop models that do all sorts of different things. Some great, some terrible, most innocuous.

Right now it’s fundamentally very similar technology that is being used by the USPS to read damaged package labels and by the Armed Forces to detect camouflaged targets and kill them with drones.


One major difference between humans and AI is that AI requires endless streams of input and data to generate responses and “answers” while, for humans, many of the deepest kinds of understanding and wisdom appear in, and arise from, silence.

I’m not an AI techie, but based on the underlying structure of AI it’s not clear that this will ever be possible for machines.

Indeed, yes. I’ll reword it to acknowledge this genuine role in scientific progress.

Interesting case, because the advances in making knowledge available are being threatened by the absolute flood of AI-generated garbage published as books, which is polluting Google search results, Google Books, potentially the Ngram corpus, and certainly Amazon.

I’m sure it is. I’m not sure that’s a good thing, though.

Sure, yes.

I’m sorry to hear that.

Lol, everything is useful for something.

Wow, I did not know that.

Indeed, and that is the main point of my post. I’ll reword it to try to bring this to the fore.


Thanks to you all for the genuine criticisms on this piece.

Just as some background, I have been writing these articles on and off for several months, and they’ve been edited and messed around with a lot, from different moods and perspectives. Hence you’ll see some repetition and so on. I’m trying to bring the project to a conclusion by publishing them once and for all, and doing some final revision before doing so. Clearly I didn’t do such a good job on this one, and I hope it is better now.
