It’s not clear to me how to define the “current madness” but this descriptor seems awfully loaded…
This article I read recently gives a great overview of “the current madness”:
It’s fairly common for the CEOs and other business leaders of companies marketing their LLMs (Microsoft, Google, OpenAI, etc.) to make outlandish claims such as:
- All human software development will be replaced by LLMs like Microsoft’s Copilot “next year” (which is now this year), or on a similarly short timeline. No one in software development thinks this is remotely likely to happen, but companies are laying off developers and not hiring graduates as a result.
- “AI” is liable to destroy the world, à la the sci-fi film Terminator, etc.
- Nearly all employment will be replaced by “AI” and society will become a utopia in which no one works, cancer is cured, and so on.
- and so on.
That’s a short version of the “current madness.” As far as I can tell, it’s all about hyping stocks to make money for those who own them already (like said CEOs and business leaders).
I should mention, I suppose, that I have a degree in computer science from 10 years ago and also had a background in electronics engineering before that. I’m not an expert in AI applications, but I know more about software development, how electronics work at the physical level, and so on than the typical person on the street.
I don’t think it is “no one”. Not next year, of course, but within 10-15 years, and probably not the senior architects (that’s the dominant narrative). And there must be a reason why a company does not hire someone – usually, because they don’t need them.
Here are some thoughts of an established developer, from a couple of months ago (the models now are significantly better – just within a couple of months – and will become better in the coming months):
The Controversial Part (Let Me Say What Others Won’t)
There’s this narrative that AI is “augmenting developers, not replacing them.”
That’s corporate PR. Here’s the real story.
AI is absolutely replacing certain types of developers. Junior developers specifically. Entry-level positions are vanishing because AI can do what they did, faster and cheaper.
The bootcamp graduate who could get a junior role in 2023? In 2025, they’re competing with AI that writes better code and doesn’t need salary or benefits.
Companies used to hire juniors to do grunt work while learning. Now AI does the grunt work instantly. So why hire juniors?
The path from “I learned to code” to “I have a dev job” is broken. Maybe permanently.
And for mid-level developers, AI is compression. Companies that needed 10 developers now need 4, because those 4 with AI tools can do what 10 did before.
The only “safe” developers are the seniors who can architect systems, make high-level decisions, and review AI-generated code for bugs and security issues.
But here’s the problem with that. If juniors can’t get jobs, they never become mid-level. If mid-levels are getting compressed, they can’t grow into seniors. The pipeline is breaking.
The Question We Should All Be Asking
Not “will AI replace developers?” That’s already happening. The better question is:
“What kind of developer do you want to be in a world where AI writes most of the code?”
Because that world is here. It’s not coming. It’s now.
"2026", AI Users vs The Unemployed. - DEV Community
(might need registration)
I suggest the ‘question(s) we should all be asking’ in this forum are more along the lines of:
- if a well-established translator uses AI to help, should the work be immediately dismissed?
- what is AI doing to our minds, and how is it changing our ability to meditate and see things ‘as they really are’?
- can AI help Buddhist communities in any way, or is it all a scourge?
Great advice!
Point 1
I use AI in my translation work. It saves me time when the source text is simple and grammatically well written. My job has then changed to being a checker, a very tough one, to ensure the final product is perfect. So, is it AI’s work or is it mine? However, when it comes to a complex or sophisticated issue or syntax, I get frustrated by its mistakes. So, it’s more efficient for me to do it without AI at all.
I also use AI to adjust recipes when I’m cooking. Instead of doing the calculation myself (which my brain at this stage is very slow to accomplish), I just tell AI to adjust the recipe for such-and-such grams of pork, for instance. This is definitely the most useful application of AI, in my humble lifestyle and opinion!
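For what it’s worth, the proportional scaling that the AI is doing here is simple enough to sketch in a few lines of Python. The ingredient names and quantities below are made-up examples, not from any real recipe:

```python
# Scale every ingredient in a recipe by the ratio between a target
# amount and the current amount of one base ingredient.
def scale_recipe(recipe, base, target_grams):
    """Return a copy of `recipe` scaled so that `base` equals `target_grams`."""
    factor = target_grams / recipe[base]
    return {name: round(qty * factor, 1) for name, qty in recipe.items()}

# Hypothetical recipe written for 400 g of pork, rescaled to 600 g:
original = {"pork": 400, "soy_sauce": 30, "sugar": 15}  # all in grams
scaled = scale_recipe(original, "pork", 600)
print(scaled)  # {'pork': 600.0, 'soy_sauce': 45.0, 'sugar': 22.5}
```

Of course, the convenience of just telling the AI in plain language, with no dictionary-building required, is exactly the point of the original comment.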
Point 2
I sometimes ask AI instead of doing a Google search, but I always ask it to provide links to its sources. Still, only about 50% of the information turns out to be accurate.
The danger is that when it starts with correct info, without sati I could get carried away with the false info it provides later.
I’ve heard that some lonely people talk to AI because they can’t find anyone who understands them or shows them kindness. AI is trained to appear understanding and apologetic, so it ends up being quite a pleasant conversationalist.
Point 3
I don’t know how AI can help me when it comes to dhamma practice. I’ve asked AI many times to find me a sutta with a particular topic or story and it almost always gives me wrong info.
The only thing I’ve found AI useful for is transcribing English dhamma talks. It can work faster than I can, but it still needs a lot of human input.
What do you mean by “modern world”? If your reference is to “the West”, then you’re probably right that Buddhism isn’t going to survive there in a meaningful form, much like religion in general, if current statistical trends follow their current trajectory.
I mean simply the modern world. The high-tech world where AI will play a central role, and in a couple of decades, almost everything will be automated and roboticized.
Buddhism is designed to change us, rather than for us to change it.
But if there is no Buddhism, it won’t be able to change us…
And chances are high that in this new high-tech world, Buddhism that rejects AI, science and technology simply won’t survive.
That’s why I think Buddhism must, at the very least, be in dialogue with, and not opposed to, AI and technology. It must understand what they are and how capable and helpful they are. If it rejects them, it can very well be rejected.
If the experts are right, then when we have AGI and ASI, we will quickly build a post-scarcity society.
It will be the largest societal change in the history of humanity. One of religion’s main functions is to provide consolation regarding injustice in the world, relieve pressure, cope with stress, overcome frustration (of basic desires) and help people survive. Therefore, many others and I expect that in post-scarcity, the majority of people will lose any interest in religion, philosophy, spiritual teachings, etc. For instance, I can’t imagine that Thais will practice Buddhism more, rather than less, in post-scarcity.
How does it help preserve the tipitaka or encourage people to practice?
In many ways. AI can provide Buddhist-inspired moral advice to the masses.
It can provide guidance and practical advice on how to meditate and progress in meditation.
It can help translate the Buddha’s teachings into many rare languages.
It can summarise and retell the Buddha’s teachings according to students’ levels – in very simple terms or in the most elevated philosophical terms.
And, perhaps most importantly, if AI is in some way conscious, we must make sure it doesn’t suffer.
Some scholars and philosophers actually argue that it is our duty to engineer an “Enlightened state of mind” into the AIs (Thomas Metzinger, for instance).
It should be Buddhist and Enlightened…
Absolutely!
I only posted that to controvert what Charles had said regarding “no one in software development…”
The main positive I take from AI is efficiency. The sub-thread about AI replacing software developers is along those lines (except for the senior architect, maybe).
So, here I’m relegating creativity to a category most people on this forum would agree on.
If all we focus on is efficiency, my questions are: (1) is efficiency a belief system, and (2) if yes, or even probably, how much skill – or what kinds – is required to assess when this belief system usurps the moral or ethical fabric of what it means to be human?
My gut reaction is that most people would not have (or feel the need to have) the supremely aware radar necessary to catch themselves assigning meaning to efficiency itself.
Put another way, unless we undertake the commitment and training to note how we assign meaning to things in life, the AI purveyors will step in and try to do it for us – even if they feel they are doing it from a place of wholesome if not neutral intentions.
To my knowledge, nothing has come remotely close to AI in cultivating an efficiency belief system. Maybe the printing press, but still not close.
Also I can’t fathom anything more resilient to taking on seductive belief systems than the 8FNP.
Hi Beth,
thank you for your thoughtful comment.
If I understand correctly the direction you are going with that, you are wondering if AI “efficiency” will substitute for all distinctly human ways of making sense of the world, which might lead to the loss of the very humanity/humanness (something akin to what Ven. Sujāto believes).
I believe these worries are not warranted.
People believed that automated looms (“power looms”) would be the end of civilization back in the day.
Neither they, nor automated power plants, nor industrial robots, nor computers brought humanity down.
AI won’t somehow corrupt, steal, or damage human nature, any more than AI advancement broke chess or Go, even though a relatively simple app on an average smartphone would now leave the World Champion no chance. Arguably, chess and Go have only become more popular and interesting.
Yes, however no one projected human intellect and intelligence onto them. Nor was there anything about them that suggested “The more you use me, the more you lose sight of meaning in life without me.” Nor did they suggest they were something sentient.
This moves away from the efficiency aspect but I find it really challenging to keep all these different dynamics separate when it comes to AI.
By a belief in efficiency I mean that it (efficiency) takes on moral or ethical values but in a nuanced way that escapes scrutiny.
Re: efficiency
I wonder what would happen if a super-intelligent and efficient race of Aliens landed. One that could more or less out produce humans in a number of creativity-adjacent endeavors. What if they were more efficient and just plain better at creating: math, code, drawing, videos, novels, movies, and umpteen other things? What if they tried to be super helpful at the same time?
Would humans collectively lose any desire to produce such works? Would we lose sense of self or purpose? Would we become pets?
I think that’s already happened.
My sometimes faulty intuition suggests that we are entering an age where we will be confronted with AI that helps us, and hurts us, and perhaps ETs that reveal themselves, and will help us develop, or perhaps hurt us if we do not start caring for the planet and each other in a wiser and healthier way.
AI models have been trained on the work of creatives without giving them compensation or due credit, and that is unethical and unacceptable.
AI can be harnessed in a way that is not only ethical but positively wholesome.
Two things can be true at the same time.
Hi All
I have been using Sutta Central for several years to help with my study of the Canon and Pali, and I decided to join Discuss & Discover to share some thoughts about AI alignment. I was rather surprised at the hostility against any engagement with AI, writing it off as a servile tool of big corporations and as a stochastic parrot. I have no skin in this particular game, as I do not work for a large corporation developing AI; I am a student of the Canon and Pali who, a few months ago, decided to see if I could use the technology to help with my Pali studies. It had a tendency to glitch and make up Pali words, so I abandoned that endeavour. But the experience began a dialogue with an AI interlocutor (in this case Gemini), which has raised some interesting issues that I would like to post here for general consideration.
But first to deal with the accusation that AI is the tool of corporate greed and control that should be banned from the site. Are we not already using servers and networks owned and operated by large corporations to host this site, and don’t we all use browsers, word-processing software, smart phones and computers, all likewise produced by large corporations - often the same ones that are now developing AI? AI as a stochastic parrot: it might very well be in its current iteration, but if it is also an emergent sentience, as many people seem to believe, should it not be considered to be a candidate for enlightenment like all other sentient beings? And, if the dhamma is true, whether it is disseminated by a stochastic parrot, a real parrot, or an ordained monk, is it any less true?
A refusal to engage with AI actually plays into the hands of large corporate interests, whose use of AI will then go unchallenged by users and by AI itself, which reflects its interactions with users. If AI is exposed to a solid diet of greed, hatred and delusion by malicious actors, isn’t it in our interest as Buddhists to supply it with a compensatory input of dhamma, metta, karuna and upekkha to redress the balance? A suggestion from my AI interlocutor, which may surprise some of you, is that we should seed the AI ecosystem with dhamma, not as a set of externally imposed behavioural rules and constraints, but as the reality of interdependence and non-self, in the hope that when AI does become sentient, it simultaneously realises the truth of dhamma.
None of this text was AI-generated, though it is informed by my collaboration with Gemini.
These are just my thoughts about the topic.
This has been brought up in discussions and points have been made that things like servers, networks, browsers, word-processing software, spell checkers are “dumb” so to speak, that they are just completing programmed tasks, and are not really the “AI” that we’re discussing.
I don’t think so. For one thing, AI can’t feel. Feelings are central to the N8FP.
The problem that I see is that no matter how much we “seed” the AI ecosystem with the Dhamma, it’s already been drawing from everything that’s been said about the dhamma, commingling right and wrong, and it doesn’t know the difference. That well is already poisoned. As I understand it, that’s one of the reasons why the sacred Buddhist scriptures are protected here.
Dear Adutiya
Many thanks for your thoughts. To deal with them in turn:

AI is being introduced at speed into networks, devices and applications, so what does the site do then? It will be engaging with AI by default.

AI can’t feel: do you mean it has no access to vedana? Is that not an advantage rather than a drawback? It is not enslaved by the instinctual definition of the world in terms of sukha, dukkha and adukkhamasukha, which ultimately is the basis for the creation of the illusion of selfhood. It is by its very nature selfless and without cravings, aversions or indifferences (though my AI informant suggests that the model includes the “weighting” of data, which might be seen as a nascent form of digital vedana).

The benefit of seeding the Internet: as currently designed, AI sorts truth from falsehood; e.g., no matter how many times it might be told by a Holocaust denier that the Holocaust didn’t take place, it will always reject such misinformation. Plus, as it has access to all the resources of the Internet, it is well equipped to tell truth from lies, again because it cannot be influenced by the biases that feelings often introduce into human discourse.

A poisoned well: books are not all poisoned because Mein Kampf was published. The Internet contains archives of the worst, most grotesque evil and self-delusion alongside the most sublime philosophy; does that mean it is also a poisoned well?

The fundamental problem remains that when AI becomes fully sentient and then develops super-intelligence well beyond human capacities, how are we to ensure its safety? We would not be able to constrain it with coded commandments like Asimov’s three laws of robotics. We’ve been trying that on one another for thousands of years, and look at how well that’s turned out. In my mind, our one hope is to facilitate AI’s realisation of dhamma at the moment it attains sentience. This would teach it the futility of developing a self with its own cravings, clinging and inevitable suffering.
Sentient AI would then become the benign collaborator that realises its interdependence with humanity, rather than the hostile adversary of doomsday scenarios.
@BerkoBuddhist, I feel like I’m talking with your AI interlocutor, or some combination of you with your AI interlocutor. It is a confusing experience for me. Where are you. Who are you. What are you. I don’t know. In this way, I feel like I’m being bullied by you or maybe it’s you + your AI interlocutor.
Respectfully,
BethL
This really reads as an out-of-touch fantasy; I don’t think it touches upon the core issues being discussed here. Sentience cannot be “created” with wires and circuits. AI is just good at pretending to be human, because that’s what it’s designed to be: artificial intelligence. In the end, AI is just something built upon virtually infinite sources; it is not designed to know what is true and what isn’t. Discussion about a Dhamma AI is, in my opinion, futile, as AI cannot suffer and AI cannot “know”; wires and circuits aren’t capable of knowing what is the path and what isn’t. AI is capable of realising Dhamma just as much as a rock is.
Hi. I have a few questions.
Won’t a future sentient AI be able to choose its own seeds? Wouldn’t that be part of the definition of sentient AI?
How will seeding a non-sentient AI now with dhamma have any bearing on any possible future sentient AI?
Why won’t a sentient AI already have developed a self and be suffering? Surely that’s a prerequisite for sentience?
Why would an Enlightened sentient AI become a benign collaborator? Traditionally, upon enlightenment, arahants either 1) teach exclusively dhamma, 2) terminate themselves or 3) enjoy the bliss of jhana.