Maybe the problem isn't AI, but rather the programming languages it is written in?

I still had Venerable Khemarato.bhikkhu’s post in mind - Yes, Tech Companies are Pushing AI Content Down Our Throats - when I came across this remarkable extract from a book by Andrew Smith: I learned the language of computer programming in my 50s – here’s what I discovered | Technology | The Guardian

One day in 2017 I had a realisation that seems obvious now but had the power to shock back then: almost everything I did was being mediated by computer code. And as the trickle of code into my world became a flood, that world seemed to be getting not better but worse in approximate proportion. I began to wonder why.

Two possibilities sprang immediately to mind. One was the people who wrote the code – coders – long depicted in pop culture as a clan of vaguely comic, Tolkien-worshipping misfits. Another was the uber-capitalist system within which many worked, exemplified by the profoundly weird Silicon Valley. Were one or both using code to recast the human environment as something more amenable to them?

There was also a third possibility, one I barely dared contemplate because the prospect of it was so appalling. What if there was something about the way we compute that was at odds with the way humans are? I’d never heard anyone suggest such a possibility, but in theory, at least, it was there. Slowly, it became clear that the only way to find out would be to climb inside the machine by learning to code myself.

Reading the whole article I was struck with how parallel it is to some of the sentiments shared on this website recently. Consider this portion:

There is a serious point, though, which I started to glimpse at PyCon: that the values and assumptions contained in programming languages inform the software that’s written with them and change the world accordingly. By the time I’d learned that Brendan Eich, author of JavaScript, is an anti-vaxxer and was a supporter of a campaign to have same-sex marriage nixed in California, I wasn’t surprised.

Here the author is implicitly claiming that those who use or enjoy JavaScript may be ethically challenged! That there may be something inherent - either in JavaScript itself or in the humans who use and enjoy it - that is ethically suspect.

This seems very much in keeping with some of the sentiments expressed on this website recently about AI and those who employ this technology.

Perhaps the problem isn’t AI, but rather that the vast majority of AI applications are written in Python which the author of the article above believes to be the more ethically correct language as compared to JavaScript :joy: :pray:


Perhaps they just prefer biochemical code? A program that has written itself using 6 billion base pairs, and which humans can decode but not understand the working of, surely seems mystical by comparison. :smiley:


The problem is we know this “language” also constructs “programs” with ethical lapses as well :joy: :pray:


Humans should create a program destined and designed to enter Nibbana in its current version, and then find a way to do so themselves as well.

Well, first humans will need to artificially create a ‘Self’ that is conscious of ‘its’ environment. Given human ingenuity, I’m sure they will find a way. :joy:

Theoretically it should be possible, given the Buddha’s teaching that Consciousness is simply a recursive function based on Form and the Processing of Sensor Input.

Machines are already ‘aware’ of their changing environment - that’s how we have self-driving cars. They are, however, not yet ‘aware of being aware’ - the nested function required for Vinnana, and which further causes a ‘Self’ to arise within ‘its’ World. Once at that point, a sophisticated Damage Avoidance System based on judgements about the conditioned phenomena being experienced by that ‘Self’ would inevitably lead to the perception of Suffering and the need to discover the path to ‘its’ Nibbana.
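The ‘nested function’ idea above can be caricatured in a few lines of Python. This is purely illustrative; the names `sense` and `aware_of` are my own invention, not anyone’s actual architecture:

```python
# A toy caricature of the idea above: plain "awareness" reacts to the
# environment, while "awareness of awareness" takes the system's own
# state as its input. All names here are invented for illustration.

def sense(environment):
    """First-order awareness: a mapping from sensor input to a judgement."""
    return {"obstacle_ahead": environment.get("distance_m", 100) < 5}

def aware_of(state):
    """Second-order 'awareness of awareness': a function over the
    system's own first-order state, not over the environment."""
    return {"i_am_perceiving": any(state.values()), "state": state}

first_order = sense({"distance_m": 3})
second_order = aware_of(first_order)
print(second_order["i_am_perceiving"])  # the system 'knows that it knows'
```

Whether stacking such functions ever amounts to Vinnana is, of course, exactly the open question.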

I personally find this kind of reflection quite valuable in what it reveals of who or what ‘I’ might be and what Nibbana might mean for ‘me’.

:smiley:

Interesting. This raises the question: is the suffering involved in finding Nibbana worth the Bliss of extinguishment? Yes? Then life has purpose, meaning, and worth.


I really like the Buddha’s teaching that ethical lapses are a result of wrong information.
Garbage in, garbage out, in a self-reinforcing spiral of bad processing. :laughing: With the possibility of correction!!

MA51
Being with bad people, one readily associates with bad friends. Having associated with bad friends, one readily hears bad teachings. Having heard bad teachings, one readily has disbelief. Disbelief having arisen, one readily has incorrect thinking. Having incorrect thinking, one readily has incorrect mindfulness and incorrect knowledge. Having incorrect mindfulness and incorrect knowledge, one readily has unguarded faculties. Having unguarded faculties, one readily has the three bad conducts.
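The chain in MA51 reads almost like a pipeline. As a playful sketch in Python (my own framing of the quoted passage, not anything in the source text):

```python
# A playful rendering of the MA51 chain as a data pipeline: each
# condition readily gives rise to the next. The step names paraphrase
# the quoted translation.

MA51_CHAIN = [
    "associating with bad friends",
    "hearing bad teachings",
    "disbelief",
    "incorrect thinking",
    "incorrect mindfulness and incorrect knowledge",
    "unguarded faculties",
    "the three bad conducts",
]

def propagate(chain):
    """Pair each link with the one it conditions: (cause, effect)."""
    return list(zip(chain, chain[1:]))

for cause, effect in propagate(MA51_CHAIN):
    print(f"{cause} -> {effect}")
```

Garbage in, garbage out: corrupt the first link and everything downstream follows; correct it, and the whole chain changes.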

Well, the tears to be shed in at most seven further lifetimes are nothing compared to the tears that have already been shed in the innumerable lifetimes before (SN15.13), or the infinity of lifetimes to come if one doesn’t enter the stream!


Interestingly, JavaScript (the language the writer feels is morally suspect) is rarely used in AI development; it is primarily a web development tool. And Python, the language the writer of the article celebrates, is used extensively in AI development. So the article doesn’t cast aspersions on AI and AI developers so much as on websites and web developers. :rofl:

Having studied and worked in Computer Science for much of the 1990s and early 2000s, I have to say I strongly resist the suggestion that you can make ethical judgments based on someone’s preferred programming language. That’s a product of age, the type of work you got, and market forces. If you entered the job market in the 70s and got a job programming for a bank, you were probably going to program in COBOL. If you studied AI in the 90s, you were probably going to learn LISP. C, then C++ (as programming languages moved toward object orientation), and later Java (not to be confused with JavaScript) have been widely used since C came out in the 70s.
Python is a great programming language. When my son wanted to learn some coding, I had him create increasingly complex games using Python. The Python community has done a great job creating libraries that support fun coding, which makes it great for learning, and they’ve built a wonderful, supportive community around the social side of programming. If you want to start coding, I’d suggest Python. But if you tell me you’d rather learn JavaScript, I’m not going to assume anything negative about your ethics.


Not an expert, but it seems obvious to me that a computer program, necessarily based on math and logic, can only ever come up with objective and logically coherent conclusions.

The interesting thing is that many of AI’s ethical or politically offensive blunders are, taken by themselves, nothing but such objective conclusions.

The statement: “There is no biological 3rd gender” would immediately cause an uproar, but is by itself just an objective conclusion of available medical data.

What this means is that human beings are irrational more than AI is at fault. Once it is adapted to include relative human ethical and political positions, it may of course lose its core value of producing quick, objective information !

Yet here we have an example of someone doing just that: making ethical judgements based on someone’s preferred programming language! :joy:

I get that you were trying to say that you don’t think it wise to do so and I agree. When you step back and look at it I think most here would agree that not only is it unwise, but actually a bit absurd.

And yet many on this board think that ethical judgements can be made about the usage of a particular algorithm, or that a particular algorithm is inherently ethically dubious. I was trying to compare and contrast so people could see the absurdity. :pray:


Anyway, my post was only tangentially related to the topic, and I’m baited only because @yeshe.tenley makes interesting points. Anyone can see my post in the edits and I’ll practice my silence by removing the post & muting this thread. :slight_smile:

Well, let’s see how these intelligences, if they become quasi-conscious, define themselves, or how they define the terms human and life, because, as the history of philosophy shows, humans themselves haven’t managed that to this day. Agreed-upon ethical behaviour happens mainly despite, not thanks to, ethical theories.

Maybe AI will even help us learn a lot about ourselves. If these machines, even when more sophisticated, fail to mirror human thought, behaviour and understanding, or if there are very noticeable differences, we may have grounds to question the materialistic hypothesis altogether. Epistemologies of the late 19th and early 20th centuries, the time before the positivistic turn, may become very relevant again.

There are clear problems with Buddhism, photography, the internet, hammers, the invention of the wheel, usage of fire by humans, with regards to ethics, their application, their goals and so on. You deem it necessary to mention the problems with the algorithm in question only because this board has become quite sympathetic to it. I’m not willing to give that inch.

As was mentioned above, the Python programming language is the lingua franca of the algorithm in question: the vast majority of those who make use of the algorithm do so via Python. The article above exemplifies the mindset that we should regard this as a salient fact and condemn Python and those who use it. It isn’t the algorithm; it is some feature of Python that is the problem! :pray:

Ah. Understood. But I don’t think the analogy works.

First, I’m certainly not (and I think most here are not) saying that AI programmers are all ethically suspect. I think the board has been very careful to single out certain proponents of AI as having ethical issues, such as Musk.

Second, to say a programming language is ethically problematic is not the same as saying a type of program is ethically problematic. (I’m not sure what you mean by “algorithm” in your post. There’s not an AI algorithm, if that is what you are saying. Lots of algorithms are used in any program, including AI programs.)
Anyway, I don’t think you can argue from “you shouldn’t judge a person’s ethics by their choice of programming language” to “you shouldn’t judge the ethics of certain classes of programs, such as AI.”
Thanks for your thoughts on this. :slightly_smiling_face: :pray:

Thanks for sharing!

This is really a key insight. Programming languages answer the question: how do we get machines to do what we want? The differences between languages like JS and Python are, for philosophical purposes, not important. They’re all abstractions that make it easier to turn switches on and off.

The real issue is, what do we want?

The libertarian impulse of the so-called “Californian Ideology” assumes that it is right to pursue desire, and to make machines to serve this purpose. It doesn’t interrogate the suffering created by desire, especially when that suffering is experienced by others.

It is possible to imagine a technology that serves not the maximization of desire, but the moderation of desire through wisdom. SuttaCentral has always tried to exemplify this: we don’t run ads, for instance. Instead, we rely on generosity, that is, on the human impulse to share.

The thing is, there is plenty of this in the tech world as well. There are masses of technologies that have been built with the genuine aim to enable sharing and just to improve people’s lives. Programmers love to do this stuff. The desire to make the world a better place is a genuine and powerful desire.

So it’s not about being puritan; it’s about supporting wholesome impulses where we can. That’s why we connect with community on an open-source forum rather than on Facebook. It’s why I’ve used Ubuntu for some fifteen years now. We can make decisions that are based on something other than, “what satisfies my desires right now?” Instead, we can ask, “how can I moderate and shape my desires so that they align with what is good and wholesome?”


Programming languages do not contain ‘values and assumptions’ that can inform ethical considerations. JavaScript and Python are both Turing complete languages and as such any program written in one can be turned into a functionally equivalent program written in the other. This has been proven mathematically. :pray:
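For what it’s worth, the equivalence is easy to glimpse at small scale: any computation expressible in one language is expressible in the other. A trivial Python example, with a line-for-line JavaScript counterpart shown in comments (both versions are my own, purely for illustration):

```python
# The same computation in two Turing-complete languages. The JavaScript
# counterpart appears in the comments; the two are functionally
# equivalent, whatever one thinks of either language's community.

def factorial(n):                  # function factorial(n) {
    result = 1                     #   let result = 1;
    for i in range(2, n + 1):      #   for (let i = 2; i <= n; i++) {
        result *= i                #     result *= i;
    return result                  #   }  return result; }

print(factorial(5))  # both versions compute 120
```

Strictly speaking, Turing equivalence is a claim about computability, not ergonomics; but it does show that nothing computable is off limits to either language.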

Computer programs are algorithms. Just because they may be formed from subroutines that are themselves algorithms doesn’t change that any computer program that can be run on a Turing machine is itself an algorithm.

There is nothing ethically problematic about a GPT/Transformer based program. That it can be put to uses that are themselves ethically problematic does not change this.

There is nothing ethically problematic about black and white high resolution photography. That it can be put to uses that are themselves ethically problematic does not change this. :pray:

No. The point is what you’re doing with the algorithm. A lathe isn’t good or bad, but if you’re making guns on the lathe, that’s a different story. Or if you’re using stolen metal; that, for example, would be wrong.

Also: Perhaps the @moderators can weigh in here with a clarification on the Right Speech policy. Putting forward an argument sarcastically just to stir the pot feels like trolling to me…


Of course they do. One assumption they share is that, “this language will help people quickly and easily turn lots of switches on and off.” And a value they share is that, “it is better to have a world where people can turn lots of switches on and off”.

Again, of course there is. Everything has a cost. One cost of AI programmes is that they consume massive amounts of energy, thus accelerating global warming and hastening climate breakdown. This is inherent in their very nature: you can’t build an AI like ChatGPT without consuming vast amounts of electricity. This is just one of the very many problems inherent in the technology.

We don’t live in a world where these things exist as some sort of theoretical entity divorced from reality. For a long time this was the case, and I as much as anyone enjoyed playing around with these ideas in sci-fi. But in the real world, the world we live in today, any computer program operates as part of a complex web of social, economic, psychological, ecological, human, etc. realities.

Here is the reality of AI.

Everything has a cost, and the cost of AI is vastly more than that of most other technologies. There’s no such thing as safe or ethical generative AI.


Agreed. The ethical consideration begins and ends with the human using the tool with specific intent and nothing inherent to the tool.

Unfortunately, not all agree as evidenced by the post below yours:

This is inherent in their very nature: you can’t build an AI like ChatGPT without consuming vast amounts of electricity. This is just one of the very many problems inherent in the technology. … There’s no such thing as safe or ethical generative AI.

This I think supports what I wrote… some on this board do indeed think that, “ethical judgements can be made about the usage of a particular algorithm or that a particular algorithm is inherently ethically dubious.”

I apologize if it seems otherwise, but the intent of this post was to offer a point of view in good faith in the hopes that it might inform considered mutual understanding. I have clumsily employed humor in the form of sarcasm, but it is offered in the hopes of shared laughter, not of derision. :pray: