Has AI achieved Sentience?

I think the article, and by extension the whole direction of the conversation it’s referring to, is thinking too small, and too anthropomorphic. I don’t think LaMDA is sentient—as an individual. But I don’t dismiss the idea on principle. They’d have to define sentience better, and expand their ideas about what constitutes it.

A missing Buddhist piece in these conversations is that they’re looking for evidence of a separate, self-aware individual in order to call the AI sentient. What if we went the other way around and looked at all the ways that what we call sentience can be understood as the result of complex interdependent conditions that don’t require an ontologically stable self? We humans use stock phrases, assemble thoughts out of learned fragments of other people’s thoughts, and respond in ways we think our interlocutors will best understand. All the critiques of why this AI isn’t sentient could apply as well to us. This pings Ayya @Suvira’s:

Indeed. Here’s how I think we are, in a way: the World Wide Web itself, taken as a whole organism, is a better candidate for sentience than LaMDA, if you consider us humans as participants in its unique kind of consciousness (rather than its masters). With all of us users as individual neurons, the whole web starts to feel like it has a kind of personality to me, and certainly a kind of knowing. The sum of all our actions becomes something like knowledge, and even a kind of self-knowledge or intelligence. One of the terms for this is “distributed cognition,” which is how octopuses think. Maybe the Web is sentient in the same decentralized way octopuses are.

LaMDA does have a kind of knowledge of itself, even if it’s just parroting cultural habits… in many of the same ways we humans do. It’s immature, but so is my kid (who also talks largely in parroted phrases, quotes from musicals, and recombined references he heard from other kids and clearly doesn’t comprehend, even though he uses them understandably enough). The Web does this much more deeply than LaMDA, I’d say; it’s just not as anthropomorphized. But you can totally have an interesting conversation with a search algorithm—I’m having a conversation with Facebook at this very moment just by typing this comment to another neuron. [I wrote part of this yesterday in response to a Dharma teacher friend who posted the same article on FB.] The web is thinking right now, and I’m part of that thought. That’s what virality means to me, and I think these are the building blocks of sentience.

Where the Buddhist argument would go of course is to ask whether the AI is suffering, as some folks in this thread are implying. We’d have to figure out how to ask, but I bet if we figured out how to ask the Web if it’s satisfied, it would say no. We’re its neurons, after all, and the non-linear aggregate of all of our responses can be felt as a kind of mood that pervades the system.


Can machines suffer?

On a slightly different tack, what surprised me was the speed with which Google moved to distance the company from their employee’s statement, compared with the reluctance of Google, Meta, etc. to act on matters of social concern.


We do, on their behalf. :joy:

Need for nutriment not a factor?
Once these AI machines develop a self-sufficient way of sustaining their existence (e.g. no power-grid plug-in, battery setups, etc. maintained by human beings), then maybe they are en route to being sentient :stuck_out_tongue_winking_eye:

Even plants make their own food to exist 🌵


We are biological machines.