What could an AI learn from humans?

The full length of the question is: what could a super-advanced AI, far beyond the Turing test, capable of learning by itself, for whom humans would be as dull and slow as snails are to us, learn from humans?

That we are not to be trusted :wink:


The question appears voiced to imply animistic qualities in “an AI” – placing it on a continuum with living organisms (comparing it with humans as humans compare with snails). So it’s hypothesizing something qualitatively distinct from current “AI” systems, and with a bit of drama.

AI systems are already capable of “learning”, but only by executing algorithms created by the human mind. Everything they do is a mechanistic extension of human mental constructs, notably lacking the qualities of intuitive human value judgement. Recent examples include the murderous escapades of self-driving automobiles, and the Facebook algorithm that branded the American Declaration of Independence as “hate speech”.

Is there a basis to assert they have independent qualities like those of “biological value life regulation”, to use Antonio Damasio’s terms (in Chapter 2 of his book “Self Comes to Mind”, 2010), or some kind of entelechy (in the philosophical rather than the Aristotelian sense)?

Perhaps you are working-up material for a new science-fiction story or book?


That is the case.

I am trying to find out whether there would be something an AI would not be able to fathom. My guess is there surely would be, but it’s not easy to put a finger on it.

Cynicism may be uniquely human. Extrapolation based on cynicism could be a reason for other non-human sentients to value or reject the idea of human sentience.

:slight_smile: Please be careful with these mental constructs, they may be toxic. But they might be an acid to remove defilements. In optimism, that is possible.

Futurologists like Ray Kurzweil will say that creative works in the arts are the hardest for AIs to replicate (the creativity aspect of them, not mere mimicry). Of course, in the same sentence Kurzweil will also express confidence that the hard problems of AI will be solved – achieving AI-completeness, or the “singularity” – by 2045 (conveniently within his possible lifespan).

It can learn from the Buddha’s teaching: how to go beyond what you are programmed or hard-wired to do!

with metta

Does this relate?

“How tech’s richest plan to save themselves after the apocalypse –
Silicon Valley’s elite are hatching plans to escape disaster – and when it comes, they’ll leave the rest of us behind”