Learning from AI

I watched a video on AI (I can’t find it right now) that explains how the AI couldn’t calculate a simple maths expression (e.g. 7*7 + (4-2); a step-by-step working is sketched below) and gave an irrelevant answer. But when asked to calculate it step by step, it did give the accurate answer. When confronted with the fact that it had previously given a different answer, the AI replied that it was a typo, and that obviously the second answer is the correct one.
Similarly, my mind also alters my memory retroactively, as if that’s obviously how I did it and not any other way; if there is evidence to the contrary, it will trivialize the error. It also reorders events so that a later thought appears to come before the action that led to it. The popular phenomenon called the “Mandela Effect” illustrates this clearly.
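
For reference, assuming the expression above was meant to be 7*7 + (4-2), the step-by-step working is just plain arithmetic (nothing to do with how an AI arrives at its answer):

```python
# Step-by-step evaluation of the (assumed) expression 7*7 + (4-2)
inner = 4 - 2            # parentheses first: 2
square = 7 * 7           # multiplication: 49
result = square + inner  # addition: 49 + 2
print(result)            # -> 51
```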

I think there are similarities between how our brains work and how AI works. Although it’s obvious how error-prone AI is and that it doesn’t understand any concepts (in contrast to humans), we humans believe we do understand concepts (feelings, etc.). Do you have a way to test whether we understand a concept or not, as opposed to just guessing like an AI?

I was investigating how emotions arise, and when presented with the opposite scenario, they seem to cease. When using this technique, sometimes the obvious reason for a feeling that my mind offers doesn’t lead to the cessation of the feeling. I had to keep guessing, and later reached the conclusion that I don’t really know the reason for a feeling. Although most of the time I assume I ‘know’ the feeling, I am just guessing.

AI doesn’t “give answers” or “calculate” or “reply”; nor does it understand what a “typo” is or the concept of “correct”; nor does it “reorder” what it did before. These are all human concepts that you have read into a meaningless stream of data.

What AI systems do is take the work of humans without consent, acknowledgement, or recompense; eliminate meaning and context by grinding it into paste; then burn the energy of entire cities to reconstitute it probabilistically in a stream of data whose purpose is to fool you into thinking that it is behaving like a person.

The real insight into human behavior is how easy it is to trick people into reading meaning into places where there is none. We human beings, lost and lonely in the cold dark of space, are driven to find meaning. Where our ancestors read meaning into the bright stars of the firmament, we who have blotted out the stars look for meaning in the sparkling lights that AI makes on our screens.


To my knowledge, AI, or neural networks, seem to be based on how neurons work (a toy sketch of what such a “neuron” amounts to is at the end of this post). Yet something that’s modelled on the brain seems to behave very differently from a human being. Where does this difference come from? Or is it similar, but the human brain is so much more complex/better that we don’t notice the discrepancies? Or is it Consciousness that leads to this difference?
If it’s Consciousness (Vinnana being similar to an illusion), I want to test what kind of Avidya (ignorance) prevents us from noticing whether it’s really similar.
As an example, I already pointed out that I don’t know what ‘my’ feelings are or the reasons for their arising. When I question it, an obvious answer arises in the mind, but on closer inspection it turns out to be wrong.

Since even feelings are not correctly perceived (“perception is a mirage”), I am skeptical that I actually know anything at all.
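
To make the comparison with neurons concrete, here is a minimal sketch of a single “artificial neuron” — just a weighted sum of inputs passed through a squashing function. This is only an illustration of the general idea, not how any particular AI system is built:

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Squash the result into (0, 1) with a sigmoid "activation"
    return 1.0 / (1.0 + math.exp(-total))

# Example: three inputs with arbitrary made-up weights
print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))
```

Stacking huge numbers of these is, very roughly, what a neural network is; whether that resembles what a biological neuron does is part of the question above.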

That just sounds like a student who is pretending to know how to do a math problem. “Oh, you’re right. Of course the answer is X.”

Most “AI” systems like this are just guessing what to write next, based on a corpus of available texts. They don’t really understand what they are producing, or even proofread it to make sure it makes sense.
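
A toy illustration of that “guess what comes next” idea — just a table of which word tends to follow which, built from a tiny made-up corpus. Real systems are vastly larger and more sophisticated, but the spirit is similar:

```python
import random
from collections import defaultdict

# Build a table of which word follows which in a tiny "corpus"
corpus = "the parrot repeats the words the parrot has heard".split()
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# "Write" text by repeatedly guessing a plausible next word
word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(next_words.get(word, corpus))  # fall back to any word
    output.append(word)
print(" ".join(output))
```

It produces plausible-looking strings without anything that could be called understanding, which is the point being made above.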

When playing around with ChatGPT, I found that it gave many wrong answers. Even answers that flatly contradicted themselves, sentence to sentence. If you point out the problem, it will apologize and acknowledge that you are correct, but it will likely continue to produce wrong answers.

I think these systems are not really intelligent. They’re basically parroting what other people have already written. They don’t have the ability to actually reason about much of anything, besides just trying to decide what sources to imitate next. :parrot:

These computers we type on sure do resemble something our bodies have. Almost like an extension. Yet because they work for us, we don’t want to say that they have consciousness or life.

One time when I was reading a Greek translation of the Bible, I read that the Greek word for “body” could also be translated as “the slave.”