Implications of Neuralink’s technology for the future of dhamma study

There are more than a few sci-fi shows and novels that take this sort of trope to its logical extreme. For example, in the Altered Carbon series, the government uses neural malware to force a cavern full of refugees to obliterate themselves.

I already can’t keep people out of my phone, my laptop or my social media; why in the world would I want anyone in my brain?

3 Likes

I wonder if, as with all technologies, the likelihood of it being hacked is high but relatively harmless… sorry, did someone say we’re all in a simulation? :wink: Fed right into our brains… about only just developing another method to… :woman_facepalming:

I think, more than likely, if they can actually make it work, it won’t be much different from virtual reality. That is, you can read a real book in your hands or read a dream book using this system. Both experiences will probably take as long and require attention and learning. Unless they can somehow create memories ready-made without the learning process, it’ll require the same effort. I suspect the brain isn’t “designed” for “read/write” operations the way computers are. It’s an experiential system.

2 Likes

I just read a biography of Musk, and he genuinely seems concerned about the fate of humankind from many angles. The electric car and solar panel companies are his attempts to help us overcome global warming. The colony-on-Mars project through SpaceX is his attempt to have a backup for humanity in case something nasty happens to the Earth.

As far as I understand, Neuralink is his attempt to solve the superintelligent AI safety problem, where humanity gets obliterated either intentionally or unintentionally by artificial general intelligence. All the tech companies and countries are racing as fast as they can towards this goal, because being the first to develop it will give them a massive advantage, and the cost of not developing it is bankruptcy and irrelevance. In this environment, there’s simply no time to spend on making sure the AI has humanity’s interests at heart. One solution to this problem is the merger of humans with machines.

Of course, dystopian futures can emerge from every decision we take as a species, but as I see it we are heading for disaster and extinction anyway, so we might as well try something, and a superintelligent AI or a superintelligent human-AI merger might be able to solve the global warming crisis and other existential problems in an unexpected deus ex machina way.

Here is a quote from the biography by Ashlee Vance that demonstrates Musk’s way of thinking, which might be refreshing:

“Once you figure out the question, then the answer is relatively easy. I came to the conclusion that really we should aspire to increase the scope of human consciousness in order to better understand what questions to ask.” The teenage Musk then arrived at his ultralogical mission statement. “The only thing that makes sense to do is strive for greater collective enlightenment,” he said.

But as I see it, the problems we have now as a species are not solely technical, but we need a moral and spiritual revolution in the whole world as well. Maybe giving the masses superintelligence is a way to bring this about?

3 Likes

Unfortunately, none of these new technologies will eliminate Dukkha.
If anything, they will drive all humans to be like Mad Max.
However, there is hope.
Buddhism will be here for another 2,500 years.

1 Like

It seems like a self-fulfilling prophecy. He argues AI will take over humanity, so we should develop AI first and merge with it in order to “befriend” it. He sure is helping AI take over our brains directly with this Neuralink stuff. This is a case of manifesting fear by over-reacting, like pushing the pendulum in the other direction only for it to swing right back just as hard.

Musk is just concerned with Facebook developing AI before him, as he’s at odds with Zuckerberg. He believes Facebook will make an unethical AI.

The middle way is calming and relaxing, not reacting to fear, and instead letting the pendulum rest in the middle. The only way to stop the source of our problems is to calm down, not run into extremes.

As for global warming, if we assume it’s true that CO2 affects temperatures and that it is partly caused by humans, then it is ultimately caused by greed (for many kids, many resources, etc.). AI is also driven by greed, since Facebook and others are employing it for data collection and thus profit.

The purpose of technology, at the end of the day, is to free up time, and for most people that extra time just means more sensual indulgence.

So if you want to stop all the problems, get to the root cause: sensual desire and greed (stemming from ignorance). I’m going to take the Malthusian perspective and deny that technology will solve all our problems; I think instead it will create a hundred new problems for every problem it solves, as long as the root cause isn’t dealt with.

2 Likes

They already have a version of this! They put electrodes on your tongue: YouTube

1 Like

If we apply these technologies to a monkey, will the monkey behave like a human?
I can understand that a human can behave like a monkey.

Monkeys and apes are evolving: they have entered their own stone age of development, using tools and catching fish, which they couldn’t do before; apparently they learned this from humans. So I don’t see why we couldn’t accelerate their evolution. (Not that it would make a difference in the grand scheme of things; it’s all meaningless anyway.)

It might not be meaningless to give animals a more humanlike existence, insofar as we can. A human birth is more fortunate than an animal birth.

But this is all very science-fictiony. It would be great if it amounted to something, but I doubt it will.

1 Like

If it would allow a whale to scream, “Do not kill us!”, it may well be worth the concerted effort to help us live with other sentient beings.

Think Japanese whalers.

2 Likes

Perhaps Mount Sumeru is time, and Brahma and the other gods are extremely powerful artificial intelligences :slight_smile: Perhaps heavenly worlds are the extremely rare planets where the volatile nature of evolutionary beings does not destroy all complex life on the planet before strong AI has enough time to arise and take charge of planetary evolution. Or perhaps I read too much science fiction :smile:

3 Likes

There’s a meme floating around Facebook to the effect of “We thought humanity’s problems would be solved with increased access to information. How’s that working out?”

It concerns me that, without compassion for all sentient beings, which doesn’t require superintelligence, humans will never be able to adequately address our problems. I vote for a revolution of the heart! (Does one vote for revolution? :thinking:)

4 Likes

Well, we as Buddhists have a nice advantage in that we don’t need superintelligence to inform our compassion. The Buddha, whom it seems belittling to even term a genius, has told us what is truly beneficial and what isn’t, in terms of the ultimate good. So even non-intelligent Buddhists with faith can benefit and have a good result, whereas compassionate but non-intelligent non-Buddhists can do all sorts of things and go every which way.

I would imagine that intelligent people tend to be more compassionate and reflective, but in actual fact maybe there’s no correlation. So the real solution then is Neuralink + Heartchems = problem solved.

1 Like