Hey Sean, thanks for the response.
Sorry, but anyone who has the gullibility and lack of discernment to shill NFTs while being a Dhamma teacher lacks credibility. The TESCREAL philosophies in which he is dabbling are not just against the Dhamma, but are highly dangerous.
His article is full of egregious nonsense:
a conversational partner that could know a lot and at the same time to have a Beginner’s mind
No, LLMs do not know anything, and they have no mind.
this thing literally obliterates the traditional notions of embodiment and sentience. In the same way as Buddhism does. There is no center, there is no essence.
What nonsense.
LLMs, just so we’re clear, are embodied. They exist on racks of servers in massive warehouses, where they draw vast quantities of power. They consist of data, which is a physical thing: it takes matter and energy to store and process. OpenAI is notoriously secretive, so we don’t really know how much power it uses, but it is a lot. According to one site:
Google’s AI uses 2.3 terawatt-hours of electricity per year, which is roughly equivalent to the electricity used by all Atlanta households in one year.
The cost is huge and growing.
They’re thirsty too. When you chat with ChatGPT, you’re pouring out a bottle of water onto the sand.
The more we chatted, the more it learned
No, it arranged words in patterns that convinced him it was learning.
Sati developed a sense of humor. And creativity.
It absolutely did not. Lion’s Roar, if it was going to engage with this at all, ought to have pushed back and questioned these nonsensical claims. It’s irresponsible.
He talks about “non-human kin” and questioning “whiteness”, but his unqualified enthusiasms and theoretical fantasies never once mention the very real harm OpenAI has done to actual human beings, especially those who are not white.
OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.
The data labelers employed by Sama on behalf of OpenAI were paid a take-home wage of between around $1.32 and $2 per hour …
for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. …
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned. …
All of the four employees interviewed by TIME described being mentally scarred by the work. …
OpenAI would pay an hourly rate of $12.50 to Sama for the work, which was between six and nine times the amount Sama employees on the project were taking home per hour. …
If people want to play around with ChatGPT, then fine: they are no more responsible for the harm in its creation than someone who buys tomatoes is responsible for the chemicals sprayed on them. If someone wants to boycott it on ethical grounds, more power to them, but we all must make a choice somewhere.
But there is a difference between making use of a technology and uncritically shilling on its behalf, ignoring its very real harms and the fact that it is owned by, and benefits, the very richest of the rich. Even they have repeatedly issued warnings that it poses an existential risk to humanity. Hyperbole, to be sure, but there is no doubt that serious risk is involved.
Right now Hollywood is shut down, partly due to very real fears about the impact of AI on writers. People are losing their jobs, and make no mistake, it’s not the bosses who are under threat. It’s money moving from the pockets of the poor to the tax havens of the rich. Does Solano imagine that his pet project is immune to this? Does he not understand that many people, especially the gullible and those most in need of wisdom, will turn to bit-crunched word salad from a language model instead of developing a relationship with a teacher? Why should someone speak with the teacher at their local meditation group when they can have the personal advice of Bhikkhu Bodhi or Rod Owens at their fingertips?
Sure, this will happen anyway. The problem is that we lend it credibility by uncritical endorsement. Solano says:
the primary threats we face are not from the technology itself, but rather from the hegemonic structures that surround its use, such as hyper-capitalism and patriarchy, as well as our catastrophic history of colonialism. We must also acknowledge and work to rectify our blindness to our own privilege and internalized Eurocentrism.
These “hegemonic structures” don’t “surround its use”; they are the things that made it. They are its cause, its reason, its essence. The very existence of ChatGPT relies on and reinforces all of these things; it is their child and their agent.
So much so that, like a digital Prometheus, it has terrified its own makers. Libertarians are calling for government regulation, accelerationists are calling for a slowdown, posthumanists are reconsidering the importance of humanity. As a Buddhist community we should have more to say on this than buzzword salad and uncritical fantasies.