AI-3: There is no road from here to there

Strange article I came across from 10 months ago…

By inputting vast amounts of Buddhist data into Artificial Intelligence systems, we enable them to answer any questions about Buddhism accurately. This technology can provide insights into historical figures like King Dutugemunu and Queen Victoria, demonstrating its potential to enhance our understanding of Buddhist teachings.

To propel the dissemination of Theravada Buddhism globally through Artificial Intelligence, we recognize the need for thorough investigation and understanding. As a testament to our commitment, we have allocated 200 million rupees for these endeavours, with the flexibility to secure additional funding as required.

That’s 660,000 USD. Crazy.

And this more recent article:

Now, artificial intelligence (AI) mirrors the capabilities of the human mind. AI can process vast amounts of information and operate accordingly. Therefore, it’s pertinent to explore the connection between Buddha’s teachings and AI, considering their shared focus on the mind’s control and its implications for our actions.

If artificial intelligence promotes a different religion, it could pose a threat to Buddhism, so it’s crucial to consider this possibility. Throughout history, Buddhism in Sri Lanka has been influenced by Hinduism, Mahayana and political influences. With the emergence of artificial intelligence, it adds another layer of influence. Therefore, we must contemplate whether AI might propagate alternative doctrines.

Moreover, there are plans to allocate LKR 1 billion next year for research on the interconnection between Buddha’s teachings and artificial intelligence. Although originally slated for this year, the initiative has been deferred to next year due to pending laws and regulations concerning AI oversight. Consequently, we anticipate introducing new legislation to kick-start these endeavours.

That’s 3.3 million USD.


As I said in the other thread, chatbots are the new stūpas


Ok, wow, thanks for posting. I’ll definitely be in touch with them to see if I can have a say.


It’s really such a shocking amount of money. I almost wonder whether the money will simply be allocated to tech infrastructure in general, just promoted with the buzzwords du jour. At least that would be my hope.


These techno-billionaires profess to be fulfilling a broken promise, but surely they know that the promises were made by liars – showmen using parlor tricks to sell the impossible. You were “promised a jetpack” in the same sense that table-rapping “spiritualists” promised you a conduit to talk with the dead, or that carny barkers promised you a girl that could turn into a gorilla

And in today’s headlines:

OpenAI’s long-term safety team disbands

OpenAI no longer has a separate “superalignment” team tasked with ensuring that artificial general intelligence (AGI) doesn’t turn on humankind.

Why it matters: The non-profit firm — founded to protect the world from the gravest threats AI could pose — is looking more and more like an impatient Silicon Valley startup cranking out new products at warp speed.



Yes, this is very much a pattern.

Together with the recent flirtbot debacle, it seems that the tide is turning against OpenAI, or at least that there is some resistance to them.

Scientists do some basic research and develop a promising novel mechanism,

Nope, that’s just plain wrong. Calling any type of AI research “basic” is just… wow.
If anything, working with transformer models has left me with a really deep appreciation of being able to stand on the shoulders of giants. The sheer number of leaps of faith the creators had to take to get something this complicated to eventually work has seriously left me in awe. Think of a really complicated gadget with 100 switches, where you are looking for one of the very few combinations that makes it turn on. If you miss one switch, the whole thing can fail even if the rest are set the right way. And people just kept trying combinations instead of giving up, even when flipping certain switches didn’t make any sense for a really long time.

The whole article is a strawman, arguing that AI is nothing more than a ploy to sell more compute, but I don’t think the author composed this text on a calculator from the 60s. I don’t think they had to invest in a brand-new computer just to use ChatGPT from their browser either.

I believe this rising hatred towards AI has little to do with the actual technology, and more to do with the anxiety surrounding the current economy.

Almost every argument against AI can be made against crypto too:

  • it’s a pipe dream promising more freedom while democratizing access to ___, while in reality most of crypto is in the hands of just a small tech minority
  • it fuels Musk’s dreams, and he can easily rip off his fanboys with it
  • it needs massive amounts of energy to run, accelerating climate change while having no real use case + it is actually a ploy to sell GPUs in massive quantities

Yet there are no essays on the dangers of nerds running crypto exchanges that help drug cartels launder their money.

Did you read the article?

and I want to be clear: each cycle has produced genuinely useful technology.

Ah, that’s because crypto has many of the same problems.

I’m sure there are. Even on this forum.


I apologize, I just hastily went through it, but I stand by my word that ML is far from basic research (I updated that part of the comment) and that the hatred towards it is mostly undirected.

I’ll check the crypto thread.

My take from the article wasn’t that the underlying technology isn’t real, but that there is a predictable hype-cycle pattern that doesn’t represent the actual technology at the time of the hype.

And it was interesting to me because only the last one was really on my radar.

Oh, I see!

Deep Learning itself was a rebranding of Neural Networks, which had failed to live up to their promise in the late 50s (or, to be more precise, the Perceptrons had). We were promised an “embryo of a computer” that would soon become a sci-fi robot. To be fair, having a large machine “learn” to distinguish between photos of men and women without having to code the rules into it was really impressive.

After all the unfulfilled promises, funding (from DARPA) dried up, and even going near the subject would have burnt you.

Later on, people came up with the idea of expert systems: if you can just encode all knowledge into a decision tree, why would anyone need experts in the first place? It turns out you do, because knowledge needs to be updated, and in some cases not all the information is available to simply make a yes/no decision by following a branch of questions.

After that, people needed ways to work with massive amounts of data automatically (for instance, to filter spam mail), so the field of machine learning was born. But no one called it AI because of the bad reputation. Yet these methods provided a lot of value and helped companies like Google rise to power.

The closest thing we called AI in the 2000s were GPS systems, because they even talked! Nowadays they are just graph algorithms with some really basic speech synthesis.

Nevertheless, working on “AI” was still frowned upon. DeepMind’s Demis Hassabis often talks about his experience of also studying neuroscience just to be able to work on AI. His teachers at MIT would roll their eyes, thinking he was crazy and just wasting his time. In the end, he had to work on video games instead, because video game AI was far ahead of academia:

In the end, he was right. So was Geoff Hinton, who never gave up working on Neural Networks despite everyone thinking they were a dead end. We all saw what happened in the late 50s and early 60s, and even though we had the maths figured out to train deeper models by the 80s, no one dared to touch them. They were expensive and slow, we didn’t have enough data or compute, and you could get similar results with far less complicated algorithms. In the end, he was also right to stick with it, although rebranding it as Deep Learning was probably the right choice (though nowadays Wide Learning would probably be more accurate). When people saw how effective these models were, beating 30+ years of carefully hand-crafted computer vision solutions on ImageNet, everyone jumped ship, which kicked off the current AI boom.

I am not sure whether another AI winter is coming, similar to what we had in the 60s and 80s, because machine learning still provides value and people enjoy interacting with LLMs as if they were definitely not Scarlett Johansson. It is entirely possible that they will become far less hyped and we will stop calling the tech AI. But many industries already rely on these technologies, and they will rely on them more in the future.

In fact, I think the next big thing will be robotics. So even if LLMs run out of steam, a humanoid robot folding your laundry will continue generating hype.

I can’t find the article on the history of AI winters that was a go to for me, but I will link it if I find it.

Update: I can’t find it anywhere, it might have become a book The First AI Winter (1974–1980) — Making Things Think: How AI and Deep Learning Power the Products We Use

Update 2: this post is an oversimplification; there have also been failed hype cycles based on chatbots before. For instance, the failure of machine translation during the Cold War, and the more recent (less than 10 years ago) failure of recurrent-network-based chatbots (which were supposed to take over IVR tasks on Facebook Messenger).


I think the idea of an AI winter is probably behind us, in that the tech has proved useful enough to have a lasting presence in a bunch of applications. So it’s not going to just lie fallow.

But we’ll still see highs and lows, and it does seem like the upwards trajectory of 2022 has already softened considerably. It’s like with Facebook: they weathered all the various scandals, and even the disastrous pivot to VR. They’re still there, but they’ll never be the same. The shine is gone; now they’re just a utility. I think OpenAI is going through something similar. Maybe another firm will take the lead. Or maybe there’ll be another leap forward. But most likely we’ll see a cooling off as the hype shakes out and reality hits home.

To me that’s why the thing with Johansson is significant. In itself, it’s not a big deal. But it raises awareness of issues through someone that everyone knows. It’s a cultural moment.


Yes, even I have joined the Dark Side and succumbed to the temptation. I just bought a new MacBook Pro with 128GB memory and I intend to run some LLMs locally.

Don’t worry - I am not letting it anywhere near Buddhist texts (thank goodness I don’t need AI to “summarise” or “translate” these for me).

Current gen LLMs can save some time on knowledge worker chores and drudgery. I did several courses on prompt engineering ( is a good place to start) and I now have a fair idea how to make them productive, as well as what they are good (and not so good) at. But it’s like training an intern (or a graduate student) - you have to feed them a lot of information and give them step by step instructions before they become useful. Otherwise they are just hallucinating random words that seem intelligent.
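As an illustration of that “step-by-step instructions” point, here is a minimal sketch of the kind of structured prompt one might build. The function name and wording are my own, not from any particular course:

```python
# Sketch of the "treat the LLM like an intern" prompting pattern:
# supply the context up front, then explicit numbered steps, then the
# output format. Names and wording here are illustrative only.

def build_prompt(document: str, task: str) -> str:
    """Assemble a structured prompt: role, context, task, explicit steps."""
    return "\n".join([
        "You are a careful research assistant.",
        "Context document:",
        "---",
        document,
        "---",
        f"Task: {task}",
        "Follow these steps:",
        "1. Read the context document fully before answering.",
        "2. Use only facts stated in the document; do not invent details.",
        "3. If the document does not contain the answer, say so explicitly.",
        "4. Answer in at most three bullet points.",
    ])

prompt = build_prompt("Quarterly sales rose 4%.", "Summarise the key figures.")
```

The point is that the constraints (use only given facts, admit when the answer is missing) go into the prompt explicitly rather than being assumed.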


So, the current thinking with LLMs is that we’re about at the limits of the technology:

The problem is that the machines need an exponentially increasing amount of data to “learn” about more niche concepts (like basic arithmetic), and the data isn’t there: a void that grifters and conspiracy theorists and pranksters across the internet are happily trying to fill.


From this article:

Reid argues that AI Overviews generally don’t “hallucinate;” they just sometimes misinterpret what’s already on the web.

lol. It’s like they have done a 180 from when they were quite good at judging which search results were high quality, which is what set them apart in the early days of search.


Yes. Google search is now run by the guy who drove Yahoo into the ground. But maybe that’s just a coincidence

In my experience, this is untrue (the assertion by Reid, not your comment). AI models can hallucinate quite badly, even when asked to summarise correct text.

Background: for the past week or two I have been running various LLMs locally on my MacBook Pro with 128GB RAM. This is enough memory to run medium-sized models such as Llama 3 with 70B parameters, Mixtral 8x22B, and Command R Plus with 104B parameters, provided I quantise down to 4 bits.
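Some back-of-the-envelope arithmetic shows why 4-bit quantisation is necessary here. This is only a sketch of the weight footprint; real memory use also includes the KV cache, activations, and runtime overhead:

```python
# Approximate memory needed just for the model weights at a given precision.
# Ignores KV cache and runtime overhead, so actual usage is somewhat higher.

def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Weight memory in GB, using 1 GB = 1e9 bytes."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

fp16 = weight_gb(70, 16)  # 140.0 GB: a 70B model at fp16 exceeds 128GB RAM
q4 = weight_gb(70, 4)     # 35.0 GB: the same model at 4 bits fits easily
```

So a 70B model at full fp16 precision cannot even be loaded on this machine, while the 4-bit version leaves room for context.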

My setup involves using these models to summarise academic documents using RAG (Retrieval Augmented Generation). I use a large context length (Mixtral 8x22B for example supports a context window up to 64K tokens).

What I have discovered so far is very disappointing. Despite being given the text, when all the models had to do was summarise it, they all make rather poor attempts at summarising, and can hallucinate at any time. When they are not hallucinating, they sometimes miss key points in the summary. I have done some prompt engineering to reduce hallucination, but the accuracy is still too low by my standards.

I am now forced to experiment with unquantised models (fp16), which severely limits the size of the models I can run in 128GB of memory.

An academic paper I have read suggests that quantisation severely degrades performance on long-context tasks such as the summarisation I have been doing, with up to 40% loss of accuracy at 16K context length.

Based on this, I definitely do not recommend using current gen LLM to even summarise Buddhist texts, let alone translate them, unless you have a highly capable machine with lots of RAM.


Sorry to hear about your frustrations!

Don’t worry too much, because soon, when we get frustrated, AI will take care of us.
