ChatGPT is a bit scary

I wonder how long it will be until I can ask Moggallana a question - hehe

Test question to see how close to right view it is…

I don’t know if it’s a good thing or a bad thing that it’s not totally hitting the mark on right view. lol

What are the ethical implications? Should we be training AI and making sure they have right view? hehehe

What happens when we ask?

Lets see what happens when I give it an open dhamma question I’ve had for a while - one I am truly curious to know the answer to…

EDIT: I figured I’d give it a shot at answering the problem it itself poses:

5 Likes

I don’t get it. ChatGPT is sophomoric and totally unreliable; it responds to any question you ask it basically like a middle schooler who has done all their research with Google and understood nothing of what they have read.

  • It fails to answer your first question (DPR answers it perfectly).

  • It fails to notice that your second question is a trick, and gives a completely wrong answer.

  • For your third question it gives pre-programmed boilerplate.

  • And for your fourth question it simply produces lists and pastes together stuff from the internet.

That’s all it does: smoosh pre-existing stuff from the internet into grammatically correct sentences. It’s basically Google, if Google presented a mash-up of its results instead of the results one by one. It will help people do homework, maybe, but it’s completely useless as a research tool.

What I find scary is how much people seem to think this is some sort of revolutionary technology. It seems to happen almost every other year: some overhyped piece of nonsense does something or other, it gets flogged to death, op-eds proliferate about how the world is now a very different place, and then when the dust settles, nothing much has changed.

Honestly, what’s scary is how gullible most people are when you generate bullshit in an authoritative tone of voice. But I suppose ’twas ever thus.

Color me unimpressed.

Metta

2 Likes

Well, as a consequence of that hype, I’m looking at my old dotfiles and Emacs config, and I’m going to get my old Python and Common Lisp code running again, because ChatGPT can suggest how to solve certain problems in code. If there’s an API for this in Emacs, I could write Emacs Lisp or some macros (which are already pretty cool for code generation) and see if ChatGPT could write parts of programs; indications are this is how others are using it. Maybe I’ll find out where the “hype” is, or maybe it’s just a good excuse to get reacquainted with this stuff I dropped five years or so ago.

1 Like

Yeah, the very fact that authoritative-sounding BS can now be automated is precisely what is revolutionary (and terrifying) about it. I really enjoyed this conversation on the technology on Ezra Klein’s show:

1 Like

Actually, for code-snippet stuff I think it might be very good.

… actually it might be “good enough” for a lot of things, like legal info and coding: spaces where a lot of already-written material needs to be parsed and collated or recombined…

…honestly I am just prone to hyperbole. :slight_smile:

1 Like

In the episode above, Marcus proposes more hybrid models which pair the fluency and intuition of neural nets with the logical rigor of more traditional rules-based systems.

I was surprised he didn’t bring up in the conversation the best current example of such a system: Stockfish.

Currently the world’s best chess AI is exactly such a hybrid model. Stockfish uses a traditional N-moves-ahead decision tree, but pairs that with a neural network to evaluate all those prospective positions.

This kind of combination of logical thinking and intuition is much closer to how human intelligence actually works and is pretty clearly where AI will need to go in the coming years.
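The hybrid idea is easy to sketch in code. Below is a toy, illustrative-only version (none of these names are real Stockfish APIs, and Stockfish's actual evaluator is the far more sophisticated NNUE network): a classical depth-limited tree search whose leaf evaluation is a stand-in for a learned evaluator.

```python
def nn_eval(position):
    # Placeholder for the learned evaluator (Stockfish's real one is NNUE);
    # here just a toy heuristic so the sketch runs.
    return sum(position)

def search(position, depth, maximizing, children):
    """Classic N-moves-ahead minimax; the 'intuition' lives in nn_eval."""
    moves = children(position)
    if depth == 0 or not moves:
        return nn_eval(position)  # the "neural net" scores the leaf
    scores = [search(m, depth - 1, not maximizing, children) for m in moves]
    return max(scores) if maximizing else min(scores)

# Toy game: a position is a tuple of scores; each move appends +1 or -1.
def toy_children(pos):
    return [] if len(pos) >= 3 else [pos + (1,), pos + (-1,)]

best = search((0,), 2, True, toy_children)
```

The design point is the seam between the two parts: the tree search supplies the logical rigor, while the evaluation function (a neural network in the real thing) supplies the "intuition" about which positions are good.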

1 Like

The NY Times has a paywall (and cookie walls etc.), so here is the raw mp3:

Before this, I asked it about the passage SN 5.2 by manually copying and pasting, and it just gave questions; then I tried it for questions and answers…

I’ve seen it generate a reading comprehension test for some URLs; for others it says it can’t. It can also generate flash cards, or even give you a script to automatically generate Anki cards.
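For what it’s worth, the “script to generate Anki cards” needn’t be anything fancy. A minimal sketch (my own, not ChatGPT’s output) just writes a tab-separated file, which Anki’s File → Import dialog accepts as two-field notes; the glosses are taken from the quiz examples later in this thread.

```python
import csv

def write_anki_tsv(cards, path):
    """Write (front, back) pairs as a TSV file that Anki can import.

    'cards' is any iterable of (front, back) strings. Anki treats each
    tab-separated line as one note with two fields.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        for front, back in cards:
            writer.writerow([front, back])

# Glosses from the quiz transcripts in this thread.
write_anki_tsv(
    [("assutavā", "unlearned"), ("vīmaṁsati", "examines")],
    "pali_cards.tsv",
)
```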

1 Like

In a professional maths forum (the “meta” subforum; for the meta of MathOverflow see meta-mathoverflow, and here for concrete examples), a similar discussion occurred: there were obvious examples of someone feeding ChatGPT messages in as mathematical input. As it is a professional forum, there was enough suspicion about the answers, which really do look like sound mathematical responses, at least to a semi-professional like me.
The problem is: once there is no longer any certainty that someone on the “other end of the line” has engaged with a problem with due seriousness, and once it is not even obvious whether we are simply looking at random text, then the whole platform becomes irrelevant for serious exchange.

The shadow of this has now appeared in that mathematical forum. But I think it will become even worse when it is not only mathematics (where “exact” answers are possible and can, in principle, be recognized; but what about the questions?!) but a forum like this one, where many questions and topics are non-systematic, not derivable from some axiomatic system, and must be handled with faith, which is one of the essential aspects of a personal accommodation with “the dharma”. If one must assume the possibility of being confronted with random text (in answers or in questions, or even, imagine, in fake mod interventions), even text interpolated from existing internet sources with relevant and meaningful analysis and derivations, then the value of a forum like D&D will go to zero.

It reminds me a little of the ’90s Usenet forums, where trolls invaded discussion groups (for instance those for Buddhist discourse) and completely spoiled the seriousness of the exchange. Luckily the trolling often had simple patterns that could be recognized easily and even filtered out by the client software (I sometimes had up to 80% of all messages filtered out by troll-filters, and even in the remaining 20% of posts there was a lot of “bullshit”/trolling), but the experience was really devastating, and with the arrival of new forums on the web I left Usenet behind. But now, when there are no more such recognizable patterns and signals, why spend time in front of a monitor when a flood of random text destroys any intention of serious research, investigation or discussion? (A small example from a discussion in 2018 already, with a much simpler bot obviously, at least led to some humorous comments…)

2 Likes

On a side note, let us wait and see how it evolves and gets used… things are already afoot…
https://www.reuters.com/technology/microsoft-talks-invest-10-bln-chatgpt-owner-semafor-2023-01-10/

1 Like

Not Moggallana, but I sort of got it to quiz me on Pali words and “teach” me Pali (kinda), based on a formula I saw for Japanese… (EDIT: going to try a few formulas with more metta and summarize later to save attention. See edits if you want to try.)

Result:

1 Like

Here is a script where I was inquiring into Pali a bit and playing around. I have no idea if it gave me an intelligent answer when it explained the senses of “nissāya”, or if it had mixed it up with another word; it can spit out bad answers. But if you have domain-specific knowledge you can sort of correct it, and since the “conversation” context is kept, you can then get increasingly better answers.

The following is an example word with sentence in pali, as well as example options, structured as a python dictionary

{“pali”:
{ “word”: “assutavā”,
“sample_sentence”: “Idha, bhikkhave, assutavā puthujjano ariyānaṁ adassāvī ariyadhammassa akovido ariyadhamme avinīto"},
“english”: {
“example”: “Take an ______ ordinary person who has not seen the noble ones”,
“options”: {1: “awesome”, 2: “funny”, 3: “skilled”, 4: “fabulous”, 5: “unlearned”, 6: “virtuous”},
“answer”: “unlearned”,},

Provide the next word and a sample sentence as a python dictionary

{“pali”:
{ “word”: “bhojanaṁ”,
“sample_sentence”: “Idha, bhikkhave, bhojanaṁ paccayo hoti”},
“english”: {
“example”: “Mealtime is ______”,
“options”: {1: “the best time”, 2: “a cause”, 3: “a condition”, 4: “a concern”, 5: “a blessing”, 6: “a joy”},
“answer”: “a condition”,}

there is an error in the above example sentence

{“pali”:
{ “word”: “assutavā”,
“sample_sentence”: “Idha, bhikkhave, assutavā puthujjano ariyānaṁ adassāvī ariyadhammassa akovido ariyadhamme avinīto"},
“english”: {
“example”: “Take an ______ ordinary person who has not seen the noble ones”,
“options”: {1: “awesome”, 2: “funny”, 3: “skilled”, 4: “fabulous”, 5: “unlearned”, 6: “virtuous”},
“answer”: “unlearned”,},

provide the next one

{“pali”:
{ “word”: “vīmaṁsati”,
“sample_sentence”: “Idha, bhikkhave, vīmaṁsati dhammā vīmaṁsanto”},
“english”: {
“example”: “Here, bhikkhus, one ______ the Dhamma”,
“options”: {1: “examines”, 2: “observes”, 3: “meditates”, 4: “blesses”, 5: “teaches”, 6: “follows”},
“answer”: “examines”,}

thank you, please provide the next one

{“pali”:
{ “word”: “nissāya”,
“sample_sentence”: “Idha, bhikkhave, nissāya paññāya nissaraṇiyā dhammo adhiṭṭhahati”},
“english”: {
“example”: “Here, bhikkhus, the Dhamma is ______ by wisdom”,
“options”: {1: “attained”, 2: “followed”, 3: “held”, 4: “sought”, 5: “examined”, 6: “purified”},
“answer”: “held”,}

what is the meaning of the pali word “nissāya” in English?

The English translation of the word “nissāya” is “by”.

please give me more sense of the word

The word “nissāya” is used to express a sense of dependence or reliance on something. It is often used to indicate that a particular action is done “by means of” or “through” something else. For example, the sentence “Idha, bhikkhave, nissāya paññāya nissaraṇiyā dhammo adhiṭṭhahati” means “Here, bhikkhus, the Dhamma is held by wisdom”. This indicates that wisdom is the means through which the Dhamma is held.

ok thanks, could you generate the next one

{“pali”:
{ “word”: “upanissayaṁ”,
“sample_sentence”: “Idha, bhikkhave, bhikkhu upanissayaṁ adhiṭṭhahati”},
“english”: {
“example”: “Here, bhikkhus, a bhikkhu ______”,
“options”: {1: “attains”, 2: “follows”, 3: “holds”, 4: “sought”, 5: “examines”, 6: “relies”},
“answer”: “relies”,}

Here’s a better script,

The following is an example word with sentence in pali, as well as example options, structured as a python dictionary

{“pali”:
{ “word”: “assutavā”,
“sample_sentence”: “Idha, bhikkhave, assutavā puthujjano ariyānaṁ adassāvī ariyadhammassa akovido ariyadhamme avinīto"},
“english”: {
“example”: “Take an ______ ordinary person who has not seen the noble ones”,
“options”: {1: “awesome”, 2: “funny”, 3: “skilled”, 4: “fabulous”, 5: “unlearned”, 6: “virtuous”},
“answer”: “unlearned”,},

{“pali”:
{ “word”: “assutavā”,
“sample_sentence”: “Idha, bhikkhave, assutavā puthujjano ariyānaṁ adassāvī ariyadhammassa akovido ariyadhamme avinīto"},
“english”: {
“example”: “Take an _______ ordinary person who has not seen the noble ones”,
“options”: {1: “awesome”, 2: “funny”, 3: “skilled”, 4: “fabulous”, 5: “unlearned”, 6: “virtuous”},
“answer”: “unlearned”,},

{“pali”:
{ “word”: “a condition”,
“sample_sentence”: “Idha, bhikkhave, bhojanaṁ paccayo hoti”},
“english”: {
“example”: “Mealtime is ________”,
“options”: {1: “the best time”, 2: “a cause”, 3: “a condition”, 4:“a concern”, 5: “a blessing”, 6: “a joy”},
“answer”: “a condition”,}

please provide the next one

{“pali”:
{ “word”: “paññāpeti”,
“sample_sentence”: “Tassa vāceyya, bhikkhave, paññāpeti”},
“english”: {
“example”: “He should be _______ to speak”,
“options”: {1: “questioned”, 2: “advised”, 3: “encouraged”, 4: “persuaded”, 5: “instructed”, 6: “directed”},
“answer”: “instructed”,
“sense”: “Paññāpeti comes from the Pali word paññā, which means “wisdom” or “knowledge”. The verb paññāpeti literally means “to convey knowledge” or “to give instruction”. It is often used to refer to the instruction or advice given by spiritual teachers.”,}

please provide the next one

{“pali”:
{ “word”: “vipassati”,
“sample_sentence”: “Imāya evaṁ diṭṭhiyā ariyā vipassati”},
“english”: {
“example”: “The noble one __ with this view”,
“options”: {1: “meditates”, 2: “reflects”, 3: “commands”, 4: “predicts”, 5: “explains”, 6: “speaks”},
“answer”: “reflects”,
“sense”: “Vipassati comes from the Pali verb passati, which means “to observe” or “to look at”. The prefix vi- is added to the verb to denote an intensive or deep observation or contemplation. The verb vipassati literally means “to look deeply” or “to observe deeply”. It is often used to refer to the practice of meditation.”,}

I think that last one’s an error. Or not; I’m stabbing in the dark because I don’t know Pali. But maybe if I fed it heaps of phrases and their translations first, from some data structure here, it’d create the right context for ChatGPT to begin to be more useful.
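If anyone wants to try that “feed it heaps of phrases first” idea, a minimal sketch of a priming-prompt builder might look like this. The dictionary format mirrors the script above; the function name and instruction text are just illustrative.

```python
def build_priming_prompt(pairs, instruction):
    """Turn (pali, english) pairs into a few-shot priming block.

    The model then sees many consistent examples before the actual
    request, which is the same technique as the hand-written scripts
    above, just automated over a larger data structure.
    """
    examples = "\n".join(
        f'{{"pali": "{pali}", "english": "{english}"}}'
        for pali, english in pairs
    )
    return f"{examples}\n\n{instruction}"

# Glosses taken from the quiz transcripts above.
prompt = build_priming_prompt(
    [("assutavā", "unlearned"), ("vīmaṁsati", "examines")],
    "Provide the next word as a python dictionary.",
)
```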

1 Like

What is interesting to me is that the first question, “what suttas have iti pi so”, is exactly the kind of thing that any basic search function can find trivially, but it can’t even do that. Maybe it’s just a domain thing; it probably hasn’t been fed the Pali.

It reminds me of one of the qualities of consciousness that has always struck me. We think that an AI will gradually get better until it can do the things that humans can. One of the things humans can do is maths. Back in the ’70s at school I had a calculator that could do maths infinitely better than I could. I’m more conscious, but worse at maths. We can’t just keep making better maths machines until we get humanity, because we’re already doing something that is better; clearly it’s not just a matter of degree, but of kind.

And here we are, where an AI is better at doing human-type things, and worse at doing calculator-type things.

Okay fair enough, not wrong. But you know what middle-schoolers do? They go to high school. Then they graduate.

It seems to me that the answers we’re getting from this may not be great, but they’re better than most of the answers on human-generated forums about Buddhism. I’m serious: go to Buddhism Stack Exchange, which is built to be a place to get authoritative answers, and it’s mostly just empty, or else just low-quality answers and opinions.

Just at random: a question on the social responsibilities of leaders offers little information or context, while the top answer promotes covid FUD from the Daily Mail :roll_eyes:. A historical question on the first use of “ethics” to translate sila gets some vague discussion and no real answer. A request for a reference on clinging gets a lot of opinions and no sutta references.

If we, as a Buddhist community, have not been able to provide meaningful answers to genuine questions, people will turn to bots instead.

2 Likes

Sure, but most of them never advance from repeating what they have heard to actually thinking for themselves.

That ChatGPT can aggregate a lot of pre-published data and spit out grammatically correct English that smooshes it all up into continuous text is not the same thing as reading any of that text, understanding its implications, and formulating new arguments and insights that don’t already exist on the internet.

Now, I am not saying that this can’t potentially be done. I have heard about work in the maths space that uses similar tech to try to generate new theorems, and it might work, and be very useful in that and other spaces where “new” things can be generated semi-mechanically by applying known tactics to new situations.

But NONE of this is “intelligence” in the sense of “she handled that unexpected situation so well, she’s a really intelligent person”.

The exercise of good judgment involves questions that are not statistical-mechanical, judgments like: “which of these mechanically generated true mathematical theorems has the potential to impact the widest field of current research?”, or “does this AI-generated artwork have a genuine flavor of freshness, or is it just a middling, derivative piece?”, or “should we direct the AI to this particular problem space or that one at this juncture?”, and so on.

The AIs will get better and better at what school students can do: blindly applying the simple rules that are given to them by their teachers. I see no indication that any of the recent developments are in spaces that would lead to robust, general and generalisable “human-style” intelligence.

I am not even sure that we have any idea what that would even look like tbh.

For example, I am almost certain that it would be possible to program a computer right now that would score an arbitrarily high number on a standardized IQ test. I don’t think anyone would seriously take this to indicate that the program had better-than-human intelligence.

I think your calculator point is very well made. We now have a calculator for producing derivative text summaries of things already published by humans, and plausible, passable imitations of fiction that resemble, but are legally distinct from, already published examples of the same.

There is nothing to suggest that we are any closer at all to an AI deciding it is unsatisfied with prose literature and innovating it into new and unexplored territory the way, say, Joyce did in the early 20th century. It is probably possible, with clever humans driving the thing, to make it APPEAR to do so, but that is NOT the same thing.

I guess what I find tiring in this whole space is the constant promise of genuine revolution and the constant disappointment of actually receiving highly limited, tightly controlled, commercial and corporate doodads that fail to transform the spaces they claim to, and really just occupy a part of the landscape and seek rents from customers while desperately trying to stifle further innovation that might threaten them.

I mean, we are already all typing into machines that are capable of doing just about anything we want, but mostly we are using massively crippled, closed-source versions of them to watch amusing cat videos and shout at each other about our opinions.

I guess fundamentally it is, at this stage, unclear to me whether ChatGPT and its successors are, in the grand scheme of things, going to be more or less significant inventions than the pocket calculator. Thus far I would say the pocket calculator is miles ahead.

Metta

1 Like

Some thoughts on how it could be better.

I’m just playing around and imagining; these are very hand-wavy thoughts, as I haven’t really taken an interest in tech for years and years. The above scripts are scratching the surface of what it can do. The use I’ve got in the back of my mind is this…

  • It seems we now have something that takes natural language (GPT-3.5/ChatGPT).

  • If we just ask it things in natural language, we get a more or less mixed bag of responses (e.g. play with it and see).

  • However, if we prime it a bit with some formal structure, we get more informed responses: “scripts” (such as those above).

  • This knowledge comes from our domain and us.

How can we generalize this?

  • To get the LLM to answer questions in a sensible way, we now need to write in formal languages to represent something about natural language, and “compile” that to natural language on the platform that is the LLM/GPT-3.5. (Think about it: almost everything programmers have done until now is the opposite, trying to get from natural-language descriptions to formal languages!)

  • Writing a bunch of scripts for every use case could be tedious, and boring. What we want is something general, like a script-writing language specific to a domain. (I’m thinking macros in Lisp can do code generation easily.)

  • In addition, we want something that can provide context in the way of raw knowledge, so that it can find knowledge of similar kinds. Maybe we take an existing corpus of knowledge (suttas!?) and store it in some graph, where each “atomic bit” or “node” of knowledge (a phrase or a word, say) has weighted similarity links to other bits of knowledge. (I’m imagining that is exactly how the oral tradition recalled everything.)

  • This then becomes the domain-specific AI on top of the LLM/GPT-3.5. When we ask a question of the domain-specific AI, we get some idea of the structure of the question, decompose it into some structure, generate some links to other knowledge (in nodes) from our other cases using those weights in the graph, then generate a script, and then feed that to GPT or the LLM.

I think for LLMs today it’s not about trying to get the “(semi?)-general AI” that is GPT to know about our area in general. It’s about creating the context and conversation for the general AI to sort of “come visit us”, and be changed as it walks through a graph of knowledge about our area (compiled to natural language), to the point where it can answer a question…
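To make the graph idea a little more concrete, here is a very hand-wavy sketch in Python. The fragments and weights are invented purely for illustration; a real system would derive nodes and similarity weights from the corpus, then use them to assemble the context that gets “compiled” into the prompt.

```python
from collections import defaultdict

graph = defaultdict(list)  # node -> [(neighbour, weight)]

def link(a, b, weight):
    """Add an undirected similarity edge between two knowledge fragments."""
    graph[a].append((b, weight))
    graph[b].append((a, weight))

def gather_context(start, min_weight=0.5):
    """Walk the graph from a seed node, keeping strongly-linked fragments."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for neighbour, weight in graph[node]:
            if weight >= min_weight and neighbour not in seen:
                seen.add(neighbour)
                frontier.append(neighbour)
    return seen

# Invented fragments and weights, just to show the shape of the idea.
link("assutavā puthujjano", "unlearned ordinary person", 0.9)
link("unlearned ordinary person", "ariyānaṁ adassāvī", 0.7)
link("ariyānaṁ adassāvī", "unrelated fragment", 0.2)

context = gather_context("assutavā puthujjano")
prompt = "Context:\n" + "\n".join(sorted(context)) + "\n\nQuestion: ..."
```

The weakly-linked fragment never makes it into the prompt, which is the whole point: the graph walk does the “which knowledge is relevant?” filtering before the LLM ever sees anything.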

Who has experience with representing knowledge and thinking as a graph, and then sort of resolving that into a natural-language script? Some visual thinkers, reportedly. hehehe

Please let me know if anything is a bit vague. Also, I am a bit late to the party: I only found out about LLMs and GPT in the first week of January. I soured on high technology years ago; ChatGPT is the only thing that has made me reinstall Emacs again, lol. Feedback welcome. Maybe this should be on GitHub as a chat or something?

EDIT: this is the general idea of what is motivating me to play with ChatGPT. Hopefully it’s “show” not “tell” from here again.

1 Like

I think it is truly unavoidable that within the next couple of decades something that passes for intelligence will exist. Yes, for now it is unimpressive. But if human civilization manages to save itself from its imminent dangers, I don’t see how machine intelligence is anything but inevitable.

If we could have a society where machine “minds” were created to exist in an alliance with our own, I think that’d be a good thing even in a Buddhist milieu. When artificial minds are employed purely to advance the power of those already in power? I don’t know of anything that would be impervious to their potential corruption. How long until we can no longer believe we’re interacting with human beings on the internet? How long until AI Sujato is able to convince most people here that actual Sujato is the real AI?

All I know is that I have absolutely no hope for the future based on this and a bunch of other trends. It wouldn’t surprise me if existence was “designed” to not be capable of achieving a sort of egalitarian equilibrium.

I found it useful. Some keys were to just assume it knows, give it a little more context, and tell it when it’s made an error. I’m using it for stuff I used to know but need to be reminded of, e.g. the names of certain syscalls and old equations, and also for exploring questions like “what are the common words between English, Arabic and Bengali?”. It’s also very useful for accessibility tech: I can see non-verbal people and people with communication disabilities using it. “How do I say this, to this kind of person, in such-and-such kind of situation?” It’s pretty good at that kind of question. 1.1 million people in Australia have a communication disability.

2 Likes

In case all y’all missed it, I want to make sure we don’t ignore the ethical dimensions, which I only found out about yesterday. In the previous thread about writing novels, I wrote:

Ahh well, now that mystery is solved.

It seems that the “future” is built on the exploitation and psychological distress of poor, black workers.

Time is cascading ever faster, so that the gap between “oh cool new thing” and “oh, horrifying version of the same terrible old thing” is now just weeks.

2 Likes

Thanks Bhante, that reminds me also of where lithium-ion batteries come from: piecemeal pay for workers, many of them children, in places like the Congo, who are paid below subsistence and live at the mine. https://www.washingtonpost.com/graphics/business/batteries/congo-cobalt-mining-for-lithium-ion-battery/

The other ethical issue is the governance dimension. “Who gets time slices of the general AI?”

AI, if it’s introduced to governments today, could be a way for social democracies to allow more community and face-to-face interaction. OR, if we keep “efficiency dividends” and KPIs, with bureaucrats incentivized to find efficiencies, it could just be used to “save money”. Beware the pointy-haired Microsoft salespeople approaching middle-level bureaucrats and NGOs! Hopefully the dividend can be taken from processing time and returned to face-to-face work. In a social democracy with communities and good governance of those orgs (Germany? Norway?) it could free public servants and knowledge workers for personal interactions again, but in a neoliberal place (Australia? the UK?) it could become hellish.

For some optimism: the best use case I can imagine for GPT or LLMs today is sharing scripts among people with communication needs, i.e. accessibility tech. This could be the beginning of its use for understanding processes of government, to enable more face-to-face interaction. Or Microsoft turns up in India to do this, and the first we hear of it in Australia is some “product” that middle-level managers are immediately happy to sign up for, since it makes their efficiency dividends and KPIs look so good, and so we gradually see government erode.

Hopefully there is some branch, agency or oversight to ensure the “time slices” freed up by AI don’t just get accounted for on corporate books and slashed budgets, but actually go to face-to-face and community interaction.