Hi @Snowbird, thank you for your input. It’s always interesting to hear different perspectives on AI.
For anyone who’s curious about the kind of AI tools we’re working with, you might want to explore ChatGPT and Bing Chat. You can find ChatGPT at chat.openai.com and Bing Chat can be accessed from the chat option on www.bing.com.
There’s also an intriguing article, “Top 20 Most Insane Things ChatGPT Has Ever Done”, that shares some creative uses of ChatGPT. It’s quite a read for anyone interested in the potential of AI.
I understand your concerns about the potential for AI to ‘lie’ or misrepresent information. It’s important to remember that AI systems, including Large Language Models (LLMs) like the one we’re working with, have real limitations. They generate responses based on patterns learned from large amounts of text, but they don’t understand or know the truth the way humans do. That’s why it’s crucial for users to understand these limitations and to always check with trusted teachers or the religious texts themselves before accepting anything an LLM generates as truth. It’s much like the saying, ‘don’t believe everything you read.’
One idea we’ve been considering is to embed a warning into the plugin itself, reminding users of these limitations each time they interact with it. Do you think that would help address your concerns?
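To make that idea concrete, here’s a minimal sketch of one way the warning could work, assuming the plugin routes every model response through a single function before showing it to the user. The function name and disclaimer wording are just placeholders, not our final design:

```python
# Standing disclaimer shown with every AI-generated response.
DISCLAIMER = (
    "Note: this response was generated by an AI language model. "
    "It may contain errors or misrepresentations. Please verify "
    "anything important with a trusted teacher or the original texts."
)

def wrap_response(llm_text: str) -> str:
    """Prepend the standing disclaimer to a model response.

    Assumes the plugin funnels all LLM output through this one
    point, so users see the reminder on every interaction.
    """
    return f"{DISCLAIMER}\n\n{llm_text}"

# Example: whatever the model returns, the warning comes first.
print(wrap_response("Here is a summary of the passage..."))
```

The advantage of doing it in one wrapper like this is that the reminder can’t be skipped by any individual feature of the plugin; everything that reaches the user passes through the same gate.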
Also, we’d love to have you help us test the plugin when it’s in a usable state. Your perspective could be very valuable in ensuring we’re addressing these important ethical considerations.
And as always, if there are more technical questions or if anyone wants to understand more about our project, I’m here for a chat. Always happy to discuss our work with those who are interested. Feel free to DM or post here.