Some of my retired colleagues have been playing around with this, hoping to stump it. The closest they've come is with questions based on pop songs, e.g.:
Who put the bomp in the bomp-she-bomp-she-bomp?
Who wrote the book of love?
What if the hokey pokey isn't what it's all about?
Tell me why?
I used to program in early AI. Once you gave a program its instructions, it would start executing them and run away with itself, following them to the letter. The only way to stop it was to shut down the entire system. Once you've felt that power, you can extrapolate how, with faster machines and ignorantly designed code, these heuristic devices could be quite destructive. If they aren't programmed with Asimov's laws of robotics, the machines could decide that we are a speed bump. There are enough coders/designers who do things because they can instead of asking whether they should.

I've been reading stuff written by computer scientists about AI becoming so complex that it becomes self-aware, like the sci-fi trope used so often.
From my own philosophical perspective I think it's hogwash. I hope I'm right.
AI was asked, "What is the meaning of life?"
It didn't know. All it could tell us was what individuals thought it was/is, including "to serve the higher good".
Sounds like AI is limited by the consciousness of its programmers, thus making it prone to reactionary processes on fundamental questions, that is, questions about the nature of and relationship between being and consciousness.
There are multiple built-in problems: it learns by scraping data from real humans, and in the process it inherits the biases and prejudices of the humans it is in effect mimicking (and presumably there is the compounding problem of it eventually mimicking itself and other AI chatbots, once their output has been quoted all over the internet for years).
Another issue is that they are predisposed to give an answer, any answer, rather than say "I don't know," which proliferates misinformation.
The question is whether AI can change its mind.
As of now, AI gets all its info from multiple data sources, all of which was obtained and stored by humans. It's certain that some of that data is incorrect. Some info in biology, astronomy, geology, etc. will have an error of some kind. We don't know what that error is, but at one time scientists and astronomers said the earth was flat and also at the center of the universe. That was eventually determined to be incorrect.
So if AI had been deployed back when the earth was thought to be at the center of the universe, what would it have said when contradictory information started showing up? Or would that information ever show up? Will science simply stop and let AI take over, and if AI ever makes a mistake, who will know?
What I’m saying is; with the deployment of AI, we may never know what bass is best for metal.
It goes to Wiki.

I'm a biomedical scientist, and I'll tell you for sure there is a ton of total crap scientific information on the internet. How does ChatGPT vet its sources?
What is original thought? Does AI have any original thoughts?
(the answer is no...)