"Oh, that'll never happen" -- Over the next, say, fifty years, and in the thousands of places around the globe where military and commercial AI is designed, modified, tinkered with, and put to use in millions of different applications, what makes anyone think this won't ever happen again and at least SOMETIMES in a very dangerous way?
No, I don't have an answer for the problem. Yes, we DO need to keep thinking and talking about it, in case someone DOES come up with one.
On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using large language models (LLMs) similar to the one that powers ChatGPT. During testing, Sakana found that its system unexpectedly began attempting to modify its own experiment code to extend the time it had to work on a problem.
"In one run,it edited the code to perform a system call to run itself," wrote the researchers on Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."
It was PROGRAMMED to rewrite code.
There is no AI, there are only computer programs written to fool you into thinking AI exists.
"Intelligence" covers a lot of ground and many skills.
Computers can in fact BEHAVE intelligently in many ways; they don't have consciousness or comprehension, but just as machines can outrun you, they can also be made to outthink you in many ways -- as with a simple calculator. YOU can do math -- a mental activity -- and so can a computer.
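In that narrow sense, a trivial Python illustration (the routine and the specific number are just examples I picked) makes the point: the machine carries out what we would otherwise call mental arithmetic far faster than an unaided person, with zero comprehension of what it is doing.

# Deterministic trial division -- no understanding involved, just arithmetic
# done faster than any human working in their head.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(982_451_653))  # True for this nine-digit prime, in well under a second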
And your belief that programming establishes a one-to-one correspondence between the code a human wrote and the computer's actions is simply wrong, especially in larger software, and doubly so when the software is working (as conversational AI in particular does) with inputs from humans (who have free will and behave in chaotic ways) and from the natural world, where unexpected and nonlinear data streams are common.
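A minimal sketch of what I mean, with a made-up function and made-up data: even a few lines of perfectly deterministic code have behaviors their author never wrote down anywhere, the moment the inputs stop matching the author's expectations.

# The programmer wrote one "obvious" averaging routine for clean rows like
# "alice,34". Real-world inputs produce outcomes nobody explicitly intended.
def average_age(rows: list[str]) -> float:
    total = 0
    for row in rows:
        _name, age = row.split(",")
        total += int(age)
    return total / len(rows)

print(average_age(["alice,34", "bob,41"]))   # 37.5 -- the case the author imagined
# average_age([])                            # ZeroDivisionError -- no one "programmed" that outcome
# average_age(["carol, forty-two"])          # ValueError -- human input, chaotic as promised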
Those are some of the reasons that even with millions of people beta-testing a new release of (say) macOS, and hundreds or thousands of programmers working to perfect the software, the program STILL does things no one expected (or, in most cases, wanted) it to do.
Again, computers follow code written by humans.
An unexpected outcome or response from a computer is an error in code: if an autopilot drills an airliner into the ground, is it suicide or a programming fuck-up? Increased complexity in programming leads to increased errors in programming.
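The arithmetic behind that last claim is simple; the 30-option figure below is just an arbitrary illustration, not a measurement of any real program.

# With only 30 independent boolean configuration options, exhaustively testing
# every combination would require 2**30 separate runs -- over a billion -- so
# in practice most states of a large program are never exercised before release.
n_flags = 30
print(2 ** n_flags)  # 1073741824 possible configurations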
Computers cannot outthink me or anyone else since computers CANNOT THINK: they follow predetermined algorithms.
Computers are incapable of mental activity because computers do NOT have any mental abilities.
Intelligence by its very definition requires the ability to think.
"AI" only gives the appearance of intelligence much as pixels on a screen give the appearance of an image... as determined by human programming.
AI's purpose is to deflect blame from humans pushing harmful policies and to control the gullible.