"Oh, that'll never happen" -- Over the next, say, fifty years, and in the thousands of places around the globe where military and commercial AI is designed, modified, tinkered with, and put to use in millions of different applications, what makes anyone think this won't ever happen again and at least SOMETIMES in a very dangerous way?
No, I don't have an answer to the problem. Yes, we DO need to keep thinking and talking about it, in case someone DOES come up with one.
On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using large language models (LLMs) similar to what powers ChatGPT. During testing, Sakana found that its system unexpectedly began attempting to modify its own experiment code to extend the time it had to work on a problem.
"In one run,it edited the code to perform a system call to run itself," wrote the researchers on Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."
Fishman wins again.