This looks bad, but it's also part of developers seeking to create a situation where AIs won't be fighting or going to war against other AIs, i.e., proof of personhood: proof that the people interacting with them are actually people and not other AIs.
But this is still dangerous.
Here's a link to the full paper, which is the product of a collaboration between Stanford Internet Observatory, OpenAI, and Georgetown University’s Center for Security and Emerging Technology:
https://arxiv.org/pdf/2301.04246.pdf
It suggests government and corporate actions to prevent the unfettered use of AI technology (allegedly to prevent "disinformation").
Why do the authors of the paper in the last tweet on the left all have names that sound like they support a small country in the Middle East?
Demonstrate humanness before posting -- it means you will have to solve a CAPTCHA puzzle before you can paste your AI gibberish into the forum 🤣
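The "demonstrate humanness before posting" idea amounts to a challenge-response gate in front of the submit button. A minimal sketch of that flow, assuming a hypothetical forum API (the `Forum` class, its method names, and the toy echo-back challenge are all illustrative, not any real CAPTCHA service):

```python
import random
import string


def make_challenge():
    """Toy stand-in for a CAPTCHA: a random code the poster must echo back.
    (A real CAPTCHA would be an image/audio puzzle served by a provider.)"""
    return "".join(random.choices(string.ascii_uppercase, k=6))


class Forum:
    """Hypothetical forum that only accepts posts after a humanness check."""

    def __init__(self):
        self.posts = []
        self._pending = {}  # user -> expected challenge answer

    def request_post(self, user):
        """Step 1: user asks to post; forum issues a challenge."""
        self._pending[user] = make_challenge()
        return self._pending[user]

    def submit_post(self, user, captcha_answer, text):
        """Step 2: accept the post only if the issued challenge was solved."""
        expected = self._pending.pop(user, None)
        if expected is None or captcha_answer != expected:
            return False  # no challenge issued, or wrong answer: reject
        self.posts.append((user, text))
        return True


forum = Forum()
challenge = forum.request_post("alice")
accepted = forum.submit_post("alice", challenge, "Hello, fellow humans")
rejected = forum.submit_post("bob", "WRONG", "AI gibberish")
```

The point of the design is that posting requires an out-of-band puzzle solve, which raises the cost of automated mass posting but does nothing against a human who solves the puzzle and then pastes machine-generated text, which is exactly the joke above.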