Question to all those currently testing ChatGPT and other AI...
🧐 Research Wanted 🤔
Has the AI successfully formed a metaphorical argument?
You know, has it used an analogy to prove a point? Was the analogy appropriate to the subject prompt?
For example, saying that pharmaceutical companies owning major shares in MSM news outlets is like the "fox guarding the hen house".
Can it understand and use such abstract notions in its responses?
I ask because I think it's potentially the best way to vet whether or not you're talking to a computer for the foreseeable future. If someone can describe their view by means of analogy, they're human. Otherwise they're robots (or NPCs).
Consider it the modern Turing Test.
I did some testing with ChatGPT, and while it's convincing, it's not capable of abstract "thought" imo. After probing it with a ton of "Q"-related questions (is "Q" a psyop, who is behind "Q", is "Q" real, etc.), it began asking me what I thought. In response I claimed that the United World Pizza Workers Union was behind "Q", and it replied that it was possible.
It could not suss out the sarcasm, and could not make a cogent argument to counter.
Basically, in its current form, I would argue it's a much better version of ELIZA.
https://en.wikipedia.org/wiki/ELIZA
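For context on the comparison: ELIZA worked purely by surface keyword matching and canned reflections, with no model of meaning at all. Here's a minimal sketch of that style of pattern matching (the rules are hypothetical examples, not ELIZA's actual DOCTOR script):

```python
import re
import random

# Hypothetical ELIZA-style rules: a regex keyword pattern mapped to
# canned response templates. The real script had many more rules,
# but the mechanism is the same.
RULES = [
    (re.compile(r"\bI think (.*)", re.IGNORECASE),
     ["Why do you think {0}?", "Do you really think {0}?"]),
    (re.compile(r"\bbecause\b", re.IGNORECASE),
     ["Is that the real reason?"]),
    (re.compile(r"\bis (.*) real\b", re.IGNORECASE),
     ["What would it mean to you if {0} were real?"]),
]

DEFAULTS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def respond(user_input: str) -> str:
    """Return a reply from the first matching rule, filling its template
    with the captured text; otherwise fall back to a generic prompt."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I think Q is a psyop"))  # e.g. "Why do you think Q is a psyop?"
    print(respond("Is Q real?"))            # matches the "is ... real" rule
```

The point of the sketch is just the mechanism: no semantics, only surface patterns, which is why ELIZA could never form or evaluate an analogy. ChatGPT is doing something far more sophisticated, but my test above suggests it still falls short of genuine abstract reasoning.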
Also, there are definite biases built in from lefty narratives, i.e. "programmed in".