Question to all those currently testing ChatGPT and other AI...
🧐 Research Wanted 🤔
Has the AI successfully formed a metaphorical argument?
You know, has it used an analogy to prove a point? Was the analogy appropriate to the subject prompt?
For example, saying pharmaceutical companies owning major shares in MSM news outlets is like the "fox guarding the hen house".
Can they understand and utilize such abstract notions in their speech?
I ask because I think it's potentially the best way to vet whether you're talking to a computer for the foreseeable future. If someone can describe their view by means of analogy, they're human. Otherwise they're robots (or NPCs).
Consider it the modern Turing Test.
They may be able to use them, but can they come up with metaphors organically?
They can regurgitate ones they've heard, but can they produce one given a prompt?
I would expect the analogy it could come up with would be the bog-standard "don't feed the bears" analogy.
Could they develop one OTHER than that? Doesn't that require a fundamental understanding of the dynamics, not just an animalistic sentience?
good point, sentience alone is not nearly enough to converse with if it has no frame of reference. it needs to learn the language first. at which point, yes, it can come up with its own, just like a human.
but the non-sentient ones can be very advanced and use a very wide variety of metaphors. i think they'd have to be programmed with them, but i could be wrong; they may also be able to extrapolate meaning to a degree.