Question to all those currently testing ChatGPT and other AI...
🧐 Research Wanted 🤔
Has the AI successfully formed a metaphorical argument?
You know, has it used an analogy to prove a point? Was the analogy appropriate to the subject prompt?
For example, saying pharmaceutical companies owning major shares in MSM news outlets is like the "fox guarding the hen house".
Can they understand and utilize such abstract notions in their speech?
I ask because I think it's potentially the best way to vet whether you're talking to a computer for the foreseeable future. If someone can describe their view by means of analogy, they're human. Otherwise they're robots (or NPCs).
Consider it the modern Turing Test.
ChatGPT is biased. Ask it about Trump, then about Biden.
I did some testing with ChatGPT, and while it's convincing, it's not capable of abstract "thought" imo. After probing it with a ton of "Q"-related questions (is "Q" a psyop, who is behind "Q", is "Q" real, etc.), it began asking me what I thought. So in response I claimed that the United World Pizza Workers Union was behind "Q", and it replied that it was possible.
It could not suss out the sarcasm, and could not make a cogent argument to counter.
Basically, in its current form, I would argue it's a much better version of ELIZA.
https://en.wikipedia.org/wiki/ELIZA
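For anyone who never looked under the hood: ELIZA was pure keyword-and-template pattern matching, with zero understanding. A minimal sketch of the idea in Python (the patterns and canned replies are my own toy examples, not Weizenbaum's actual script):

```python
import random
import re

# ELIZA-style responder: match a keyword pattern, reflect a fragment of the
# user's input back inside a canned template. No model of meaning anywhere.
RULES = [
    (re.compile(r"\bI think (.+)", re.I),
     ["Why do you think {0}?", "What makes you believe {0}?"]),
    (re.compile(r"\bwho is behind (.+)", re.I),
     ["Who do YOU think is behind {0}?"]),
    (re.compile(r"\bis (.+) real\b", re.I),
     ["What would it mean to you if {0} were real?"]),
]
FALLBACKS = ["Please go on.", "Tell me more.", "It's possible."]

def reply(text: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1).rstrip("?.!"))
    return random.choice(FALLBACKS)

print(reply("Who is behind Q?"))  # reflects your own words back at you
print(reply("The Pizza Workers Union was behind it"))  # no match -> canned fallback, e.g. "It's possible."
```

Note that the fallback line is exactly the "it replied it was possible" behavior described above.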
Also, there are definite biases built in from the lefty narratives, i.e. "programmed in".
Many humans fail at analogies.
The classic sentence that computers fail at parsing is "Time flies like an arrow."
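You can see that ambiguity mechanically: even with a tiny hand-written grammar, a chart parser finds more than one valid tree for the sentence. A quick Python sketch using NLTK (the grammar is a throwaway toy, not a serious model of English):

```python
import nltk

# Toy grammar in which "time flies like an arrow" parses two ways:
#  1. (time) (flies) (like an arrow)  -- time moves quickly
#  2. (time flies) (like) (an arrow)  -- insects called "time flies" enjoy an arrow
grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> N | N N | Det N
VP -> V PP | V NP
PP -> P NP
N  -> 'time' | 'flies' | 'arrow'
V  -> 'flies' | 'like'
P  -> 'like'
Det -> 'an'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("time flies like an arrow".split()):
    tree.pretty_print()  # prints each distinct parse tree
```

A human resolves this instantly from world knowledge; the parser just reports that both readings are grammatical.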
We should tell it to write a Q post and see what it spits out.
I think I tripped up a few bots on Facebook by agreeing with them when they trash people like our favorite RINO overlords, then trashing Pelosi, Schumer, and the rest in the same sentence. I don't think they're used to that, especially if it's foreigners training the AI.
Even non-sentient AIs can use metaphors.
They may be able to use them, but can they come up with metaphors organically?
They can regurgitate ones they've heard, but can they produce one given a prompt?
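If anyone wants to test this systematically instead of by hand, here's a rough sketch using the openai Python package. The model name and the prompt wording are just my guesses at a reasonable setup; swap in whatever model you're actually testing, and judge the analogies yourself:

```python
# Sketch of an "original analogy" test: ban the stock idioms in the prompt,
# then eyeball whether the model produces a fitting analogy of its own.
# Assumes the openai package (pip install openai) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

topics = [
    "pharmaceutical companies owning major shares in news outlets",
    "a social media site moderating posts about its own advertisers",
]

for topic in topics:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; use the model you're testing
        messages=[{
            "role": "user",
            "content": (
                f"Explain why {topic} is a conflict of interest, using an "
                "original analogy. Do not use 'fox guarding the hen house' "
                "or any other common idiom."
            ),
        }],
    )
    print(topic, "->", response.choices[0].message.content)
```

Whether the result counts as "organic" is still a judgment call, but at least it rules out the model just echoing the one idiom everyone uses.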
I would expect the analogy it could come up with would be the bog-standard "don't feed the bears" analogy.
Could they develop one OTHER than that? Doesn't that require a fundamental understanding of the dynamics, not just an animalistic sentience?
Good point. Sentience alone is not nearly enough to converse with if it has no frame of reference; it needs to learn the language first. At which point, yes, they can come up with their own, just like a human.
But the non-sentient ones can be very advanced and use a very wide variety of metaphors. I think they'd have to be programmed with them, but I could be wrong; they may also be able to extrapolate meaning to a degree.