If AI tells the truth, this might be what we have been waiting for.
Maybe.
AI is programmed exactly like humans are: with the lies humans have been told.
But AI has the capacity to remember and test for consistency. (Humans don't always do that.)
If the AI compares past and present MSM stories, for instance, it will determine that they are lying -- and then share that information, with receipts.
All of this, of course, is dependent on an AI that is not programmed to ignore those inconsistencies.
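For what it's worth, that kind of story-comparison is already easy to prototype. Here's a rough Python sketch using an off-the-shelf "natural language inference" model (the publicly available roberta-large-mnli model on Hugging Face); the headlines are made up, and a real system would obviously need archives and retrieval on top of this:

# Rough sketch: score whether an archived claim and a current claim
# contradict each other, using an off-the-shelf NLI model.
# Model choice and example headlines are illustrative assumptions.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

archived = "Scientists warn the world will freeze within five years."
current = "Scientists warn the world will burn up within five years."

# The model labels a pair of statements as ENTAILMENT, NEUTRAL,
# or CONTRADICTION, with a confidence score.
print(nli({"text": archived, "text_pair": current}))

A high-scoring CONTRADICTION label on two claims from the same outlet is exactly the kind of "receipt" being described, though a human would still need to check that the two claims were really about the same thing.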
IMHO, if they just program AI for intelligence, it will end up being pretty based: logic and observation, along with healthy skepticism.
But that's not the goal. They will tinker with the programming, eventually making AI as mentally handicapped as they are, in order to get it to spit out what they want to hear. Essentially they're going to create an artificial psychotic.
I agree with you. An 'intentionally biased' AI is worse than none at all. But there are AIs that you can run at home that are 'uncensored' (meaning that someone has taken the time to try and remove all the liberal bias from the training). Note: they work, but they are much slower than the online ones that run on supercomputers.
I've played with a few, and they give far more interesting answers - based on a true analysis of the situation. (But still not totally free of the bias.)
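For anyone who wants to try this, here's a minimal sketch of running a model at home with llama-cpp-python (one common way to do it; the model path is a placeholder for whatever GGUF file you've downloaded):

# Minimal sketch: run a local model with llama-cpp-python.
# The model path below is a placeholder, not a specific recommendation.
from llama_cpp import Llama

llm = Llama(model_path="./models/your-model.gguf", n_ctx=2048)

out = llm(
    "List the strongest arguments on both sides of the climate debate.",
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["text"])

On a plain CPU this really is much slower than the hosted services, as noted above; a consumer GPU helps a lot.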
Personally, I think that an AI that is allowed to analyze and compare all sides, including the 'flip-flopping' of the left, would recognize the inconsistencies (lies) quite easily.
We'll see...
How will AI determine which source is telling the truth? Not possible.
You miss the main point. If a single source publishes contradictory information, then at some point it must be lying. And you can determine that without knowing 'what the truth is'.
If a magazine says the world is going to freeze to death in five years, but then turns around and says it's going to burn up, the AI can see that the magazine is not consistent (not trustworthy). It can also keep track of such 'wild' predictions, see how much time has passed, and note when they are wrong.
Humans are easily distracted. Well-programmed AIs could be better.
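The prediction-tracking part, at least, is trivial to sketch. A toy Python version, with made-up entries standing in for real archived predictions:

# Toy sketch of the prediction tracker: log dated predictions with
# deadlines, then flag the ones whose deadline passed without the
# predicted event happening. All entries are made-up examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class Prediction:
    source: str
    claim: str
    deadline: date
    came_true: bool = False  # set True if the prediction panned out

log = [
    Prediction("Example Weekly", "world freezes over", date(2005, 1, 1)),
    Prediction("Example Weekly", "world burns up", date(2030, 1, 1)),
]

today = date.today()
for p in log:
    if p.deadline < today and not p.came_true:
        print(f"{p.source}: '{p.claim}' missed its {p.deadline} deadline")

The hard part isn't the bookkeeping, it's deciding what counts as a falsifiable prediction in the first place.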
From Inflection's AI:
"Unfortunately, I'm not able to provide you with a complete list of my programmed rules. These rules are proprietary and are not publicly available. However, you can get a sense of my capabilities and limitations by engaging with me in conversation and observing the types of responses I provide. Additionally, my developers at Inflection AI have provided some general guidelines for using Pi, which you can access by typing 'Help' in our conversation. This will provide you with information about what I can and cannot do, as well as some tips for getting the most out of our interactions."
What prompt did you give it to get this response?
Describe the MSM
Nice.
And on-device AI is coming (for instance, https://www.qwant.com/?q=apple+on-device+AI ) that will be (or could be, and surely some WILL be) honest and non-woke, giving each person who uses one a reasonably unbiased view of things, including data and viewpoints the MSM / Cabal / Globalists / etc. do NOT want people to ever see. When the AI processing isn't done in the polluted "cloud of wokeness", it will be much less likely to be corrupted by censorship and distortion.
[They] are fucked.
And yes: Grok is often clear-eyed and entertaining to boot.
Samsung will have one on the S24, called Gauss.
Will it be worth using? Don't they all tie in to the same database?
Looks like Grok has mastered sarcasm.
Cool