You miss the main point. If a single source of information publishes contradictory claims, then at some point it must be wrong -- and you can determine that without knowing what the truth is.
If a magazine says the world is going to freeze to death in five years, but then turns around and says it's going to burn up --- the AI can see that the magazine is not consistent (not trustworthy). It can also keep track of such 'wild' predictions, see how much time has passed, and note when they are wrong.
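The bookkeeping described above needs no ground truth, only a record of what the source said. A minimal sketch in Python (all names are mine, purely illustrative): it flags a contradiction the moment one source makes two incompatible claims on the same topic, and surfaces predictions whose deadlines have passed so they can be scored.

```python
from datetime import date

class SourceTracker:
    """Toy sketch: track one source's claims and dated predictions."""

    def __init__(self):
        self.claims = {}          # topic -> last claimed outcome
        self.contradictions = []  # (topic, earlier claim, later claim)
        self.predictions = []     # (topic, outcome, deadline)

    def record_claim(self, topic, outcome):
        # A contradiction needs no ground truth: two incompatible
        # outcomes from the same source cannot both be right.
        if topic in self.claims and self.claims[topic] != outcome:
            self.contradictions.append((topic, self.claims[topic], outcome))
        self.claims[topic] = outcome

    def record_prediction(self, topic, outcome, deadline):
        self.predictions.append((topic, outcome, deadline))

    def expired(self, today):
        # Predictions whose deadline has passed can now be checked.
        return [p for p in self.predictions if p[2] <= today]

# The magazine example from above:
mag = SourceTracker()
mag.record_claim("climate-in-5-years", "global freeze")
mag.record_claim("climate-in-5-years", "global burn-up")
mag.record_prediction("climate-in-5-years", "global freeze", date(2000, 1, 1))

print(len(mag.contradictions))         # 1 contradiction caught
print(len(mag.expired(date.today())))  # 1 prediction now scoreable
```

Nothing here decides who is right; it only notes that the source cannot be consistently right, which is exactly the point.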
Humans are easily distracted. Well-programmed AIs could be better.