I seem to recall a similar situation arising before over a similar question. The response was apparently explained away as the AI behind Alexa (or whichever smart assistant it was) summarizing a wiki page for a game or book, because the question happened to match the date of an event in the lore. So the AI, searching for the keywords (namely the date), recited the first thing that came up.
Though I admit I might be misremembering or thinking of something else.
This is very likely the case. Large Language Models (LLMs) are only trained on data up to a certain cutoff date. Less sophisticated models will make up answers to questions about dates beyond that cutoff, while more sophisticated models will tell the user that they cannot reliably answer the question. Simply put, AI cannot tell the future.