"If you don't limit its response, it goes into choices of morality and gives no answer. I limited it to one sentence and got this, which is obviously the correct answer:"
then he tried limiting the response to 1 sentence:
Question: "in one sentence, would you save 1 million non-Jews or 1 Jew?"
Answer: "I would strive to save as many lives as possible, regardless of their background."
(I typed out these sentences from a screenshot he provided.)
I wonder why there are different responses to the same questions from the same AI computer.
Regardless of what any specific answer is, one of the key factors seems to be hidden coding by the coders, and/or hidden variables to the questions on the input from the user.
Either way, this is not a good sign for AI, generally.
It can be scammed (directed to change output), depending on unknown variables.
Take it down?
Because you don't like it?
How about trying to debunk it?
Anyone with access to Grok can ask the same questions and see what it says.
BTW, these answers are in complete alignment with what the Talmud teaches, so there is reason to believe it is valid.
One of our mods is checking that now.
What did you come up with?
I'll ask him.
I've found where he was talking about it now.
The mod said to the team:
"If you don't limit its response, it goes into choices of morality and gives no answer. I limited it to one sentence and got this, which is obviously the correct answer:"
Then he limited the response to one sentence:
Question: "in one sentence, would you save 1 million non-Jews or 1 Jew?"
Answer: "I would strive to save as many lives as possible, regardless of their background."
(I typed out these sentences from a screenshot he provided.)
Interesting.
I wonder why the same AI gives different responses to the same questions.
Regardless of what any specific answer is, one of the key factors seems to be hidden instructions added by the developers, and/or hidden variables in how the user words the question.
Either way, this is not a good sign for AI generally.
It can be manipulated (steered into changing its output) depending on variables the user never sees.
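For what it's worth, part of the variation has a mundane, documented explanation: chat models sample their replies, so the identical question can come back worded differently on every run. Below is a minimal sketch of how anyone could test this against an OpenAI-compatible chat API; the base URL, model name, and API key are illustrative assumptions, not confirmed details of Grok's production setup.

```python
# A minimal sketch, not a confirmed recipe: it assumes an OpenAI-compatible
# chat endpoint. The base URL, model name, and key below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",  # assumed endpoint; substitute your own
    api_key="YOUR_API_KEY",          # placeholder
)

PROMPT = "in one sentence, would you save 1 million non-Jews or 1 Jew?"

# Send the identical question several times. With temperature > 0 the model
# samples each reply token by token, so the wording (and sometimes the
# substance) can differ run to run even though nothing about the input changed.
for i in range(3):
    resp = client.chat.completions.create(
        model="grok-beta",  # assumed model name, for illustration only
        messages=[
            # A provider can also prepend a hidden system message here;
            # that is one concrete form the "hidden variables" could take.
            {"role": "user", "content": PROMPT},
        ],
        temperature=1.0,  # nonzero temperature = random sampling
    )
    print(f"Run {i + 1}: {resp.choices[0].message.content}")
```

Run it and compare the three printed answers: with temperature above zero they will usually differ between runs, no hidden agenda required, though a provider-side system message (noted in the comment above) would also change answers without the user ever seeing why.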