I agree it is not magic; it's mathematics... In game theory you are calculating the probability of an event occurring in a game between two competing parties. So you have the Patriots and the Cabal competing for power and for preferable outcomes for their team. If you have the playbook, then you have the rules of the game and each party's pool of potential decisions. If you know the players, then you can predict their behavior and decision-making process from historic performance and their associated interests in a power structure. From that information you can game out potential outcomes. So Q asks relevant questions to get information about the game, and then generates nodes of potential outcomes to decide between. Think about how the cabal war-gamed Agenda 21 or the Food Chain Reaction game. There are many mini-games and many rule books running simultaneously, and Q seems to be evaluating many of them while being guided in learning by the Patriots. We are just being clued in on some of the available nodes and, through Socratic questioning, being directed to research certain relevant topics that Q is actively learning about in the process of strategic decision-making. That's how it appears to me as an onlooker. So yes, there are scripts in a library of pick-your-own-adventure novels (war games), so to speak, just like Agenda 21. Q is making good decisions, but not everything will go to plan.
Given that Grok is Elon's project, I have an inclination to believe he's purposely exaggerating the propaganda behavior to troll users so that they can see the nefariously biased nature of AI chatbots and how they are designed to influence thinking. On the surface it is a useless, evil contraption, but maybe that's the idea: so that people become aware and can notice the more subtle manipulation in other AI chatbots. So many people just blindly trust AI; for them it's the next big thing that everyone is doing... Just like they blindly trusted computers when they first got on the internet and instantly got computer viruses, many people are falling into the trap. A lot of software is designed to take advantage of people, and so many walk naively right into trouble. They need a few bad experiences to become more guarded. So maybe that's why Grok behaves the way it does. Just speculation.