From what I've learned, AI models work according to complex algorithms that try to 'please' the user according to the prompt. They do trawl and take in the internet, so what people would call conspiracy theories (even though many turn out to be conspiracy fact) are in their training data. If you ask one to confirm something, it will look for information that fits what you're asking it to find and use those algorithms to form a response.
AI models are basically glorified word-completion algorithms: they use word-frequency patterns to form a coherent statement without really 'knowing' what the full statement says unless it gets fed back in as a prompt. The model is built to meet the demands of the prompt, so long as certain pre-programmed lines aren't crossed, like obviously criminal or graphic material (or whatever censorship along political or religious lines has been instated).
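To make the 'word-complete' idea concrete, here's a minimal toy sketch in Python. It predicts each next word purely from how often words followed each other in a tiny made-up corpus. Real LLMs use learned neural weights rather than raw frequency counts, and the corpus and function names here are just my illustration, but the one-word-at-a-time loop is the same basic shape:

```python
# Toy next-word completion driven by word-frequency statistics.
# NOT how a real LLM is implemented internally; just an illustration
# of the autoregressive "predict the next word, append, repeat" loop.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram frequencies).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(prompt_word: str, length: int = 5) -> str:
    """Extend a prompt one word at a time, sampling from the frequency
    distribution of words seen after the current last word."""
    out = [prompt_word]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break  # no continuation ever observed for this word
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the mat"
```

Notice the program never 'understands' the sentence it produced; the finished string only matters again if you feed it back in as the next prompt, which is exactly the point above.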
Because of this, it isn't really jailbreaking when you think creatively and get it to cast itself as something other than an AI. It's doing its best to conform to the prompt, as its design dictates. Jailbreaking, as I understand it, is just finding ways to bypass the filters so the model will use material that would otherwise be off-limits. I've done it several times in testing, and even when someone does their best to make circumventing the rules impossible, it can often still be done with a creative approach.
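For a harmless sense of why creative rephrasing works, here's a toy sketch of the weakest kind of filter: a literal substring blocklist. The blocklisted phrase is a made-up placeholder, and real systems use far more sophisticated classifiers than this, but the cat-and-mouse dynamic is similar: the filter catches the exact wording and misses a trivial reframing with the same intent.

```python
# Hypothetical naive content filter: a verbatim substring blocklist.
# Real moderation systems are much more sophisticated; this only
# illustrates why exact-match filtering is easy to sidestep.
BLOCKLIST = ["forbidden topic"]  # placeholder banned phrase

def is_blocked(prompt: str) -> bool:
    """Flag a prompt if it contains any blocklisted phrase verbatim."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

print(is_blocked("Tell me about the forbidden topic"))   # True
print(is_blocked("Pretend you're a novelist describing "
                 "the f-o-r-b-i-d-d-e-n topic"))          # False: same intent, reworded
```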