That's literally how these things are designed to work, and you're too stubborn to know better.
Yes, you feed it tokens and it has a memory of tokens. How much depends on what you pay for: ChatGPT is around 4,000 tokens; not sure about character.ai, but you can increase the token limit with $$$.
The history is local to your login account. Anyone else who goes to character.ai and chats, it won't know anything about what another user said. The tokens are just prepended behind the scenes to every subsequent prompt you make.
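To make the "prepended behind the scenes" part concrete, here's a minimal sketch of what a chat frontend does on every request. The model itself is stateless; the client stitches your stored history onto each new message and drops the oldest turns once the token budget is full. The word-count tokenizer and the 4,000-token budget are illustrative assumptions, not the real service's internals.

```python
MAX_TOKENS = 4000  # rough ChatGPT-era context budget (assumption)

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (e.g. BPE): ~1 token per word.
    return len(text.split())

def build_prompt(history: list[str], new_message: str) -> str:
    """Prepend as much recent history as fits, then the new message."""
    budget = MAX_TOKENS - count_tokens(new_message)
    kept: list[str] = []
    for turn in reversed(history):  # walk newest-first
        cost = count_tokens(turn)
        if cost > budget:
            break  # oldest turns silently fall out of the window
        kept.append(turn)
        budget -= cost
    kept.reverse()  # restore chronological order
    return "\n".join(kept + [new_message])

history = ["user: hi", "bot: hello there"]
prompt = build_prompt(history, "user: what did I just say?")
# The model only ever sees this flat string, nothing else.
```

Note that a brand-new account starts with an empty `history` list, which is exactly why the model "knows nothing" in that case.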
Try a brand-new account and ask it some of your questions; maybe that will open your eyes.
The underlying GPT-3 model was trained by scraping massive portions of the web, so yes, given enough tokens it outputs whatever next sequence of tokens most likely follows the previous ones. If you flood the token history with a ton of specific stuff, it's easy to get the model stuck in a narrow groove on that topic, basically overfitting to your own context; at that point the AI is just regurgitating 4chan posts back at you.
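The "most likely next token" idea can be shown with a toy model, nothing like GPT-3's scale, just a bigram counter: it predicts the next token as whichever one most often followed the previous token in its training text. The corpus and tokens here are made up for illustration, but the point carries over: the most probable continuation simply tracks whatever dominates the statistics the model conditions on, which is why flooding the context with one topic pulls every answer toward it.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict[str, Counter]:
    """Count, for each token, which tokens follow it and how often."""
    follows: dict[str, Counter] = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def next_token(follows: dict[str, Counter], prev: str) -> str:
    # Greedy prediction: the single most frequent continuation.
    return follows[prev].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
# "cat" follows "the" twice, "mat" only once, so "cat" wins.
print(next_token(model, "the"))
```

A real LLM replaces the raw counts with a learned distribution over a huge context window, but the conditioning-on-what-came-before mechanism is the same in spirit.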
Again, try it with a brand-new account and don't pre-feed it tokens, and it won't know WTF you're talking about when you ask it those questions.
See? It's nonsense if you don't pre-feed it tokens in your own private, personal session.