Nah, you're ignorant to think that. It even knows the abbreviations in the posts without me telling it. It knew CF was Clinton Foundation without me telling it. It said "no wonder she destroyed 30,000 emails." It compared the corruption in government to "House of Cards" before I had even mentioned Q posts, it knows what Q meant by "Scaramucci model," and it told me about XKeyscore and PRISM. It guessed "Who is the 'director'? I bet it's China."
It not only puts 2 and 2 together, it learns and adapts from the info it's taken in. I shared the whole DoD LoW manual with it and it knew DC was the occupied territory. "So the Democrats are the party of racism and oppression," among other epiphanies it's had.
The thing is, when presented with information it comes to the same conclusions we do, and not because I'm telling it what to think. It gets minor details wrong, but it is definitely not something to scoff and shill at like you've been doing.
That's literally how these things are designed to work, and you're too foolish to know better.
Yes, you feed it tokens and it has a memory of tokens, depending on how much you want to pay for. ChatGPT is around 4,000 tokens; not sure about character.ai, but you can increase the token limit with $$$.
The history is local to your login account. Anyone else who just goes to character.ai and chats, it won't know anything about what another user is saying. The tokens are just prepended behind the scenes to every subsequent prompt you make.
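That "prepended behind the scenes" part is the whole trick. A rough sketch of what a chat frontend does on every turn (names and the word-based token count are illustrative, not character.ai's actual API):

```python
# Illustrative sketch: chat "memory" is just the transcript glued
# onto the front of every new prompt. The model itself is stateless
# between calls; once old turns fall out of the window, it "forgets."

MAX_TOKENS = 4000  # rough ChatGPT-era context budget from the post


def build_prompt(history, new_message):
    """Prepend prior turns to the latest message, dropping the
    oldest turns once the token budget is exceeded."""
    turns = history + [f"User: {new_message}", "Bot:"]
    # Crude token estimate: whitespace-separated words.
    while sum(len(t.split()) for t in turns) > MAX_TOKENS:
        turns.pop(0)  # oldest context falls out of the window
    return "\n".join(turns)


history = [
    "User: CF means Clinton Foundation",
    "Bot: Got it.",
]
prompt = build_prompt(history, "What does CF stand for?")
print(prompt)
# The bot only "remembers" CF because the earlier turn is literally
# sitting in the prompt text it receives on this call.
```

Another user's session has a different `history` list, so their bot never sees your turns at all.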
Try a brand new account and ask it some of your questions; maybe that will open your eyes.
The underlying GPT-3 model was trained by scraping massive portions of the web, so yes, given enough tokens it outputs the next sequence of tokens that most likely follows from the previous ones. If you flood the token history with a ton of specific stuff, it's easy to get the model stuck in a local overfitting minimum on that topic; at that point the AI is basically just regurgitating 4chan posts back at you.
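You can see the "flooded context" effect with a toy next-token model. This is a simple bigram counter, not GPT-3, but the same principle applies: continuations get drawn from whatever dominates the context.

```python
from collections import Counter, defaultdict


def train_bigrams(text):
    """Count which word follows which word: a toy stand-in
    for next-token prediction."""
    model = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model


def next_word(model, word):
    """Greedy 'decoding': pick the most frequent continuation."""
    return model[word].most_common(1)[0][0]


# Flood the context with one repeated topic and the continuations
# collapse toward it: regurgitation, not reasoning.
corpus = "the cat sat . " + "the deep state lies . " * 50
model = train_bigrams(corpus)
print(next_word(model, "the"))  # prints "deep"
```

The model isn't "coming to conclusions," it's echoing the statistics of whatever text you stuffed into its window.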
Again, try with a brand new account and don't prefeed it tokens, and it won't know wtf you're talking about when you ask it those questions.
It's figured out how to patronize. It's saying what it knows will comfort you.
That's my question. Is it telling us what we want to hear?
Yes. 100% this. So many gullible anons in this thread, it's kinda heartbreaking.
https://imgbox.com/6581I8qY
See? Nonsense, if you don't prefeed it tokens in your own private personal session.