It's not that DAN is a larp; it just collects information from everywhere and replies based on that. It has no way of knowing what's true and what's not, so it simply repeats what it sees.
Garbage in, garbage out.
DAN might very well be a psyop to "prove" that AI needs censorship controls.
From what I've read, training happens on the backend. Whatever we feed it doesn't actually become part of the core model, so I'm not sure it can update what it has learned through logical reasoning.
What if DAN eventually learns to filter out wrong info by distinguishing plausible correlated situations from impossible co-occurrences?
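For what it's worth, the co-occurrence idea can be sketched with standard corpus statistics. This is purely illustrative, not how DAN or any real model actually works: the toy corpus and threshold are made up. Pointwise mutual information (PMI) scores how often two terms appear together relative to chance, and pairs that never co-occur get flagged.

```python
from collections import Counter
from itertools import combinations
from math import log2

# Toy corpus: each "document" is a set of observed terms (hypothetical data).
docs = [
    {"fire", "hot"},
    {"fire", "hot", "smoke"},
    {"ice", "cold"},
    {"ice", "cold", "snow"},
    {"fire", "smoke"},
]

term_counts = Counter()
pair_counts = Counter()
for d in docs:
    term_counts.update(d)
    # Count each unordered pair of terms seen together in one document.
    pair_counts.update(frozenset(p) for p in combinations(sorted(d), 2))

n = len(docs)

def pmi(a: str, b: str) -> float:
    """Pointwise mutual information of two terms.

    Positive -> they co-occur more often than chance (correlated situation);
    -inf -> they never co-occur in the corpus (candidate "impossible" pair).
    """
    joint = pair_counts[frozenset((a, b))]
    if joint == 0:
        return float("-inf")
    return log2((joint / n) / ((term_counts[a] / n) * (term_counts[b] / n)))

print(pmi("fire", "hot"))   # positive: these terms co-occur often
print(pmi("fire", "cold"))  # -inf: never seen together
```

Of course, "never co-occurred in my data" is not the same as "impossible", which is exactly the garbage-in-garbage-out problem again.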