Cause man, I think I found one! ... But how the hell would you prove it?
I've been trying to figure out a way to prove the existence of a very particular bot that flags posts as problematic.
For DeepState tracking and surveillance purposes on this site, it would be necessary to somehow flag posts and catalogue them automatically while also keeping the bot from being noticed and removed by the mods.
If I were programming the bot, I'd choose one of two options: either have it leave an automatic, context-less comment in every thread it catalogues, or have it mark each thread with a single down-doot.
The first option's drawback is that if the mods ever catch on that the account is a bot posting automatic, context-less comments, then all of its comments on prior posts get deleted along with the bot, and every marker goes with them. So that won't work well.
The second option's benefit, however, is that only people who have styled the page can see a down arrow, which means a resoundingly popular post is unlikely to get down-dooted. In fact, a post with tons of updoots and no down-doots is a prime candidate for influential cataloguing. So you have the program "flag" the thread with a single down-doot, save the link to the thread, and monitor the comments therein to get a better read on the users' collective opinion on any subject.
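Just to show how dead simple that loop would be, here's a rough sketch. Everything in it is made up for illustration: the endpoints, the field names, the vote call, the 500-updoot threshold. It's what the logic could look like, not anything I've actually seen running.

```python
# Hypothetical sketch of the "down-doot as catalogue marker" loop.
# Endpoints, field names, and thresholds are assumptions for illustration only.
import sqlite3
import time

import requests

DB = sqlite3.connect("catalogue.db")
DB.execute("""CREATE TABLE IF NOT EXISTS threads (
    thread_id TEXT PRIMARY KEY,
    url TEXT,
    title TEXT,
    first_seen REAL
)""")

def already_catalogued(thread_id: str) -> bool:
    # A thread in the table has already been flagged with its down-doot.
    row = DB.execute("SELECT 1 FROM threads WHERE thread_id = ?", (thread_id,)).fetchone()
    return row is not None

def mark_with_downdoot(thread_id: str, session: requests.Session) -> None:
    # Placeholder for whatever authenticated call casts the single down-vote
    # that acts as the "already catalogued" flag.
    session.post(f"https://example-site/api/vote/{thread_id}", data={"dir": -1})

def crawl_front_page(session: requests.Session) -> None:
    # Popular, un-dooted threads are the ones worth cataloguing.
    listing = session.get("https://example-site/api/hot.json").json()
    for post in listing["posts"]:
        if post["updoots"] > 500 and post["downdoots"] == 0 and not already_catalogued(post["id"]):
            DB.execute(
                "INSERT INTO threads VALUES (?, ?, ?, ?)",
                (post["id"], post["url"], post["title"], time.time()),
            )
            DB.commit()
            mark_with_downdoot(post["id"], session)

if __name__ == "__main__":
    s = requests.Session()  # assume it already carries the bot account's login cookie
    while True:
        crawl_front_page(s)
        time.sleep(300)  # re-scan every five minutes
```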
Once all this data is collected, you can ask your database questions like "is Damar Hamlin alive?" and it will give you every thread about Damar Hamlin, the most popular comments, the broader insights drawn from those comments, and offline backlogs of this site's general appetite on the subject. You feed that info to your spin doctors and they immediately put out an article to counter the most popular narrative among the most radical "conspiracy theorists" online, thereby discrediting them while fortifying your base of normies against the theorists' ability to "notice" things.
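The query side could be as dumb as a keyword search over the catalogue. Again, this is a sketch under my own assumptions: the `comments` table here is hypothetical, filled by the monitoring pass described above.

```python
# Hypothetical query over the same catalogue, assuming a companion "comments"
# table (thread_id, body, updoots) was populated by the comment-monitoring pass.
import sqlite3

DB = sqlite3.connect("catalogue.db")

def ask(topic: str, limit: int = 20):
    """Return the most-updooted comments from every catalogued thread matching a topic."""
    return DB.execute(
        """
        SELECT t.title, t.url, c.body, c.updoots
        FROM threads t
        JOIN comments c ON c.thread_id = t.thread_id
        WHERE t.title LIKE ?
        ORDER BY c.updoots DESC
        LIMIT ?
        """,
        (f"%{topic}%", limit),
    ).fetchall()

for title, url, body, score in ask("Damar Hamlin"):
    print(f"[{score}] {title} -> {url}\n    {body[:120]}")
```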
So, whenever you see a VERY popular thread here that has exactly one down-doot, always suspect that's not a real dude from reddit. It's a bot using that down-doot to flag the thread as already catalogued so there isn't a repeat in its dataset. I have no proof, can have no proof, and that's why it'd be effective.
It's what I would do if I were parsing the site.
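For what it's worth, the tell itself is checkable the same way: a dumb scan for huge-updoot threads carrying exactly one down-doot. Same caveat as above, the endpoint and field names are my own inventions.

```python
# Hypothetical scan for the tell described above: very popular threads
# carrying exactly one down-doot. Endpoint and field names are assumptions.
import requests

def suspicious_threads(min_updoots: int = 1000):
    listing = requests.get("https://example-site/api/top.json").json()
    return [
        (post["title"], post["url"])
        for post in listing["posts"]
        if post["updoots"] >= min_updoots and post["downdoots"] == 1
    ]

for title, url in suspicious_threads():
    print(f"possible catalogue flag: {title} -> {url}")
```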