.....
A cybersecurity breach at Muah.ai, a website offering “uncensored” AI-powered chatbots, has resulted in the theft of a vast database of users’ interactions, including prompts from a large number of users seeking child pornography.
404 Media reports that a major hacking incident at Muah.ai, a platform that allows users to create customized AI chatbots with a focus on explicit content, has exposed a substantial amount of user data, including chatbot prompts that reveal users’ sexual fantasies and, in many cases, attempts to create chatbots for child sexual abuse scenarios.
The hacker, who chose to remain anonymous, discovered vulnerabilities in the Muah.ai website, describing it as “basically a handful of open-source projects duct-taped together.” Initially driven by curiosity, the hacker decided to contact 404 Media after discovering the nature of the data it contained.
The stolen data includes email addresses, many of which appear to be personal accounts with users’ real names, linked to their chatbot interactions. This represents a significant privacy breach for Muah.ai users, as their explicit sexual fantasies can now potentially be tied to their real identities.
Some of the data contains explicit references to underage individuals, including the sexual abuse of toddlers and incest with young children. While it’s unclear if the AI system generated responses reflecting these requests, the data reveals the disturbing intentions of some users.
Muah.ai administrator Harvard Han claimed the breach was “financed by our competitors in the uncensored AI industry.” However, no evidence was provided to support this assertion, and the hacker denied any affiliation with the AI industry.

Muah.ai positions itself as an “uncensored” platform for building AI chatbots, explicitly allowing sexual conversations and photos while claiming to ban underage content. The site offers pre-made bots, community-created options, and the ability to create custom AI companions that can send sexually suggestive or explicit messages and AI-generated images.
.....
.....
It's a tricky situation, isn't it? On one hand, all such imagery is disgusting and perverse and should be destroyed. On the other, images created via AI could supplant the need for real-life images, i.e., the real-life photographers would be out of business, unable to compete with the ones doing it "safely." Fucking weird situation, but if I'm being honest, I would obviously rather the children be saved, or not needed, over any other feelings I may have about the content existing. It's almost like it's better to trap the weak-minded people in a matrix of sorts designed to provide base-level stimulation so they don't interfere with normal society.
I’d say it depends on whether there really is a “need”: whether such perverse attractions are immutable or a sickness that can be treated.
My primary concern on this matter is for real life children, and an AI creating images from nothing doesn’t directly harm them.
And then there’s the question of whether access to such not-immediately-harmful content merely staves off these people’s desires or causes them to want something more.
I don’t know the answers to these questions, but I don’t think it’s a bad thing that these people have been exposed. Maybe the shock will get them to rethink the things they know they can control in their lives.