A cybersecurity breach at Muah.ai, a website offering “uncensored” AI-powered chatbots, has resulted in the theft of a vast database of users’ interactions, including prompts from a large number of users seeking child pornography.
404 Media reports that a major hacking incident at Muah.ai, a platform that allows users to create customized AI chatbots with a focus on explicit content, has exposed a substantial amount of user data, including chatbot prompts that reveal users’ sexual fantasies and, in many cases, attempts to create chatbots for child sexual abuse scenarios.
The hacker, who chose to remain anonymous, discovered vulnerabilities in the Muah.ai website, describing it as “basically a handful of open-source projects duct-taped together.” Initially driven by curiosity, the hacker decided to contact 404 Media after discovering the nature of the data in the database.
The stolen data includes email addresses, many of which appear to be personal accounts with users’ real names, linked to their chatbot interactions. This presents a significant privacy breach for Muah.ai users, as their explicit sexual fantasies are now potentially identifiable.
Some of the data contains explicit references to underage individuals, including the sexual abuse of toddlers and incest with young children. While it’s unclear if the AI system generated responses reflecting these requests, the data reveals the disturbing intentions of some users.
Muah.ai administrator Harvard Han claimed the breach was “financed by our competitors in the uncensored AI industry.” However, no evidence was provided to support this assertion, and the hacker denied any affiliation with the AI industry. Muah.ai positions itself as an “uncensored” platform for building AI chatbots, explicitly allowing sexual conversations and photos while claiming to ban underage content. The site offers pre-made bots, community-created options, or the ability to create custom AI companions that can send sexually suggestive or explicit messages and AI-generated images.
I never thought of this aspect. Given that AI is close to generating photorealistic images and videos aren't far behind, I can't imagine what someone could do with this. Yuck.
It's a tricky situation, isn't it. On one hand, all such imagery is disgusting and perverse and should be destroyed. On the other, images created via AI could supplant the need for real-life images, i.e. the real-life photographers would be out of business, unable to compete with the ones doing it "safely?" Weird situation, but if I'm being honest, I would obviously rather the children be saved or not needed over any other feelings I may have about the content existing. It's almost like it's better to trap the weak-minded people in a matrix of sorts designed to provide base-level stimulation so they don't interfere with normal society.
I’d say it depends on whether there really is a “need”, whether such perverse attractions are immutable or if they’re a sickness that can be treated.
My primary concern on this matter is for real life children, and an AI creating images from nothing doesn’t directly harm them.
And then there’s the question on whether access to such not-immediately-harmful content merely works to stave off these people’s desires or if it causes them to want something more.
I don’t know the answers to these but I don’t think it’s a bad thing that these people have been exposed. Maybe the shock will get them to rethink the things they know they can control in their lives.
Real or not, it feeds the same desires. Do we, as a world, really want tech widely available to foster growth in this demand within the human population? Because that is what it does... it grows the demand and the market. It cultivates it. Manufactures it. Think for a minute, is that what we all want? We pay the regulators. They work for us... don't they?
I guess the question becomes do we have the right to limit what people view if they are not hurting anyone and no human was harmed?
I am with you, it feeds desires and those rarely stop progressing/expanding/evolving if continually fed.
That is the point. Child Porn is harmful to the child and the watcher... not a question.
And sadly as of this moment AI generated child porn is not illegal. Disgusting.
Breach is planned. They were set up by their beloved AI.
WTAF?! Who are these sick f*ckers?! Apparently there are not enough millstones in the world to meet the need.
People make the argument that these AI images at least aren't real children and so the people asking for them are somehow better than those who are seeking CP from sources that target real children.
Here's my thought: a rabid dog doesn't have to bite anybody for it to be a menace to society. AI-generated or real-child pornography is not the issue. The issue is that somebody seeking either is a f*cking rabid dog. I can pity the dog that likely didn't seek to end up rabid. That doesn't mean he can be left in society. He still needs a bullet courtesy of an Atticus Finch to put him down. Or a millstone. But that brings me back to my initial point: looks like there won't be enough millstones. We'll have to resort to bullets.