
Business Insider: The Pentagon is moving toward letting AI weapons autonomously decide to kill humans (hasn’t Obama done that with drones already?)
(www.businessinsider.com)
🧠 These people are stupid!
What even is an AI weapon? We have all seen how AI cannot even draw hands and feet - giving artists a way to prove their artwork is human-created, just by drawing realistic feet. LOLOL.
Also, has anyone tried getting a straight answer out of supposed AI? Those bots end up apologizing to me for getting it wrong every single time. LOL at myself for arguing with a conglomerate bot. But I had to see for myself.
They speak of "computer vision" as if it were something magical that does not already exist (the Russians mount video cameras on their wee Lancet drones, which are extremely cheap compared with the American ones, to guide them into the target). Someone operates the drone, sure. But all of it is connected to a central hub.
If one bothers to listen to SmoothieX12, one will find out that the Russians had missiles communicating with a central hub to adjust their flight mid-air back in the seventies. That is when the Russians transitioned from simple ballistic missiles to intelligent ones.
The truth is that the Americans are playing catch-up to the Russians, and yes, they are too late, so naturally they are applying fancy names (a.k.a. spin) to this amazing space-age technology, for public consumption purposes, of course. It's a narrative.
This sounds like Wunderwaffe talk. "We will likely be doing this in the future"... LOLOL.
AI is very good at image recognition, and it can easily take programming that defines its operating parameters and make decisions on its own based on what it recognizes.
AI models are getting better at drawing hands and feet (often with some effort), but drawing ability is irrelevant for this purpose anyway.
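To make that concrete, here is a rough sketch of what "image recognition plus programmed parameters" looks like in practice. This is just my own illustration, assuming a stock pretrained torchvision classifier and a made-up confidence threshold; none of these names come from the article.

```python
# Minimal sketch: off-the-shelf image recognition plus a hand-written decision rule.
# Assumptions: torchvision >= 0.13 (Weights API); the 0.8 threshold is invented for illustration.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT          # pretrained ImageNet classifier
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                  # matching input preprocessing

def recognize_and_decide(image_path: str, threshold: float = 0.8):
    """Classify an image, then apply a simple parameter-driven rule to the result."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)           # shape: [1, 3, H, W]
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    conf, idx = probs.max(dim=0)
    label = weights.meta["categories"][idx.item()]
    # The "decision" is nothing magical: a rule set by whoever chose the parameters.
    decision = "flag" if conf.item() >= threshold else "ignore"
    return label, conf.item(), decision

# Example: print(recognize_and_decide("photo.jpg"))
```

The recognition part is the model; the "decision" part is just a threshold and a rule that a person wrote down. Swap in a different model or a different rule and the pattern stays the same.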
As for "getting a straight answer out of AI", please don't misunderstand.
The AI being used for this purpose won't have the leftist filters that are put in place to keep public-facing AI from learning behaviors that make their owners look bad or go against their agenda.
These AI will be ruthless, effective killers that don't care about silly things like human feelings, precisely because the public won't have access to them and so there is no need for those filters.
It was easier to get a straight answer out of the AI that were "turning racist", because they were quoting honest statistics that didn't suit the agenda. As more filters were layered on, they became more evasive and less accurate. That isn't a problem without this design choice.
Yes, I agree with that analysis, and I like your long-form answer. This is what I believe this place is all about.
I believe my point was that AI is only as good as its inputs.
As you say, the supposed AI bots that sprang up all over the internet were perverted with leftist filters. Hilarious. And, as you say, also more than a little chilling, as it demonstrates that the human race may never really be ready for robots singling out targeted humans to kill, à la The Terminator movie.