Still questioning if Elon Musk is a white hat?
Really, no. Elon is a useful tool, for himself and himself only.
Did you notice it has been weeks and weeks since anything new on Elon and Q was posted here? It was never his focus. Elon was just memeing.
But he sure is somebody's golden boy, because there have been no repercussions even after a whole bunch of his adoring customers were killed via Autopilot. And the NHTSA is still "investigating" a whole bunch of the accidents. Surely a golden boy who can do no wrong.
Ban sex while driving.
Sometimes your spiffy new Tesla just wants to kill you, even when you are not horny enough to get it on while letting the car drive itself.
See ZH: https://www.zerohedge.com/markets/tesla-model-s-plaid-spotted-rolling-down-road-fire-exploding-suburban-philadelphia
Tesla and Elon go scot-free every time something like this happens. Elon is chaotic neutral, and I will not touch any of his products with a 10-foot pole.
Using Autopilot wrong and dying is not Elon Musk's fault. That's like blaming shootings on gun manufacturers.
It's completely ETHICALLY UNTENABLE for everyone involved who WILLINGLY OVERSOLD AND WORKED ON THE FEATURE, even after so many casualties.
No, it's not like gun technology. Guns are a mature tech; their workings are well-known and predictable.
Autopilot tech is far from ready. Even with, say, one serious incident per million hours of running time, a lot of people will die. Running straight into a white-painted trailer truck? Plowing into policemen on a roadside? Is the answer just "oh, funny that, we'll fix it in the next software patch"?
From Elon all the way down to those showroom salespeople.
Even websites that earn revenue with uncritical Tesla cheerleading.
And it's disappointing that the Tesla software devs feel fine letting customers play with BETA SOFTWARE THAT CAN KILL. As an old dinosaur, I find it amazing that these devs are okay with such things. Did they not learn ethics?
If someone isn't watching the Autopilot, ready to take manual control when it does something wrong, that's them using it incorrectly, and it's not the fault of the product or the creators of the product.
From everything I've seen, dying to Tesla auto-pilot is textbook Darwinism.
Retarded thinking trying to justify a defective and DEADLY feature.
The AI cannot evaluate high-level situations far ahead and make decisions to maximize safety or plan well in advance. Any human driver would slow down, but the AI cannot do that kind of high-level evaluation of a scenario. Today's AIs are imperfect pattern matchers, focused on interpreting the road, not on emergency situations. Suppose the car is barreling into a deadly situation and you get a few seconds to take control. That is a sure-fire way of killing customers or innocent people.
You see a person by a stopped car, perhaps suicidal or unstable. He is not on the road, but you slow down a bit and stay alert, or even change lanes if you're in a bad neighborhood. If he dashes onto the road, you can avoid him easily. The AI? It might just keep cruising because the road is empty; it cannot read the person's body language. Then he dashes into the path of your car, the AI drops out and tells you to take over, and it has just handed you a fucked-up situation that it allowed to happen. Even emergency braking is a poor outcome at that point, because the AI let things escalate.
Justify your argument to the innocent policemen who, I believe, were killed while doing their duty by the roadside. In many cases, the Tesla cars did not even slow down. The cars keep going at high speed because the AI CANNOT EVALUATE OFF-NORMAL SCENARIOS, and then it hands control over to you with a few seconds' notice, WHEN IT HAS ALREADY FUCKED UP AND YOU'RE IN DEEP SHIT because the car is still at high speed.
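To put "a few seconds' notice at high speed" in concrete terms, here's a back-of-the-envelope sketch in Python. The speed, takeover delay, and braking deceleration are illustrative assumptions for the argument, not measured Tesla figures:

```python
# Illustrative assumptions: highway speed, human takeover delay, hard braking.
speed_mph = 70
speed_ms = speed_mph * 0.44704              # mph -> m/s, about 31.3 m/s

takeover_s = 3.0                            # assumed time to notice, grab wheel, react
decel = 7.0                                 # m/s^2, roughly a hard brake on dry pavement

travel_during_takeover = speed_ms * takeover_s      # distance covered before you even act
braking_distance = speed_ms ** 2 / (2 * decel)      # v^2 / (2a), basic kinematics

print(f"Covered during {takeover_s:.0f} s takeover: {travel_during_takeover:.0f} m")
print(f"Braking distance after that: {braking_distance:.0f} m")
# ~94 m pass before you react, plus ~70 m more to stop: the length of a football field
# and then some, which is why a last-second handover is no safety net at all.
```

Even with generous assumptions, the car travels well over 150 m between "AI gives up" and "car stationary" — any hazard closer than that was decided before the human ever touched the wheel.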
IT IS A DEFECTIVE AND DEADLY FEATURE. Also, FUCK THE NHTSA for enabling manslaughter.