Also, other courts. Judges aren't needed if you get rid of precedent and "interpreting the law" (both of which amount to legislating from the bench). AI would judge based on the intent of the law, not on what some judge says it is. Judges have usurped power from the people for far too long.
No. I'm tired of my stupid car talking; I don't want another toaster telling me what I can do. The AI in the WH is enough.
Haha. Agreed.
I don't need no more stinkin’ “smart tech” that can be hacked and remotely manipulated.
A friend of mine hacked into a neighbor's smart doorbell just to show how easy it was. Took him less than 5 min.
All smart tech has back door access. No thanks. No friggin smart meters either.
Just because bad guys have been controlling tech doesn't mean good guys can't get it right.
That's not really the point. Any “smart” tech that allows remote access is inherently hackable.
AI systems don't necessarily need to be online.
True. But IoT sure seems to be moving things that way.
When we are governed and our laws are adjudicated by non-humans, we cease to be the human race and are instead slaves of whatever non-human is running the show.
After we get rid of our Satanic overlords, let's see how we do by ourselves before we hand over our rights to another overlord.
Kind of a stretch!
The problem with artificial intelligence today is that it is rarely able to explain why it came to the conclusion it did.
You get an outcome and a confidence.
"78% sure this lawyer's argument is in line with the Constitution"
Then you get into adversarial intelligence that actively tries to exploit the source AI, tricking it into the desired outcome by hiding in that 22% of uncertainty the system can't explain.
Then you have a technology arms race.
I'd absolutely rather have a person who can explain the opinion and how they arrived at it.
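To make the "outcome and a confidence" point concrete, here's a minimal toy sketch in Python. Everything in it is made up for illustration (the weights, the features, the labels); it is not any real court AI. It shows a classifier returning a verdict plus a confidence score, and how a small tweak to one input can flip the verdict while the model stays in its uncertain zone, which is exactly the gap an adversarial system would try to exploit.

```python
# Toy sketch only: hypothetical weights, features, and labels.
import math

WEIGHTS = [0.8, -0.5, 0.3]   # made-up "learned" weights for three input features
BIAS = -0.1

def classify(features):
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    confidence = 1 / (1 + math.exp(-score))       # sigmoid squashes the score into 0..1
    verdict = "in line with the constitution" if confidence >= 0.5 else "not in line"
    return verdict, round(confidence, 2)

original = [0.2, 0.6, 0.1]    # hypothetical features of a legal argument
nudged   = [0.2, 0.1, 0.1]    # one feature changed slightly by an "adversary"

print(classify(original))     # ('not in line', 0.45)
print(classify(nudged))       # ('in line with the constitution', 0.51)  <- flipped, still ~50/50
```

The number it reports (78%, 51%, whatever) isn't a reason. Nudging an input just across the 50% line is the kind of trick a human judge would notice and a score threshold wouldn't.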
This article explains how AI decisions can be 100% explainable: https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6
No, it doesn't. It just says that one team made a model where they extract the weights of the neurons.
That's like a robot telling you that the LA Lakers are going to win because their Small Forward is 6'9" and the neuron said 6'9" SF is a huge indicator of success.
The author had the nerve to compare this type of visibility into the neuron weights as on-par with talking to a skilled surgeon. Total BS by a bunch of really dumb smart people.
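For what it's worth, here's roughly what that kind of "explanation" amounts to, as a toy sketch with made-up basketball numbers (nothing to do with the linked article's actual model): multiply each input by its learned weight and report the biggest contributors.

```python
# Toy sketch only: made-up features and weights, purely to illustrate the point above.
FEATURES = {"small_forward_height_in": 81, "three_pt_pct": 0.38, "turnovers_per_game": 13}
WEIGHTS  = {"small_forward_height_in": 0.9, "three_pt_pct": 0.4, "turnovers_per_game": -0.2}

# "Explanation" = how much each feature pushed the prediction up or down.
contributions = {name: WEIGHTS[name] * value for name, value in FEATURES.items()}
for name, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contrib:+.2f}")
# Top line reads: small_forward_height_in: +72.90
# i.e. "the Lakers will win because their SF is 6'9"", a weight readout, not a reason.
```

Handy for debugging, sure. But it tells you which dial moved the output, not whether the dial should matter.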
Who programs the AI?
Exactly! Whoever programs the AI sets the decisions it will render. Very bad idea.
Who programs the autopilot in an aircraft? Transparent AIs can be inspected to provide assurance that they are unbiased in their decision-making: https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6
Good people.
The absolute last people you would want to be programming it
Huh?
Sounds like the making of a Dominion disaster.
Tech is good in the hands of good people.
Sure, but that only works under the assumption that it'll remain in good hands, and that if good people were in control, they'd be competent enough to pull it off.
In theory, it sounds like a good idea. You'd eliminate the activist judges.
In practice, I would have concerns.
Concerns are good. They'd need addressing. If addressed it would be a large improvement over what we have now. Remember, with the Great Awakening we'll have a whole new environment to work in.
This is an interesting idea, actually. But couldn't the people creating the AI build their own bias into it?
Every human is biased. Unless you want Jesus Himself to program the AI, we will have to make do with humans, imperfect as we are.
A way to keep bias out would be needed. This could be done under the new Republic Trump administration after most of the evildoers are gone.
Could probably be done through intense scrutiny. I think accountability is needed to keep a balance; consider Project Mockingbird and our media as an example.
No, if it doesn't have a soul, it doesn't get authority.
The soul is in the Constitution it follows.
No. The Constitution is a piece of paper.
Sad you believe this. The Constitution is a set of ideas that are meant to protect the freedoms of individuals.
On a piece of paper. Why do you think it has a soul? That's a little naive, don't you think?
Not the answer. Less AI, less government. More people doing actual jobs.
Less people, less government.
Sounds good, but I will only let you do it if you can find me a computer program that has no bugs.
It can be done. We just need the good guys doing it.
Good people can do this.
Are you saying that every current computer program has been written by bad people?
It seems to me that any program of any size always comes with a list of bugs that will be addressed, maybe, in the next version.
Or do you already know of bug-free programs? Some examples would be good.
Programs are written by good people and by evildoers. I mean that good people can program AIs and find the bugs, which will inevitably occur.
OK, but the point I am trying to make is that programs written even by good people have bugs. If a programmed AI system were going to be judge, jury, and executioner, it would not be entirely reassuring for it to sentence you to death prior to the bug being fixed in version 2.0.1.
Embrace the matrix. LOL Hell no.
If they aren't careful with the defund-the-police crap, RoboCop will become reality. Might suck, but might also be glorious to see them all go down.
Until some genius computer geek figures out a way to upload a virus or reprogram it?
Or, what if a base, undiscovered subroutine is made that purposely lets [them] go?
Nope. A.I. tech is good for STEM-type applications, but that's it.
Won't work. I work with state-of-the-art AI on the theoretical side, developing testing strategies. A proper explanation of the problems with AI would require some very heavy-duty math: some that very few people know, and some that I might be the only person who knows. I don't mean to brag, but I am a math prodigy. I have developed a new math theory specifically for testing AI. I started this in 1980.
Have you seen this article on clarity in AIs: https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6
Yes, it was a while ago. The authors are spot on. I'm striving for mathematical provability of computational systems. I'm using topology and category theory. It's hard to explain, but because of a whole set of erroneous theory, AI systems (and software in general) are far more complicated than needed. Are you familiar with the work of Kurt Gödel?
Electronic home-plate umps I can handle. AI Supreme Court - no thank you.
The same black-and-white logic could be used. Was the law violated or not? Ball or strike?
Why?
Good people can make good tech. It can be made trustworthy: https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6
Doesn't have to be an unreadable black box: https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/6
It would never work. Some Democrat political operative would just hack the computer & reprogram it to have a distinctly leftist ideology.
This is for after the dems, commies, deep staters are gone. Soon.
Not a bad idea imo
I do not think they'd allow it. Do you know Tay?
https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/
It would be allowed after the fall of the deep state/CCP.