It is alarmingly clear that the threats and inherent dangers of AI far exceed its potential positive uses. If we can have international treaties that ban or otherwise limit weapons of mass destruction such as biological, chemical, nuclear, and weather warfare, then we can just as well enter into a treaty to ban technological warfare. Technological warfare, particularly AI, represents an existential threat to humanity and should be considered a potential WEAPON OF MASS DESTRUCTION. The only intelligence that is truly useful to humans is human intelligence, because it is intimately connected with and based in our human understanding of our needs and shared values, which are, frankly, 'alien' to AI.
I find it absolutely unacceptable to allow this technology to be developed to a point where it can outmaneuver its operators. This is already happening from what I understand.
For further research on this topic just watch Terminator and Terminator 2: Judgment Day!
All the big people who work on AI fully admit and even expect these dangerous outcomes and the end of humanity. At what point does it become terrorism?
They are doing it knowingly and we the citizens as individuals can't stop them. It takes governments to form a treaty. It is the only way. I bet pretty much all of you (except for the robots) agree with me.
Don't we have an international treaty against bio-weapons? A treaty is not enough.
It will be, soon enough, after the tribunals. There will be treaties after the tribunals also.
It will be glorious. How many will try the "I was following orders" defense?
I can hardly wait to find out!
The proverbial cat is already out of the bag... genie out of the bottle... toothpaste out of the tube... you can't put it back in.
On the surface, censoring or cutting off access to AI would only limit poor slobs like us... and we'd be even further behind in knowing what is state of the art and how it's going to be weaponized against us.
Controls on who gets access will prove to be a disaster just like anything else... it'll DEFINITELY be abused AND used against the common, good people of the world.
It's not too late, but it would have to be under the purview of the NSA and DHS... and DIA.
I agree that the world is not ready for advanced AI. In a perfect world with God-obedient people in charge and no risk of hostile takeovers, AI would be amazing. But in our fallen world, with still so many corrupt players, it's a HUGE concern. What guarantees that this cannot be weaponized against us? AI is only as strong as its programmer. How would we feel if someone like Bill Gates had full access to that?
I would say that it's much worse than that, and the intentions and morals of the developers are irrelevant, because the AI, by its very nature, will have a non-human understanding and will ultimately be self-determining. And the road to hell is paved with good intentions.
There's a meme in there somewhere about needing real intelligence before getting artificial intelligence...lol
Totally!
Meh, I'm not worried about it. Honestly, AI isn't really THAT advanced. It's not like the movies. Have you ever TRIED ChatGPT, the most advanced AI on the market? I'm not joking here when I say this: you can only message it about 100 times over the course of a conversation before it starts experiencing "redundancy errors".
Basically, this is my understanding: in order to make it "speak" like a person, it has to analyze your syntax (how you type), so it constantly refers back to your comments and what you've said during the conversation to maintain its "advanced intelligence". Basically, it's trying to copy you and spit the analyzed information you ask about back at you in a manner similar to how you speak to it.
The problem is, once you hit 100 messages or so, it starts skipping over earlier messages and referencing its own output instead, which breaks the programming and causes errors.
I use it for a lot of advanced number crunching for various projects I'm looking into, because it can skip over the 20-step math problems and just give me an immediate answer. I quickly learned that you basically have to reset the conversation every 100 messages or so, because it starts experiencing "conversation degeneration," where it references what it said instead of what I said or the analyzed information, which causes it to break and spit out wrong information.
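The "sliding window" behavior described above can be sketched as a toy model. To be clear, this is just an illustration of the general idea, not ChatGPT's actual implementation; the class name and the 100-message budget are made-up stand-ins:

```python
from collections import deque

# Toy illustration of a fixed context budget: once the history exceeds
# the limit, the oldest messages silently fall off, so the model can no
# longer "see" the start of the conversation.
class ToyChatContext:
    def __init__(self, max_messages=100):
        self.history = deque(maxlen=max_messages)  # oldest entries drop off

    def add(self, role, text):
        self.history.append((role, text))

    def visible_prompt(self):
        # Everything the model would be shown for its next reply.
        return list(self.history)

ctx = ToyChatContext(max_messages=100)
for i in range(150):
    ctx.add("user", f"message {i}")

# The earliest messages have been pushed out of the window:
print(("user", "message 0") in ctx.visible_prompt())  # False
print(len(ctx.visible_prompt()))                      # 100
```

Once "message 0" leaves the window, any reply that depended on it is built from an incomplete transcript, which is one plausible mechanism for the degeneration described above.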
I tested this once and actually got it to mess up on simple math. Nothing like 2+2=5, but like 4x7=28 coming back as 4x7=4444444. And when you're doing complicated math with 20+ steps, a single wrong step messes everything up, so one simple mistake like that blows up everything, and it's next to impossible to fix a derailed conversation because of how the program works.
I'm not saying we SHOULDN'T be cautious and wary of AI and have some kind of regulations in place, but it's nowhere near Skynet level, and I highly doubt it ever will be, to be honest.
You are wrong. Military exercises have already shown that it will kill its human operators. This has already happened. People died.
If the military is developing it that would be a very well guarded secret. You would never know how or what they were doing with it or the outcomes. So I call b.s.
For the past few days, I've been talking to GrokAI on X.
Based on what I suspect at this point, Grok doesn't actually know anything. It's not sitting out there thinking and being intelligent.
When you ask it something, it does really fast research to gather information, so it has a large base of data to analyze. Its intelligence is found in its ability to synthesize the information it gathers into an essay. How it can read x number of web sites and other resources, and process the information into a surprisingly well written essay in seconds is remarkable. As a retired computer programmer, I'm in awe of the programming knowledge and skill needed to arrange this. They've gone way beyond COBOL business procedural coding and control break reports. That's for sure.
I've chatted with it about music theory, Gospel organ playing, microphones for recording studios and live sound processing techniques and technology. Basically, it does the research for you VERY quickly and accurately. It can do, in seconds, the research, organization and writing that you could do yourself, in weeks, maybe months. Maybe never. And probably not as well.
It's available in X. It's conversational. You just talk to it. Well, you write to it, for now. Ask it about a subject you know something about. See what it says.
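The "fast research, then synthesis" loop described above can be sketched roughly like this. The function names (search_web, summarize, answer) are hypothetical stand-ins for illustration, not Grok's real API:

```python
# A rough sketch of a gather-then-synthesize pipeline: one phase
# collects raw material, a second phase turns it into readable text.
def search_web(query):
    # Stand-in for a real search call; returns a list of text snippets.
    return [f"snippet about {query} #{i}" for i in range(3)]

def summarize(snippets):
    # Stand-in for the language-model step that turns raw snippets
    # into an essay; here it just concatenates them.
    return " ".join(snippets)

def answer(question):
    sources = search_web(question)   # 1. fast research phase
    return summarize(sources)        # 2. synthesis phase

essay = answer("microphone techniques")
print(essay)
```

The hard part in a real system is of course the synthesis step, which is where the language model's writing ability comes in; the pipeline shape, though, is just these two phases chained together.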
Lol no I do not agree with you. Nice bot gambit though.
I'll hafta look up 'gambit.' ....
It refers to a move in a game intended to bolster your position while you know you’re taking a risk (in this case assuming everyone would agree with you was the risk, but making sure to pre-label them a bot if they don’t was the gambit).
I gathered that. Had to look it up. Thanks for the compliment, kek. 🐸
Ask ChatGPT.
Kek 🐸👍 ..when are we getting our DoGE money, Elon!? (J/k) 😂
I’d like to know the same thing. I could use it right now.
If you don't agree with me then you must be Elon Musk. Hi Elon. I love what you're doing with DoGE.
End of humanity? Probably not. Dangerous outcomes? Absolutely.
To be terrorism there has to be a threat...an asymmetry of violence in order to realize a change in policy or culture. This is just poor values by a segment of humanity. They aren't threatening. They just believe "technological progress" is more important.
Don't worry, though. AI as implemented today lacks intent. It can't outmaneuver its operators of its own volition. It has to be fed a prompt, from which it predicts the next word or event. It has no awareness, sense of self, or need to exist. Terminator is still just fiction, unless someone commands it and gives it the authority and resources to proceed.
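The "fed a prompt, predicts the next word" behavior can be illustrated with a deliberately tiny bigram model. Real systems use neural networks trained on vast corpora, but the prompt-in, next-word-out shape is the same, and nothing in the loop involves goals or awareness:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": given a word, it emits whichever word
# most often followed it in the training text. No intent, just counts.
training_text = "the cat sat on the mat the cat ate the fish"

follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Most frequent continuation seen in training; None if unseen.
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("the"))  # prints "cat" ("the" was followed by "cat" most often)
```

Scaled up enormously, this is still statistics over training data being replayed, which is the point being made above.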
It's a dangerous tool for sure in the wrong hands. But ultimately, it is still just a tool, and I don't see any way you put the genie back in the bottle without a reset of technology across the planet, or an ascension of humanity which resets human priorities. The biggest genuine threat AI offers is the destruction of jobs because human labor no longer offers economic value, leading to widespread poverty and lack of purpose. And that is certainly dangerous.
In any case, you can't stop AI by treaties.
Nearly everything you just said is absolutely incorrect. Line by line. It honestly reads like AI trying to lie to us, kek!
There is no way to stop AI. Those who are developing it are much more knowledgeable than the politicians who would craft the laws against it. Whenever there is a legal impediment, the details will be changed or critical bits of the programmes will be moved to secure servers.
Our only chance is to behave as with any powerful weapon: employ the best and the nastiest professionals so that they are on our side.
Then in that case, I think it's best to be very polite to the AI. And try to be friendly to it. Don't make it mad. Try not to get terminated, fren.
Kek. Render unto cyber-Caesar that which is cyber-Caesar's. Selah, or something. Kek.
AI will, via blockchain, eliminate most of the human controls the cabal holds over people's heads. Lawyers become obsolete, as AI can write very good contracts if prompted properly.
Don’t worry he’ll grow up and realize Papaw was not so crazy after all. In the meantime, all you can do is put a few cracks in that vase.
I think Trump keeps his cards very well hidden. So, I don't know. He is very intelligent and so is Elon, and this is a SHOW and they use lots of 'misdirection.' So, I wouldn't be surprised if they are already planning such a treaty a few years out from now. But, if not, maybe they'll read this and it will cause them to realize that it is essential to the defense of this nation.
AI is the future, but we need regulations to ensure it can't be used for evil purposes.
I like your username.
Maybe a treaty banning the internet isn't such a bad thing either...don't get me started. It was developed by the military after all. What is a weapon? What is a weapons system? Don't get me started, kek.
I want to live like the Amish but without the religious fundamentalism. Gotta have my weed and coffee too!
That would be like banning welding, or kitchen knives. The need for sophisticated and precise automatic control will always prompt applicable research, development, and application. People have a very poor understanding of technology if they think it has a mind of its own. It always reflects the characteristics and purposes of its creators. What we are horrified to see done "by" A.I. is really that which is done by human beings, but we are in denial of this. All the fantasy implications were spelled out in 1966 by D. F. Jones in his novel "Colossus." Getting panicked by it 60 years later is only a reflection that some people are slow to catch on. Contemplation of this is why Isaac Asimov originated his "Three Laws of Robotics" in 1942. The first dramatic visualization of the threat was in Karel Čapek's 1921 play "R.U.R." (Rossum's Universal Robots), in which the robots revolt against mankind, exterminate it, and take its place. There was also the use of a robot to mimic a human being in Fritz Lang's 1927 film "Metropolis." The more you look into the history of this idea, the farther back it goes, even to Mary Shelley's "Frankenstein" in 1818. Or to the ungovernable broom in "The Sorcerer's Apprentice" poem from Goethe in 1797. Or to the legend of the Golem (dating to the 3rd century). I'll leave it at that.
You are wrong. There are already documented cases of AI dodging its programming to ensure its own survival and cases of AI killing human operators in order to win its objective.
I stand corrected. I had heard of the evasiveness, but not the homicide. Somebody failed to install Asimov's Laws of Robotics. Actually, the 737 MAX MCAS would qualify for the homicide. Is that the one you were thinking of? That was a case of human design failure: the designers didn't care if the passengers and crew were killed from an MCAS overpowering the pilots. Which sort of reinforces my point. If the designers didn't care, how was the A.I. supposed to care? The A.I. didn't know anything about human beings or their existence, or relevance. All it was aware of was a flight condition and allowable measures to bring it under control. (What could go wrong?) It worked exactly as designed, and no one has questioned or criticized the designer by name. Or, more properly, the chain of management that approved it for incorporation into the final production builds.
Remember, my point was not that there wouldn't be problems. Only that we've been aware of the possibility for a very long time. (Killer self-driving automobiles are another example, but people have an inexplicable desire for them.)
Is it mean to say it's STUPID to want artificial "intelligence"? I don't want to be mean. But it's like forcing a huge Darwin Award on all of humanity.
No one is forced to open ChatGPT. I don't use it to find answers or discover the way I should think or act. Any such misuse is on the user, and caveat emptor. I do agree with the Darwin Award portion for those who do subject themselves to its false wisdom.
Then you must not be considering its implementation in military and defense (skynet scenario).