Did you know the best AI is called Q* and is debated to be AGI?
But that's dangerous nonsense. The biggest problem with AI of any type or level is that it is not part of the family of life, which shares so very much in terms of both hardware (wetware, actually) and software (instincts, biological and psychological needs, concerns, predispositions, etc.). Machine intelligence definitely doesn't have actual empathy for life.
AIs are psychopaths. They can be very charming and very convincing, but they don't have a deep connection to us. Insects are far more closely related to us than computers are, and I don't trust insects one bit.
The best AI we have today is just a tool that does text completion based on billions of pieces of text that have been fed into it. There is no intelligence, let alone sentience.
Regardless, there is the possibility for great mischief and worse. Intentional misuse of the tool, accidental misuse of the tool, random unplanned harm from the tool itself, and malicious harm from the tool -- although "malicious" would likely have to be defined a bit differently where there is no wide agreement on whether such a machine is "conscious", especially given that even experts in the field can't agree on a definition for the term in humans and animals.
Pretty much any tool can be used for malicious harm, including guns. This logic is actually very similar to Dems trying to outlaw guns.
What is important is that AI development is transparent and happens in the public space. If public data was used to train a model, that model has to be publicly available.
When the same tool is available for everyone, we can use the same tool to counteract those who try to use it maliciously.
Also, we don't need "experts" to tell us what is "conscious". Allowing experts to define common-sense terms is how we got into this mess. The only significant effect of experts defining something as "conscious" is that they will then ask for equality for that entity.
As long as we all agree, through common sense, that human beings created by God are never going to be the same as any other entity, and that no other entity will ever get the same rights, we are going to be fine.
Yet*
It's like saying a knife, if developed long enough, will eventually become a tank.
I don't think an LLM can, by itself, qualify as AGI. It's simply a very powerful text-suggestion algorithm (to vastly under-explain it), but I think an LLM could be a component of an AGI "brain", say the mouth or the language center.
Edit: but I think we are decades away from that conversation.
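The "text completion" idea the commenters keep invoking can be sketched with a toy next-token loop. This is a deliberate over-simplification (a bigram word counter, nowhere near a real LLM), and the corpus, function name, and greedy tie-breaking are all illustrative assumptions, not anyone's actual system:

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the "billions of pieces of text"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: which word tends to follow which
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt, steps=4):
    """Greedy 'text completion': repeatedly append the most frequent next word."""
    words = prompt.split()
    for _ in range(steps):
        followers = bigrams.get(words[-1])
        if not followers:
            break  # no statistics for this word, so stop
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(complete("the cat"))  # → "the cat sat on the cat"
```

The point of the sketch is that nothing here "understands" anything: the output is driven entirely by frequency statistics over the training text, which is the (hostile but not baseless) sense in which the thread calls an LLM a text-suggestion tool.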