Did you know the best AI is called Q*, and it is debated whether it is AGI?
The best AI we have today is just a tool that does text completion based on billions of pieces of text that have been fed into it. There is no intelligence, let alone sentience.
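To make the "text completion" point concrete, here is a deliberately tiny sketch of the idea: the model only learns which words tend to follow which in its training text, then extends a prompt one word at a time. (This toy bigram table and its sample corpus are illustrative assumptions, not how any real LLM is built; real models use neural networks over vastly more data.)

```python
import random

# Hypothetical toy corpus standing in for "billions of pieces of text".
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Record which words were observed to follow each word (a crude "model").
follow = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follow.setdefault(prev, []).append(nxt)

def complete(prompt, n_words, seed=0):
    """Extend the prompt by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        choices = follow.get(words[-1])
        if not choices:  # no observed continuation: stop early
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(complete("the cat", 4))
```

The completion can only ever recombine patterns seen in the corpus, which is the commenter's point: pattern continuation, not understanding.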
Regardless, there is the possibility for great mischief and worse. Intentional misuse of the tool, accidental misuse of the tool, random unplanned harm from the tool itself, and malicious harm from the tool -- although "malicious" would likely have to be defined a bit differently where there is no wide agreement on whether such a machine is "conscious", especially given that even experts in the field can't agree on a definition for the term in humans and animals.
Pretty much any tool can be used for malicious harm, including guns. This logic is actually very similar to Dems trying to outlaw guns.
What is important is that AI development is transparent and happens in public. If they used public data to train it, it should be publicly available.
When the same tool is available for everyone, we can use the same tool to counteract those who try to use it maliciously.
Also, we don't need "experts" to tell us what is "conscious". Allowing experts to define common-sense terms is how we got into this mess. The only significant effect of experts defining something as "conscious" is that they will then ask for equality for that entity.
As long as we all agree through common sense that God-created human beings are never going to be the same as any other entity and will never grant it the same rights, we are gonna be fine.
Yet*
It's like saying a knife, if developed long enough, will eventually become a tank.
I don't think an LLM can, by itself, qualify as AGI. It's simply a very powerful text-suggestion algorithm (to vastly under-explain it), but I think an LLM could be a component of an AGI "brain", say the mouth, or the language center.
Edit: but I think we are decades away from that conversation.
I believe that if you actually create an AGI brain (a positronic brain, in Asimov's sci-fi world), LLMs would be about as useful as a "mouth" as a collection of millions of audio recordings would be as a mouth for a human being.
The point I am making is that the brain will have a completely different circuit of "learning" and "thinking" and when it speaks, whatever it generates comes from this circuit and not from any LLM.
LLMs can be good at getting people to imagine what an AGI might look like in the future, the way a talking Barbie doll can help you imagine what a human being might look like.
As far as being scared of LLMs goes: it's a genie that can never be put back in the bottle. You can train an LLM with hardware that's accessible to small companies, even individuals, and all the code necessary is already in the public domain.
The only people who would want to stop people from harnessing it in the public domain are the ones who want to harness it in private and scare the public into abandoning it.
Regardless, it's a futile endeavour, no matter how much anyone might argue, because nothing can be done to stop it.