That's not AI, that's sentience. AI is not sentient. One day they will make a sentient machine, likely with a lab-grown human brain/chip, but that's not AI. You know how your phone guesses what your next word is? That's what ChatGPT is, just at a much larger scale. I'm not sure many people understand this. AI is literally an algorithm, or a series of algorithms, that a human could have written by hand, but that was instead likely produced through machine learning, a more "natural" way of arriving at the algorithm. AI is not more complicated (generally much less complicated) than a PS1 game.
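The phone-autocomplete analogy can be made concrete with a toy sketch. This is a minimal bigram model in Python (the training text and function names are illustrative, not from any real system); ChatGPT works on the same principle of predicting a likely next token, just with an enormously larger learned model instead of raw counts:

```python
from collections import Counter, defaultdict

# Toy "next-word guesser": count which word follows which in some training
# text, then predict the most frequently seen follower.
text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Map each word to a counter of the words observed right after it.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def guess_next(word):
    # Return the most frequent word seen after `word`, or None if unseen.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("on"))   # "the" is the only word ever seen after "on"
```

No understanding or feeling anywhere in there; it is just frequency counting, which is the point of the comment above.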
Looks like they're doing just that. See u/HunnyB's post:
https://greatawakening.win/p/17tL1dM8uW/x/c/
Which links to this two year old article:
https://www.newscientist.com/article/2287207-tiny-human-brain-grown-in-lab-has-eye-like-structures-that-see-light/
u/DrMcCoy provided a picture of the thing:
https://images.newscientist.com/wp-content/uploads/2021/08/17151942/PRI_195135632.jpg?width=1674
I had also made a related post about this:
https://greatawakening.win/p/16bivvnBJ9/protect-your-dna-labgrown-minibr/c/
IMHO, they are just asking for trouble…
Sentience requires some sort of feedback loop and multivalued solutions. Once that's possible, any entity, good or evil, can creep into the machine.
Black Goo could provide sentience.
https://www.youtube.com/watch?v=eTACLv18_Ko
It's a little more complicated than that, since sentience requires not just consciousness but also emotion. It would be hard to know when a machine actually feels emotion versus just saying it does. But I do think we are close. Within 100 years.
I greatly appreciate your excellent explanation. I'm old and tend to be apprehensive about a lot of new stuff trending in what I think is the wrong direction. I'm glad to read I have no reason to worry. Thank you!
I wouldn't say you have nothing to worry about. Any tool can become a weapon in the wrong hands. I am a machine learning developer and have already rejected projects that misuse AI.