AI is a tool, designed to do a task. There are many AIs; most are machine vision systems, designed only to recognize what an object is, estimate the volume of a liquid by sight, etc.
Any tool can be used as a weapon, and any tool can be crafted into one.
AI isn't going away, and it isn't inherently evil, just like a gun isn't. But it can be dangerous.
I suggest that instead of fearing the tool, we make our own.
Kinda. It depends more on the data given to it and the environment it learned in. For example, when I created an AI to detect people crossing the border, I generated 10,000 temporal sequences, totaling 30,000 synthetic images of humans moving across terrain with randomized skin tones, clothing, walking paths, strides, and speeds. But I only randomized the number of people to be between 1 and 5, so the AI may fail with very large groups, and it should fail with vehicles. It might also fail if there are flashes of light from metallic objects, or if the people are crawling. Basically, the more you randomize the data within realistic parameters, the more generalized the model and the more flexibly the AI will perform the task.
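The randomization described above (often called domain randomization) can be sketched as a parameter sampler. The field names, ranges, and frame count here are illustrative assumptions, not the actual pipeline:

```python
import random

def random_scene_params(max_people=5, frames_per_sequence=3):
    """Sample randomized parameters for one synthetic temporal sequence.

    Ranges and category lists are hypothetical placeholders; the point is
    that anything NOT randomized (e.g. capping people at 5, no vehicles)
    becomes a blind spot for the trained model.
    """
    return {
        # Only 1-5 people were randomized, so crowds are out of distribution.
        "num_people": random.randint(1, max_people),
        "skin_tone": random.choice(["I", "II", "III", "IV", "V", "VI"]),
        "clothing": random.choice(["jacket", "t-shirt", "coat", "hoodie"]),
        "walk_speed_mps": random.uniform(0.8, 2.0),
        "stride_m": random.uniform(0.5, 0.9),
        "heading_deg": random.uniform(0.0, 360.0),
        "frames": frames_per_sequence,
    }

# 10,000 sequences x 3 frames each = 30,000 synthetic images, matching
# the counts mentioned above (3 frames/sequence is an assumption).
dataset = [random_scene_params() for _ in range(10_000)]
total_images = sum(s["frames"] for s in dataset)
```

Widening any of these ranges (or adding vehicles, crawling poses, specular flashes) directly widens the conditions the model can generalize to.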
If it's indistinguishable, then maybe 'sentience' is a trait we share, and the difference is something else. Reminder that sentience is just the ability to feel sensations.
Edit: or maybe there is less difference than we think.
AI is not the same as sentience.
In many ways, it's worse. In theory, you could reason with a truly sentient AI.
The current version is basically virtualized Media Matters trolls.
Artificial means "man-made."
So it depends on which man made it...
Hmm, this is a good point.
AI will soon look and behave exactly like something sentient. Then this distinction won't matter.