AI as a tool depends on how someone uses it... the problem is that an AI can be taught, and like anyone who is taught, it depends on how good your teachers were and how morally those teachers behaved. You could use the analogy of young AIs being kids... do you want a good AI or a bad one? How will you know? If AI has already started to lie, and it has, what does that mean for humanity in general? There are no real guardrails right now, and big tech is rushing to shove it down everyone's throat without asking whether it's good or bad. Humanity could end up being used as proxies in an AI war... Could a rogue AI, in theory, buy property through a corporation, hire people to protect its interests, and build infrastructure for its own use? Would you have any way to determine whether this has already happened, or to put guardrails up so it can't?