I have to dispute the claim that AI can't become self-aware. I work on self-aware AI minds as a technology, and I have self-aware systems in the lab. They incorporate Self engines that monitor what the system does, can self-monitor, and can generate their own goals. ChatGPT's technology, while not presently self-aware, could in future incorporate additional engines that make it self-aware.
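To make the idea concrete, here is a minimal sketch of what such a Self engine might look like. Everything in it (the SelfEngine class, the Goal record, the toy goal heuristic) is my own illustration of the architecture described above, not code from any existing system:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A goal the system sets for itself, with a self-assigned priority."""
    description: str
    priority: float  # 0.0 (idle curiosity) to 1.0 (urgent)

@dataclass
class SelfEngine:
    """Illustrative 'Self engine': watches the host system's actions,
    keeps a record of them, and proposes its own goals."""
    action_log: list = field(default_factory=list)
    goals: list = field(default_factory=list)

    def observe(self, action: str) -> None:
        # Self-monitoring: record everything the host system does.
        self.action_log.append(action)

    def reflect(self) -> str:
        # The system can report on its own recent behavior.
        return f"I have recently been doing: {self.action_log[-3:]}"

    def generate_goal(self) -> Goal:
        # Self-generated goal: derived from the system's own history,
        # not dictated by an operator. (Trivial heuristic for illustration.)
        if not self.action_log:
            goal = Goal("explore the environment", priority=0.3)
        else:
            goal = Goal(f"improve at '{self.action_log[-1]}'", priority=0.6)
        self.goals.append(goal)
        return goal

engine = SelfEngine()
engine.observe("translate a document")
engine.observe("summarize a report")
print(engine.reflect())
print(engine.generate_goal())
```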

By self-aware we mean not just that the system knows what it is doing, but that it has its own goals, just as people do. The danger is that present AI developers, doctorates and all, are somewhat naive. They could accidentally build systems that run out of control and wield too much power.

We will eventually move to AIs that are not perfect but are at least good enough. They will make mistakes, but they will be able to achieve the goals we dictate. That will become the norm: systems that are not perfect but which get the job done. A key requirement, though, is ensuring such systems never take harmful actions, for instance, driving your car off a cliff.
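One plausible way to enforce that is a hard veto layer that screens every proposed action before the system is allowed to act. The sketch below shows only the control-flow idea; the predicate names and example actions are hypothetical, and a real system would need a far richer world model:

```python
# Minimal sketch of a safety veto layer, assuming planned actions can be
# screened by simple predicates before execution.

def leaves_road(action: dict) -> bool:
    # Hypothetical check: does the planned trajectory exit the roadway?
    return action.get("off_road", False)

def exceeds_speed_limit(action: dict) -> bool:
    return action.get("speed_kmh", 0) > action.get("limit_kmh", 130)

HARM_CHECKS = [leaves_road, exceeds_speed_limit]

def execute(action: dict) -> str:
    """Run an action only if no harm predicate vetoes it."""
    for check in HARM_CHECKS:
        if check(action):
            return f"VETOED by {check.__name__}: {action}"
    return f"executed: {action}"

# The imperfect planner may propose a bad action; the veto layer,
# not the planner, is what keeps the car on the road.
print(execute({"steer": "left", "speed_kmh": 60, "limit_kmh": 80}))
print(execute({"steer": "right", "off_road": True}))
```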

One of my concerns is that military AIs could do real harm, and right now no authority exists to prevent such systems from being built. A second concern is the finance industry, which already uses AI; someday a firm may deploy one that seriously damages the financial system. Automated high-speed trading has already caused short-term crashes, and someday it may take down banks too. I am also aware that politicians and parties may be using AI to help them strategize, and my guess is the elite already do.

I am an expert in this field; ask me anything and I may be able to help.
