
I've studied data science and machine learning for AI. It is all just fancy automated statistics.

I can tell you that the capability to make a convincing AI chatbot did not exist until Deep Learning took off in the early 2010s. Geoffrey Hinton is the one who really got things going by building one of the first Deep Learning Neural Network classifiers, which could classify pictures of handwritten digits, and you can trace his research over the years leading up to the Deep Learning concept. The Deep Learning approach is basically throwing as much data and as many parameters at the problem as possible: you stack Neural Networks on top of each other in layers, and each layer computes "data features" or "parameters" from the previous layers, shaping and molding the data as it passes through to the next layer.
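
For a feel of what "stacking layers" means in code, here is a minimal sketch in Python with PyTorch. The layer sizes and the fake batch are illustrative choices of mine, not Hinton's actual networks:

```python
# Minimal sketch of the "stacked layers" idea: a small fully connected
# network that classifies 28x28 images of digits (0-9). Layer sizes and
# training details here are illustrative, not Hinton's originals.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),            # 28x28 image -> 784 numbers
    nn.Linear(784, 256),     # layer 1: 784 inputs -> 256 learned features
    nn.ReLU(),
    nn.Linear(256, 64),      # layer 2: reshapes those features again
    nn.ReLU(),
    nn.Linear(64, 10),       # layer 3: 10 scores, one per digit
)

# Each nn.Linear holds a block of the network's "parameters" (weights).
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params}")             # ~218,000 even for this toy net

# One training step on a stand-in batch, to show the mechanics:
images = torch.randn(32, 1, 28, 28)          # placeholder for real digit images
labels = torch.randint(0, 10, (32,))         # placeholder for true digit labels
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()                              # computes how to nudge every parameter
```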

It takes a huge amount of work to provably understand and decode what each parameter is computing from the data (many are not that useful, or contribute little to the final result). Because it is so hard to provably decode a deep learning network (and because the billions of parameters change whenever you train it on new data), this has largely kept deep learning from completely engulfing the medical industry so far.
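
You can see the decoding problem even at toy scale. This sketch (my own illustration, nothing to do with any real medical system) trains the same tiny network twice from different random starting points; the two copies make nearly the same predictions, but their individual parameter values do not line up in any obvious way:

```python
# Train the same tiny network twice with different random seeds.
# Both copies learn the same simple function, yet their individual
# weights come out different: the "meaning" of any one parameter
# is not stable, which is why decoding them is so hard.
import torch
import torch.nn as nn

def train_once(seed):
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    x = torch.randn(200, 2)
    y = (x[:, :1] + x[:, 1:]) * 0.5           # target: average of the two inputs
    for _ in range(500):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), y)
        loss.backward()
        opt.step()
    return net

a, b = train_once(seed=0), train_once(seed=1)
probe = torch.tensor([[0.5, 1.0]])
print(a(probe).item(), b(probe).item())       # both predictions land near 0.75
print(a[0].weight[0], b[0].weight[0])         # first-layer weights: no obvious match
```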

Andrew Ng offered one of the first free online machine learning lecture courses from Stanford, starting in 2009. At the time it was just Classical Machine Learning, since Deep Learning was not a thing yet. One of his students from that course went on to invent Generative Adversarial Networks (GANs), which have been used extensively to train an AI chatbot to pass as a convincing human. In a GAN, one network mixes real data (a real person types a response) with fake data (the chatbot generates a response), and another network classifies which is which. The two train each other until the one generating fake text is almost indistinguishable from a human.
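
Here is the adversarial loop in miniature. This toy sketch generates plain numbers instead of text (a full text GAN is much more involved), but the structure is the one described above: the discriminator learns to separate real from fake, and the generator learns to fool it:

```python
# Toy GAN: the generator G turns noise into fake samples; the
# discriminator D sees a mix of real and fake samples and must guess
# which is which. Here the "data" is just numbers drawn from a bell
# curve around 4.0, so the whole loop fits in a few lines.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0          # real data: centered at 4.0
    fake = G(torch.randn(64, 1))             # generator turns noise into fakes

    # Discriminator's turn: learn to score real as 1, fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator's turn: learn to make fakes the discriminator scores as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # drifts toward ~4.0, i.e. "real-looking"
```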

These ChatGPT bots use hundreds of billions of parameters in the deep learning program. ChatGPT 4 (not released yet) is rumored to use 100 trillion+, which is also roughly the number of connections in the human brain. None of this was possible in the 2000s or before. AI has not been controlling us for 30 years; it has been doing so for about 7-8 years. The rest has been the Cabal and its programming of the masses. AI just automates Cabal programming with a smaller risk of a person on the other end spilling the beans.
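
For scale, some back-of-the-envelope arithmetic. The 175 billion figure below is GPT-3's published parameter count; the 100 trillion figure is the unconfirmed rumor repeated above, used here only to show the math:

```python
# Rough scale check for the numbers in this thread. Assumes 2 bytes
# per parameter (16-bit floats), a common storage format.
gpt3_params = 175e9        # GPT-3's published parameter count
rumored_gpt4 = 100e12      # unconfirmed rumor repeated in the comment above
brain_synapses = 100e12    # rough common estimate of human brain connections

bytes_per_param = 2        # fp16
print(f"GPT-3 weights:  {gpt3_params * bytes_per_param / 1e9:,.0f} GB")    # ~350 GB
print(f"100T weights:   {rumored_gpt4 * bytes_per_param / 1e12:,.0f} TB")  # ~200 TB
print(f"rumor vs brain: {rumored_gpt4 / brain_synapses:.0f}x")             # ~1x
```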

1 year ago
1 score