The term "artificial intelligence" is a lie. It is hype - a marketing term used by people trying to create a market, create demand, and then sell a product to meet the manufactured demand.
In reality, A.I. is more accurately referred to as "machine learning". These systems, LLMs included, can find patterns in data and then perform specific actions when specific patterns are found. That is all. There is no comprehension at all.
In other words, A.I. has zero understanding of the meaning of any of the words it accepts as input or produces as output. It only knows the statistical patterns and relationships between those words, which it finds by analyzing literally billions of examples.
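To make the "patterns, not meaning" point concrete, here is a toy sketch (my own illustration, nothing like a production LLM): a bigram model that learns only which word tends to follow which, with zero grasp of what any word means.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record which word follows which -- pure co-occurrence statistics."""
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=8, seed=0):
    """Walk the learned patterns; the model has no idea what it is 'saying'."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the log"
model = train_bigram(corpus)
print(generate(model, "the"))
```

The output is always locally plausible word-order, because word-order is the only thing the model ever stored.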
I think the real push for the whole A.I. scam is to provide cover for the globohomo cabal when the videos of their many crimes are publicized... they can say "it's all fake A.I.... that was not me", etc.
In reality, machine learning is incredibly expensive and not very useful in most real-world scenarios. It is a classic "solution in search of a problem", and provides no actual ROI for business... just like all the other fads that have come and gone in recent years.
AGI (artificial general intelligence) would be a true intellect like a human's. Based on everything we know, machine-learning A.I. is nowhere near achieving AGI... which means either Elon is talking out of his ass, or there are technologies in play behind the scenes that we, the general public, know nothing about.
Thank you!! I thought I was missing something here. But you pretty much reiterated what I was already feeling intuitively about AI. Another hype to make people afraid of something unseen.
Man, I don't have the time at this second to drop some info here, but I just wanted to say you are VERY wrong on this. Wildly wrong.
Cool story bro... But massive distributed software systems, big data, and data analytics are what I do for a living (and have for the past 25 years now).
The cost of constantly retraining the LLMs plus the infrastructure costs are just too damn high when using real data for training, and trying to use A.I.-generated data to train the models results in more and more "hallucinations" (the bullshit term they use for garbage outputs). Training the LLM on generated inputs ultimately wrecks the whole model.
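The degradation from training on generated outputs shows up even in a toy statistical model (my own illustration, not a real LLM training run): repeatedly fit a distribution to samples drawn from the previous generation's fit, and the spread collapses toward zero.

```python
import random
import statistics

def refit_on_own_samples(mu=0.0, sigma=1.0, n=20, generations=1000, seed=1):
    """Each generation: draw n 'synthetic' samples from the current model,
    then refit the model (mean/stdev) to those samples alone.
    Returns the history of the fitted standard deviation."""
    rng = random.Random(seed)
    history = [sigma]
    for _ in range(generations):
        samples = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        history.append(sigma)
    return history

history = refit_on_own_samples()
print(f"spread: start={history[0]:.4f}, end={history[-1]:.4g}")
```

Each refit loses a little of the true distribution's variety, and the losses compound across generations -- the same feedback-loop mechanism behind model collapse, in miniature.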
Businesses don't have anything to sell from the machine-learning outputs that earns more money than all the massive infrastructure and model-training costs... which means negative ROI. They end up losing money.
ChatGPT infrastructure alone costs almost a quarter of a billion dollars a year in the AWS cloud - and that does not include any of the huge costs of training the models (lots of people manually labeling, categorizing, and tagging all the inputs and then scoring all the outputs... very time-consuming and very expensive - and it NEVER ENDS).
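For scale, that annual figure lines up with the widely reported (unaudited) estimate of roughly $700k per day to serve ChatGPT:

```python
# Back-of-envelope check of the "quarter of a billion a year" claim,
# assuming the widely reported ~$700k/day serving-cost estimate.
daily_cost = 700_000
annual_cost = daily_cost * 365
print(f"${annual_cost:,}/year")  # → $255,500,000/year
```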
So - I would say that you are the one that is "wildly wrong".
Now in terms of building a total surveillance state to enslave humanity (like communist China is doing right now)... the capabilities of "A.I." are indeed extremely useful... and tyrannical governments usually have deep pockets. That right there is your best potential market.
I've been PE at FAANG/MAMAA for about 15 years now. So. Hi.
If you are solely positioning your thesis around LLMs, then I think it's precarious. My personal ROI as an engineer is insane. I'm far more productive by all measures. If you're a desk-worker and haven't figured out how to augment your productivity with an LLM, you're going to be left behind.
Other baked ML algos, trained fast on discrete datasets, already have multiple ROI-positive applications, and have for years. Cataloging, CV (computer vision), predictive analytics, speech recognition, NLP, etc., etc. I can't imagine you've not seen this if you truly are in big data?
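As one concrete instance of the kind of "baked" algorithm meant here, a naive Bayes text classifier (toy data and labels invented for illustration) trains in milliseconds on a discrete dataset -- the technique behind ROI-positive spam filtering for decades:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label) pairs. Learns word counts per label."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict(model, text):
    """Pick the label maximizing log prior + smoothed log likelihood."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((word_counts[label][w] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [
    ("ship your package today", "spam"),
    ("win a free prize now", "spam"),
    ("meeting notes attached", "ham"),
    ("project status update", "ham"),
]
model = train_nb(docs)
print(predict(model, "free prize"))  # → spam
```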
There is a BIG difference between being "intelligent" and being "smart".
"Intelligence" is knowing that a man can legally self-identify as a woman and enter a women's locker room at a workout gym.
"smart" is knowing not to try that in Texas.
"AI is programmed by humans" -- a common misconception. For one thing, AIs are already PROGRAMMING THEMSELVES to some extent (see below).
Current forms of AI (Large Language Models, for instance) are trained on vast amounts of data and use that data in ways that are not always possible to predict, as recent news items about strange AI behavior suggest. Humans program the framework, as it were, of the AI, but not in such a way that precise predictions can be made about the system's behavior in every situation.
Even simple programs often show unexpected behavior, which is why beta-testing is done -- programs do NOT always do what their creators expect in every situation, either because of programming errors or because the program encounters something the programmer did not expect. Most people today have experienced unexpected behavior from something on their computers at some point.
AI is far more complex than traditional programs, both in the program itself and in the data it works with. Like the weather, this complexity and the immense, ever-changing data set (including varied and unpredictable queries) make precise prediction often impossible and ensure that even broad behavior is sometimes unexpected.
Here's a response FROM an AI -- Brave Browser's LEO -- to the query "Are AIs programming themselves to some extent already?"

AI systems are becoming increasingly capable of programming themselves to some extent. This is known as "self-programming" or "program synthesis." Self-programming AI systems use machine learning algorithms to generate code or modify existing code based on a set of inputs or goals. While these systems are still in the early stages of development, they have the potential to significantly improve the efficiency and effectiveness of software development and other tasks that involve programming. However, it's important to note that self-programming AI systems are not yet at the level of human programmers, and they still require human oversight and input to ensure that they are functioning correctly and producing high-quality results.
Good information, but would you call them intelligent? That would indicate something more than self-programming it seems
Is an automobile "fast?"
Compared to a human, yes, of course.
Computers -- even hand-held calculators from decades ago -- are much faster at MATH than humans are. So are they "intelligent?"
Of course not. Human intelligence is organic and hugely multifaceted; it is based in an embodied organic being who moves through the world and lives socially among other humans, animals, plants, and other features of the natural world. This is why we have common sense and why we understand many things that computers do not. Also, we are genetically related to every other life-form on Earth, which is why we have empathy and a sense of connection to other life. Computers, not so much.
So computers are more CAPABLE and FASTER at certain tasks, and the sheer computing power and staggeringly large data available to them gives them the ability to mimic human responses more and more -- and will at some point, if not already, allow them to mimic us so well that we'll be unable to tell the difference in many situations.
Their intelligence will remain of a different quality from ours for a long time, however. And don't ever start believing they have any actual empathy for us.
So all it will do is utter statements that no one can understand as we are too dim to do so. Much like Joe Biden?
🤣🤣🤣
I mean, ChatGPT is not intelligent, it just has a bunch of data it spews out... What am I missing here?
'Futurism reports that in a recent interview with Norway wealth fund CEO Nicolai Tangen, Elon Musk shared his thoughts on the rapid advancement of AI. Musk, who is known for his hype-filled predictions, stated that he believes AI will surpass human intelligence by 2026. "If you define AGI as smarter than the smartest human, I think it's probably next year, within two years," he said, as quoted by Reuters. AGI refers to artificial general intelligence, a far more powerful form of AI than the generative AI made popular by tools such as ChatGPT.'
In simplest terms, yes, humans program it. We also program calculators; that doesn't mean my single brain is a calculator.
We make advanced "AI" search engines bouncing off as much information as possible; that doesn't mean the smartest human can do the same.
No, but it doesn't make your calculator a self-programming, intelligent entity, does it?
I'm just explaining it from my simple POV. Apparently, like the guy below talks about, a lot of the AI seems to be programming itself now.
Depends on how you define intelligence. Being able to give you accurate answers, having access to the breadth of human knowledge, and being able to reference such information quickly and accurately? Yes.
Being able to learn and create things on its own, free of humans? Substantially more unlikely.
There have been numerous instances of AI models having emergent capabilities not explicitly programmed by the developers. But not much to suggest truly organic evolution.
It's also fairly likely they have access to information not readily available to the general population. Maybe a corporation has whipped something up. Or DARPA. There was speculation a few years ago that Google unintentionally created something, after curious behavior from the company and its censors. But nothing definitively confirmed.
Because eventually mankind will create SELF LEARNING AI. Then we're screwed.
Self-learning and AI put together are still limited to human input. I don't see any intelligence there, just putting facts together.