Why did General Nakasone join OpenAI? Because THIS. There is a TON of work to do!
(twitter.com)
Well... to give people some context:
Note: AGI = Artificial General Intelligence = an AI with human-level general intelligence, able to handle essentially any cognitive task a person can (often conflated with full artificial consciousness, which is a separate question).
The guy writing the paper everyone is quoting is just a finance guy, pumping up his own investment. He is by no means an expert in the field of machine learning.
He is simply extrapolating past progress in LLMs in a straight line. Few trends in nature stay straight forever (although they can look that way over a short stretch).
He glosses over several major problems that are effectively show-stoppers, waving them away with "hand waving" explanations.
It is important to remember that machine learning (referred to as Artificial Intelligence for hype) is only able to recognize patterns in data. Really. That's it. Machine learning has literally zero comprehension, and does not understand the meaning of a single word used for inputs or outputs. It is only trained to recognize patterns in content created by people.
I am not saying AGI is impossible, but GPT-4 is nowhere near true AGI.
As an example, he makes a big deal about scores on math benchmarks. But many of those benchmark questions, along with their correct answers, are floating around the web and end up in the training data (so-called benchmark contamination). On top of that, armies of human workers are paid to label data and rate model outputs (which is how the system "learns" the difference between good answers and bad answers). People do that labeling and rating manually precisely because the LLM has no intelligence or comprehension regarding any input or output.
So, for the math test example, the LLM can pattern-match the test question against near-identical questions it saw in training and reproduce the memorized answer. It need not understand one bit of either the question or the answer; the score is inflated by pattern recognition over leaked data.
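A toy sketch of the memorization point: if a benchmark question's word sequences already appear in the training corpus, a high score can come from recall rather than reasoning. Everything here (the corpus, the questions, the n-gram overlap metric) is made up for illustration; real contamination checks are far more involved.

```python
# Toy sketch: detecting benchmark contamination via n-gram overlap.
# If a test question's n-grams already appear in the training corpus,
# a high score may reflect memorization rather than reasoning.
# All strings here are made-up examples.

def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(question, corpus, n=3):
    """Fraction of the question's n-grams already seen in the corpus."""
    q = ngrams(question, n)
    if not q:
        return 0.0
    seen = set()
    for doc in corpus:
        seen |= ngrams(doc, n)
    return len(q & seen) / len(q)

training_corpus = [
    "if a train travels 60 miles in 2 hours what is its average speed",
    "the capital of france is paris",
]
leaked = "if a train travels 60 miles in 2 hours what is its average speed"
fresh = "a cyclist rides 45 kilometers in 3 hours find the mean velocity"

print(overlap_ratio(leaked, training_corpus))  # 1.0 -> fully contaminated
print(overlap_ratio(fresh, training_corpus))   # 0.0 -> genuinely unseen
```

A model can ace the first question by recall alone, which says nothing about whether it could solve the second.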
So, this paper being quoted is from an investment guy pumping his own investment... which he does not really understand very well.
What are the show-stopper problems? First, the companies pushing machine learning are running out of human-created content they can use to train their LLMs. They have resorted to scraping huge amounts of data without permission, and the lawsuits over those alleged thefts of intellectual property have only recently begun.
Some are instead trying to have one LLM generate content that can then be used to train another LLM. This has worked poorly in practice, and in the worst case it can degrade the whole model generation after generation. So, they are already running low on training data after scraping much of the Internet, and they cannot simply generate their own. What will they do? Nobody has a convincing answer yet.
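The degradation from training on a model's own output (researchers have dubbed the worst case "model collapse") can be illustrated with a deliberately tiny stand-in for an LLM: a Gaussian fitted to data, where each generation trains only on samples from the previous generation. This is purely a toy under my own assumptions, not how LLM training actually works, but it shows the mechanism: rare tail-end data is progressively lost and the fitted spread shrinks toward zero.

```python
# Toy illustration of "model collapse": each generation is "trained"
# (fitted) only on synthetic samples drawn from the previous
# generation's fitted model. The estimated spread tends to shrink
# over many generations, so diversity in the data dies out.
import random
import statistics

random.seed(0)

def fit(samples):
    # "Training" = fitting a mean and std dev to whatever data we get.
    return statistics.fmean(samples), statistics.pstdev(samples)

def generate(mean, std, n):
    # "Generating synthetic data" = sampling from the fitted model.
    return [random.gauss(mean, std) for _ in range(n)]

# Generation 0 trains on real data; every later generation trains ONLY
# on synthetic data sampled from the previous generation's model.
real_data = [random.gauss(0.0, 1.0) for _ in range(20)]
mean, std = fit(real_data)
history = [std]
for _ in range(200):
    mean, std = fit(generate(mean, std, 20))
    history.append(std)

print(f"spread: generation 0 = {history[0]:.3f}, "
      f"generation 200 = {history[-1]:.3f}")
```

The spread after 200 generations ends up far below where it started, even though no single step looks dramatic.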
Second, the expense of running and training LLMs vs. the revenue they generate. Using GPT-4 as an example, the cloud infrastructure to run it and store all the data reportedly costs roughly $250 million per year. The cost of the army of people needed to label data and rate outputs is also enormous... and neither of those expenses ever ends. Training is never finished. As for revenue to cover all of those massive expenses, realistically there is little to none.
Remember the DotCom bubble in the 90's? Remember all the companies that burned through so much money and produced no profits, ever? Yeah, A.I. is kind of like that, all over again.
Beware the A.I. hype. Don't totally ignore A.I., but realize that a lot of the hype comes from people that are out to make as much money from gullible people as they can, before the bubble bursts. Sadly, you see a lot of the same sort of hype from the Bitcoin Bros... just as a heads up.
LOOVVE your take here. Can't agree with you more. I work in a corner of the behemoth federal gov't dept responsible for "defending" our country... and all of the libtards I work with are absolutely STAR STRUCK and mesmerized with AI, and how it's going to deliver us all to the Promised Land by doing all of our work for us.
I fucking HATE IT. But, I forced myself to be on an IPT to put together an internal AI training course for our organization... primarily focused on Microsoft's Copilot, which is based (I believe) on OpenAI's GPT models. Bottom line... like so much emerging tech these days... AI makes people LAZY. I also teach undergrads, and the m'fers constantly try to "cheat" by submitting discussion replies and academic papers mostly written by AI. And I report them up the chain, and ensure that they are properly documented as PLAGIARISTS.
Again, AI can be a helpful tool...but for the most part, it just enables the WORST in humanity right now, both from those who are programming the AI to tell us what the cabal wants us to see/believe, and those who are taking advantage of it to just to "skate through."
I see the same thing with programmers, trying to use A.I. to write their code for them. However, one of the biggest security threats right now is open-source supply chain exploits - in other words, malicious code being inserted into open-source projects, which are then used to build software systems (with the hacks already built in).
A.I. makes this problem so much worse: developers copy/paste code supplied by an A.I. assistant directly into the codebase, and that code can pull in malicious, typosquatted, or outright invented dependencies that nobody ever reviewed. Once InfoSec figures this out, they will have no choice but to go on an anti-A.I. rampage.
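One cheap mitigation, sketched below under assumptions of my own (the allowlist and package names are invented), is to gate every dependency, however it was suggested, against a human-reviewed allowlist before it can enter the build.

```python
# Minimal sketch of a pre-commit style gate: refuse any dependency
# that is not on a reviewed allowlist. This catches one common failure
# mode, where an assistant suggests a package name that was never
# vetted (or that it simply invented), a prime typosquatting target.
# Package names below are illustrative, not real recommendations.

REVIEWED_ALLOWLIST = {"requests", "numpy", "flask"}

def audit_dependencies(requested):
    """Return a sorted list of packages that have not been reviewed."""
    return sorted(set(requested) - REVIEWED_ALLOWLIST)

# e.g. an assistant suggested "reqeusts" (a typosquat of "requests")
# and "fastauthkit" (a name it may have made up entirely):
suggested = ["requests", "reqeusts", "numpy", "fastauthkit"]
unreviewed = audit_dependencies(suggested)

if unreviewed:
    print("BLOCKED - unreviewed packages:", unreviewed)
```

An allowlist is crude, but it forces a human review step between "the assistant suggested it" and "it ships in the build."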
I do wish we had more logical and critical thinkers like yourself in our organization.
AI can then be used to reverse-engineer that code and flag anything malicious.
Reminds me of "Dolly" the sheep back in 1996, who was cloned from a single adult somatic cell. At the time it was a fearsome breakthrough in technology, but the predicted cloning of people in similar fashion just did not happen, largely due to technological limitations. It did give rise to some cell therapies, which have proven only marginally successful for all the $$ and market hype dumped into them, but we are still not at a place where whole organs can be generated from a single cell, never mind an entirely cloned human body.
Whether it's machine learning or the polymerase chain reaction amplifying the DNA that cells ultimately translate into proteins, in the end it is all still information... sophisticated information, to be sure, but still just information as a strand. There are 3-D cross-linkages and internal bondings that define the spatial structure of those molecules, which in turn become functional proteins.
If information integrity in-process cannot be perfectly assured, errors will result in failures, not new and improved organisms (or mechanisms created through machine learning of any kind). It is why Darwinian evolution by random chance strikes me as such a deeply flawed concept.
Well, I think you're oversimplifying things here a little bit. Although he may have a conflict of interest, it's clear from his work that he is in close collaboration with, and very closely exposed to, the behind-the-scenes realities of these projects. He's not merely extrapolating the progress of AI in a linear way; in fact, the progression from GPT-2 to GPT-4 tracked a straight line on a log scale, i.e. exponential growth (as does Moore's Law, BTW). Such extrapolations can track real-world trends given sufficiently robust real-world tracking data.
To clarify, his projections of growth rates into the future are linear (not the underlying growth curves themselves). In other words, he assumes the rate of growth seen in the past will continue unchanged into the future (that is the linear part). This rarely, if ever, happens in reality.
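The point about projecting a past growth rate forward can be made concrete with made-up numbers: a straight-line projection and a saturating S-curve can agree for a while, then diverge badly once the real process flattens out.

```python
# Toy illustration: projecting a past growth rate forward in a straight
# line assumes the trend never bends. If the real process saturates
# (an S-curve), the straight-line projection overshoots.
# All numbers here are invented for illustration.
import math

def linear_projection(x0, slope, t):
    # Constant growth rate: the trend line continues unchanged.
    return x0 + slope * t

def logistic(t, ceiling=10.0, rate=1.0, midpoint=5.0):
    # A saturating S-curve that flattens as it approaches its ceiling.
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# The two agree exactly at t=5, but the line keeps climbing while the
# S-curve levels off near its ceiling of 10.
for t in [2, 5, 12]:
    line = linear_projection(0.0, 1.0, t)
    curve = logistic(t)
    print(f"t={t:2d}  straight-line={line:5.1f}  s-curve={curve:5.2f}")
```

Which curve you are actually on is only obvious in hindsight, which is exactly the problem with confident multi-year projections.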
The other big problem he has is that he talks as if the current machine learning models already have the equivalent of varying levels of human intelligence, when in fact they have literally zero intelligence. This is probably why he is so far off regarding his timeline to achieve AGI. GPT-4 is nowhere near true AGI. Not even close.
Now factor in that he just started a firm to invest in AGI. If that is 30 to 50 years in the future, it would be difficult to encourage people to invest in his new firm... but if AGI is magically "only a few short years away"... I think you can figure out his motives for yourself.
Back in the DotCom bubble, people fervently believed that all kinds of crazy shit was true... until the whole thing crashed and burned, and reality bitch-slapped everyone into the next decade. Good times...
Well said friend!
I agree with this!
Calmer heads do exist, thank you.
Long-Term Capital Management comes to mind. But, like the alien ships, the fear porn remains strong.