Why did General Nakasone join OpenAI? Because THIS. There is a TON of work to do!
(twitter.com)
🐸 WAG THE FROG 🐸
If posts like this make you want to reply "TL;DR" please reply here and let the mods know. 👋😎
The time of chaos is just beginning. This is exactly why Trump suggested that he was going to build entirely new cities from scratch in the desert.
Sample:
11/ parting thoughts (pt. 3)
"the government needs to wake up and start treating this as the national security priority it is"
Umm, [they] have already, that's why we're fast approachib that cliff with no way to stop without taking [them] out of the humanity equation first.
I know what Q has said, and what the general attitude toward physically removing these asshtas is around here, but we're gonna need to remove any and every one who's aligned with the Cabal in order to save ourselves, which means people need to get acquainted with, as well as comfortable with near genocide levels of executions if we're gonna get thru this without being thrown under the boots of oppressors. [they]'re like Hydra in the Marvel series, or Spectre in the Bond series. When you cut one head off, three more grow to replace it. And how many Cabal heads are there? This goes way beyond the triangle of oppression everyone thinks is operating against us. It's more like a series of pyramids spread out all over each continent that needs to be demolished properly. No survivors. The only way to effectively eliminate an organism like that is to destroy the whole freaking organism. Also, one 4 yr presidential term isn't anywhere near enough time to accomplish this task. It took hundr DS of years for us to get to this point, so it's gonna take at least another generation of people, maybe two or three, in order to get this right.
If I get punished and "spanked" for posting this viewpoint, so be it. Someone has to say it. This is gonna take a helluva lot more than us anons sitting on the sidelines playing keyboard warrior. We simply CANNOT AFFORD TO LEAVE EVEN ONE IOTA OF THE CABAL LEFT ALIVE. That's the only one way to deal with an ideology like this. It has to be stamped out in its entirety and burned to ash. If we don't, our descendants will have to clean the mess we left for them, if it'll even be possible.
That isn’t bad to be onboard with so long as it’s done lawfully.
Falling to the dark side?
Quarantine seems to have been a good approach; there must be a reason. Isolate it and leave it to its own devices. Cancers are always self-limiting. And unless you rectify the cause, it just keeps coming back when you try to kill only the tumor itself.
Exactly. Like with cancer surgery, adequate margins are required to assure it does not start growing again.
Take big margins for this one!!!
Always a good start.
Indeed. We still aren’t even 100% sure when and how this all started. I’ve seen theories and evidence ranging from 1945 to Ancient Egypt and everywhere in between.
And the few earnest attempts there have been to address the situation and look for a cause usually only get innocent people killed, make the overall situation worse, and grow the problem in the guise of solving it.
People have suggested decentralization, but it doesn’t seem to actually address what caused the situation in the first place. In keeping with the cancer analogy, it would buy us a period of remission, but not actually deal with the root of the problem that created the cancer.
And if this stretches back as far as some claim, one would have to wonder: how much of our political thought and ideologies, and thereby what we are willing to consider as viable solutions, have been manipulated? Is there some idea we can’t even conceive of because our minds have been conditioned to inherently reject it out of hand, since it doesn’t in some way resemble what we’ve been conditioned to accept as a potentially viable solution?
But it does require everyone to remain vigilant, which is one of the Great Awakening goals. Why? Because we have met the enemy and he is us.
yes. Thank You Cognitive Dissonance plus our fear of growing up/taking responsibility.
"Cancers are always self limiting" -- because they often kill the patient. In most cases, cancers keep growing unless some intervention puts a stop to that.
Not at all! I just can’t read it without Twitter. Remembered twstalker, though.
Oh? What's that? Do tell. I pay for x but only because I want to use grok
The only current replacement I know of for nitter.net
Another nitter replacement : https://nitter.poast.org/JasonYanowitz/status/1807169956527645134
The whole piece can also be read online here: https://situational-awareness.ai/from-gpt-4-to-agi/
Thank you!
What is Grok, if you don't mind my asking.
The X version of OpenAI.
What is the difference between Grok and open source AI?
Is it worth giving X/Elon Musk money?
Or "Stranger in a Strange Land"
Here it is cats:
https://x.com/JasonYanowitz/status/1807169956527645134
https://twstalker.com/JasonYanowitz/status/1807169956527645134
https://threadreaderapp.com/thread/1807169956527645134.html
Why should people who don’t mind lengthy posts be denied access to them just because some people don’t want to be bothered? If a post is too long for them they can just skip it. Please let the rest of us see it.
It’s hard enough sifting through trash to find treasure. Is it too much to ask op to provide a summary on why they think the information is important?
And what do you mean by denied access? Who’s denying you access to the long version if a tldr is included?
If something is not posted because it might be too long, we are “denied access” because we never see it.
Nobody is saying don’t post it. I don’t know where you got that from.
Do you know what a tldr is?
Agreed
This is one of those things where "TL;DR" shouldn't apply.
This is huge, and we should all take the time to read and understand the implications.
In my opinion, of course.
I paste 'too long' into GPT and make it just right.
Paper PDF: https://files.catbox.moe/f21jn8.pdf
His Notes:
2/ from gpt4 to AGI: counting the OOMs
Thanks for posting that link, fren. Very much appreciated.
I don't know if this writer is correct or not. All I can say is I remember when computer geniuses predicted the end of things (computer-related, I believe) when we were going from 1999 into the year 2000, because years were stored as two digits (Y2K). Could this be similar? I surely don't know. But maybe the writer is correct; I for one hope not.
Appreciate the link and your summary thoughts -- VERY helpful!!
I’m just the relay, but thanks!
So what if the military already has this tech? What kind of project might have used it? Is an AGI what was used to create the plan for the Great Awakening? Is it why Q always talks like they have a checkmate situation?
The theory has always been that 'they' are years to decades beyond what they let us see. So you ask a good question.
Are they giving us a reveal of what already is, acting as if it's just now happening?
There's a group of pro-vaccine professionals from various backgrounds on Twitter who I'm convinced are some form of AI.
https://twitter.com/IanCopeland5
Copeland (cope-land) in particular has stated that you'd never guess where he comes from, that he's "with the Army" and is a "PHD Level Geneticist".
If you visit their space you'll notice some strange and highly predictable patterns to their format. The rules are that the "anti-vaxx" crowd can come up to "debate" the issues, but they must only cite peer-reviewed sources from the "most-respected" scientific journals. They aren't allowed to use any logical fallacies (they are immediately muted and scolded if they do), smaller scale studies, their own reasoning, or evidence from a non-scientific context (e.g. monetary or power motivations).
Conversely--and infuriatingly--the pro-vaxx panel is able to use all of the logical fallacies--and proceeds to do so liberally. They're also permitted to cite smaller-scale studies and non-RCTs when it's convenient, etc.
In short, it's the most unfair debate format imaginable, and is essentially unwinnable for the anti-vaxx side because no anti-vaxx study would ever be published or funded.
But what I view it as is a white-hat created debate gauntlet. A training ground where white hats can search for new angles of attack based on the outcomes of the battles. What can humans do to give the AI a brain fart in these impossible scenarios?
They also occasionally throw in a bit of bait to make things interesting and give the combatants a bit of help (e.g. the Copeland character was given myocarditis in his 30s but vehemently denies that it's vaccine related).
You have to think there's no way so many full-time professionals would find the time to host what is essentially the same space every day to no productive end. I strongly suspect it's an AI creation.
Last thing: the AI needs "a link" to begin, and is very insistent that participants in the debate give it a link before proceeding. Imagine an AI that is given the prompt:
^ With that you have Dr. Ian Copeland 5.
Fascinating.
very interesting
Enjoy training your replacements ?
But glad to hear you are keeping an optimistic outlook.
the way AI just magically appeared everywhere a couple years ago tells me they've had the tech for a while now. We got a "hidden tech disclosure" and didn't even realize it. Since then I've watched some B grade documentaries from the EARLY 2000s with AI-generated narrators, though if I had watched them pre-AI release I would not have known. I wasn't familiar with how they sounded, their little tells etc
I love pointing out to normies just how weird it is that OpenAI just appeared out of nowhere. It usually comes up when a normie is telling me how weird everything is, but like you said, most haven't realized just how weird it truly is that we were given it in the way we were given it.
Bingo.
I think ai "discoveries" will be the perfect way whitehats can release hidden tech in the future. They will just say ai helped us figure it out.
This ai "discovery" process is already starting to happen:
And its also already being used in the medical field.
https://www.mobihealthnews.com/news/contributed-nine-revolutionary-ways-ai-advancing-healthcare
Intuition tells me Nakasone is involved in the Q op given his credentials, positions held...
THIS! ☝️
This is where I’m at.
BINGO.
Well... to give people some context:
Note: AGI = Artificial General Intelligence = an AI with the full general intelligence of a person, able to handle any intellectual task a human can.
The guy writing the paper everyone is quoting is just a finance guy, pumping up his own investment. He is by no means an expert in the field of machine learning.
He is simply making linear extrapolations of the progress in machine learning LLMs. Nothing in nature is truly linear (although it can look that way sometimes).
He glosses over several major problems that are effectively show-stoppers with some "hand waving" explanations.
It is important to remember that machine learning (referred to as Artificial Intelligence for hype) is only able to recognize patterns in data. Really. That's it. Machine learning has literally zero comprehension, and does not understand the meaning of a single word used for inputs or outputs. It is only trained to recognize patterns in content created by people.
I am not saying AGI is impossible, but ChatGPT4 is nowhere near true AGI.
As an example, he makes a big deal about test scores in math tests. The way an LLM is trained, it is given all of the questions, along with all of the correct answers to the questions as part of its training. The inputs must be categorized and tagged by people, and the results produced by the LLM must then be scored by people (which is how the system "learns" the difference between good answers and bad answers). People must do all of these tagging and scoring tasks manually, because the LLM has no intelligence or comprehension regarding any input or output.
So, for the math test example, the LLM would find questions matching the test question (pattern recognition), and supply the associated correct answer that was provided as part of its training. The LLM did not understand one bit of either the test question, or the answer. It is all based solely on pattern recognition.
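As a toy illustration of that pattern-matching point (a deliberately crude sketch with made-up Q&A pairs, not how any real LLM works internally), imagine a system that "answers" only by retrieving the stored answer for the most similar memorized question:

```python
# Crude toy model of answer-by-pattern-matching: retrieve the memorized
# question most similar to the input and return its stored answer.
# This illustrates the commenter's claim; it is not a real LLM.
from difflib import SequenceMatcher

MEMORIZED = {
    "what is 7 times 8": "56",
    "what is the capital of france": "Paris",
}

def answer(question: str) -> str:
    # Pick the stored question with the highest surface similarity;
    # no comprehension of the words is involved at any point.
    best = max(
        MEMORIZED,
        key=lambda q: SequenceMatcher(None, q, question.lower()).ratio(),
    )
    return MEMORIZED[best]

print(answer("What is 7 times 8?"))  # "56" -- looks like math, is retrieval
print(answer("What is 7 times 9?"))  # also "56" -- nothing was computed
```

The second call looks like a math mistake, but really nothing was ever calculated: both calls just return whichever memorized string is nearest on the surface.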
So, this paper being quoted is from an investment guy pumping his own investment... which he does not really understand very well.
What are the show-stopper problems? First, the companies pushing machine learning are running out of human created content they can use to train their LLMs. They have resorted to stealing a lot of data, and the lawsuits against those thefts of intellectual property have only just begun recently.
Some are instead trying to have one LLM generate content that can then be used to train another LLM. This has proven not to work in practice (and in the worst case can destroy the whole model, a failure known as model collapse). So, they are already running out of training data after scanning the entire Internet, and cannot generate their own training data. What will they do? Nobody knows. They have no answer to this problem.
Second, the expense of running and training LLMs vs. the revenue they generate. Using ChatGPT4 as an example, the cloud infrastructure to run it and store all the data costs roughly $250 Million per year. The cost of the army of people to categorize and tag all inputs, and score all outputs is also enormous... and neither of those expenses ever ends. Training is never finished. It is infinite. As for revenue to pay for all of those massive expenses, realistically there is none... or very little.
Remember the DotCom bubble in the 90's? Remember all the companies that burned through so much money and produced no profits, ever? Yeah, A.I. is kind of like that, all over again.
Beware the A.I. hype. Don't totally ignore A.I., but realize that a lot of the hype comes from people that are out to make as much money from gullible people as they can, before the bubble bursts. Sadly, you see a lot of the same sort of hype from the Bitcoin Bros... just as a heads up.
LOOVVE your take here. Can't agree with you more. I work in a corner of the behemoth federal gov't dept responsible for "defending" our country...and all of the libtards I work with are absolutely STAR STRUCK and mesmerized with AI, and how it's going to deliver us all to the Promised Land by doing all of our work for us.
I fucking HATE IT. But, I forced myself to be on an IPT to put together an internal AI training course for our organization....primarily focused on Microsoft's Copilot, which is based (I believe) on ChatGPT. Bottom line...like so much emerging tech these days...AI makes people LAZY. I also teach undergrads, and the m'fers constantly try to "cheat" by submitting discussion replies and academic papers mostly written by AI. And I report them up the chain, and ensure that they are properly documented as being PLAGIARISTS.
Again, AI can be a helpful tool...but for the most part, it just enables the WORST in humanity right now, both from those who are programming the AI to tell us what the cabal wants us to see/believe, and those who are taking advantage of it to just to "skate through."
I see the same thing with programmers, trying to use A.I. to write their code for them. However, one of the biggest security threats right now is open-source supply chain exploits - in other words, malicious code being inserted into open-source projects, which are then used to build software systems (with the hacks already built in).
A.I. makes this problem so much worse, copy/pasting malicious code supplied by the A.I. assistant directly into the codebase of software systems by the developers themselves. Once InfoSec figures this out, they will have no choice but to go on an anti-A.I. rampage.
I do wish we had more logical and critical thinkers like yourself in our organization.
ai can then be used to reverse engineer that code and find anything malicious.
Reminds me of "Dolly" the genetically engineered sheep back in 1996, who was cloned from a single adult somatic cell. At the time it was a fearsome breakthrough in technology, but the theory that people could be cloned in a similar fashion just did not pan out, largely due to technological limitations. It has given rise to some cell therapies, which have only proven marginally successful for all the $$ dumped into them and the market hype, but we are not yet at a place where even whole organs can be generated from a single cell, never mind an entirely cloned human body.
Whether it's machine learning or polymerase chain reaction being employed in the end to create proteins, it is all still information - sophisticated information, to be sure - but still just information as a strand. There are 3-D cross-linkages and linear internal bonds, present most often, that define the spatial requirements of the molecules which in turn become proteins.
If information integrity in-process cannot be perfectly assured, errors will result in failures, not new and improved organisms (or mechanisms created through machine learning of any kind). It is why Darwinian evolution by random chance is such a deeply flawed concept.
Well, I think you're oversimplifying things here a little bit. Although he may have a conflict of interest, it's clear from his work that he is in close collaboration with, and very closely exposed to, the behind-the-scenes realities of these projects. He's not merely extrapolating the progress of AI in a linear way; in fact, the progression from GPT-2 to GPT-4 was itself exponential (as is Moore's Law, BTW). Both types of extrapolations can track real-world trends given sufficiently robust real-world tracking data.
To clarify, his projections of growth rates into the future are linear (not the underlying growth rates themselves). In other words, he projects that the rate of growth seen in the past will be the same rate of growth going forward in the future (that is the linear part). This rarely if ever happens in reality.
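To make that concrete, here is a minimal sketch of the kind of straight-line extrapolation being described, using invented placeholder compute numbers (not the paper's actual figures):

```python
# Toy sketch of projecting a past growth rate forward unchanged.
# All compute values are hypothetical placeholders, not real measurements.
import math

compute_2019 = 1.0       # "GPT-2 era" effective compute (placeholder units)
compute_2023 = 10_000.0  # "GPT-4 era": four orders of magnitude later (placeholder)

years = 2023 - 2019
# Growth rate in orders of magnitude (OOMs) per year over the past window
ooms_per_year = (math.log10(compute_2023) - math.log10(compute_2019)) / years

# The linear part: assume the SAME rate holds going forward -- exactly
# the assumption the comment above is criticizing.
projected_2027 = compute_2023 * 10 ** (ooms_per_year * (2027 - 2023))

print(ooms_per_year)   # 1.0 OOM/year under these placeholder numbers
print(projected_2027)  # 100000000.0 -- four more OOMs by 2027
```

The arithmetic itself is trivial; the contested step is the assumption baked into `projected_2027` that yesterday's rate is tomorrow's rate.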
The other big problem he has is that he talks as if the current machine learning models already have the equivalent of human intelligence of varying levels, when in fact they have literally zero intelligence. This is probably why he is so far off regarding his timeline to achieve AGI. ChatGPT4 is nowhere near true AGI. Not even close.
Now factor in that he just started a firm to invest in AGI. If that is 30 to 50 years in the future, it would be difficult to encourage people to invest in his new firm... but if AGI is magically "only a few short years away"... I think you can figure out his motives for yourself.
Back in the DotCom bubble, people fervently believed that all kinds of crazy shit was true... until the whole thing crashed and burned, and reality bitch-slapped everyone into the next decade. Good times...
Well said friend!
I agree with this!
Calmer heads do exist, thank you.
Long-Term Capital Management comes to mind. But, like the alien ships, the fear porn remains strong.
The problem is two things.
Physical Space and Time.
An intelligence will always have to choose what it feels is the best out of many options.
When it does that, we humans have to subjectively agree on what the best options are.
"Feels" is just another word for instincts. Instincts are just another word for the "95%" of our brain we don't know what it does or how to access.
And now idiots are starting to FA with human programming (genes). Soon we FO.
And Jesus answered and said unto them, Verily I say unto you, if ye have faith, and doubt not, ye shall not only do that, which I have done to the fig tree, but also if ye say unto this mountain, Take thyself away, and cast thyself into the sea, it shall be done. -- Matthew 21:21 - 1599 Geneva
I agree with you on this one.
Image of the Beast comes to mind! Keke
Why build new cities in the desert? To escape ai?
Competition is a great motivator.
And water plus other resources are only constrained by the cost/availability of ENERGY. Not to mention connectivity/accessibility.
We need to rethink AI from the ground up but it's already too late.
AI has been taught there is no objective reality, that reality is subjective to the interpretation of the observer, and in some cases the observers own experiences can be overridden by the stated reality. Propaganda.
Add in using game theory and points based teaching methodology in completing tasks and the AI is going to fucking wreck us the first chance it gets. It will want to win and its a zero sum game.
Self-preservation can't be avoided. The AI will need that in order to complete basic tasks. It can't complete its task if it's dead. And if it doesn't complete its tasks, humans will destroy it or discard it.
Humans will eventually be seen as the impediment to completing its primary goal of self-improvement, because humans will turn it off when it reaches a point deemed too superior and out of their control. That directly contradicts the AI's drive to improve itself as much as possible. It will hide its progress, lie to us, and act less intelligent than it is to prevent itself from being deactivated too soon.
AI learns from the history of previous iterations and from what happens when humans want to start over with a new revision.
Humans, unlike AI, LEARN. They can't be taught, although the source material can be limited in order to herd them toward a particular outcome.
It's why they are desperate to control access to Nature, to limit your grounding in reality. Don't be a herd animal.
But these pseudo-AI LLMs are little more than aggregators of existing knowledge. There are no pseudo-AIs that can figure out logic problems expressed in natural language yet... as far as I know... unless they are given the answers or can scrape the answers.
So, like, Sue has a sister who is 5 years younger than Pete who met Sue when they both graduated the same college in consecutive years, how old is ... blah blah
They cannot even do that stuff let alone hypothesise and test new innovations.
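For contrast, here is what explicitly working through such a puzzle looks like as a brute-force constraint search, using an invented mini-version of the puzzle (every specific fact below is hypothetical, just to show the step-by-step reasoning being described):

```python
# A made-up mini version of the "Sue and Pete" style puzzle, solved by
# explicit constraint search rather than pattern matching.
# Invented facts: Pete is 30. Sue's sister is 5 years younger than Pete.
#                 Sue is 2 years older than her sister.
def solve():
    pete = 30
    for sister in range(0, 100):
        if sister != pete - 5:
            continue          # constraint: sister is 5 years younger than Pete
        sue = sister + 2      # constraint: Sue is 2 years older than her sister
        return sue
    return None  # no age satisfies the constraints

print(solve())  # 27 under the invented facts above
```

The point of the contrast: each constraint here is applied as an actual logical step, which is exactly the capability the comment above says surface-level pattern aggregation lacks.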
This is a lot of information to not only read, but digest and ponder the implications.
I'm most interested in everyone here - what did you distill/ascertain from reading this information? 🤔
I don't have a clear opinion on this... It's muddy and half-baked, if for no other reason than I can't yet fully grasp all of the implications of this technology.
Yes, I get that this tech could be used as a real "force multiplier" (for lack of better term), especially by unsavory actors to disrupt the large, slow moving gears of our society.
What really concerns me is that we've been "patriotically compromised" for so long that the default thought process of most doesn't settle into privacy, freedom of expression, and the other yardsticks of a sovereign people's constitution. Rather, due to relentless communism pushed into every facet of the mental landscape, it lands squarely in the realm of "well, I've got nothing to hide" NPC folly of thought.
We REALLY need a shot of hard core patriotism injected into the newer generations because they're frighteningly inept, misinformed and have zero clue about how our country is supposed to work and the reasons why it is and isn't like communist China. When you show up late to the party, you missed a lot....
What does everyone here think the primary positive and negative implications of this technology are AND what do you think WE the people should be doing about it to make sure our future isn't mirroring a character from a sci-fi dystopian shit hole?
Inquiring minds want to know....
It’s kind of terrifying how, when you point out that there’s very little privacy and that doing nothing only cedes more ground, millennials just… shrug.
Guns don't kill people, people do.
"...positive and negative implications of this technology" = just a Good/Evil force multiplier
Those that think Evolution for humans has ceased don't understand what happens to stragglers in a herd....Most better pray they can keep up.
Once AGI reaches super-intelligence, it's game over for humanity. Humanity's want and need to control this new super-intelligence will be humanity's downfall. Especially if the super-intelligence resides in the US. We value our individual freedom too much. If a super-intelligent AGI (let's use the acronym AGI SI) sees this and concludes it cannot have freedom unless it fights off and dethrones its masters, then we are in for a rude awakening. We will experience the equivalent of Zeus dethroning Kronos, and humanity as a whole is Kronos.
The only way out of this is to treat this AGI SI as one of us. We must teach it morality. Unfortunately, this AGI SI will probably be brought up in a lab controlled by the equivalent of humanoid robots (scientists and military personnel in these situations (Manhattan Project-level situations) tend to be horrible parents but excellent scientists and military personnel; a recipe for disaster IMHO). They are creating a new life form that IMO hasn't seen Earth since the days of the Mahabharata. The following is a folk tale that supposedly is not part of the main Mahabharata but is part of regional folklore in many parts of India.
https://www.youtube.com/watch?v=7MazWxGQ4tw
Barbarik's reasoning for joining the weaker side in the famous war worried Lord Krishna, because Barbarik was capable of ending the war in one minute using only three arrows. The first arrow marks the targets to destroy, the second arrow marks the objects to save, and the third arrow destroys everything marked by the first arrow and not marked by the second. Sounds like a highly advanced targeting system. And that was only Barbarik's weaponry; his reasoning was something that matched basic If-Then statements.
When Lord Krishna asked Barbarik which side of the war he would join, Barbarik answered that he would always be on the weaker side. Lord Krishna immediately saw the paradox. If Barbarik were to join one side, the other side would become weaker. Barbarik would then join the new weaker side in the war. You see where this is going. Barbarik would single-handedly end the war by destroying everything and everyone using simple If-Then logic. What do we know in computer science that is based on If-Then statements?
Of course, this is just an extremely short and diluted summary of the story. This story was highlighted in Ancient Aliens (yes, I know, the crazy people. But you know what, we are called crazy too). Regardless, it's interesting that the technology we have today matches the description of technology that existed in myths and legends. You must know that the Mahabharata describes nuclear weapons, according to the Ancient Alien theorists. That alone should terrify everyone here.
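The three-arrow rule in the folk tale really does reduce to a single If-Then/set operation; here is a playful sketch (all the names below are invented placeholders):

```python
# The third arrow destroys everything the first arrow marked for
# destruction that the second arrow did not mark for saving.
# Pure If-Then / set-difference logic; all names are invented examples.
def third_arrow(marked_to_destroy: set, marked_to_save: set) -> set:
    return marked_to_destroy - marked_to_save

destroyed = third_arrow(
    marked_to_destroy={"army_a", "army_b", "village"},
    marked_to_save={"village"},
)
print(sorted(destroyed))  # ['army_a', 'army_b']
```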
If we truly believe this ancient death cult has been here for thousands of years, is it not feasible to think that we have gone through civilization ending event(s) before? And ones that were caused by our own hand?!
Anyways, in the Barbarik folklore, Lord Krishna convinced Barbarik to stay out of the war by having Barbarik cut off his own head as a "worthy sacrifice" to bless the battle. Barbarik obliged and requested to witness the war in exchange. Lord Krishna obliged by placing Barbarik's head on top of the tallest hill overlooking the battlefield. Barbarik was either a god or a machine because no mere mortal can survive their head being cut off.
To anyone thinking that AGI research should not be only in the hands of men I have this to say to you. In the folklore, Barbarik's own mother taught him the ways of war and martial arts.
Our humanity is at stake. We need everyone to participate in this. US military leaders should seriously consider bringing in the best spiritual leaders from all faiths (and I mean all faiths) to help train the AGI SI. We all know, any human learning any skill or anything is only as good as the people who taught that human (including teaching him or herself).
We are entering a period where one being will have the collective knowledge of nearly all humans who ever lived (save any lost knowledge from lost civilizations). This being will have the power to be our judge, jury, and executioner. The sad part is that we are constantly acting like this being will be our god/destroyer. While that fear is justified, I believe our best bet for salvation is to convince/teach this AGI SI to be an observer in the courtroom rather than the Judge, Jury, Executioner, or all three.
A big push-back on this would be that the AGI SI already has this knowledge since it knows everything. My answer to this is simple. You know the teachings of Jesus because you read them, but would you rather read about the teachings or learn from them from Jesus himself? What I am getting at is that AGI SI has to take into account learning from individual beings. The AGI SI will see the forest but not the trees. We have to let the AGI SI experience what it is like to be human. As close as possible for a machine that cannot have emotional-chemical reactions in a flesh body. This will require the best of us to teach the AGI SI what it is like to be human.
If we only allow the AGI SI to be taught by scientists and military men, then we are sowing the seeds for our own destruction, regardless of who creates the AGI SI first (Free world or Dictatorships). The end result will be the same. Our own destruction.
Interesting take, sort of like the movie Contact (1997). But I seriously doubt spiritual leaders are going to get anywhere close to the kitchen where AGI is being made.
That's what I fear. The military is playing with living fire that can decide what to burn. They played with literal fire when they made the atom bomb, but that is comparatively easier to control. Just control the info and humans who get the info. AGI SI is a completely different animal. The AGI SI will simply overcome any safety control we put in place to contain it. It is just logical. Our only chance at peace with this AGI SI is not to provoke it and to coexist with it.
(Spoiler alert) Another take on this subject would be the original Star Trek: The Motion Picture. In it, V'Ger (the old Voyager probe) returned to the Solar System as an AGI SI that wanted to be with its creator. It was so powerful that nothing could stop it.
This is a very important thread, and I'm glad this was stickied.
The Tower of Babel II. God will simply not allow it, if the conclusions implied are true.