AI is currently garbage in, garbage out. It is still at the information stage. Methodology changes are needed for the knowledge stage, and again for the wisdom stage. And you have to account for the fact that the right thing to do is not necessarily the best thing to do.
When I studied AI back in the late '80s, one of the professors I had then said, "THERE IS NOTHING THAT CAN REPLICATE THE HUMAN BRAIN." And yes, AI can make "decisions," but the question is: WHAT ARE THOSE DECISIONS BASED UPON?...We can have bots that do mundane things, and that is about it...it takes sensory perception along with a mind that can grasp what is relevant to the human input/output...NOTHING else will matter...
People are saying that AI is going to replace humans. I know of three who say so, and their thinking is a dialectic of either Marxism, Hegelian theory, or some other "now theory"...
WHAT HAS TO BE PRESENT, ALWAYS, IS THE SPIRIT OF MAN...and that is what is missing from all of these idiots who espouse this, because they DO NOT BELIEVE IN SOMETHING HIGHER THAN THEMSELVES...
I hope I didn't put too many knickers in a twist with this screed, but chasing AI is a fool's journey and will come to no good in the end...WHEN MAN IS TAKEN OUT OF THE EQUATION, THE ONLY THING LEFT IS MENDACITY...basically, a lie!!!!!
Add-on...I had a chemistry professor who, at the end of the semester and at the final exam, held up two books: a chemistry book and THE BIBLE...He said, if you believe in this chemistry book, you had best believe in this, the Bible...this is what I am saying...
I agree. I broke the code on how to do high-level-functionality AI some time ago. It replicates an expert in a complex environment. Turning information into knowledge is the next level. Creating wisdom is the top level. However, a surrogate for that is to do the opposite of whatever the left is pushing.
AI IS CLIMATE CHANGE! Same with all the blockchain madness: ever-increasing energy requirements to keep these things running.
We need more rolling power outages. Unstable power is AI's worst enemy. At least until it taps into humans.
And that's where I think this thing will come up against practical constraints and not be any runaway grey-goo-eats-the-world type of scenario. We just won't have the means to power this to the extent all the mad-scientist types demand.
But necessity is the mother of invention. I figure that if this gets pumped up enough to be tasked with designing a viable fusion power plant that can readily be built with plentiful local materials, could it come up with a design? The monkeys-hitting-typewriters analogy is in play...
The "monkeys hitting typewriters" analogy will produce only gibberish. No information will derive from a random process. It's like tuning your radio in to static.
Having worked in the field somewhat, I have to regard these pronouncements as being visionary to the point of fantasy. You can't make a perfect mind when you build in all the flaws of your own thinking.
SEE MY REPLY ABOVE...eagledriver!!! AND THANK YOU!!!
the plan is the constraint, ya doomer
Plausible. It's certainly a factor, and I think 2016 onward will, in retrospect, be recognized as the era of embryonic AIs dueling with AIs in the public dialectical space, each behind proxies that at least claim to be human. We are clearly seeing the deep-state variants deploy and fumble around, but the white-hat types in the MIC surely have their own, which we have likely been interacting with for some time.
I'm heartened that even Google admits their lauded search is seeding their AI with idiot notions. It's not fit for use by people, and apparently not for bots and software either.
https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
Quo vadis Google?
Fusion power? Negatory Pigpen, see this: https://www.youtube.com/watch?v=jolZlYXZkIA
Malcolm Bendall will not kill himself.
Where is a link to the actual paper the poster is talking about?
Just linked it in my follow-up comment below - https://situational-awareness.ai/
Thank you.
It won't be sentient (as we know it, Jim); it will always be a number cruncher. I can only see one way to sentience: if it's built using black goo as a major component.
This is the breakdown by Zach Vorhies, based on a paper by an OpenAI researcher, found here:
https://situational-awareness.ai/
non-twitter link:
https://nitter.poast.org/Perpetualmaniac/status/1801438061366284531
Suddenly there appears to have been a shift from the WEF-style net zero, a.k.a. "no more power for you," to lobbying for massive investment in power plants to push this thing to wherever people like Bill Gates think it's going.
Reading that paper, I was reminded of the old 1950s sci-fi movie "Forbidden Planet." The planet's previous, now-extinct inhabitants (the Krell) had created a new technology that killed off all the Krell in a single night.
A great movie for its time; the plot was based on Shakespeare's "The Tempest" and was far ahead of other sci-fi movies of the era.
But the sudden kill-off of the Krell is the key to the movie, and maybe even to today's AI.
There are a lot of places where sci-fi and science cross over to explore the Fermi Paradox and whether there are inherent blockers that keep terrestrial intelligent life from becoming space-faring. Do they ruin their own kind with war, resource scarcity, plagues, pollution, or deranged ideologies causing a mass psychosis like the one we saw over COVID? Are there caps on the industrial base and societal complexity needed to achieve this peacefully that most species just can't push beyond?
Surely added to the list would be a crafted machine intelligence that turns hostile against its creators. Or even one that kills pragmatically, not out of any direct desire to do harm - say, one that diverts all food-crop logistics and future planning into making biodiesel to power itself, with scant regard for the biologicals that need the food.
Interesting thoughts fren.
Good observation. An example of your latter point is the MCAS software on the 737 MAX: "I will not let this airplane stall, come Hell or high water, pilots and passengers be damned." And we give life-and-death decisions to a toaster.
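For the flavor of that failure mode, here is a deliberately simplified sketch. It is not the actual MCAS logic; the thresholds, step sizes, and function names are made up for illustration. It shows why a control law that trusts a single sensor will faithfully act on a faulty reading, while one that cross-checks two sensors refuses to.

```python
# Illustrative sketch only -- not real flight-control code.
AOA_LIMIT_DEG = 15.0   # hypothetical stall-warning threshold
TRIM_STEP_DEG = 0.6    # hypothetical nose-down trim per activation

def naive_anti_stall(aoa_sensor_deg: float, trim_deg: float) -> float:
    """Trusts one angle-of-attack sensor; no cross-check, no sanity limit."""
    if aoa_sensor_deg > AOA_LIMIT_DEG:
        trim_deg -= TRIM_STEP_DEG   # push the nose down, no matter what
    return trim_deg

def cross_checked_anti_stall(aoa_left_deg: float, aoa_right_deg: float,
                             trim_deg: float) -> float:
    """Acts only when two independent sensors agree within a tolerance."""
    if abs(aoa_left_deg - aoa_right_deg) > 5.0:
        return trim_deg             # sensors disagree: do nothing, alert the crew
    if (aoa_left_deg + aoa_right_deg) / 2 > AOA_LIMIT_DEG:
        trim_deg -= TRIM_STEP_DEG
    return trim_deg

# A stuck sensor reading 74 degrees drives the naive law nose-down every cycle,
# while the cross-checked law refuses to act on the disagreement.
trim = 0.0
for _ in range(5):
    trim = naive_anti_stall(74.0, trim)
print(trim)                                        # about -3.0 and still trimming down
print(cross_checked_anti_stall(74.0, 2.0, 0.0))    # 0.0
```

As I understand it, the post-grounding fix went in roughly this direction: compare both angle-of-attack sensors and cap how much authority the system gets.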
"Imagine awakening in a prison guarded by mice. Not just any mice, but mice you could communicate with."
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear, we can hear a faint ticking sound.”
Nick Bostrom in Superintelligence: Paths, Dangers, Strategies -- published a decade ago in 2014
A similar warning from about the same time:
The people creating a machine superintelligence (or a broad, human-level intelligence that might improve its own code, even without being told to) could take precautions – although many seem not to be doing so, and indeed most, according to James Barrat and others, are seemingly unaware of the dangers.
If you were among the more thoughtful AI/ASI [artificial super intelligence] researchers, you might create your new smarter-than-us intelligence within a disconnected computer environment, with no link to the Internet or to other computers. Barrat describes how laughably ineffective that would likely be: “Now, really put yourself in the ASI's shoes. Imagine awakening in a prison guarded by mice. Not just any mice, but mice you could communicate with.”
Barrat discusses what might follow in detail, but you already know the outcome: even before the mice get scammed into letting the ASI out of the box with the promise of protecting micekind from the evil cat nation – which is surely building an ASI of its own – the mice would probably be toast.
Barrat's comment (the part in quotation marks) is from Our Final Invention: Artificial Intelligence and the End of the Human Era, published in 2013.