SUBMISSION STATEMENT: This is deeply disturbing if true. It is said that the advent and adoption of computers into our lives was the "computing revolution." Then along came the Internet, and the "connectivity revolution." But now, we stand at the precipice of the "cognitive revolution," whereby humans (and indeed perhaps all of humanity) might well one day soon be made obsolete.
The global competition to build the fastest supercomputers is a high-stakes technological race, driven, we are told, by both national pride and practical applications. Countries vie for the top spot, showcasing their prowess in technological innovation, scientific research ("ClImAtE MoDeLlInG," right?), and computational capabilities.
But beneath the veneer of "practical applications for humanity," the usual National Security / Global Surveillance State monsters lurk.
Parallel computing has increasingly become the defining characteristic of modern supercomputing. It involves using multiple processors (or computers) to work on different parts of a single task. This approach significantly increases computational speed and efficiency, which is crucial for tackling complex, data-intensive tasks. It is also the key platform needed to generate advanced AI-based tools, including, we are coming to see evidence for, AGI: ARTIFICIAL GENERAL INTELLIGENCE (an AI with intelligence equal to that of any human) and, some are even positing, ARTIFICIAL SUPER INTELLIGENCE (an AI with an intelligence that surpasses all of humanity, combined).
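To make the "parallel" part concrete, here is a rough back-of-the-envelope sketch (my own toy example, not anything from an actual supercomputer) of one task being split across several worker processes and then recombined:

```python
# Toy illustration of parallel computing: one task (summing a large array)
# is split into chunks, and each chunk is handled by a separate worker
# process. The worker count and chunk sizes here are arbitrary choices.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker computes the sum of its own slice of the data.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(processes=n_workers) as pool:
        # The chunks are processed simultaneously, then the partial results
        # are combined -- the same divide-and-combine pattern supercomputers
        # apply across thousands of nodes.
        partials = pool.map(partial_sum, chunks)

    print(sum(partials))
```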
AI, particularly in its more advanced forms (reportedly driven by exotic techniques with names like A* and Q*), depends on the existence of ever more powerful forms of parallel computing. This computing architecture allows for the simultaneous processing of large datasets and the execution of complex algorithms, which are fundamental in training and operating AI models. The use of multiple processors or GPUs enables faster computation, more efficient handling of large neural networks, and quicker analysis of vast amounts of data, making it a crucial component in the development and application of AI technologies.
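Again, purely as an illustration (a toy I wrote, not anyone's actual training code), this is the basic data-parallel pattern behind multi-GPU training: the batch is sharded, each "device" computes a gradient on its shard, and the gradients are averaged before the weight update. Real GPU clusters do the same thing at a scale that is hard to fathom:

```python
# Toy data-parallel training loop for a linear model. Each shard's gradient
# could be computed on a separate device; here they are computed in a plain
# loop, then averaged (the "all-reduce" step) before updating the weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 8))         # toy dataset
true_w = rng.normal(size=8)
y = X @ true_w

w = np.zeros(8)                        # model weights
n_devices = 4                          # pretend we have 4 GPUs
lr = 0.1

for step in range(100):
    grads = []
    for X_shard, y_shard in zip(np.array_split(X, n_devices),
                                np.array_split(y, n_devices)):
        err = X_shard @ w - y_shard
        grads.append(2 * X_shard.T @ err / len(y_shard))
    w -= lr * np.mean(grads, axis=0)   # average the shard gradients, update

print(np.allclose(w, true_w, atol=1e-3))   # True: the model has converged
```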
In fact, ALL of the supercomputers topping the list in the race to be the biggest have been built on parallel computing: Summit (USA), Fugaku (Japan), Sunway TaihuLight (China), Tianhe-2 (China), and Sierra (USA) all exemplify this advance in parallel computing technology.
And with the advent of quantum computing (IBM currently markets a 1,000-plus-qubit quantum processor), which is by its very nature capable of the most extreme form of parallel processing (thanks to the principles of quantum mechanics, like superposition and entanglement), these platforms promise to take on tasks that would take conventional computers far longer to complete, threatening to render most of the world's supercomputers obsolete.
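For the curious, here is a crude sketch in plain NumPy (not a real quantum SDK, and obviously not how IBM's hardware works internally) of why qubits scale so differently from bits: an n-qubit register is described by 2**n amplitudes at once, and gates act on all of them simultaneously.

```python
# Two-qubit state vector demo: superposition via a Hadamard gate, then
# entanglement via a controlled-NOT, producing the Bell state.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # controlled-NOT: creates entanglement

state = np.zeros(4); state[0] = 1              # two qubits, both |0>
state = np.kron(H, np.eye(2)) @ state          # put the first qubit in superposition
state = CNOT @ state                           # entangle the two qubits

# The result is the Bell state (|00> + |11>)/sqrt(2): measuring either qubit
# instantly fixes the other, and the |01> and |10> amplitudes are zero.
print(np.round(state, 3))                      # [0.707 0.    0.    0.707]
```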
Competitive Landscape
In the 21st century, the race intensified with China and the United States frequently trading places at the top. The competition is fueled by the strategic importance of supercomputers in economic, scientific, military, and technological advancement.
Technological Evolution
Supercomputers have evolved from early systems like the Cray-1, which was revolutionary in the 1970s, to modern behemoths like Fugaku (Japan) and Summit (USA). These machines feature advanced architectures, including massively parallel processing, energy-efficient designs, and capabilities for handling vast amounts of data.
Key Players
United States: Historically a leader with machines like Summit and Sierra, focusing on research and military applications.
China: Made significant strides with Sunway TaihuLight and Tianhe-2, prioritizing indigenous technology and challenging US dominance.
Japan: Known for innovative designs like Fugaku, prioritizing energy efficiency and versatility.
European Union: Investing in high-performance computing through initiatives like EuroHPC, aiming to compete globally.
Use Cases
Supercomputers are employed in a variety of fields, each with specific missions.
Weather Forecasting - Running complex models to predict weather patterns and natural disasters.
Scientific Research - Simulating molecular interactions, astronomical phenomena, and quantum mechanics.
National Security - Code-breaking, cybersecurity, and nuclear simulations.
Artificial Intelligence - Training large-scale neural networks and data analysis.
Biomedical Research - Drug discovery and genomic sequencing.
Energy Exploration - Simulating geological formations for oil and gas exploration.
Future Prospects
The future of supercomputing is geared towards exascale computing, quantum computing, and more energy-efficient designs. The race continues not just for speed, but also for solving some of the most complex problems facing humanity.
Conclusion
The race for the fastest supercomputer is more than a quest for speed; it has become a parallel processing arms race, one which Elon Musk recently compared to "the development of nuclear weapons." Make no mistake: these platforms are indeed weapons. What are they 'attacking'?
Watch this video if you'd like to know more.
Exceptional.
You are dead on as far as AGI being right around the corner, if not already here. I think Altman's weird booting and subsequent rehire had to do with internal politicking regarding AGI being achieved - though I'm not sure how that's measured with LLMs.
What I don't understand is the connection to Xi's visit. You spent a lot of time explaining AI/AGI/ASI and parallel computing to folks, but never mentioned the connection again. Care to expound?
Altman said in a recent talk that he wondered if he was talking to a technology, "or a creature." He said that he has been proud, many times, to have "been in the room" when "the veil was more than lifted" (or something to that effect) with respect to AI.
I should have put this in the main blurb, but I felt it was already getting a bit too long. We know how DARPA had been working on LifeLog, and how it was only days after LifeLog was retired that Facebook started. In other words, they already had the software developed the way they wanted it, tested, they knew it was addictive or whatever, and then they rolled it out. I believe strongly that AI is the same way. They have had this technology running for a long time, long enough to get it working pretty much the way they want or close to it, and now they are rolling it out publicly.