World’s Fastest Supercomputers in 2026: El Capitan, Exascale Power, and the Machines Shaping Our Future

Bottom Line First

The world’s fastest supercomputer right now is El Capitan — a machine that cost $600 million to build, fills 7,500 square feet of floor space, and performs 1.742 quintillion calculations every single second. It is so powerful that it would take one million smartphones working simultaneously on the same problem to match what El Capitan does in one second — and that stack of phones would be more than five miles tall. This is the story of how we got here, what these machines actually do, and why it matters to everyone on the planet.
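That smartphone comparison survives a quick sanity check. The sketch below assumes a modern phone sustains roughly 1.7 teraFLOPS and is about 8.3 mm thick; both figures are our illustrative assumptions, not official specs:

```python
# Sanity check on the million-smartphones comparison.
EL_CAPITAN_FLOPS = 1.742e18   # HPL result, operations per second
PHONE_FLOPS = 1.742e12        # assumed ~1.7 TFLOPS per phone
PHONE_THICKNESS_M = 0.0083    # assumed ~8.3 mm per phone

phones_needed = EL_CAPITAN_FLOPS / PHONE_FLOPS
stack_miles = phones_needed * PHONE_THICKNESS_M / 1609.34

print(f"{phones_needed:,.0f} phones")     # 1,000,000 phones
print(f"stack ~{stack_miles:.1f} miles")  # ~5.2 miles
```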


What Is a Supercomputer — And How Is It Different From a Regular Computer?

Your laptop or smartphone is built to handle a wide variety of tasks reasonably well — browsing, video calls, editing documents. A supercomputer is built to do one thing: solve problems so mathematically complex, so astronomically large, that no regular computer — or even a million of them working separately — could finish in a reasonable time.

The performance of a supercomputer is measured in FLOPS — Floating Point Operations Per Second. A basic home computer today delivers around 100 GigaFLOPS (billion operations per second). A modern gaming PC might hit a few TeraFLOPS (trillion). The world’s fastest supercomputers now operate in the ExaFLOPS range — one quintillion operations per second. That is one followed by eighteen zeros.
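To make these prefixes concrete, here is a quick unit-arithmetic sketch of how long a 100-GigaFLOPS home computer would need to reproduce one second of exascale work:

```python
# FLOPS prefixes used in this article.
GIGA, TERA, EXA = 1e9, 1e12, 1e18

home_pc_flops = 100 * GIGA    # ~100 GigaFLOPS desktop
exascale_flops = 1 * EXA      # one ExaFLOP per second

seconds = exascale_flops / home_pc_flops
days = seconds / 86_400
print(f"{seconds:,.0f} seconds, about {days:.0f} days")  # 10,000,000 seconds, about 116 days
```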

This breakthrough threshold — one exaFLOP — was considered the “moon landing” of computing for decades. As of 2026, the world has crossed it with not one but four confirmed exascale systems.



El Capitan: The Undisputed #1 Supercomputer in the World

Lawrence Livermore National Laboratory (LLNL), in collaboration with the National Nuclear Security Administration (NNSA), Hewlett Packard Enterprise (HPE), and AMD, officially unveiled El Capitan as the world’s most powerful supercomputer and the first exascale system dedicated to national security. (Lawrence Livermore National Laboratory)

Deployed in 2024, El Capitan is capable of performing more than 2.79 exaFLOPS at theoretical peak performance (LLNL). In real-world benchmark testing — the High-Performance Linpack (HPL) test used by the TOP500 organization — El Capitan runs at 1.742 exaFLOPS and achieves 58.89 gigaFLOPS per watt, ranking among the top 20 most energy-efficient supercomputers on the planet. (HPE)
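The efficiency number can be cross-checked with simple division: Rmax divided by gigaFLOPS-per-watt gives the power drawn during the HPL run (a sketch; this is benchmark power, distinct from the facility's full-load draw):

```python
# Implied power draw during the HPL benchmark run.
RMAX_FLOPS = 1.742e18     # measured HPL performance, operations per second
GFLOPS_PER_WATT = 58.89   # reported energy efficiency

watts = RMAX_FLOPS / (GFLOPS_PER_WATT * 1e9)
hpl_power_mw = watts / 1e6
print(f"~{hpl_power_mw:.1f} MW during the benchmark")  # ~29.6 MW
```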

What Makes El Capitan Tick

El Capitan combines 11,039,616 CPU and GPU cores across its 43,808 AMD Instinct MI300A accelerated processing units. Each MI300A integrates 24 Zen 4-based CPU cores (4th Gen EPYC) and a CDNA 3-based GPU onto a single package, alongside 128 GB of HBM3 memory shared across both processors. (Wikipedia)

This is the key architectural innovation that sets El Capitan apart. Traditional supercomputers use separate CPUs and GPUs that must constantly transfer data between each other — creating bottlenecks. El Capitan’s AMD Instinct MI300A APUs eliminate this inefficiency by enabling seamless communication between processing units within a single package, allowing the system to handle massive data workloads with unprecedented speed and precision. (Microchip)
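A back-of-envelope sketch of the bottleneck being eliminated; the PCIe bandwidth and working-set size below are illustrative assumptions, not El Capitan measurements:

```python
# Time to shuttle a GPU-sized working set between a discrete CPU and GPU.
PCIE4_X16_GB_S = 32      # assumed ~32 GB/s per direction for PCIe 4.0 x16
WORKING_SET_GB = 128     # using the MI300A's 128 GB HBM3 pool as an example

copy_seconds = WORKING_SET_GB / PCIE4_X16_GB_S
print(f"one full copy: {copy_seconds:.0f} s")  # 4 s of pure data movement
# On the MI300A, CPU cores and GPU compute units share one HBM3 address
# space, so this copy never happens in the first place.
```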

How Big Is El Capitan, Physically?

El Capitan takes up 7,500 square feet of floor space — similar to two tennis courts. It is made up of at least 87 compute racks and has a total memory of 5.4375 petabytes. (Wikipedia)

El Capitan employs a 100% fanless, direct liquid-cooling system — a complete departure from the fan-based cooling used in conventional computing. All heat is removed through liquid coolant running through the system, which is what makes it possible to achieve this level of energy efficiency at such extreme scale. (AMD)

How Much Power Does It Consume?

El Capitan consumes more than 35 megawatts of power at full utilization — enough electricity to power a mid-size city (Tom’s Hardware). This is the fundamental challenge of exascale computing: raw performance and energy consumption scale together, and managing that relationship is as important as building faster chips.
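To put a number on the "mid-size city" comparison, a quick sketch; the average-household figure of 1.25 kW continuous draw (about 30 kWh per day) is our assumption:

```python
# Rough count of homes that 35 MW could supply.
SYSTEM_MW = 35
HOME_KW = 1.25           # assumed average continuous household draw

homes = SYSTEM_MW * 1_000 / HOME_KW
print(f"~{homes:,.0f} homes")  # ~28,000 homes
```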

How Much Did It Cost?

El Capitan cost $600 million to build. Construction began in May 2023 and it came online in November 2024, before being officially dedicated on January 9, 2025. (Wikipedia)


What Does El Capitan Actually Do?

This is the question most people never see answered. A $600 million computer sitting in California — what is it actually running?

1. Nuclear Weapons Stewardship (Without Testing)

The NNSA uses El Capitan to manage and modernize the nuclear stockpile and simulate the safety of nuclear weapons — ensuring the U.S. nuclear deterrent remains strong without returning to actual underground nuclear testing (HPCwire). Every nuclear weapon in the United States arsenal has a specific design with specific tolerances. Over decades, the materials inside these weapons age and change. El Capitan runs simulations detailed enough to certify that these weapons remain safe and functional without ever detonating one.

2. Tsunami Early-Warning Systems

Researchers at Lawrence Livermore National Laboratory, the University of Texas, and Scripps Institution of Oceanography were awarded the prestigious ACM Gordon Bell Prize for developing a real-time tsunami early-warning framework powered by El Capitan (Lawrence Livermore National Laboratory). This system processes seismic data fast enough to issue reliable warnings before destructive waves reach coastlines — directly saving lives.

3. Protein Folding and Drug Discovery

Scientists at LLNL and collaborators at AMD and Columbia University completed the largest and fastest protein structure prediction workflow ever run, using the full power of El Capitan (Lawrence Livermore National Laboratory). Protein folding — understanding the 3D shape proteins form — is critical for designing drugs that target specific diseases. The speed at which El Capitan can process these structures is measured in breakthroughs per week, not per year.

4. Rocket Simulation

Researchers used El Capitan to perform the largest fluid dynamics simulation ever — surpassing one quadrillion degrees of freedom in a single computational fluid dynamics problem, focused on rocket-plume interactions (Lawrence Livermore National Laboratory). This level of simulation fidelity is essential for designing next-generation propulsion systems where physical testing is extraordinarily expensive.

5. Climate and Fusion Energy Research

El Capitan supports research in fusion energy, climate science, power grid modernization, drug discovery, and other areas of public interest through its unclassified sibling system, Tuolumne. (AMD)


The Global Supercomputer Rankings: TOP500 List 2026

The TOP500 is the official global ranking of the world’s most powerful supercomputers, updated twice a year. Here is where the world stands:

| Rank | Name | Location | Performance (HPL Rmax) | Processor |
|------|------|----------|------------------------|-----------|
| 🥇 1 | El Capitan | Lawrence Livermore, USA | 1.742 ExaFLOPS | AMD MI300A APU |
| 🥈 2 | Frontier | Oak Ridge, USA | 1.353 ExaFLOPS | AMD MI250X + EPYC |
| 🥉 3 | Aurora | Argonne, USA | ~1.012 ExaFLOPS | Intel Xeon Max + GPU Max |
| 4 | JUPITER Booster | Jülich, Germany | 1.000 ExaFLOPS | NVIDIA GH200 (first non-US exascale) |
| 5 | Eagle | Microsoft Azure, USA | n/a (cloud HPC system) | NVIDIA H100 |
| 6 | HPC6 | ENI, Italy | n/a | AMD MI250X |
| 7 | Fugaku | RIKEN, Japan | 442 PetaFLOPS | ARM (Fujitsu A64FX) |
| 8 | Alps | CSCS, Switzerland | 434.9 PetaFLOPS | NVIDIA GH200 |
| 9 | LUMI | CSC, Finland | 379.7 PetaFLOPS | AMD MI250X |
| 10 | Leonardo | CINECA, Italy | 241.2 PetaFLOPS | NVIDIA A100 |

Source: TOP500.org, 65th and 66th editions (June 2025 / November 2025)

The most recent TOP500 list reflects continued U.S. leadership in high-performance computing, historic European milestones with JUPITER Booster becoming the first non-US exascale system, and growing global diversity across architectures and energy-efficient design. (TOP500)

The Exascale Club: Only Four Members Exist

With El Capitan, Frontier, Aurora, and JUPITER Booster, there are now four confirmed exascale systems in the world — all deployed within the last three years. All three US systems are installed at Department of Energy laboratories. (TOP500)

This is significant. For two decades, reaching one exaFLOP was the theoretical benchmark scientists and engineers chased. As of 2026, the world has four machines that exceed it.


Frontier: The Machine That Started the Exascale Era

Before El Capitan, there was Frontier — and its story is worth understanding.

In May 2022, the Frontier supercomputer broke the exascale barrier, completing more than a quintillion 64-bit floating point operations per second — clocking in at approximately 1.1 exaFLOPS and beating out the previous record-holder, Fugaku. (Wikipedia)

Frontier was the first machine in human history to demonstrably cross the exascale threshold in a verified, public benchmark. It held the #1 position until El Capitan came online in November 2024.

Built using AMD processors and HPE’s Cray EX architecture, Frontier supports diverse research ranging from AI training and medical research to materials science and nuclear physics. (The Knowledge Academy)

Today, Frontier remains the world’s second-most powerful supercomputer, which puts El Capitan’s margin of lead in perspective: El Capitan tops Frontier by roughly 390 petaFLOPS, a gap larger than the entire measured performance of ninth-ranked LUMI (379.7 petaFLOPS). (HPCwire)
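The arithmetic behind that gap, using the Rmax values from the TOP500 table above (a trivial check, but it makes the scale concrete):

```python
# Rmax values in PetaFLOPS, as listed in the TOP500 table above.
EL_CAPITAN = 1742.0
FRONTIER = 1353.0
LUMI = 379.7

gap = EL_CAPITAN - FRONTIER
print(f"El Capitan leads Frontier by {gap:.0f} PetaFLOPS")  # 389 PetaFLOPS
print(gap > LUMI)  # True: the gap alone exceeds LUMI's entire Rmax
```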


Fugaku: Japan’s Engineering Marvel That Changed ARM Forever

At rank #7, Fugaku deserves special mention because of its architectural significance.

Fugaku, with a theoretical peak performance of 537 petaFLOPS and 7,630,848 cores, was built by Fujitsu for the RIKEN Center for Computational Science in Japan. (RankRed)

The remarkable thing about Fugaku is not just its speed but what powers it: Fugaku is the first top-ranked supercomputer to be powered by ARM processors — specifically Fujitsu’s custom A64FX chip implementing the ARMv8 architecture (RankRed). ARM chips are the same architecture that powers your Android phone. Fugaku proved at the highest level of scientific computing that ARM could compete with x86 — a validation that now echoes in everything from Apple Silicon to Google’s Axion CPUs powering the new TPU 8 systems.

Japan has spent approximately $1 billion on Fugaku’s research, development, and applications since 2014. (RankRed)


JUPITER Booster: Europe’s Historic Milestone

JUPITER Booster at the Jülich Supercomputing Centre in Germany achieved something historic in the latest TOP500 edition.

JUPITER Booster submitted a new measurement of 1.000 ExaFLOP/s on the HPL benchmark — making it the fourth exascale system on the TOP500 and the first exascale machine outside the United States. (TOP500)

This matters geopolitically and scientifically. Europe’s €500 million investment in the EuroHPC Joint Undertaking — the initiative that funded JUPITER — is now producing exascale results. The US does not have a monopoly on this computing tier anymore.


China’s Secret Supercomputers

The TOP500 rankings tell only part of the story. There is a significant gap in the data.

China no longer submits new supercomputers to the TOP500 list but is widely believed to operate several exascale systems (Data Center Dynamics). This means the actual global ranking of the world’s most powerful computers is unknown — China’s machines likely exist somewhere on or above this list, but the rankings cannot include systems their operators choose not to benchmark publicly.

One example of this pattern: the OceanLight supercomputer at the National Supercomputing Center in Qingdao, completed in March 2021, was submitted for and won the Gordon Bell Prize but was never submitted to the TOP500 list — analysts suspected this was to avoid inflaming political tensions amid the US-China trade disputes. (Wikipedia)


The Key Technologies Powering Today’s Supercomputers

AMD Instinct MI300A: The APU Revolution

El Capitan’s secret weapon is AMD’s Instinct MI300A — an Accelerated Processing Unit (APU) that combines what were previously separate components into one chip.

The MI300A integrates 24 AMD “Zen 4” x86 CPU cores with 228 AMD CDNA 3 GPU compute units and 128 GB of unified HBM3 memory that presents a single shared address space to both the CPU and GPU, interconnected through AMD’s 4th Gen Infinity Fabric architecture. (AMD)

The significance: data no longer has to travel between a separate CPU and GPU over a relatively slow external bus. Everything shares the same high-bandwidth memory pool. At the scale of 43,808 such chips working together, eliminating that bottleneck translates directly into the performance gap between El Capitan and everything that came before it.
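As a consistency check, the published total core count can be reproduced from the per-chip figures quoted earlier (24 Zen 4 CPU cores plus 228 CDNA 3 compute units per MI300A, across 43,808 packages):

```python
# Per-package resources of one AMD Instinct MI300A, as described above.
APUS = 43_808        # MI300A packages in El Capitan
CPU_CORES = 24       # Zen 4 cores per package
GPU_CUS = 228        # CDNA 3 compute units per package

total_cores = APUS * (CPU_CORES + GPU_CUS)
print(f"{total_cores:,}")  # 11,039,616 - matches the TOP500 core count
```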

HPE Slingshot: The Network That Connects 11,000 Nodes

A supercomputer’s performance is not just about its processors — it’s about how fast those processors can communicate with each other. HPE Slingshot, an Ethernet-based high-speed fabric, serves as the backbone connecting El Capitan’s more than 11,000 nodes, enabling large calculations to be performed across the entire system as a single coordinated unit. (HPE)

Direct Liquid Cooling: The Only Way to Cool This Much Power

El Capitan employs a 100% fanless, direct liquid-cooling system built around eight cooling elements (AMD). At 35+ megawatts of power consumption, air cooling is simply impractical — the heat generated is equivalent to that of a small industrial facility. Liquid cooling runs coolant directly through the server racks, extracting heat at the point of generation rather than relying on airflow.
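For a feel of the scale of that cooling problem, a first-physics sketch of the coolant flow needed to carry away the full-load heat (Q = m·c·ΔT; the 10 °C temperature rise and pure-water coolant are our assumptions, not published specs):

```python
# Coolant flow required to remove El Capitan's full-load heat output.
# Assumptions (ours, not AMD's): water coolant, 10 K temperature rise.
HEAT_WATTS = 35e6        # 35 MW of heat at full utilization
CP_WATER = 4186          # specific heat of water, J/(kg*K)
DELTA_T_K = 10           # assumed coolant temperature rise

flow_kg_per_s = HEAT_WATTS / (CP_WATER * DELTA_T_K)
print(f"~{flow_kg_per_s:.0f} kg of water per second")  # ~836 kg/s
```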


How Supercomputers and AI Are Converging

One of the most important trends in 2026 is the merging of supercomputing and AI infrastructure. These were once separate worlds — supercomputers handled scientific simulation, AI training ran on GPU clusters. That separation is dissolving.

AMD CEO Lisa Su elaborated on this convergence at El Capitan’s launch: “It’s basically the same building blocks, configured in a different way” — highlighting that the technology developed for El Capitan directly enhances AI training systems. (Microchip)

Google’s TPU 8t supercomputer pods (covered in our previous article) are designed around the same principle: massive parallelism, high-bandwidth memory, and purpose-built interconnects. Google can connect more than 1 million TPUs across multiple data center sites into a single training cluster — essentially transforming globally distributed infrastructure into one seamless AI supercomputer. (Google Cloud)

The difference is access model: government supercomputers like El Capitan are classified facilities. Google’s AI supercomputer infrastructure is commercially accessible via Google Cloud. Both represent the same fundamental engineering philosophy — more chips, faster interconnects, and memory that keeps pace with compute.


Technical Q&A: What People Want to Know

Q: Is El Capitan open for public or research use?

No. El Capitan’s primary mission is classified national security work for the NNSA, and the weapons simulation code it runs is not accessible to outside researchers. However, LLNL also operates Tuolumne — El Capitan’s unclassified sibling system — for a wide variety of non-classified scientific research, including energy security, earthquake simulations, cancer drug discovery, and other public interest science. (LLNL)

Q: Can El Capitan run AI models like ChatGPT or Gemini?

Technically yes — the hardware is capable. But it is not designed or deployed for that purpose. El Capitan does support AI-based workflows including material discovery, design optimization, advanced manufacturing, digital twins, and intelligent AI assistants trained on classified data (Lawrence Livermore National Laboratory). The AI it runs is domain-specific scientific AI, not general-purpose conversational models.

Q: How does El Capitan compare to what NVIDIA is building?

El Capitan uses AMD processors, not NVIDIA GPUs. In fact, AMD powers five of the top ten fastest supercomputers on the TOP500 list, while Intel has three and NVIDIA has one (Tom’s Hardware). NVIDIA remains dominant in the commercial AI training market, but the TOP500 supercomputer list tells a different story — AMD has taken a commanding lead in high-performance scientific computing.

Q: What is the next step beyond exascale?

Japan has announced plans to begin building the first “zeta-class” supercomputer — a machine 1,000 times more powerful than today’s fastest exascale systems — with construction slated to begin in 2025 (RankRed). A zettaFLOP machine, performing one sextillion operations per second, would represent the same leap from exascale that exascale was from petascale. No completion date has been publicly confirmed.

Q: Does India have supercomputers in the global rankings?

India operates several significant HPC systems through C-DAC (Centre for Development of Advanced Computing) under the National Supercomputing Mission (NSM). PARAM Siddhi-AI, built under the NSM, achieved 4.6 PetaFLOPS performance and ranked in the TOP500 list. India’s current national HPC infrastructure, while not in the top 10 globally, is actively expanding — the NSM targets building petascale systems across IITs, IISc, and national research institutions as foundational infrastructure for scientific research and AI development.


Why Supercomputers Matter to Every Person on Earth

This is a fair question. A classified government computer in California — why should someone in Mumbai or Chennai care about it?

The answer is that virtually every major scientific breakthrough that will affect your life in the next 20 years will either be directly computed on a supercomputer or depend on knowledge that was. Consider:

  • COVID-19 vaccine development was accelerated using supercomputer simulations of protein-spike interactions — the same type of computation El Capitan now does orders of magnitude faster
  • Weather forecasting that tells you whether to carry an umbrella runs on HPC systems; better supercomputers mean more accurate forecasts further into the future
  • Drug discovery for cancer, Alzheimer’s, and antibiotic resistance depends on molecular simulations that are simply impossible without exascale computing
  • Nuclear fusion — the clean energy source that could power the planet without carbon emissions — requires plasma simulation at exactly the fidelity that exascale computing now makes possible
  • AI model training that produces tools like Gemini, ChatGPT, and Claude runs on infrastructure that shares its architecture with these machines

Summary: The State of Supercomputing in 2026

| Milestone | Detail |
|-----------|--------|
| World’s #1 supercomputer | El Capitan — 1.742 ExaFLOPS (HPL benchmark) |
| Peak theoretical performance | 2.79 ExaFLOPS |
| Number of confirmed exascale systems | 4 (El Capitan, Frontier, Aurora, JUPITER Booster) |
| First non-US exascale machine | JUPITER Booster, Germany (1.000 ExaFLOPS) |
| Cost of El Capitan | $600 million |
| Power consumption at full load | 35+ megawatts |
| Total cores in El Capitan | 11,039,616 CPU + GPU cores combined |
| Processor powering #1 and #2 in TOP500 | AMD (Instinct MI300A and MI250X) |
| Next frontier | Zetta-class computing (Japan, planning phase) |

We are living in the first era where exascale computing is real, operational, and producing results. The machines at the top of the TOP500 list are not science fiction — they are running right now, solving problems that no other instrument on Earth could approach. The next decade will be defined, in large part, by what these machines discover.


Sources & References

  • Lawrence Livermore National Laboratory — El Capitan Official Page: llnl.gov
  • TOP500 Organization — Supercomputer Rankings: top500.org
  • HPE Official Press Release — El Capitan Delivery: hpe.com/newsroom
  • AMD Engineering Blog — El Capitan Architecture: amd.com/blogs
  • Google Cloud Blog — AI Infrastructure at Next ’26: cloud.google.com/blog
  • LLNL Advanced Simulation and Computing Program: asc.llnl.gov

Published on Prowell Tech | All specifications sourced from official government and manufacturer documentation | TOP500 rankings current as of the 65th and 66th editions (June 2025 / November 2025)

