The challenge of high-performance computing threatens U.S. innovation

High-performance computing, or HPC for short, sounds like something scientists use in secret labs, but it is actually one of the most important technologies in the world today. From predicting the weather to discovering new drugs to training AI, high-performance computing systems help solve problems that are too big or too complex for ordinary computers.

Over the past 40 years, this technology has driven countless discoveries in science and engineering. But high-performance computing is now at a turning point, and the choices that governments, researchers and the technology industry make today could shape the future of innovation, national security and global leadership.

High-performance computing systems are essentially supercomputers built from thousands or even millions of processors working together. They also rely on advanced memory and storage systems to move and hold vast amounts of data quickly.

With all these features, high-performance computing systems can run extremely detailed simulations and calculations. For example, they can simulate how a new drug interacts with the human body, or how a hurricane moves across the ocean. They are also used in areas such as automotive design, energy production and space exploration.

Recently, high-performance computing has become even more important because of artificial intelligence. AI models, especially those used for tasks such as speech recognition and autonomous driving, require enormous computing power to train, and high-performance computing systems are well suited to the job. As a result, AI and high-performance computing now advance hand in hand, each pushing the other forward.

[embed]https://www.youtube.com/watch?v=jf3it4sr-s4[/embed]

Lawrence Livermore National Laboratory's supercomputer, El Capitan, is currently the fastest in the world.

I am a computer scientist with a long career in high-performance computing. I have observed that these systems are under more pressure than ever, facing growing demands for speed, data and energy. At the same time, the field confronts serious technical problems.

Technical challenges

A major challenge in high-performance computing is the gap between processor speed and the rate at which the memory system can deliver data. Imagine owning a superfast car but being stuck in traffic: if the road can't keep up, the speed doesn't help. Likewise, processors in high-performance computing systems often sit idle because the memory system cannot supply data quickly enough, which reduces the efficiency of the entire system.
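One common way to reason about this bottleneck is the roofline model, which caps a program's attainable speed by either peak compute or memory bandwidth, whichever runs out first. The sketch below uses made-up round hardware numbers, not figures for any real machine:

```python
# Roofline-style estimate of whether a computation is compute- or memory-bound.
# Both hardware figures are illustrative assumptions, not any specific chip.
PEAK_FLOPS = 10e12      # assumed peak compute: 10 trillion operations/second
PEAK_BANDWIDTH = 1e12   # assumed peak memory bandwidth: 1 terabyte/second

def attainable_flops(arithmetic_intensity):
    """Attainable performance (operations/second) for a computation that
    performs `arithmetic_intensity` operations per byte moved from memory."""
    return min(PEAK_FLOPS, PEAK_BANDWIDTH * arithmetic_intensity)

# Vector addition c = a + b does 1 operation per 24 bytes moved
# (two 8-byte reads plus one 8-byte write), so it is memory-bound:
# the processor could go 240x faster if memory could keep up.
print(attainable_flops(1 / 24))

# A computation that reuses each byte for many operations (like dense
# matrix multiplication) reaches the processor's full speed instead.
print(attainable_flops(50.0))
```

The takeaway matches the traffic analogy: for low-reuse computations, buying a faster processor changes nothing, because the memory "road" is the limit.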

Another problem is energy use. Today's supercomputers consume enormous amounts of electricity, sometimes as much as a small town. That is expensive and hard on the environment. In the past, as computer components shrank, they also used less power. But that trend, known as Dennard scaling, ended in the mid-2000s. Now, making computers more powerful usually means they use more energy, too. To address this, researchers are looking for new ways to design both the hardware and the software of high-performance computing systems for energy efficiency.
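To see why the electricity bill matters, here is a rough back-of-the-envelope estimate. Both inputs are illustrative round numbers, not figures for any real machine or utility:

```python
# Rough yearly electricity cost for a large supercomputer.
# Both numbers below are assumptions chosen only for illustration.
power_megawatts = 20     # assumed average draw, comparable to a small town
dollars_per_kwh = 0.10   # assumed electricity price

hours_per_year = 24 * 365
kwh_per_year = power_megawatts * 1000 * hours_per_year
cost_per_year = kwh_per_year * dollars_per_kwh

print(f"${cost_per_year:,.0f} per year")  # on the order of $17.5 million
```

Even with these conservative assumptions, power alone runs into tens of millions of dollars a year, which is why energy efficiency has become a first-class design goal.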

There are also problems with the kinds of chips being made. The chip industry is mainly focused on AI, which works well with low-precision math such as 16-bit or 8-bit numbers. But many scientific applications still need 64-bit precision to produce accurate results. The more bits a number has, the more digits of precision it can carry. If chip companies stop manufacturing the parts scientists need, important research could become harder to do.

One recent report discusses how trends in semiconductor manufacturing and business priorities diverge from the needs of the scientific computing community, and how the lack of tailored hardware can hinder advances in research.

One solution might be to build custom chips for high-performance computing, but this is expensive and complex. Still, researchers are exploring new designs, including chiplets: small chips that can be combined like Lego bricks to make high-precision processors more affordable.

Global race

Globally, many countries are investing heavily in high-performance computing. Europe has the EuroHPC program, which is building supercomputers in places such as Finland and Italy, with the goal of reducing reliance on foreign technology and leading in areas such as climate modeling and personalized medicine. Japan built the Fugaku supercomputer, which supports both academic research and industrial applications. China has also made significant progress, using homegrown technology to build some of the world's fastest computers. Governments in all of these countries understand that high-performance computing is key to their national security, economic strength and scientific leadership.

[embed]https://www.youtube.com/watch?v=_U4NGI3CTR8[/embed]

The U.S.-China supercomputer race, explained.

The United States has been a leader in high-performance computing for decades and recently completed the Department of Energy's Exascale Computing Project. The computers created by the project can each perform a quintillion, or a billion billion, operations per second. This is an incredible achievement. But even with that success, the United States still has no clear long-term plan. Other countries are moving forward rapidly, and without a national strategy, the United States risks falling behind.
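To put a quintillion operations per second in perspective, here is a quick back-of-the-envelope calculation (the world population figure is a rough assumption for illustration):

```python
# How long would all of humanity need to match ONE SECOND of an exascale
# machine, if every person performed one calculation per second?
EXA_OPS_PER_SECOND = 10**18          # one quintillion operations/second
WORLD_POPULATION = 8_000_000_000     # rough assumed figure

seconds_needed = EXA_OPS_PER_SECOND / WORLD_POPULATION
years_needed = seconds_needed / (365 * 24 * 3600)

print(round(years_needed, 1))  # roughly 4 years of nonstop calculating
```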

I believe a U.S. national strategy should include funding for new machines and training for the people who use them. It should also include partnerships with universities, national laboratories and private companies. Most importantly, such a program would focus not only on hardware but also on the software and algorithms that make high-performance computing useful.

Signs of hope

An exciting area for the future is quantum computing, a completely new way of performing calculations based on the laws of physics at the atomic scale. Quantum computers may one day solve problems that are impossible for conventional computers. But they are still in their early stages and are likely to complement traditional high-performance computing systems rather than replace them. That is why it is important to keep investing in both kinds of computing.

The good news is that some steps have been taken. The CHIPS and Science Act, passed in 2022, provides funding to expand U.S. chip manufacturing and created an office to help turn scientific research into real-world products. A science and technology task force launched on Feb. 25, 2025, and led by Sudip Parikh, CEO of the American Association for the Advancement of Science, aims to bring together nonprofits, academia and industry to help guide government decisions. Private companies are also spending billions of dollars on data centers and AI infrastructure.

All of these are positive signs, but on their own they do not solve the long-term problem of how to sustain high-performance computing. That will take more than short-term funding and infrastructure investment.

High-performance computing is more than just fast computers. It is the foundation of scientific discovery, economic growth and national security. As other countries surge ahead, the United States is under pressure to develop a clear, coordinated plan. That means investing in new hardware, developing smarter software, training a skilled workforce, and building partnerships among government, industry and academia. If the United States does so, it can ensure that high-performance computing continues to power innovation for decades to come.