Since they were introduced in the 1960s, supercomputers have teased our imaginations and been featured on shows involving international men of mystery and supervillains. They were almost mythical machines, capable of doing far more than ordinary computers could and in half the time.
Outside of fiction, however, supercomputers were put to far less evil tasks. They filled entire rooms and were often dedicated to academic and research work.
What might surprise some people is that even today, in the age of high-performance cell phones and commercial virtual reality, supercomputers are still very much in use around the world. We’re going to dive into the world of supercomputers for those who may not know much about them: what they are, who uses them, and what they’re used for.
As the name implies, a supercomputer is a very powerful computer. That doesn’t mean a top-of-the-line gaming PC or a high-end workstation. Supercomputers aren’t just fast, they’re incredibly fast.
Much like when they were first introduced, supercomputers are physically large. They get their speed by splitting the computational load among hundreds or thousands of processors that work on it in parallel.
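To get a feel for that divide-and-conquer approach, here’s a minimal sketch in Python. It’s only an analogy: real supercomputers distribute work across whole nodes with frameworks like MPI, but the principle of splitting one big job into independent pieces and merging the partial results is the same.

```python
# Toy illustration only -- real supercomputers coordinate thousands of nodes
# with frameworks like MPI, but the basic idea is the same: break one big
# problem into independent chunks and combine the partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the squares of every integer in [start, stop)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4                                    # stand-in for thousands of processors
    step = n // workers
    chunks = [(w * step, (w + 1) * step) for w in range(workers)]

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # each chunk runs in parallel

    print(total)  # identical to sum(i * i for i in range(n)), just computed faster
```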
Performance isn’t measured the way it is for a traditional computer, in clock speed (usually megahertz or gigahertz). Instead, it’s measured in FLOPS, or floating-point operations per second, and often reaches into the tera- or petaFLOPS range. In practical terms, a job that might keep a regular computer busy for 10 to 15 days can be turned around by a supercomputer in a day or two.
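As a rough illustration of what those prefixes mean, dividing an operation count by a machine’s FLOPS rating gives a best-case runtime. The workload size below is made up, and real jobs run slower than this because of memory, communication, and scheduling overhead.

```python
# Back-of-the-envelope estimate only: assumes a made-up workload and a
# perfectly sustained FLOPS rate, which real jobs never achieve.
SECONDS_PER_DAY = 86_400

workload_flop = 1e18                      # hypothetical job: a billion billion operations
machines = {
    "Fast desktop (~1 teraFLOPS)": 1e12,
    "Supercomputer (~100 petaFLOPS)": 1e17,
}

for name, flops in machines.items():
    seconds = workload_flop / flops
    print(f"{name}: {seconds / SECONDS_PER_DAY:,.4f} days ({seconds:,.0f} seconds)")
```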
It can be hard to imagine, but the term first entered public awareness in 1931, when it was used to describe the tabulating machines IBM had built for Columbia University. These machines handled tasks like accounting and inventory control, and they worked by reading information stored on punch cards.
The first true supercomputer didn’t emerge until the 1960s, when Control Data Corporation (CDC) debuted the CDC 6600. It outpaced every other computer available at the time (machines like the IBM 7030 Stretch), and its ability to perform at roughly three megaFLOPS earned it the title of the first supercomputer.
In the 1970s, Seymour Cray (who designed the CDC 6600) left CDC to start his own company, Cray Research. It released the Cray-1 in 1976, the Cray X-MP in 1982, and the Cray-2 in 1985. The Cray-2 was the first supercomputer cooled with Fluorinert, a liquid coolant that is still used in some systems today.
In the 1990s, companies like NEC and Fujitsu started to release supercomputers with thousands of processors (similar to what we’re seeing today). This was a shift away from the previous generation of supercomputers and allowed these systems to crack into the teraFLOPS range.
Like most high-performance computers, supercomputers require not only a ton of electricity to operate but also a highly effective cooling system to keep them from overheating. Electricity usage typically sits in the 4 to 7 megawatt range but can be upwards of 10 megawatts on some systems. For reference, one megawatt is enough electricity to power roughly 3,000 TVs (depending on all kinds of factors, like how much power each TV draws). Regardless, it’s a lot of electricity.
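To put those megawatts in perspective, here’s a quick back-of-the-envelope calculation. Both the 6 MW power draw and the electricity rate are assumptions chosen purely for illustration.

```python
# Rough annual energy and cost figures for a hypothetical 6 MW supercomputer.
# Both the power draw and the $/kWh rate are assumptions for illustration.
power_mw = 6
hours_per_year = 24 * 365
price_per_kwh = 0.10                            # assumed industrial electricity rate, USD

energy_mwh = power_mw * hours_per_year          # megawatt-hours consumed in a year
cost_usd = energy_mwh * 1_000 * price_per_kwh   # 1 MWh = 1,000 kWh

print(f"{energy_mwh:,.0f} MWh per year, roughly ${cost_usd:,.0f} in electricity")
```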
Keeping the whole thing cool enough to operate is one of the biggest challenges in building a supercomputer. Most solutions involve enormous amounts of water, on the order of 3,000 gallons per minute for some systems. Some also use a liquid called Fluorinert as part of the cooling system. All of this is to offset the heat the processors generate as they work on problems.
When supercomputers were first created, the need for them was obvious: personal computers were still 20 years away, and the computers that did exist were the size of rooms. So why, in a time when thousands of home computers can be linked together to search for signs of extraterrestrial life (SETI@home) or fold proteins (Folding@Home), do we even need supercomputers? The answer is the same as it was back then: they’re just faster.
As we mentioned, a supercomputer can take a task that would take a normal computer 14 days to complete and finish it in two. This helps researchers, scientists, and businesses cut down the time they spend waiting on computational tasks.
In recent years, we’ve started seeing a new kind of computing burst onto the scene: quantum computing. It differs from supercomputing in that it takes advantage of the collective properties of quantum states to perform calculations. Although the idea has been around since the 1980s, realizing the full potential of quantum computing has proved a challenge. In 2019, Google and NASA claimed to have performed a calculation in minutes that they estimated would take even the fastest classical supercomputer thousands of years.
These systems can, for certain kinds of problems, vastly outpace traditional computers. They rely on quantum bits, or qubits, which are built using technologies like transmons, ion traps, and topological approaches. Qubits are tricky to keep in their quantum states and suffer from issues like decoherence and loss of state fidelity.
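To get a sense of what makes a qubit different from an ordinary bit, here’s a tiny mathematical sketch using numpy. It’s a toy calculation, not a real quantum device: a qubit’s state is a normalized two-component complex vector, a Hadamard gate puts it into an equal superposition of 0 and 1, and each measurement collapses it to one outcome at random.

```python
# Toy single-qubit simulation with plain linear algebra -- no quantum hardware
# or quantum SDK involved, just the math that describes a qubit.
import numpy as np

zero = np.array([1, 0], dtype=complex)        # |0>, the qubit's starting state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ zero                              # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                    # Born rule: probabilities of 0 and 1

rng = np.random.default_rng(seed=0)
samples = rng.choice([0, 1], size=1_000, p=probs)
print(probs, samples.mean())                  # ~[0.5 0.5], and an average near 0.5
```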
That said, there are some promising use cases for quantum computing, like cryptography, speeding up unstructured searches, and machine learning.
Supercomputers have almost always been the realm of researchers and scientists. These systems aren’t cheap to build or operate, and using them for anything less demanding would be a huge waste of time and money.
These days, supercomputers are far more common than you’d expect and are found all over the world. The Top500 is a list that has ranked the world’s 500 fastest supercomputers since 1993, based on their measured performance on the LINPACK benchmark.
Currently, China has the most supercomputers on the list, with 188, followed by the United States with 122 and Japan with 34.
What’s interesting is that despite ranking third in total count, Japan sits at the top of the list with Fugaku, built by Fujitsu. Fugaku is followed by two US supercomputers, IBM’s Summit and Sierra.
They get used for all sorts of reasons, but running simulations seems to be the main one these days. These aren’t simple simulations like a virtual heart that beats; they’re significantly more complex, like modeling what happens inside the sun or mapping the human vascular system.
Supercomputers are ideal in situations where accuracy is paramount, such as major scientific breakthroughs, advances in medicine, and the development of cures or vaccines. In these areas a small mistake can have major consequences, from financial ruin to loss of life, and supercomputers help reduce that risk.
Typical use cases for supercomputers include:
Supercomputers have been used in weather forecasting for decades. Data from around the world (or country) is fed into a supercomputer, which tracks changes in variables like pressure and temperature and attempts to predict what’s going to happen in various regions on any given day.
Not only is this helpful on an average day, but it becomes critical when tracking weather phenomena like hurricanes or tornadoes, because a supercomputer can process data almost as fast as it’s received and provide updated predictions as new information arrives. That can make all the difference when planning the mass evacuations that come with major weather events.
Similarly, supercomputers are being used to track climate change, both historically and into the future. With all the data we have available, including fossil records from millions of years ago, scientists can build increasingly accurate pictures of what Earth’s climate was like in prehistoric times and what is likely to happen next. Supercomputers help manage variables like ocean currents and albedo (how much sunlight the Earth reflects).
Supercomputers are the darlings of the research world. They help researchers solve complex problems quickly, which matters most when time is critical, as we saw in 2020 with the COVID-19 pandemic. Supercomputers around the world were used to help gauge the severity of COVID-19 and to screen vaccine candidates, helping shrink a development process that would normally take around 10 years down to a handful of months.
Along with medical research, supercomputers have been at the center of some of the biggest scientific discoveries in recent years, like the Higgs boson. The sheer amount of data produced by the Large Hadron Collider could only be processed in a reasonable timeframe with supercomputing power; on a regular computer, we’d still be waiting for the answers.
Finding oil and gas reservoirs is getting harder, and to keep from being left in the dust by more technologically advanced competitors, companies are turning to supercomputers. These systems help oil and gas companies make sense of the complex data produced during geologic surveys. That means finding oil and gas faster and more accurately, and spending less money to locate productive wells. Rather than drilling where they think a well might be, companies can use supercomputers to pinpoint a location and assess the economic viability of the well.
It’s a little hard to predict the future of supercomputers, partly because technology is advancing so rapidly that we’re basically living that future in real time. No one is really sure what’s going to happen next until we’re there (think of how the iPhone seemed to come out of nowhere in 2007 and immediately changed the world).
We’re likely to see processing speeds push into the exaFLOPS range, possibly even toward zettaFLOPS. Scientists are hoping that complex modeling for things like weather prediction will reach the point where we can forecast accurately up to two weeks out, which would help with everything from planning transportation routes to agriculture.
It’ll be interesting to see what unfolds in the next five to 10 years.