
SUPERCOMPUTING: FROM HIGH PERFORMANCE TO QUANTUM


Samiran Ghosh

3 years ago | 6 min read

A supercomputer pinpointed and eased congestion in Chattanooga, Tennessee. Eagle, a supercomputer operated by the Oak Ridge National Laboratory and the National Renewable Energy Laboratory (NREL), used machine learning to trawl through mountains of traffic data from satellites, traffic cameras, weather stations, and more, looking for patterns. Eagle found that traffic lights on a road leading into Chattanooga were causing delays and, in turn, congestion. Adjusting the signal timings resulted in a 16% decrease in fuel use. Eagle itself recycles 97% of its waste heat for use in other facilities on the campus, making it remarkably energy efficient.

The technology could potentially reduce fuel consumption across the US by 20% and unlock USD 100 billion worth of productive time.

Welcome to the mainstream usage of High-Performance Computing (HPC).

So what is the technology behind supercomputing?

What is HPC, and why is it so important for everyday life and businesses today? The US Geological Survey defines HPC as the practice of aggregating computing power to deliver far higher performance than a typical workstation can, in order to solve large problems in science, engineering, or business.

Where it all started

Many consider Seymour Cray to be the pioneer of HPC. While the UNIVAC and a few others came before it, the CDC 6600 was the first computer to be classified as a “super” computer. With a vision to build the world’s fastest computing systems, Cray went on to produce the Cray-1 supercomputer and many more after it. Cray Research’s mission was to build the fastest computers and lead large-scale scientific computing, which later evolved into helping solve scientific and industrial problems that make the world safer, healthier, and smarter. A fun fact: the Cray Y-MP supercomputer has a cameo in the 1992 film “Sneakers.” The film’s leads (Ben Kingsley and Robert Redford) sit on a Y-MP while discussing how to change the world. Cray has been an HPE company since 2019.

Photo: RIKEN. Currently the fastest supercomputer in the world.
Photo: Clemens Pfeiffer, own work, CC BY 2.5, https://commons.wikimedia.org/w/index.php?curid=1441453. The Cray-1, the first supercomputer in the world.

Supercomputers have since played an essential role in computational science. We use them for a wide range of compute-intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, and physical simulations.

Supercomputing League Tables

Since 1993, the TOP500 project has ranked and detailed the 500 most powerful non-distributed computer systems in the world. The top 10 systems on this list are used primarily for research. So it is a matter of pride that here in Asia we have the Dammam-7, ranked 10th, installed at Saudi Aramco in Saudi Arabia; it is the second commercial supercomputer in the current top 10.

PARAM was India’s first supercomputer, and we have made a lot of progress since. In 2015, India announced a “National Supercomputing Mission” (NSM) to install 73 indigenous supercomputers throughout the country by 2022. NSM is a 7-year program whose aim is to create a cluster of geographically distributed HPC centres linked over a high-speed network (the National Knowledge Network) connecting academic and research institutions across India. As of November 2020, three systems based in India appear on the TOP500 list – the Centre for Development of Advanced Computing (No. 62), the Indian Institute of Tropical Meteorology (No. 77), and the National Centre for Medium-Range Weather Forecasting (No. 144).

HPC for the masses

The high cost of entry has long been the barrier to applying HPC to a broader range of business applications. The power of supercomputers has primarily been the preserve of government and medical researchers, academics, and innovative moviemakers. Because of the mystique around supercomputers, they became essential characters in science fiction, much of which deals with the relationship between humans and the computers they build, and with the possibility of conflict eventually developing between them. Examples of supercomputers in fiction include HAL 9000, Multivac, WOPR, and Deep Thought. It’s safe to say that outside fiction, supercomputing wasn’t an easily accessible technology.

COVID-19 has changed that forever. The pandemic required enormous computing power to be available to the entire scientific and medical community at the same time. This demand led to the COVID-19 High-Performance Computing (HPC) Consortium, a group made up of industry, academia, US federal agencies, and others that made the world’s most powerful computers available for free to researchers battling the virus. More than 100 projects already rely on the consortium’s 600 petaflops and 6.8 million CPU cores, with new researchers joining weekly. The consortium is a first step toward making HPC easily accessible and affordable for all.

Added Kicker

An artificial intelligence methodology referred to as swarm learning is being applied to COVID-19 research data. Swarm learning combines the resources of dispersed HPC systems for better insights. If one hospital in the UK has data on hundreds of patients, that’s not enough to train any machine learning algorithm. But if that data could be seen along with data from hospitals in other countries, such as France, Spain, the USA, and India, you would have enough for machine learning purposes. Swarm learning enables this without sharing any patient data, thus overcoming privacy issues.
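
Conceptually, the pattern resembles decentralised parameter averaging: each site trains on its own data and only model weights are exchanged and merged. The sketch below is not HPE's swarm learning implementation; it is a minimal, hypothetical illustration in Python, with made-up hospital data, of how a shared model can improve without any patient record leaving a site.

```python
# Minimal sketch of the swarm/federated idea: each hospital trains locally and
# only model weights (never patient records) are shared and averaged.
# The hospitals and their data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

def local_train(X, y, w, lr=0.1, epochs=50):
    """Logistic regression by gradient descent on one hospital's private data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)      # gradient step
    return w

# Three hospitals, each holding data that never leaves the site.
true_w = np.array([1.5, -2.0, 0.5])
hospitals = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)
    hospitals.append((X, y))

# Each round: sites train locally, then only the weights are merged.
global_w = np.zeros(3)
for _ in range(5):
    local_weights = [local_train(X, y, global_w.copy()) for X, y in hospitals]
    global_w = np.mean(local_weights, axis=0)  # parameter average, no raw data shared

print("merged model weights:", np.round(global_w, 2))
```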

COVID-19 Research – not just the domain of medicine

Rickett, Maschhoff, and Sukumar were investigating potential therapies for COVID-19 when they discovered that individuals with COVID-19 who had previously been vaccinated against tetanus showed less severe symptoms. A recent study of pregnant women found that 88% of those who tested positive for the virus were asymptomatic, a rate approximately 2X that of the general population. Could the Tdap vaccine, commonly administered to pregnant women, have offered an unexpected level of immunity? Their article, detailing the research and proposing the theory, has been accepted for publication in the journal Medical Hypotheses.

What’s interesting about these findings, aside from their sheer novelty, is that the authors are not medical researchers. They’re engineers at Cray, the supercomputing arm of Hewlett Packard Enterprise. Before COVID-19, they had no experience with medical research. Yet, earlier this year, they theorized that the capabilities of Cray’s parallel-processing graph database could be leveraged to investigate therapies for the emerging COVID-19 pandemic, and on a far more efficient scale than had ever been done before.

They came up with the idea of doing a protein sequence analysis: a comparison of one protein sequence (the COVID-19 spike protein) against every other known protein sequence, looking for similarities. Their hypothesis was that if they could map that information back to proteins medicine already knew more about, experts could narrow down compounds likely to be useful as treatments because they target a similar protein.
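
To make the idea concrete, here is a toy version of sequence comparison. It is not the Cray/HPE graph-database pipeline described above, just a hypothetical Python sketch that ranks made-up sequences by k-mer overlap with a query, the way a real analysis ranks known proteins by similarity to the spike protein.

```python
# Toy protein similarity search: rank database sequences by k-mer overlap
# with a query. Sequences below are short, made-up amino acid strings.

def kmers(seq, k=3):
    """Set of overlapping k-length substrings of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    """Similarity of two k-mer sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

query = "MFVFLVLLPLVSSQCVNLT"          # hypothetical stand-in for a spike fragment
database = {                            # hypothetical "known" proteins
    "protein_A": "MFVFLVLLPLVSSQCVKLS",
    "protein_B": "MKTAYIAKQRQISFVKSHF",
    "protein_C": "GVYYPDKVFRSSVLHSTQD",
}

q = kmers(query)
ranked = sorted(database.items(),
                key=lambda item: jaccard(q, kmers(item[1])),
                reverse=True)

for name, seq in ranked:
    print(f"{name}: similarity = {jaccard(q, kmers(seq)):.2f}")
```

A production pipeline would compare one query against millions of known sequences, which is exactly the kind of massively parallel workload HPC systems are built for.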

Medical research has become as much a big-data computational problem as a medical one. Today, you are as likely to see a supercomputer in the laboratory as racks of tissue samples.

HPC as a Service

As businesses discover the power of supercomputing, they are gravitating to the cloud for some of their workloads. Enterprises are finding cloud services handy, with benefits like rapid deployment, the ability to handle erratic spikes in demand, and a pay-per-use model. According to Hyperion, 20% of all HPC jobs were executed through cloud services in 2019, and it projects this market to grow at a CAGR of 16.8% between 2019 and 2024. Hyperion also anticipates that new HPC demand to combat COVID-19 may make public cloud computing for HPC workloads grow even faster.

The next generation of supercomputers will be Quantum

Unlike traditional computers, which work on bits and bytes, quantum computers use quantum bits (qubits). They exploit quantum-mechanical concepts such as superposition and entanglement to process information, letting them weigh many possibilities at once instead of working through purely binary choices. Under lab conditions, a quantum computer has even completed a computation that would typically take the world’s fastest classical machines nearly 10,000 years. That particular feat has little practical use, but it is a start.
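
Superposition is easier to grasp with a tiny, classical simulation of a single qubit. The Python sketch below is an illustration only; it simulates the standard textbook state vector and Hadamard gate on an ordinary CPU, not on quantum hardware.

```python
# Simulate one qubit classically: put it into superposition and "measure" it.
import numpy as np

ket0 = np.array([1.0, 0.0])                    # |0>, the definite starting state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate creates superposition

state = H @ ket0                               # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                     # Born rule: measurement probabilities

print("amplitudes:", np.round(state, 3))       # [0.707, 0.707]
print("P(0), P(1):", np.round(probs, 3))       # [0.5, 0.5]

# Each measurement collapses the superposition to a definite 0 or 1.
rng = np.random.default_rng(1)
print("ten measurements:", rng.choice([0, 1], size=10, p=probs))
```

Real machines string together many such qubits, and entanglement makes the joint state space grow exponentially, which is precisely what classical simulation cannot keep up with.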

Accenture has documented over 150 use cases, focusing on finding those that are the most promising in various industries. They also have forged partnerships with the world’s leading quantum cloud vendors to induce trial amongst their clients and help them embrace the upcoming quantum computing revolution.

Experts predict that the early 2020s will be a critical phase for developments in quantum computing. A report from Allied Market Research estimates that the global enterprise quantum computing market may grow by about 30% annually from 2018 to 2025, reaching about $5.9 billion by 2025.

Putting computing power in the hands of end users and making it easy for all to use points to a future where everyone consumes technology as a service. Antonio Neri, CEO of HPE, highlighted this trend at HPE Discover 2021 when he talked about how their industry-leading GreenLake everything-as-a-service platform could become better known than HPE itself in the next five years. In the not-so-distant future, we could very well imagine a world where school kids run HPC workloads to do their homework while quantum machines take their place in labs.

What do you think?
