
NASA, SpaceX and HPE team up to send a supercomputer to space

This article was originally published on the HPE blog here.

SpaceX's CRS-12 mission launched from Kennedy Space Center, Florida, sending its Dragon spacecraft to the International Space Station (ISS) National Lab. Aboard the Dragon is an HPE supercomputer.

This supercomputer, called the Spaceborne Computer, is part of a year-long experiment conducted by HPE and NASA to run a high-performance commercial off-the-shelf (COTS) computer system in space, which has never been done before. The goal is for the system to operate seamlessly in the harsh conditions of space for one year, roughly the amount of time it will take to travel to Mars.

Advancing the Mission to Mars

Many of the calculations needed for space research projects are still performed on Earth because of the limited computing capability available in space, which means data must constantly be transmitted to and from orbit. This approach works for exploration on the moon or in low Earth orbit (LEO), where astronauts can communicate with Earth in near real time, but as they travel farther out toward Mars, they will experience much larger communication latencies.

This could mean it would take up to 20 minutes for communications to reach Earth and then another 20 minutes for responses to reach astronauts. Such a long communication lag would make any on-the-ground exploration challenging and potentially dangerous if astronauts are met with any mission critical scenarios that they’re not able to solve themselves.
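The delay described above follows directly from the speed of light. As a rough illustration (the distances below are approximate orbital extremes and are not taken from the article), the one-way signal delay can be computed as distance divided by the speed of light:

```python
# Illustrative only: one-way light-travel time between Earth and Mars.
# Distances are approximate orbital extremes, not figures from the article.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the one-way signal delay in minutes for a given distance."""
    return distance_km / C_KM_PER_S / 60

# Earth-Mars distance ranges from roughly 54.6 million km (closest approach)
# to roughly 401 million km (opposite sides of the Sun).
print(f"Closest:  {one_way_delay_minutes(54_600_000):.1f} min one way")
print(f"Farthest: {one_way_delay_minutes(401_000_000):.1f} min one way")
```

At the far end of that range the one-way delay exceeds 20 minutes, which is where the 40-minute round trip in the scenario above comes from.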

A mission to Mars will require sophisticated onboard computing resources that are capable of extended periods of uptime. To meet these requirements, we need to improve technology’s viability in space in order to better ensure mission success. By sending a supercomputer to space, HPE is taking the first step in that direction. Future phases of this experiment will eventually involve sending other new technologies and advanced computing systems, like Memory-Driven Computing, to the ISS once we learn more about how the Spaceborne Computer reacts in space.

Lessons from the Mission to the Moon

When the United States successfully put two men on the moon, it captivated the world and inspired technological advancements from the microchip to memory foam. The mission to Mars is the next opportunity to propel technological innovation into the next frontier. The Spaceborne Computer experiment will not only show us what needs to be done to advance computing in space, it will also spark discoveries for how to improve high-performance computing (HPC) on Earth and potentially have a ripple effect in other areas of technology innovation.

HPC in Space

The Spaceborne Computer includes HPE Apollo 40-class systems with a high-speed HPC interconnect running an open-source Linux operating system. Though there are no hardware modifications to these components, we created a unique water-cooled enclosure for the hardware and developed purpose-built system software to address the environmental constraints and reliability requirements of supercomputing in space. Generally, in order for NASA to approve computers for space, the equipment needs to be "ruggedized" or hardened to withstand the conditions in space.

Think radiation, solar flares, subatomic particles, micrometeoroids, unstable electrical power and irregular cooling. This physical hardening takes time and money, and adds weight, so HPE took a different approach: "harden" the systems with software. HPE's system software manages real-time throttling of the computer systems based on current conditions and can mitigate environmentally induced errors. Even without traditional ruggedizing, the system still passed at least 146 safety tests and certifications in order to be NASA-approved for space.
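The throttling idea can be sketched as a simple policy that steps clock speed down as conditions degrade. This is a hypothetical illustration only: the telemetry fields, thresholds, and performance states below are assumptions for the sake of the sketch, not HPE's actual Spaceborne Computer software.

```python
# Hypothetical sketch of software "hardening" via throttling. All names,
# thresholds, and telemetry fields here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Telemetry:
    corrected_errors_per_min: float  # e.g. ECC memory corrections observed
    power_stable: bool               # supply voltage within tolerance

# Available performance states, fastest first (fraction of max clock).
P_STATES = [1.0, 0.75, 0.5, 0.25]

def choose_p_state(t: Telemetry) -> float:
    """Pick a clock multiplier: throttle down as conditions degrade."""
    if not t.power_stable:
        return P_STATES[-1]      # minimum speed on unstable power
    if t.corrected_errors_per_min > 100:
        return P_STATES[2]       # heavy radiation activity: halve the clock
    if t.corrected_errors_per_min > 10:
        return P_STATES[1]       # elevated error rate: modest throttle
    return P_STATES[0]           # nominal conditions: full speed

# Nominal, elevated, and degraded conditions map to progressively lower clocks.
print(choose_p_state(Telemetry(corrected_errors_per_min=3, power_stable=True)))
print(choose_p_state(Telemetry(corrected_errors_per_min=50, power_stable=True)))
print(choose_p_state(Telemetry(corrected_errors_per_min=500, power_stable=False)))
```

The design choice here mirrors the article's point: rather than building hardware that shrugs off radiation, software reacts to observed error rates and power conditions and trades performance for reliability on the fly.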

