Data centre automation for HPC

by Eduardo Aguilar Pelaez on 29 June 2020

Friction points in HPC DevOps

Many High Performance Computing (HPC) setups are still handcrafted configurations where tuning changes can take days or weeks. The more you tune and optimise a system, the more bespoke and unique it becomes, and the more unique something is, the lower the chances that it will just work out of the box. HPC is no exception.

A new school of HPC

Now physical servers are a lot easier to set up, provision and configure thanks to tools such as MAAS. For example, connecting servers and selecting which ones will be configured for networking and which for data is as easy as clicking a button in a web UI. This may seem innocuous, but it means that a server farm can be used for one project in the morning and for something completely different in the afternoon.
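
The same reallocation can also be scripted against the MAAS API. Below is a minimal sketch using the python-libmaas client, assuming an existing MAAS deployment; the endpoint, API key and machine details are illustrative assumptions, not a recipe.

# Minimal sketch (not production code) of driving MAAS from Python with
# the python-libmaas client. The endpoint and API key are placeholders.
from maas.client import connect

client = connect(
    "http://maas.example.com:5240/MAAS/",  # hypothetical MAAS endpoint
    apikey="<api-key>",                    # key generated in the MAAS UI or CLI
)

# List the machines MAAS knows about and their current status.
for machine in client.machines.list():
    print(machine.hostname, machine.status)

# Allocate a ready machine and deploy an OS image onto it; releasing it
# later frees the same hardware for a different project.
machine = client.machines.allocate()
machine.deploy()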

In reality, the server configuration is only the start, the base from which everything bubbles up. Re-configuration at the server level allows for the use of higher-level tools such as LXD VMs, Kubernetes and Juju to quickly put together an environment with reusable code, without needing to be a DevOps expert or having to wait for an expert to do it for you.
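
As a rough illustration of what that reusable code can look like, here is a minimal sketch using python-libjuju; the model name, charm and unit count are assumptions chosen for illustration, not a reference HPC stack.

# Minimal sketch assuming a bootstrapped Juju controller with an existing
# model named "hpc"; the charm and application names are placeholders.
import asyncio

from juju.model import Model


async def main():
    model = Model()
    await model.connect(model_name="hpc")  # connect to the existing "hpc" model

    # Deploy a placeholder charm with two units as the "compute" application.
    await model.deploy("ubuntu", application_name="compute", num_units=2)

    # Wait for the units to settle, then disconnect cleanly.
    await model.wait_for_idle()
    await model.disconnect()


asyncio.run(main())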

What we are going to see in the next few years is a growth of HPC with cloud-native tools: in other words, bringing cloud software tools and a good developer experience into the world of HPC to make operations easier.

What next?

A cloud-native experience in HPC is not a new idea [1, 2], but it has been thrust into the limelight by the recent surge of scientific work in the fight against COVID-19. In these real-life applications, what matters is no longer the ‘wall time’ the software takes from start to finish, but rather the time the overall project takes to reach a practical conclusion, factoring in human time and operational processes.

Modern cloud-native software can help with time to delivery. If you are interested and would like to explore this further, let us know or watch the webinar from Scania’s Erik Lönroth at the upcoming Ubuntu Masters event.
