Get ready: LUMI supercomputer will be here next year!

The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range exascale supercomputers for processing big data. One of the pan-European pre-exascale supercomputers, LUMI, will be located in CSC’s data center in Kajaani, Finland.

The supercomputer will be hosted by the LUMI consortium. The LUMI (Large Unified Modern Infrastructure) consortium countries are Finland, Belgium, Czech Republic, Denmark, Estonia, Norway, Poland, Sweden and Switzerland. LUMI will become operational in mid-2021.

Preparations in CSC’s data center in Kajaani, Finland, are in full swing, as 2021 will be here in no time. Researchers using computational methods should also start thinking about how best to exploit the possibilities that LUMI will open up. So what kind of supercomputer will LUMI be? Let’s go through some of the details of LUMI’s architecture.

LUMI, one of the most competitive in the world

When LUMI starts operating in mid-2021, it will be one of the most competitive supercomputers in the world. The design philosophy for LUMI was to create a platform which makes it possible to use AI (especially deep learning) and traditional large-scale simulations combined with high-performance data analytics to solve a single research problem.

The theoretical peak performance of LUMI will be over 200 petaflops (2 × 10^17 floating point operations per second). This is approximately ten times the performance of the Piz Daint supercomputer, currently the fastest in Europe, and application performance on LUMI is expected to be higher by a similar factor. LUMI achieves its high performance thanks to a large number of nodes equipped with accelerators, that is, Graphics Processing Units (GPUs).

LUMI will have a number of supporting compute and storage resources which maximize its overall value: the system will be complemented by a partition containing only CPUs (Central Processing Units), as well as IaaS (Infrastructure as a Service) cloud services and a large object storage solution.

LUMI will have a highly capable parallel file system consisting of over 60 petabytes of storage, with a sizeable flash layer (expected to be around 5 PB) providing more than 1 terabyte/s of I/O bandwidth and an extreme capability in input/output operations per second (IOPS). There will also be object storage, anticipated to be 30 PB in size, for convenient data management.
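To give a feeling for what bandwidth of this order means in practice, here is a rough, purely illustrative calculation. The dataset sizes below are made up; only the roughly 1 terabyte/s figure comes from the description above.

    # Illustrative only: how long writing a dataset would take at an aggregate
    # bandwidth of about 1 TB/s. The dataset sizes are hypothetical examples.
    bandwidth_tb_per_s = 1.0
    for dataset_tb in (10, 100, 1000):
        seconds = dataset_tb / bandwidth_tb_per_s
        print(f"{dataset_tb:5d} TB  ->  ~{seconds:7.0f} s  (~{seconds / 60:.1f} min)")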

LUMI, a leadership-class AI platform

LUMI’s massive computational capability, based on GPUs and the extreme connectivity between the nodes, makes it a leadership-class platform for research based on AI (Artificial Intelligence). The I/O capabilities of LUMI make it possible to use big datasets and perform computations on them. As LUMI will have both GPUs and CPUs, it will be possible to mix them in the same workflow, even for the same application, thereby enabling researchers to combine AI and simulation.
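As a minimal sketch of what such a mixed CPU + GPU workflow could look like, the Python snippet below uses PyTorch purely as an example framework (the actual software stack and programming environments on LUMI are discussed later and may differ): a placeholder simulation step runs on CPUs and produces data that a small neural network analyses on a GPU within the same program.

    # Sketch of a combined simulation + AI workflow; PyTorch is an assumed example
    # framework, and both the "simulation" and the model are placeholders.
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"   # use a GPU if one is present
    model = torch.nn.Sequential(torch.nn.Linear(1024, 64), torch.nn.ReLU(),
                                torch.nn.Linear(64, 1)).to(device)

    def simulate_step(n=1024):
        # Placeholder "simulation" producing data on the CPU.
        return torch.randn(n)

    for step in range(10):
        sample = simulate_step()                       # CPU part of the workflow
        with torch.no_grad():
            score = model(sample.to(device))           # AI analysis on the GPU
        print(f"step {step}: score {score.item():.3f}")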


LUMI’s architecture. Image: CSC

Easy-to-use, rich stack of pre-installed software for research

So that was a brief overview of the architecture. Next, we will have a look at the software that is likely to be available on LUMI.

In addition to traditional command-line interfaces, we intend to support high-level interfaces on LUMI. Applications such as Jupyter Notebooks, RStudio and MATLAB will be seamlessly integrated with LUMI’s back-end, so that ultimately LUMI will function as an extension of each researcher’s laptop. LUMI will have a rich stack of pre-installed software, including code developed by research communities as well as commercial applications. Furthermore, LUMI will feature a collection of curated, readily available reference datasets (Datasets as a Service), and will provide capabilities for interactive visualization of results during the execution of simulations.

The plan is that LUMI will be in operation early next year. Research communities should therefore start thinking now about how to utilize the significant increase in computing capacity that LUMI will provide, and start preparing their software for LUMI’s architecture. Software that only uses conventional processors can be modernized to employ GPUs or, alternatively, research groups can switch to functionally similar software that already supports GPUs.
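As an illustration of what such a modernization can look like in practice, the sketch below moves a small NumPy computation to a GPU with the drop-in replacement library CuPy. This is only an example of the general idea; it does not prescribe the GPU programming environments that will actually be supported on LUMI.

    # Sketch: the same array computation on conventional processors (NumPy)
    # and on a GPU (CuPy). CuPy is only an example of a drop-in GPU library.
    import numpy as np
    import cupy as cp

    def kinetic_energy(xp, n=1_000_000):
        v = xp.random.random((n, 3))       # particle velocities
        m = xp.random.random(n)            # particle masses
        return 0.5 * xp.sum(m * xp.sum(v * v, axis=1))

    print("CPU:", float(kinetic_energy(np)))   # runs on CPUs
    print("GPU:", float(kinetic_energy(cp)))   # the same code, executed on a GPU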

Eco-friendly data center

Reducing CO2 emissions is a globally critical target, and the location of the EuroHPC machines has a huge impact on this, as supercomputers consume a great deal of electricity. A supercomputer the size of LUMI would generate 50,000 tons of CO2 emissions annually if it were powered by fossil-fuel-based electricity. However, the LUMI data center in Kajaani uses renewable and CO2-neutral electricity.

Furthermore, LUMI will use warm-water cooling, which enables its waste heat to be utilized in Kajaani’s district heating network, thus replacing heat produced from fossil fuels. LUMI’s usable waste heat corresponds to up to 20 per cent of the district heating demand of the Kajaani area. This reuse of waste heat will reduce Kajaani’s annual CO2 footprint by 13,500 tons – an amount equal to the annual emissions of about 4,000 passenger cars.

Who can access LUMI?

So who will be eligible to use LUMI and how can researchers apply for time on the system?

Half of the LUMI resources belong to the EuroHPC Joint Undertaking, and the other half belong to the participating countries, i.e. the LUMI consortium countries. Each consortium country has a share of the resources based on that country’s contribution to the LUMI funding. Each country’s share will be allocated according to local considerations and policies – so LUMI will be seen and handled as an extension of national resources.

The LUMI shares belonging to the EuroHPC JU will be allocated by a peer-review process (comparable to that used for PRACE Tier-0 access). In addition, up to 20% of the EuroHPC resources will be available to industry and SMEs.

So what does this mean in practice? A researcher affiliated with an institution in one of the LUMI consortium countries, or a company headquartered in one of them, will be able to apply for LUMI resources both via that country’s national share and via EuroHPC’s technical and scientific peer-review process.

EuroHPC resources will also be available to non-European research projects that partner with European research groups and/or companies. (Note that the Principal Investigator of any project applying for time on LUMI will need to be based in the EU or an associated country.)

The peer-review and application processes are still under negotiation, and further details will be announced closer to the opening of LUMI.

The resources of LUMI will be allocated to projects from three different pools: GPU-hours, CPU-hours, and storage hours. All users of the system will have access to the whole system in accordance with batch job policies, i.e. there will be no dedicated hardware for any of the partners.
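As a simple illustration of how such pools are typically consumed, the estimate below multiplies node count, GPUs per node and wall-clock time. All numbers are hypothetical and do not describe LUMI’s actual node configuration or allocation rules.

    # Hypothetical example of estimating the GPU-hour cost of a single batch job.
    nodes = 16               # made-up job size
    gpus_per_node = 4        # made-up hardware configuration
    wallclock_hours = 12     # made-up runtime
    gpu_hours = nodes * gpus_per_node * wallclock_hours
    print(f"This job would consume about {gpu_hours} GPU-hours from the project's pool.")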

What will be solved with LUMI?

The creativity of European research communities will ultimately determine what kinds of problems are solved with LUMI. To give some idea of the wide range of issues that could be addressed using LUMI’s resources, we foresee that LUMI will make a remarkable contribution to the following kinds of research questions.

• More precise climate models and the interconnection of different climate models: how will living conditions change as the climate warms?

• The sequencing and analysis of full genomes, combined with data analysis and correlation with clinical data: shedding light on the causes of diseases and hereditary conditions, and assisting with personalized treatment and medicine

• Artificial intelligence (deep learning): analyzing and reanalyzing large data sets (simulated and measured), e.g. in atmospheric science, environmental science, climate modelling, materials science and linguistics

• Self-driving cars and vessels: studying the related algorithms with unprecedented computing power

• Social sciences: large-scale data set analytics from social networks and the modelling of different phenomena

• LUMI will also have a channel for urgent computing needs. This so-called “director’s share” type of allocation will make it possible to grant some of LUMI’s resources on an ad hoc basis for time- and mission-critical simulations. Such simulations might, for example, be related to national or EU security, or to a massive disturbance affecting the partner countries, such as a large epidemic or pandemic.

Authors: Anni Jakobsson and Pekka Manninen, CSC