
PASC project leaders discuss their new astrophysics simulation code, securing a major allocation on LUMI-G

Since its launch in 2013, the Platform for Advanced Scientific Computing (PASC) has supported the development of computer applications for future Exascale-class systems. An interdisciplinary team of PASC researchers has now shown what that support can achieve: thanks to their novel developments, they have gained access to LUMI, one of the most advanced pre-Exascale supercomputers currently available in Europe. This article was originally published on the CSCS website.

Inspired by astrophysics codes, the team led by Florina Ciorba, Professor of High Performance Computing at the University of Basel (Switzerland), Lucio Mayer, Professor of Astrophysics at the University of Zurich (Switzerland), and Rubén Cabezón, Senior Scientist at the computing center (sciCORE) of the University of Basel, designed a special mini-app called SPH-EXA in their first PASC project (2017-2021). SPH-EXA applies the so-called smoothed particle hydrodynamics (SPH) technique to help solve hydrodynamic equations at Exascale.
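To give a flavour of what the SPH technique does, in SPH a fluid is represented by particles, and quantities such as density are obtained by summing over neighbouring particles weighted by a smoothing kernel. The following minimal C++ sketch is an illustration only, not code from SPH-EXA: it uses a brute-force O(N²) loop and a standard cubic-spline kernel, whereas production codes rely on tree-based neighbour searches and GPU parallelism.

```cpp
// Minimal SPH density estimate (illustrative sketch, not SPH-EXA code).
// rho_i = sum_j m_j * W(|r_i - r_j|, h) with a standard 3D cubic-spline kernel.
#include <cmath>
#include <cstdio>
#include <vector>

struct Particle { double x, y, z, m, rho; };

// 3D cubic-spline smoothing kernel with support radius 2h.
double cubicSpline(double r, double h)
{
    const double PI    = 3.14159265358979323846;
    const double q     = r / h;
    const double sigma = 1.0 / (PI * h * h * h); // 3D normalisation
    if (q < 1.0) return sigma * (1.0 - 1.5 * q * q + 0.75 * q * q * q);
    if (q < 2.0) return sigma * 0.25 * (2.0 - q) * (2.0 - q) * (2.0 - q);
    return 0.0;
}

// Brute-force O(N^2) density sum; real codes use octrees/neighbour lists and GPUs.
void computeDensity(std::vector<Particle>& parts, double h)
{
    for (auto& pi : parts)
    {
        pi.rho = 0.0;
        for (const auto& pj : parts)
        {
            double dx = pi.x - pj.x, dy = pi.y - pj.y, dz = pi.z - pj.z;
            double r  = std::sqrt(dx * dx + dy * dy + dz * dz);
            pi.rho += pj.m * cubicSpline(r, h);
        }
    }
}

int main()
{
    // Tiny toy setup: a few unit-mass particles on a line, smoothing length h = 1.
    std::vector<Particle> parts;
    for (int i = 0; i < 5; ++i) parts.push_back({0.5 * i, 0.0, 0.0, 1.0, 0.0});
    computeDensity(parts, 1.0);
    for (const auto& p : parts) std::printf("rho = %g\n", p.rho);
    return 0;
}
```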


Caption: Florina Ciorba, Professor of High Performance Computing at the University of Basel, Lucio Mayer, Professor of Astrophysics at the University of Zurich, and Rubén Cabezón, Senior Scientist at the computing center (sciCORE) of the University of Basel. Image source: CSCS

In their second PASC project, running from 2021 to 2024 and carried out as part of the SKACH Consortium, they evolved the mini-app into a high-performance production code by including additional physics, so that it can simulate real systems, such as the formation of stars and galaxies or supernova explosions, efficiently, realistically, and in high resolution. With these developments, the scientists have now secured access to LUMI to get to the bottom of a challenging research question about the role of turbulence and gravity in star formation.

In the following interview, Ciorba, Mayer, and Cabezón tell us more about their recent success from a scientific point of view.

Your SPH-EXA2 PASC project bears fruit! With your application through EuroHPC, you managed to secure 5,500,000 node hours on the GPU partition of the LUMI supercomputer (LUMI-G) for a period of 12 months.

Rubén Cabezón: This is a major milestone for us and a massive accomplishment. It not only recognizes the effort put into developing a next-generation hydrodynamics code, it also enables cutting-edge science. It is very exciting!

Florina Ciorba: This allocation embodies the pinnacle of our project’s interdisciplinary co-design philosophy. It also showcases the pivotal role of HPC in advancing the SPH-EXA simulation framework. Simulations at this scale are what supercomputers are built for!

What is the scientific question that you want to answer with the planned simulations on LUMI-G?

Lucio Mayer: The new simulations will shed new light on one of the biggest open problems in astrophysics: the origin of the masses of stars in our galaxy; namely, in simple terms, why most stars have a mass similar to that of the Sun or a bit lower, while very few are much more massive. During the past two decades, the theory of star formation has evolved a lot, propelled mostly by the advance of numerical simulations. These simulations suggested that interstellar turbulence and self-gravity, mainly the internal gravitational pull of the densest clumps of gas that form in the interstellar medium, are the two ingredients that shape the mass function of stars. However, no calculation so far has been able to model a large enough volume of the interstellar medium at a high enough resolution to resolve both the smallest and the largest clumps of dense gas, which produce, respectively, the smallest and the largest stars. In other words, broad statistical sampling, and hence a large computational volume, and high resolution are both necessary, because massive stars are rare and the smallest stars need to be resolved.
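As background on the "mass function of stars" mentioned here, and purely as an illustration rather than a result of the project, the stellar initial mass function (IMF) gives the number of stars formed per unit of stellar mass. A commonly used empirical form is the Salpeter power law at the high-mass end:

```latex
% Background only: a common empirical form of the stellar initial mass function (IMF).
% The Salpeter (1955) slope applies at the high-mass end; this is textbook material,
% not part of the project's own claims.
\[
  \frac{\mathrm{d}N}{\mathrm{d}M} \;\propto\; M^{-\alpha},
  \qquad \alpha \simeq 2.35 \ \text{for } M \gtrsim 1\,M_\odot ,
\]
% Below roughly 0.5 solar masses the observed IMF flattens, which is why very massive
% stars are rare while stars around a solar mass and below dominate by number.
```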

RC: With the simulations, we intend to unveil the role of turbulence and gravity in the formation of proto-stellar cores. This will provide information about the mass distribution of stars with unprecedented resolution, which has important implications for the observable properties of galaxies. As additional objectives, we plan to study turbulent mixing, so as to better understand the apparent chemical homogeneity of stellar clusters and to contribute to the general theory of turbulence.

FC: As computer scientists, we want to understand how the algorithms, data management, and visualization techniques in SPH-EXA behave at this – for our entire team – unprecedented scale. Our quest is not only about handling immense computations and data for simulating the coupling of turbulence with gravity, but also about innovating and discovering new ways to ensure that the simulations are load balanced and energy-efficient and that the results are visualized in intuitive ways. These aspects are crucial for more sustainable and accessible scientific computing. Energy-efficient computing is particularly important in an era where the environmental impact of technology is a growing concern. These are all challenges of modern science, with implications across multiple application domains.

How did the PASC project and CSCS’s infrastructure support this success in the run-up?

LM: I think I speak for all of us when I say that the PASC projects and PASC funding have been essential over the past few years in developing the SPH-EXA code, which rethinks how simulation codes for astrophysics and cosmology are conceived. This code has been developed with parallel performance and computational efficiency in mind from the beginning, and it is optimized to run on GPUs, all features that place it ahead of the most established codes in the field. Only with SPH-EXA are these new simulations on LUMI-G possible. PASC has enabled us to put together a diverse team of computer scientists and astrophysicists, which is the key to developing SPH-EXA. CSCS has also been greatly helpful in many ways, not only by allowing us to test and validate earlier versions of the code on its systems, but also by assigning some of its staff members to collaborate directly with our PASC team. This allowed us to participate in the LUMI-G early-access pilot program, where, through two hero runs, we obtained the preliminary results needed to submit a strong proposal to the EuroHPC extreme-scale call.

The allotted 5,500,000 node hours amount to almost a quarter of the entire LUMI-G system for one year. How does that make you feel?

FC: We feel both extreme excitement about the scientific discoveries that lie ahead and a huge responsibility to ensure that we conduct the simulations responsibly, in terms of resource usage, to maximize every node hour for ground-breaking scientific advancements in both Computer Science and Astrophysics.

RC: As Florina says, on the one hand, we are very excited, as we are in a situation where we can make significant contributions to several research fields. On the other hand, we bear a large responsibility that is rooted in the scientific and moral obligation to use such an allocation efficiently and productively.

The simulation will produce a tremendous amount of data. How will you deal with this volume?

LM: This is something that we are figuring out along the way, as this simulation will break previous records in the field of star formation in terms of the amount of data produced. Luckily, we have some experience dealing with large datasets, both within the team and more broadly at the Institute of Computational Science of the University of Zurich, which I head, for example from past large cosmological simulations such as the Euclid Flagship simulations, some of which were run at CSCS. One strategy is to carry out some analyses on-the-fly rather than in post-processing, to minimise the number of snapshots that we will have to write. At the same time, simulations like this raise the question of how to organise an efficient and sustainable storage system for simulation data at the national level, as our field is not the only one that faces these challenges. This is even more true as funding agencies, such as the Swiss National Science Foundation, rightly call for scientists to make their data available in open-access form, which implies that we should actually save snapshots with raw data and make them available to the community.
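To make the idea of on-the-fly analysis concrete, here is a generic C++ sketch, not the project's actual pipeline: instead of writing a full particle snapshot for later post-processing, the running simulation reduces its data to a small summary, in this case a histogram of the gas density, and writes only that.

```cpp
// Generic sketch of on-the-fly analysis: reduce full particle data to a small
// histogram during the run instead of writing a full snapshot for post-processing.
// Illustration only; this is not the pipeline used by SPH-EXA.
#include <cmath>
#include <cstdio>
#include <vector>

// Build a histogram of log10(density) with `nbins` bins between lo and hi.
std::vector<long> densityHistogram(const std::vector<double>& rho,
                                   double lo, double hi, int nbins)
{
    std::vector<long> hist(nbins, 0);
    const double width = (hi - lo) / nbins;
    for (double r : rho)
    {
        int bin = static_cast<int>((std::log10(r) - lo) / width);
        if (bin >= 0 && bin < nbins) ++hist[bin];
    }
    return hist;
}

int main()
{
    // Toy "simulation state": densities spanning several orders of magnitude.
    std::vector<double> rho = {1e-22, 5e-21, 3e-20, 2e-19, 4e-19, 1e-18};

    // A few kilobytes per output instead of a full particle snapshot.
    std::vector<long> hist = densityHistogram(rho, -23.0, -17.0, 12);

    std::FILE* out = std::fopen("density_pdf_step0001.txt", "w");
    for (std::size_t i = 0; i < hist.size(); ++i)
        std::fprintf(out, "%zu %ld\n", i, hist[i]);
    std::fclose(out);
    return 0;
}
```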

RC: This is an extremely relevant question, and even more so when talking about extreme-scale simulations. Dealing with large data is not an easy task and requires careful planning. In that respect, we have a detailed data management plan, and we have put a lot of effort into using advanced compression algorithms, coupling the simulation with on-the-fly visualisation, and developing optimised pipelines to carry out the necessary analysis, all to minimise the impact on long-term storage. Big Data is a pressing and general problem for many research fields and poses considerable challenges in terms of management and infrastructure, which are far from being solved. Implementing robust data management strategies, incorporating the best possible practices, and investing in advanced technologies constitute our best approach.

FC: Complementing Lucio’s and Rubén’s answers, the only way to deal with data generated at unprecedented scales is to develop a robust and efficient data management strategy of our own. Our strategy is a synergy of cutting-edge data compression techniques and state-of-the-art data transfer protocols, ensuring not only a reduction in storage requirements but also rapid and secure data movement. We also employ in-situ data analysis and visualization, allowing us to extract pivotal insights close to real time, as soon as data is produced by the simulations and their execution. This strategy ensures that every byte of data is not just stored but actively contributes to our understanding of the phenomena we’re studying: star formation, load imbalance, and energy consumption.
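As a generic illustration of the compress-then-write step in such a pipeline (a sketch only; the compressors and tooling actually used by the project are not specified here, and a lossless zlib call stands in for them), a snapshot array can be compressed in memory before it ever reaches the file system, trading some CPU time for a smaller storage and transfer footprint.

```cpp
// Generic sketch of compressing an array before writing it to storage.
// Illustration only: the compressors actually used by SPH-EXA are not shown here.
// Build with: g++ compress_snapshot.cpp -lz
#include <cstdio>
#include <vector>
#include <zlib.h>

int main()
{
    // Toy "snapshot" field: particle densities (in a real run, billions of values).
    std::vector<double> rho(1 << 20, 1.0e-21);

    const uLong srcBytes = rho.size() * sizeof(double);
    std::vector<Bytef> compressed(compressBound(srcBytes));
    uLongf dstBytes = compressed.size();

    // Lossless zlib compression; dedicated lossy floating-point compressors can do
    // much better on smooth fields, at the cost of a controlled error bound.
    int rc = compress2(compressed.data(), &dstBytes,
                       reinterpret_cast<const Bytef*>(rho.data()), srcBytes,
                       Z_BEST_SPEED);
    if (rc != Z_OK) { std::fprintf(stderr, "compression failed\n"); return 1; }

    std::FILE* out = std::fopen("rho_step0001.zz", "wb");
    std::fwrite(compressed.data(), 1, dstBytes, out);
    std::fclose(out);

    std::printf("wrote %lu bytes instead of %lu (ratio %.2fx)\n",
                (unsigned long)dstBytes, (unsigned long)srcBytes,
                (double)srcBytes / dstBytes);
    return 0;
}
```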

After this very big simulation, what might come next? Do you have any plans for the “Alps” supercomputer?

LM: We have big plans for what comes next. One major target is what we call ExaPHOEBOS, namely the largest cosmological hydrodynamical simulation ever performed. Once again, we will leverage the power of SPH-EXA by combining a large sampling volume with unprecedented resolution, as in the star formation simulations now running on LUMI-G. Currently, we are running lower-resolution versions of this planned simulation, the early PHOEBOS suite, on “Piz Daint” with the legacy code ChaNGa, whose physics modules serve as the reference for the cosmology physics modules of SPH-EXA. Once scaled up, ExaPHOEBOS will set a new standard in the field, deepening our understanding of the early stages of galaxy formation and evolution as well as of the properties of cold hydrogen gas, the most abundant gas in the universe, both inside and outside galaxies. These objectives are in line with the science goals of SKA, which Switzerland joined in 2022 and in which it is now represented through the Swiss SKACH Consortium, to which the PASC team also belongs. The new tools will also be useful for explaining and interpreting the surprising discoveries about the earliest galaxies in the universe made by the James Webb Space Telescope over the past year.

RC: The current situation, in which we have a mature code capable of leveraging the computational power of the world’s largest machines, coupled with the rapid advancement of powerful computational infrastructures, opens the door to a wide range of exciting, cutting-edge scientific opportunities. I have a personal interest in pushing the boundaries of the simulations currently performed with the SPHYNX code, thanks to SPH-EXA. SPHYNX serves as the benchmark for implementing the hydrodynamics and astrophysics modules in SPH-EXA, but it cannot provide such extreme scalability. I aim to extend its current simulations with SPH-EXA to achieve exceptionally high resolution in several demanding scenarios. One of those is the so-called D6 scenario, a recently favoured model to explain a sizable fraction of Type Ia supernova explosions, in which the interaction of two compact stars triggers the supernova and, potentially, leaves behind a fast-moving remnant white dwarf. This scenario challenges common beliefs about the symmetry of such explosions, which could have other interesting implications for astrophysics and cosmology. Another extremely interesting scenario is the simulation of fast-rotating core-collapse supernovae and their gravitational-wave signal. To prepare ourselves for the future detection of gravitational waves produced by nearby core-collapse supernovae, it is crucial to conduct highly detailed magneto-hydrodynamical simulations that will allow us to understand the features of those gravitational waves and their impact on our knowledge of the fundamental properties of matter.

FC: In light of the ambitious and pioneering simulations that SPH-EXA is planned to undertake, we are already anticipating the evolving portability, performance, and sustainability challenges that will come with exploiting the capabilities of “Alps”. We are pioneering multilevel scheduling in SPH-EXA, with a framework currently being developed in my team, to intelligently select and combine various scheduling techniques for optimal load balancing at different stages of the simulation. This adaptability is crucial due to the dynamic nature of our simulations’ computational requirements: as the complexity of the physics involved increases, load imbalance becomes more relevant and can even become the predominant challenge, particularly when physical processes are computationally demanding in only some regions of the simulated domain and vary drastically with time, such as nuclear reactions in supernova explosions or collapsing regions in star formation simulations. We are also studying the sustainability aspects of our simulations on current top-tier systems. Our plan is to pave the way for sustainable simulations on future systems such as “Alps” by balancing computational efficiency with environmental considerations in a responsible manner. This will require accurate energy measurements at the code, node, and system levels and their systematic and seamless integration into code design, development, and deployment decisions. Moreover, the rich scientific and performance data generated from these simulations present a unique opportunity: we plan to create sophisticated visualizations that integrate both types of data, offering insights at large-to-extreme scales. This will advance not only our scientific understanding of astrophysical phenomena but also our understanding of performance phenomena at scale, enhancing our ability to optimize performance in highly complex computing environments.
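To make the load-imbalance point concrete, the following minimal MPI sketch (an illustration under simplifying assumptions, not SPH-EXA's multilevel scheduling framework or its domain decomposition) shows how a code can measure per-rank work for a time step and decide when a repartitioning, for example shifting particles along a space-filling curve, would be warranted.

```cpp
// Minimal sketch of detecting load imbalance across MPI ranks and deciding when
// to rebalance. Illustration only: SPH-EXA's actual scheduler is not reproduced here.
// Build and run with: mpicxx imbalance.cpp && mpirun -n 4 ./a.out
#include <cmath>
#include <cstdio>
#include <mpi.h>

// Placeholder for one simulation time step; in practice the work per rank varies
// strongly, e.g. when collapsing regions are concentrated on a few ranks.
double doLocalStep(int rank)
{
    double x = 0.0;
    for (long i = 0; i < 1000000L * (rank + 1); ++i) x += std::sqrt((double)i);
    return x; // returned so the compiler does not discard the loop
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double t0    = MPI_Wtime();
    double work  = doLocalStep(rank);
    double local = MPI_Wtime() - t0;

    // Imbalance factor = slowest rank / average rank; 1.0 means perfect balance.
    double maxT = 0.0, sumT = 0.0;
    MPI_Allreduce(&local, &maxT, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);
    MPI_Allreduce(&local, &sumT, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double imbalance = maxT / (sumT / nranks);

    if (rank == 0)
    {
        std::printf("imbalance factor: %.2f (local work %g)\n", imbalance, work);
        // In a real code this would trigger a repartitioning of particles
        // (e.g. shifting space-filling-curve boundaries) rather than a printout.
        if (imbalance > 1.10) std::printf("would rebalance now\n");
    }

    MPI_Finalize();
    return 0;
}
```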

Members of the research group:

  • At CSCS, computational scientists Sebastian Keller, Jonathan Cole, and Jean-Guillaume Piccinali are closely involved with the SPH-EXA team. Keller and Cole contribute to software development and simulations, while Piccinali contributes to data management and CI/CD.
  • At the University of Basel, in the team of Florina Ciorba, and in collaboration with Rubén Cabezón from sciCORE, computer scientist Osman Seckin Simsek is responsible for software development, simulations, data analysis, and performance engineering, specifically focusing on load balancing and energy efficiency improvements. Computational scientist Yiqing Zhu is responsible for scientific and performance visualization of the simulations, with a focus on compression, parallel I/O, and in-situ data processing. Computational Sciences undergraduate student Lukas Schmidt implements initial conditions and observables for several hydrodynamic tests to validate and verify SPH-EXA.
  • At the University of Zurich, PhD student Noah Kubli in the team of Lucio Mayer is implementing the physics modules, inspired by legacy codes such as ChaNGa, for simulations of galaxy, star, and planet formation. Among these are radiative cooling and gas chemistry, as well as sub-grid models for star formation and stellar feedback on galactic scales. Research scientists Darren Reed and Pedro Capelo help design initial conditions for cosmological simulations and design and employ tests to validate the code as new physics modules are progressively added.



Author: Simone Ulmer, CSCS, Switzerland