
LUMI powers AI assistant development for privacy-preserving machine learning models

Professor Antti Honkela from the University of Helsinki, Finland, and Professor Samuel Kaski from Aalto University, Finland, and the University of Manchester, UK, are part of the Finnish Center for Artificial Intelligence FCAI. Their Japanese collaboration partner is from RIKEN, which hosts Japan’s national flagship supercomputer, Fugaku.

– We’re collaborating with Professor Jun Sakuma from RIKEN and the Tokyo Institute of Technology. The collaboration builds on earlier work on privacy-preserving machine learning with Japanese researchers, Honkela and Kaski say.

Developing a tool for private machine learning models

The project of Professors Honkela and Kaski naturally relates to artificial intelligence, more specifically to privacy-preserving machine learning.

– Machine learning models have been shown to be prone to memorizing their training data. This can cause problems if the training data contains personal data or other sensitive information, such as health data: if the model is subsequently made available to others, they may be able to recover the sensitive data, the researchers explain.

The project aims to resolve this conundrum:

– The memorization can be avoided by employing differential privacy in model training. Unfortunately, this can reduce the accuracy of the model. Careful adjustment of the training process can minimize the loss of accuracy, but this can be computationally expensive and requires expertise. The ultimate aim of our project is to develop an artificial intelligence (AI) assistant for differentially private machine learning. The assistant will allow less experienced users to train strong models without excessive computation, they elaborate.
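The core mechanism behind differentially private training, as described above, is to bound each record’s influence on the model and then add calibrated noise. The sketch below illustrates this idea with a minimal DP-SGD-style loop on a toy linear model; it is an illustrative example under assumed hyperparameters (`clip_norm`, `noise_mult`), not the project’s actual method or tooling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear regression with a small synthetic dataset.
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=100)

def dp_sgd(X, y, clip_norm=1.0, noise_mult=1.0, lr=0.1, epochs=50, batch=10):
    """Differentially private SGD sketch: clip each per-example gradient,
    then add Gaussian noise scaled to the clipping norm."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.choice(n, size=batch, replace=False)
        # Per-example gradients of the squared-error loss.
        grads = 2 * (X[idx] @ w - y[idx])[:, None] * X[idx]
        # Clip each gradient so no single record dominates the update.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip_norm)
        # Noise calibrated to the per-example sensitivity (clip_norm).
        noise = rng.normal(scale=clip_norm * noise_mult, size=d)
        w -= lr * (grads.sum(axis=0) + noise) / batch
    return w

w_dp = dp_sgd(X, y)
print(w_dp)
```

The trade-off the researchers mention is visible here: larger `noise_mult` gives stronger privacy but noisier updates, and tuning such parameters well is exactly the expertise the planned AI assistant is meant to supply.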

AI research requires powerful platforms

LUMI, one of the world’s leading AI platforms, also plays a significant role in this project.

– We use LUMI to train many models for different machine learning tasks. Collecting this information allows us to distill the knowledge of how to perform effective training in different tasks and ultimately train our AI assistant. The assistant will be developed iteratively so that its capabilities will increase as the project progresses, Professors Honkela and Kaski describe.

They underline the importance of suitable platforms for artificial intelligence research in general:

– Research in machine learning and AI has become extremely compute-intensive during the past few years. World-class computational research infrastructure allows us to work on state-of-the-art models and problems that are relevant to users of the methods, rather than smaller, inferior models that are computationally cheaper but usually not practically relevant, they clarify.

In this blog series, we delve into seven international research collaboration projects between Finland and Japan, which use the LUMI supercomputer to address global challenges and top-level research topics in different fields. Join us as we interview the project leads and hear how these collaborations came to life and how they use LUMI for cutting-edge research in their field!

Read also the previous parts of the blog series:

Developing large computer model ensembles with LUMI to simulate ice flows in the Antarctic

LUMI powers the study of light scattering in space

Accelerating the discovery of materials with LUMI to advance clean energy and zero-emission vehicles

LUMI used to simulate supernova explosions

Harnessing LUMI to analyse greenhouse gas emissions

Authors: Maari Alanko, Elisa Halonen and Pihla Kauranen, CSC – IT Center for Science

Image: Jamillah Knowles / Better Images of AI / Data People / CC-BY 4.0