
Gérard Biau

Professor

Sorbonne University

gerard.biau@sorbonne-universite.fr

Prof. Dr. Gérard Biau

Introduction

My research: statistics, machine learning, physics-informed machine learning.

My expertise is in statistics and machine learning.

A problem I’m grappling with: understanding how to incorporate physical knowledge into machine learning models.

I’ve got my eyes on: machine learning for science.

I want to know more about: the connections between physics and machine learning.

Research Interest

Gérard Biau is a full professor at the Probability, Statistics, and Modeling Laboratory (LPSM) of Sorbonne University in Paris. His research focuses on developing new methodologies and rigorous mathematical theory for statistical learning and artificial intelligence. He was a junior member of the Institut Universitaire de France from 2012 to 2017 and served as president of the French Statistical Society from 2015 to 2018. In 2018, he received the Michel Monpetit - Inria Prize from the French Academy of Sciences. He is currently the director of the Sorbonne Center for Artificial Intelligence (SCAI) and a senior member of the Institut Universitaire de France. His recent work explores the interplay between physics and deep neural networks, aiming to harness their combined potential.

Projects

Extrapolation in ML

Address the extrapolation challenges that high-energy physicists currently face when using ML techniques. We intend to propose new analytic descriptions within the ML model and to push further the logic of infusing more physics into ML in the LHC domain.
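As a rough illustration of what infusing physics into an ML model can look like in general, here is a minimal physics-informed loss sketch in PyTorch. Everything in it is an assumption for illustration, not the project's actual method: the toy law du/dt = -k·u, the network shape, and the loss weighting are all hypothetical.

import torch
import torch.nn as nn

# Hypothetical example: fit u(t) while penalizing violation of the assumed
# physical law du/dt = -k * u (illustrative only, not the project's model).
k = 2.0
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

t_data = torch.rand(64, 1)                       # observation times
u_data = torch.exp(-k * t_data)                  # synthetic observations
t_phys = torch.rand(256, 1, requires_grad=True)  # collocation points

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    # Data term: match the observations.
    loss_data = ((net(t_data) - u_data) ** 2).mean()
    # Physics term: penalize the ODE residual du/dt + k*u at collocation points.
    u = net(t_phys)
    du_dt = torch.autograd.grad(u, t_phys, torch.ones_like(u), create_graph=True)[0]
    loss_phys = ((du_dt + k * u) ** 2).mean()
    loss = loss_data + 1.0 * loss_phys           # the weighting is a free choice
    loss.backward()
    opt.step()

The same pattern, a data term plus a penalty enforcing a known analytic relation, is one common way to constrain how a model behaves outside the training region.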

Comprehensive uncertainties for generative models

Develop a method to include uncertainties, starting from Bayesian generative networks; expand strategies to model systematic uncertainties using conditional training on nuisance parameters; extend the NNPDF methodology for architecture-driven and parameter-driven uncertainties to generative models; and study the effect of guided implicit bias on amplification factors between training and generated sample size.
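One ingredient named above, conditional training on nuisance parameters, admits a minimal sketch. The PyTorch toy below fits a conditional Gaussian density p(x | nu) and then samples events under a shifted nuisance value; the simulator, the network, and the name nu are illustrative assumptions, not the project's architecture or data.

import torch
import torch.nn as nn

# Hypothetical toy: events x drawn from a distribution whose location shifts
# with a nuisance parameter nu (e.g. a calibration shift; names illustrative).
def simulate(n):
    nu = torch.rand(n, 1) * 2 - 1           # nuisance in [-1, 1]
    x = torch.randn(n, 1) + 0.5 * nu        # observable depends on nu
    return x, nu

# Conditional density network: maps nu -> (mean, log-variance) of p(x | nu).
net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x, nu = simulate(256)
    mu, log_var = net(nu).chunk(2, dim=1)
    # Gaussian negative log-likelihood of x given nu (up to a constant).
    nll = 0.5 * (log_var + (x - mu) ** 2 / log_var.exp()).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()

# Sampling under a chosen systematic variation, e.g. nu shifted to +1:
nu_up = torch.ones(1000, 1)
mu, log_var = net(nu_up).chunk(2, dim=1)
samples = mu + log_var.exp().sqrt() * torch.randn_like(mu)

Because the generator is conditioned on nu, a systematic variation becomes a change of the conditioning input at sampling time, which is the general idea behind propagating nuisance parameters through a generative model.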