Raphaël Bonnet-Guerrini

Introduction
My research:
Explainable and interpretable methods in artificial intelligence for applications in fundamental physics
My expertise is:
Machine Learning for applications in scientific research
A problem I’m grappling with:
Adapting standard XAI methods to regression problems (a minimal sketch follows below).
I’ve got my eyes on:
Underexploited notions of interpretability in neural networks, white-box models, optimization of ML techniques, HEP, and cosmology.
I want to know more about:
The different needs of physicists in terms of trust in, and interpretation of, their AI models.
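
To make the regression problem above concrete, here is a minimal sketch of applying a standard post-hoc XAI method (Shapley values via the shap package) to a regression model. The dataset, the scikit-learn GradientBoostingRegressor, and the use of TreeExplainer are illustrative assumptions, not part of the work described in this profile; the point is only that, for regression, the attributions are read directly in the units of the predicted target.

import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic regression data as a stand-in for a physics observable.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer yields exact Shapley values for tree ensembles; in regression,
# each value is a signed contribution, in target units, to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])

# Sanity check: the base value plus the per-feature contributions should
# reconstruct the model output for the first sample.
base_value = np.ravel(explainer.expected_value)[0]
print(model.predict(X[:1])[0])
print(base_value + shap_values[0].sum())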
Projects
Explainable AI for Online and Transferable Learning
Develop XAI techniques for online and transfer learning applied to experimental physics data and to signal and image understanding; handle massive real-time sensor data efficiently in online and transfer learning; and develop XAI techniques for high robustness and accuracy in multi-sensor environments.