SAFE AI Chair


SAFE AI chair "Robust and cautious learning for a safer AI (SAFE AI)"
SAFE AI is a five-year research program led by Sébastien Destercke (CR CNRS, Heudiasyc, UTC), in partnership with the "Fondation UTC pour l'innovation", SOPRA STERIA, the université de technologie de Compiègne, CNRS and SCAI (the Sorbonne Center for Artificial Intelligence).
The chair is centred on the concept of safe AI, which can be considered a specific concern of trustworthy AI. It focuses on making AI more robust and reliable.
The notion of trustworthy AI includes many aspects: transparency, ethics, explainability and, last but not least, robustness and reliability. In one sentence, we aim to obtain safe AI models by quantifying the uncertainty of the models themselves, of their predictions and of the data they use, so as to guarantee their robustness and reliability. Such guarantees are essential in industry and society: production defect detection, obstacle avoidance, medical diagnosis for a given patient, ...
The SAFE AI chair works at both the theoretical and the practical level, ensuring a connection between academia and industry. To achieve this goal, the chair includes, in addition to the Heudiasyc laboratory (whose core expertise covers uncertainty quantification in AI) and the LMAC laboratory (whose expertise lies in applied mathematics, including statistics), three laboratories whose expertise covers application fields linked to the chair's goals.
Those three laboratories are:
- the Roberval laboratory, specialised in mechanics, mechanical engineering and Industry 4.0;
- the BMBI laboratory, specialised in biomechanics and e-health;
- the Avenues laboratory, specialised in smart cities and urban engineering.
Within the chair program, at least one research engineer will be recruited, whose main tasks will include applying robust and safe AI methods to case studies, possibly going beyond proofs of concept. Ideally, we aim to create an AI engineering team or a start-up dedicated to robustness in AI, benefiting all partners.
The chair combines theoretical work, producing new, generic AI methods and tools able to answer the practical needs of industry and the public at large, with the application of those same methods to real case studies, so as to test their fitness for real-world challenges.
The goal of the first research axis is to develop methods able to produce predictions whose error rate is finely controlled, with the aim of increasing the confidence one can have in the produced model, advancing towards integrity and certification. In particular, this axis will focus on providing instance-wise, or individual, guarantees, and not just average ones. It will also address the issue of producing predictions in complex output spaces (sequential data, images, ...). One standard technique from this axis, conformal prediction, is sketched after the lists below.
Keywords
- Calibration
- Statistical guarantees
- Uncertainty Quantification
- Conformal prediction
- Learning with abstention
Some areas of application
- Industry 4.0 (defect/anomaly prediction)
- Autonomous/intelligent transports (obstacle recognition)
- Smart cities and energy grids (consumption prediction)
- Health (medical diagnosis)
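To make instance-wise guarantees concrete, here is a minimal sketch of split conformal prediction in Python. It is an illustration under simplifying assumptions (a toy dataset and an off-the-shelf scikit-learn classifier), not the chair's specific method: a model is calibrated on held-out data so that its prediction sets contain the true label with a chosen probability.

```python
# Minimal split conformal prediction sketch (illustrative only).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Nonconformity score on calibration data: 1 - probability of the true class.
cal_probs = model.predict_proba(X_cal)
scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

# Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
alpha = 0.1  # target miscoverage: sets may miss the true label at most 10% of the time
n = len(scores)
threshold = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

def predict_set(x):
    """Return every label whose nonconformity score stays below the threshold.

    Under exchangeability of calibration and test data, this set contains
    the true label with probability at least 1 - alpha (on average)."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return np.where(1.0 - probs <= threshold)[0]

# Ambiguous inputs yield larger sets: the model signals what it does not know.
print(predict_set(X_cal[0]))
```

The attractive property for certification purposes is that this coverage guarantee is distribution-free: it holds whatever the underlying model, which is precisely the kind of statistical guarantee this axis pursues.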

The goal of the second research axis is to obtain models that are robust either to imperfections in the available data ("small and bad" rather than big), or to the fact that the AI's working environment differs from the one in which it was trained, for instance when going from simulations (in silico) or fully controlled environments (in vitro) to a real-world environment (in vivo), or when new classes appear once training is done. A sketch of novelty detection, one such safeguard, follows the lists below.
Keywords
- Transfer learning
- Robust optimisation
- Self-learning
- Optimal transport
- Partial or missing data
- Novelty detection
Possible areas of application
- Autonomous driving and flying (from simulation to the real world)
- Health (from model to patient)
- Industry 4.0 (detecting new defects)
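As an illustration of the kind of safeguard this axis studies, the sketch below flags inputs that fall outside the training distribution before trusting the model's prediction. The detector choice (scikit-learn's LocalOutlierFactor) and the simulated data are illustrative assumptions, not the chair's actual pipeline.

```python
# Illustrative novelty detection sketch: refuse to trust predictions on
# inputs that do not resemble the controlled ("in vitro") training data.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # controlled training data
X_new = np.vstack([
    rng.normal(0.0, 1.0, size=(5, 2)),  # in-distribution samples
    rng.normal(6.0, 1.0, size=(5, 2)),  # shifted "in vivo" samples, or a new class
])

# novelty=True fits the detector on training data and scores unseen points.
detector = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X_train)
flags = detector.predict(X_new)  # +1 = looks familiar, -1 = novel

for x, flag in zip(X_new, flags):
    action = "trust the model" if flag == 1 else "abstain / alert an operator"
    print(f"{x.round(2)} -> {action}")
```

Coupling such a detector with a predictor gives a simple cautious system: the model only answers on inputs resembling what it was trained on, and defers otherwise.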

The goal of the third axis is to improve the (predictive) quality of learned models, either through collaboration between models (for instance, models relying on different sensors, attributes or modalities) or through human-model collaboration (by soliciting human experts only when needed). A sketch of the latter follows the lists below.
Keywords
- Co-training
- Self-supervised learning
- Active learning
- Classifier merging and fusion
Possible areas of application
- E-health (smart home)
- Smart cities and transport (multiple sensors with possible unavailabilities and defects).
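The sketch below illustrates human-model collaboration through uncertainty sampling, the simplest form of active learning: the model asks a (here simulated) human expert for a label only on the instances it is least sure about. The synthetic data, the logistic model and the margin criterion are illustrative assumptions.

```python
# Illustrative active learning sketch: query the expert only when needed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# Small initial labeled pool containing both classes; the rest is unlabeled.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression().fit(X[labeled], y[labeled])
for _ in range(20):  # budget of 20 queries to the "expert"
    probs = model.predict_proba(X[pool])
    margins = np.abs(probs[:, 1] - probs[:, 0])  # small margin = high uncertainty
    query = pool.pop(int(np.argmin(margins)))    # most uncertain instance
    labeled.append(query)                        # the oracle here is simply y[query]
    model.fit(X[labeled], y[labeled])

print(f"accuracy after only {len(labeled)} labels:", round(model.score(X, y), 3))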
The holder of the SAFE AI chair is Sébastien Destercke, a CNRS junior researcher at the Heudiasyc laboratory of UTC (UMR UTC CNRS 7253).
His speciality: quantifying uncertainty and reasoning with it, notably with data and using AI approaches. He works within the Heudiasyc laboratory (Heuristique et Diagnostic des Systèmes Complexes, UMR CNRS 7253), whose main research domains include computer and information sciences, among them artificial intelligence, robotics and control.
The chair also involves many researchers and lecturers from UTC laboratories.
SAFE AI currently gathers five partners:
- Fondation UTC pour l'innovation
- SOPRA STERIA, founding patron of the "Fondation UTC pour l'innovation"
- SCAI (The Sorbonne Center for Artificial Intelligence)
- CNRS (Centre National de la Recherche Scientifique)
- UTC (université de technologie de Compiègne)


On 11 May, Sébastien presented existing and ongoing work on the problem of learning from uncertainly observed data, describing some of the challenges that come with such a problem, but also how it can be used to improve AI models in general by better accounting for uncertainties. If you missed it, some information can be found here.


Patrick Dupin, President of the "Fondation UTC pour l'innovation", and Mohammed Sijelmassi, CTO of Sopra Steria, officially launched the industrial chair "Robust and cautious learning for a safer AI" on 3 January 2022, in the presence of Christophe Guy, UTC director, on Sopra Steria's premises in Paris.
Contact
Hélène
Phone: 03 44 23 44 23
Contact by email