Sebastien Destercke

research

My research mainly concerns uncertainty modeling and treatment with the help of imprecise probabilistic approaches (lower previsions theory, Dempster-Shafer theory, possibility theory). Roughly speaking, these approaches blend interval and probabilistic methods to deal with situations of severe uncertainty (missing or imprecise data, little available information, expert opinions, unreliable information, …).

Within these theories, most of my research energy is spent on proposing practical solutions to various problems using these methods, or on linking the different solutions each theory proposes for common problems (information fusion, independence modeling, …).

Work (mainly) benefiting from collaborations and discussions with D. Dubois, M. Troffaes, E. Miranda, L. Utkin, E. Chojnacki, E. Quaeghebeur and I. Sanchez

There exist many practical representations in imprecise probability theories, including possibility distributions, belief functions, imprecise probability assignments, pari-mutuel models, imprecise cumulative distributions (p-boxes), clouds, …

Aside from establishing new relations between the properties of several models (e.g., clouds, p-boxes, possibility distributions), we have proposed a model called the generalized p-box, which models uncertainty by probabilistic bounds over a collection of nested sets. Such models appear naturally in elicitation procedures and statistical confidence structures, and first results indicate that generalized p-boxes may be an interesting non-parametric model to handle multivariate problems and/or bipolar information.
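As an illustration, an ordinary discrete p-box can be encoded as a pair of lower/upper cumulative distributions from which probability bounds on events of the form X ≤ x are read off directly. The sketch below uses illustrative names and a toy example, not any specific library:

```python
def pbox_bounds(values, F_lower, F_upper, x):
    """Return (P_lower, P_upper) for the event X <= x.

    A discrete p-box is encoded here as two cumulative distributions
    F_lower <= F_upper over an ordered finite space; the probability
    bounds on X <= x are read off directly from them.
    """
    p_lo, p_hi = 0.0, 0.0
    for v, lo, hi in zip(values, F_lower, F_upper):
        if v <= x:                 # keep the cumulative bounds at the
            p_lo, p_hi = lo, hi    # largest value not exceeding x
    return p_lo, p_hi

# Toy example: three ordered outcomes with imprecise cumulative bounds.
values = [1, 2, 3]
F_low  = [0.1, 0.4, 1.0]   # lower cumulative distribution
F_high = [0.3, 0.7, 1.0]   # upper cumulative distribution
print(pbox_bounds(values, F_low, F_high, 2))  # (0.4, 0.7)
```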

Our latest research on the topic includes the specification of p-box limitations, as well as the investigation of some specific cases of comparative probabilities (where only singleton probabilities are qualitatively compared).

Work (mainly) benefiting from collaborations and discussions with D. Dubois, P. Buche, B. Charnomordic, E. Chojnacki, R. Thomopoulos, F. Sais, T. Burger and F. Pichon

Merging information from multiple sources is a recurring problem in modern systems. Common challenges in such merging are coping with dependent and conflicting sources and taking account of source characteristics (reliability, propensity to lie, precision, conflict level, …).

For practical purposes, we have proposed to use the notion of maximal coherent subsets as a way to deal with conflict among sources, and have applied it to the problem of estimating source reliability from meta-information.
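The idea can be illustrated on interval-valued sources: a subset of sources is coherent when their intervals have a non-empty intersection, and conflict is handled by keeping the maximal such subsets. A brute-force sketch (illustrative names; exponential enumeration, so only suited to small numbers of sources):

```python
from itertools import combinations

def consistent(intervals):
    """Intervals (lo, hi) are consistent if their intersection is non-empty."""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    return lo <= hi

def maximal_coherent_subsets(intervals):
    """Enumerate maximal consistent subsets of sources (brute force)."""
    n = len(intervals)
    subsets = []
    for size in range(n, 0, -1):            # largest subsets first
        for idx in combinations(range(n), size):
            if consistent([intervals[i] for i in idx]):
                s = set(idx)
                # keep s only if no previously found subset contains it
                if not any(s < t for t in subsets):
                    subsets.append(s)
    return subsets

# Three sources; source 2 conflicts with the other two.
sources = [(0.0, 2.0), (1.0, 3.0), (5.0, 6.0)]
print(maximal_coherent_subsets(sources))  # [{0, 1}, {2}]
```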

Our current focus in this area concerns the characterization of inconsistency and reliability degrees resulting either from assumptions about the source of information or from the combination of the different pieces of information. Such degrees can then be used to guide the fusion process.

Work (mainly) benefiting from collaborations and discussions with D. Dubois, G. De Cooman, E. Chojnacki, J. Baccou, T. Burger, M. Sallak, M.C.M. Troffaes, F. Coolen, S. Ferson, F. Aguirre and I. Sanchez

How to propagate uncertainty through various models is an important issue that raises several difficulties. Most of my research in this domain has concerned the propagation of uncertainty models through deterministic functions, with methods combining Monte-Carlo simulation and interval analysis, for industrial risk-assessment purposes.
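A minimal sketch of such a hybrid scheme, under simplifying assumptions of my own (each uncertain input is a finite random set with equiprobable focal intervals, and the function is monotone increasing in every input so that interval bounds reduce to endpoint evaluations; names and the sampling scheme are illustrative):

```python
import random

def propagate(f, focal_sets, n_samples=1000, seed=0):
    """Hybrid Monte-Carlo / interval propagation (sketch).

    Each run samples one focal interval per input, then bounds f over
    the sampled box.  f is assumed monotone increasing in every input,
    so the bounds are endpoint evaluations.  Returns bounds on the
    expectation of f.
    """
    rng = random.Random(seed)
    low_sum, high_sum = 0.0, 0.0
    for _ in range(n_samples):
        box = [rng.choice(fs) for fs in focal_sets]  # one interval per input
        low_sum += f(*(lo for lo, _ in box))         # image of lower endpoints
        high_sum += f(*(hi for _, hi in box))        # image of upper endpoints
    return low_sum / n_samples, high_sum / n_samples

# Two uncertain inputs, each a random set with two equiprobable intervals.
fs_x = [(1.0, 2.0), (1.5, 2.5)]
fs_y = [(0.0, 1.0), (0.5, 1.5)]
lo, hi = propagate(lambda x, y: x + y, [fs_x, fs_y])
print(lo <= hi)  # True: the lower expectation bound never exceeds the upper
```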

Related problems are how to model independence so as to obtain tractable joint models (and how to compute with such joint models), and how to simulate a given imprecise probabilistic model.

Some of our recent research also deals with the problem of efficiently evaluating system reliability when the component reliabilities are uncertain and modelled by imprecise probabilistic knowledge (more specifically, belief functions).

Work (mainly) benefiting from collaborations and discussions with B. Quost, T. Denoeux, B. Ben Yaghlane, N. Sutton-Charani, G. Yang, M. Masson, E. Hüllermeier, A. Antonucci, G. Corani, M. Poss, V-L. Nguyen, Y. Alarcon and N. Ben Abdallah

Besides extending some classical classifiers (k-NN methods, naive Bayes classifiers) to imprecise probabilistic settings, our work currently focuses on the combination of classifiers, to address both the usual multi-class classification problem and more complex problems such as label ranking and multi-label classification.

Our research currently focuses on three issues:

- Learning from imprecise/soft data, where we propose methods to learn from uncertain data in a general way;
- Robust/skeptical inference for complex problems, where we try to produce cautious models, that is, models delivering set-valued rather than point-valued predictions, for complex problems typically presenting a combinatorial structure. This includes in particular multi-task learning problems such as multi-target regression, multi-label classification, or learning-to-rank;
- Instrumentalizing imprecision in learning, where we try to identify those learning scenarios in which using imprecision can actually improve the results compared to more traditional (e.g., probabilistic) methods.
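For instance, under interval dominance (one classical decision rule in imprecise probability), a cautious classifier returns every class that is not dominated by another, i.e. whose upper probability is not below some other class's lower probability. A minimal sketch with illustrative names:

```python
def interval_dominance(prob_intervals):
    """Set-valued prediction sketch under interval dominance.

    prob_intervals maps each class to (lower, upper) probability bounds.
    A class is kept unless some other class's lower probability exceeds
    its upper probability.
    """
    keep = []
    for c, (lo_c, hi_c) in prob_intervals.items():
        dominated = any(lo_o > hi_c
                        for o, (lo_o, _) in prob_intervals.items() if o != c)
        if not dominated:
            keep.append(c)
    return sorted(keep)

# Imprecise class probabilities: "a" and "b" overlap, "c" is dominated.
intervals = {"a": (0.35, 0.55), "b": (0.30, 0.50), "c": (0.05, 0.20)}
print(interval_dominance(intervals))  # ['a', 'b']
```

When the intervals are precise (lower = upper), the rule reduces to returning the single most probable class, so the set-valued output only widens where the model is genuinely uncertain.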

Work (mainly) benefiting from collaborations and discussions with P. Buche, B. Charnomordic, O. Strauss, V. Guillard, E. Chojnacki, M. Sallak, I. Thouvenin

We have applied ideas coming from imprecise probability theories and more generally concerning uncertainty handling to a number of frameworks, including:

- Flexible querying in databases (P. Buche, V. Guillard)
- Signal filtering with kernels (O. Strauss, F. Comby, A. Rico)
- Knowledge Engineering (B. Charnomordic, R. Thomopoulos)
- Risk analysis and robust design (E. Chojnacki, V. Guillard, M. Sallak)
- Process modelling (C. Baudrit)
- Virtual training (I. Thouvenin)