by Yves GRANDVALET, CNRS Research Director (DR)
Machine learning aims to discover regularities from examples. In this process, sparsity can be introduced from different perspectives. For a given task, it can target (1) computational efficiency, by avoiding the processing of insignificant pieces of information; (2) interpretability, by putting forward the salient pieces of information; (3) prediction accuracy, by introducing an inductive bias that prevents overfitting to the training examples. I will consider two facets of sparsity, corresponding to the two dimensions of the data table that represents the training sample: examples and variables. We will see how these two approaches can be formalized and motivated from a theoretical point of view. I will then focus on some properties and practical issues, and finally conclude with some open questions on the topic.
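As a concrete illustration of the two facets mentioned above (not part of the original abstract), the minimal sketch below contrasts sparsity over variables and sparsity over examples on a toy regression problem. It assumes scikit-learn is available; the lasso, the epsilon-insensitive SVM, and all parameter values (alpha, C, epsilon) are illustrative choices, not the methods discussed in the talk.

```python
# Illustrative sketch: two facets of sparsity on a toy regression problem.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.normal(size=(n, d))
# Only the first three variables matter; the remaining ones are noise.
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

# Sparsity over variables: the l1 penalty of the lasso drives most
# coefficients exactly to zero, selecting a subset of the variables.
lasso = Lasso(alpha=0.1).fit(X, y)
print("selected variables:", np.flatnonzero(lasso.coef_))

# Sparsity over examples: an epsilon-insensitive SVM expresses its
# prediction through a subset of the training examples (support vectors).
svr = SVR(kernel="rbf", C=1.0, epsilon=0.2).fit(X, y)
print("support vectors: %d out of %d examples" % (len(svr.support_), n))
```

In this toy setting, the lasso typically retains only the informative variables (favoring interpretability and an inductive bias against overfitting), while the SVM's prediction depends only on the retained examples (favoring computational efficiency at test time).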