Model selection via penalization, resampling and cross-validation, with application to change-point detection

Tutorial given within the non-stationarity part of the cycle "Non stationnarité en statistiques et gestion des risques" (non-stationarity in statistics and risk management) at Cergy University (near Paris), from January 30th to February 2nd, 2012. Since this tutorial mostly comes from the "Cours Peccot" lectures I gave in January 2011 at the Collège de France (Paris), you may also have a look at my lecture notes for the Cours Peccot (in French).


For most estimation or prediction tasks in statistics, many estimators are available, and each estimator usually depends on one or several parameters whose calibration is crucial for optimizing statistical performance.

These lectures will address the problem of data-driven estimator selection, focusing mostly (but not only) on the model selection problem, where all estimators are least-squares estimators. We will in particular tackle the problem of detecting changes in the mean of a noisy signal, which is a particular instance of change-point detection.
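As a toy illustration of this setting (a sketch of mine, not material from the lectures), detecting changes in the mean of a noisy signal by penalized least squares can be phrased as follows: for each number of segments k, find the segmentation minimizing the residual sum of squares by dynamic programming, then select k by minimizing RSS(k) plus a penalty proportional to k. The function names and the per-segment penalty below are illustrative choices, not a prescription from the lectures.

```python
import numpy as np

def best_segmentations(y, k_max):
    """Dynamic programming: for each k <= k_max, the least-squares
    segmentation of y into k segments (each fitted by its mean)."""
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))
    s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def seg_cost(i, j):
        # Residual sum of squares of y[i:j] around its empirical mean
        s, q, m = s1[j] - s1[i], s2[j] - s2[i], j - i
        return q - s * s / m

    dp = np.full((k_max + 1, n + 1), np.inf)   # dp[k][j]: best cost of y[:j] in k segments
    arg = np.zeros((k_max + 1, n + 1), dtype=int)
    dp[0][0] = 0.0
    for k in range(1, k_max + 1):
        for j in range(k, n + 1):
            costs = [dp[k - 1][i] + seg_cost(i, j) for i in range(k - 1, j)]
            best = int(np.argmin(costs))
            dp[k][j] = costs[best]
            arg[k][j] = best + (k - 1)         # start index of the last segment
    return dp, arg

def select_k(y, k_max, penalty_per_segment):
    """Penalized model selection: argmin_k RSS(k) + penalty * k,
    with an arbitrary linear penalty chosen for illustration."""
    dp, arg = best_segmentations(y, k_max)
    n = len(y)
    crit = [dp[k][n] + penalty_per_segment * k for k in range(1, k_max + 1)]
    k_hat = int(np.argmin(crit)) + 1
    # Backtrack the segment starts; drop the leading 0 to get change points
    cps, j = [], n
    for k in range(k_hat, 0, -1):
        j = arg[k][j]
        cps.append(j)
    return k_hat, sorted(cps)[1:]
```

For instance, on a signal that jumps from 0 to 3 at index 50, `select_k` with a small penalty recovers two segments and the change point at 50. The choice of the penalty constant is precisely the kind of calibration question the lectures address.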

We will focus on two main kinds of questions. Which theoretical results can be proved for these selection procedures, and how can these results help practitioners choose a selection procedure for a given statistical problem? How can theory help design new selection procedures that improve on existing ones?

The series of lectures will be split into three main parts:
1. Model selection via penalization, with application to change-point detection
2. Resampling methods for penalization, and robustness to heteroscedasticity in regression
3. Cross-validation for model/estimator selection, with application to detecting changes in the mean of a signal
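To make the third theme concrete, here is a minimal sketch (my own illustration, assuming a fixed-design regression setting with regressogram estimators on regular partitions) of V-fold cross-validation used for model selection: the model dimension is chosen to minimize the averaged held-out squared error. All function names and parameters are hypothetical.

```python
import numpy as np

def regressogram_fit(x, y, d):
    """Least-squares piecewise-constant fit on the regular partition
    of [0, 1) into d bins; returns a prediction function."""
    bins = np.minimum((x * d).astype(int), d - 1)
    means = np.zeros(d)
    for b in range(d):
        sel = bins == b
        # Empty bins fall back to the global mean (an arbitrary convention)
        means[b] = y[sel].mean() if sel.any() else y.mean()
    def predict(x_new):
        return means[np.minimum((x_new * d).astype(int), d - 1)]
    return predict

def vfold_cv_select(x, y, dims, v=5, seed=0):
    """V-fold cross-validation: pick the dimension minimizing the
    average squared prediction error on held-out folds."""
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(x)) % v      # balanced random fold labels
    scores = []
    for d in dims:
        err = 0.0
        for f in range(v):
            tr, te = folds != f, folds == f
            pred = regressogram_fit(x[tr], y[tr], d)
            err += np.mean((y[te] - pred(x[te])) ** 2)
        scores.append(err / v)
    return dims[int(np.argmin(scores))], scores
```

On a noiseless step function jumping at 1/2, this procedure selects the smallest dimension whose partition contains the true change point. How such CV-based choices compare to penalization, in particular under heteroscedastic noise, is one of the questions the lectures discuss.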
