Sarah Filippi


Since June 2011, I have held a Medical Research Council Fellowship, working on a range of topics in mathematical modelling and computational statistics, focused on understanding biological processes and their role in disease.

I originally studied mathematics and stochastic modelling at the University Denis Diderot (Paris 7). I then completed my PhD on reinforcement learning and parametrized bandit models at LTCI (Laboratoire Traitement et Communication de l’Information), a joint lab of TELECOM ParisTech and CNRS, under the supervision of Olivier Cappé and Aurélien Garivier. As my PhD was funded by Orange Labs, I applied my research to cognitive radio and to ad selection for web portals. During this time, I also worked with Eric Moulines (at LTCI) and visited Csaba Szepesvári at the University of Alberta.

Current research activity

  • Development of inferential procedures for model selection and parameter estimation in complex dynamical systems using Approximate Bayesian Computation and Sequential Monte Carlo methods (a toy illustration of the underlying rejection-ABC idea is sketched after this list)
  • Robust analysis of signal transduction underlying cellular variability in stem cells
  • Statistical analysis of the perturbation of haematopoietic stem and progenitor cell development by trisomy 21, in collaboration with Professor Irene Roberts in the Centre for Haematology at Imperial College London
  • Bayesian experimental design
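As a purely illustrative aside, here is a minimal sketch of the rejection-ABC idea that underlies this line of work: it is not code from any of the papers below, and the toy Poisson model, uniform prior, observed count and tolerance are all assumptions chosen only to keep the example self-contained.

    import numpy as np

    # Minimal rejection-ABC sketch (illustrative; the toy model and settings are assumptions).
    rng = np.random.default_rng(0)

    observed = 12          # assumed observed summary statistic
    tolerance = 2.0        # assumed ABC tolerance (epsilon)
    n_target = 1000        # number of accepted posterior samples to collect

    def prior_sample():
        # Assumed uniform prior on the rate parameter.
        return rng.uniform(0.0, 30.0)

    def simulate(rate):
        # Assumed forward model: a single Poisson draw with the given rate.
        return rng.poisson(rate)

    accepted = []
    while len(accepted) < n_target:
        theta = prior_sample()
        # Accept theta if the simulated data lie within the tolerance of the observation.
        if abs(simulate(theta) - observed) <= tolerance:
            accepted.append(theta)

    print(f"Rejection-ABC posterior mean: {np.mean(accepted):.2f}")

ABC-SMC methods, such as those studied in the publications below, replace the single tolerance with a decreasing threshold schedule and propagate a weighted particle population between levels, which is where the choice of perturbation kernels and threshold schedules becomes important.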

Publications

The ecology of chronic myeloid leukemia: hematopoietic stem cell niche dynamics determine clinical outcome
Adam L. MacLean*, Sarah Filippi*, and Michael P. H. Stumpf
Proceedings of the National Academy of Sciences of the USA, 2014
* joint first-authors

A framework for parameter estimation and model selection from experimental data in systems biology using approximate Bayesian computation
Juliane Liepe, Paul Kirk, Sarah Filippi, Tina Toni, Chris P. Barnes, Michael P. H. Stumpf
Nature Protocols, 2014

Optimizing threshold-schedules for sequential approximate Bayesian computation: applications to molecular systems
Daniel Silk*, Sarah Filippi*, Michael P. H. Stumpf
Statistical Applications in Genetics and Molecular Biology, 2013
* joint first-authors

On optimality of kernels for approximate Bayesian computation using sequential Monte Carlo
Sarah Filippi, Chris Barnes, Julien Cornebise, Michael P. H. Stumpf
Statistical Applications in Genetics and Molecular Biology, 2013

Maximizing the Information Content of Experiments in Systems Biology
Juliane Liepe*, Sarah Filippi*, Michal Komorowski and Michael P. H. Stumpf
PLoS Computational Biology, 2013
* joint first-authors

Perturbation of fetal liver hematopoietic stem and progenitor cell development by trisomy 21
Anindita Roy, Gillian Cowan, Adam J. Mead, Sarah Filippi, Georg Bohn, Aristeidis Chaidos, Oliver Tunstall, Jerry K. Y. Chan, Mahesh Choolani, Phillip Bennett, Sailesh Kumar, Deborah Atkinson, Josephine Wyatt-Ashmead, Ming Hu, Michael P. H. Stumpf, Katerina Goudevenou, David O’Connor, Stella T. Chou, Mitchell J. Weiss, Anastasios Karadimitris, Sten Eirik Jacobsen, Paresh Vyas, and Irene Roberts
Proceedings of the National Academy of Sciences of the USA, 2012

Considerate approaches to constructing summary statistics for ABC model selection
Chris P. Barnes †, Sarah Filippi †, Michael P. H. Stumpf † and Thomas Thorne †
Statistics and Computing, 2012
† these authors contributed equally

Optimally Sensing a Single Channel Without Prior Information: The Tiling Algorithm and Regret Bounds
Sarah Filippi, Olivier Cappé and Aurélien Garivier
IEEE Journal of Selected Topics in Signal Processing, 2011

Parametric Bandits: The Generalized Linear Case
Sarah Filippi, Olivier Cappé, Aurélien Garivier and Csaba Szepesvári
Neural Information Processing Systems (NIPS’2010), 2010

Optimism in Reinforcement Learning Based on Kullback-Leibler Divergence
Sarah Filippi, Olivier Cappé and Aurélien Garivier
Allerton Conference, 2010

A Near Optimal Policy for Channel Allocation in Cognitive Radio
Sarah Filippi, Olivier Cappé, Fabrice Clérot, and Eric Moulines
Recent Advances in Reinforcement Learning, Lecture Notes in Computer Science, Springer, 2008

Recent Oral/Poster communications

Optimizing Threshold-Schedule and Perturbation Kernels for Sequential Approximate Bayesian Computation
at ABC in Roma, Italy – May 2013 (invited speaker)

Inference for single cell systems
at MASAMB, Imperial College London, UK – April 2013 (invited speaker)

Simulation-based Bayesian experimental design and its application to complex models in systems biology
at IoSSB, Imperial College London, UK – November 2012 (invited speaker)

Rational Threshold Schemes for Approximate Bayesian Computation via Unscented Transforms
at ISBA, Kyoto, Japan – June 2012 (poster)

How to choose summary statistics for model selection and model checking
at MCQMC, Sydney, Australia – February 2012 (invited speaker)

Optimistic Algorithms in Reinforcement Learning
at RLAI, University of Alberta, Edmonton, Canada – October 2010

Optimism in Reinforcement Learning and Kullback-Leibler Divergence
at Journées MAS, France – September 2010 (invited speaker)
and at the Statistical Machine Learning in Paris (SMILE) seminar – May 2010

Parametric Bandits: The Generalized Linear Case
at Conférence Francophone sur l’Apprentissage Automatique (CAP’2010) – May 2010
and at Journées Francophones de Planification, Décision et Apprentissage – June 2010

PhD Thesis

Title: Optimistic strategies in Reinforcement Learning
Under the supervision of: Olivier Cappé and Aurélien Garivier
At: LTCI (Laboratoire Traitement et Communication de l’Information), a joint lab of TELECOM ParisTech and CNRS
Jury: Jean-Yves Audibert, Rémi Munos, Damien Ernst, Frédérick Garcia, Eric Moulines, Fabrice Clérot
Download the manuscript here.

Teaching

From 2007 to 2010: Quadratic forms and geometry at University Paris VI
In 2007-2008: Convex optimization at ENSAE (a statistics school in Paris)