news

Feb 21, 2024 Gaussian ensembles of deep random neural networks have recently made a comeback, as it was shown that they can, in some situations, capture the generalisation behaviour of trained networks. In our recent pre-print, we provide an exact asymptotic analysis of the training of the last layer of such ensembles.
Feb 21, 2024 Resampling techniques such as bagging, the jackknife and the bootstrap are important tools for quantifying the uncertainty of estimators in classical statistics. In our recent pre-print we answer the question: are they reliable in the high-dimensional regime?
Feb 08, 2024 Neural networks are notoriously susceptible to adversarial attacks. Understanding which features in the training data are more vulnerable to such attacks, and how to protect them, is therefore an important endeavour. In our recent pre-print we introduce a synthetic model of structured data which captures this phenomenology, and provide an exact asymptotic solution of adversarial training in this model. In particular, we identify a generalisation vs. robustness trade-off, and propose some strategies to defend non-robust features.
Feb 07, 2024 Understanding how neural networks learn features during training, and how these features impact their capacity to generalise, is an important open question. In our recent pre-print, we provide a sharp analysis of how two-layer neural networks learn features from data, and improve over the kernel regime, after being trained with a single gradient descent step.
Sep 28, 2023 Most asymptotic analyses in the proportional regime rely on Gaussianity of the covariates. In our new work High-dimensional robust regression under heavy-tailed data: Asymptotics and Universality, with Urte Adomaityte, Leonardo Defilippis and Gabriele Sicuro, we provide an asymptotic analysis of generalised linear models trained on heavy-tailed covariates. In particular, we investigate the impact of heavy-tailed contamination on robust M-estimators! Check it out!
Sep 22, 2023 Our paper Universality laws for Gaussian mixtures in generalized linear models, in collaboration with Yatin Dandi, Ludovic Stephan, Florent Krzakala and Lenka Zdeborová, has been accepted for a poster at the Conference on Neural Information Processing Systems (NeurIPS). I will also be presenting our recent work Escaping mediocrity: how two-layer networks learn hard generalized linear models as an oral contribution to the Optimization for Machine Learning workshop. Come discuss with us in New Orleans!
May 14, 2023 Our paper From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks in collaboration with Luca Arnaboldi, Florent Krzakala and Ludovic Stephan has been accepted for a poster at the Conference on Learning Theory (COLT). Come discuss with us in Bangalore!
Apr 24, 2023 Our papers have been accepted for posters at the International Conference on Machine Learning (ICML). Come discuss with us in Honolulu!
Feb 17, 2023 Two recent works on universality results for generalised linear models are now out!
Feb 02, 2023 Our new paper titled From high-dimensional & mean-field dynamics to dimensionless ODEs: A unifying approach to SGD in two-layers networks is now out! In this work, we derive different limits for the one-pass SGD dynamics of two-layer neural networks, including: a) the classical gradient flow limit; b) the high-dimensional limit studied by David Saad & Sara Solla in the 90s; and c) the more recent infinite-width mean-field limit. Check it out!
Feb 01, 2023 Deterministic equivalent and error universality of deep random features learning, work done in collaboration with Dominik Schröder, Hugo Cui and Daniil Dmitriev, is now out on arXiv. In this work, we prove a deterministic equivalent for the resolvent of deep random features sample covariance matrices, which allows us to establish Gaussian universality of the test error in ridge regression. We also conjecture (and provide extensive support for) error universality for other loss functions, allowing us to derive an asymptotic formula for the performance beyond ridge regression. Check it out!
Jan 19, 2023 Our paper A study of uncertainty quantification in overparametrized high-dimensional models has been accepted at the International Conference on Artificial Intelligence and Statistics (AISTATS). See you in Valencia in April!
Sep 14, 2022 Our two works were accepted for publication at NeurIPS 2022! See you virtually in New Orleans in December!
Apr 26, 2022 Gaussianity of the input data is a classic assumption in high-dimensional statistics. Although this might seem unrealistic at first sight, it turns out that, due to strong universality properties, this assumption is not so stringent in high dimensions. This is the subject of our recent work Gaussian Universality of Linear Classifiers with Random Labels in High-Dimension! Check it out!
Apr 26, 2022 A widespread answer to the “curse of dimensionality” in machine learning is that the relevant features in data are often low-dimensional embeddings in a higher-dimensional space. Subspace clustering is precisely the unsupervised task of finding low-dimensional clusters in data. In our recent work Subspace clustering in high-dimensions: Phase transitions & Statistical-to-Computational gap we characterise the statistical-to-computational trade-offs of subspace clustering in a simple Gaussian mixture model where the relevant features (the means) are sparse vectors. Check it out!
Apr 15, 2022 Our paper Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension has been accepted for a poster at the International Conference on Machine Learning (ICML). Stay tuned and check our video out!
Mar 28, 2022 One of the most classical results in high-dimensional learning theory provides a closed-form expression for the generalisation error of binary classification with the single-layer teacher-student perceptron on i.i.d. Gaussian inputs. Surprisingly, an analogous analysis for the corresponding multi-class teacher-student perceptron was missing. In our new work Learning curves for the multi-class teacher-student perceptron we fill this gap! Check it out!
Feb 09, 2022 A couple of exciting projects I have been working on for the past year are finally out on arXiv. Check them out!
Sep 29, 2021 Our three papers were accepted for publication at NeurIPS 2021! See you virtually in December!
Jun 08, 2021 Our new paper Learning Gaussian Mixtures with Generalised Linear Models: Precise Asymptotics in High-dimensions is out on arXiv! Check it out!
Jun 01, 2021 Our new paper Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime is out on arXiv! Check it out!
Apr 06, 2021 Our paper on The Gaussian equivalence of generative models for learning with shallow neural networks has been accepted at the Mathematical and Scientific Machine Learning Conference (MSML2021), which, due to COVID-19, will be held online! Registration is open and free!
Feb 17, 2021 Our new paper Capturing the learning curves of generic features maps for realistic data sets with a teacher-student model is out on arXiv! Check it out!
Dec 14, 2020 Our new paper on The Gaussian equivalence of generative models for learning with shallow neural networks is out on arXiv! Check it out!
Sep 26, 2020 Our work on the Phase retrieval in high dimensions: Statistical and computational phase transitions has been accepted for NeurIPS 2020! See you virtually in December!
Sep 19, 2020 We will be presenting our work on the Generalisation error in learning with random features and the hidden manifold model at DeepMath 2020! Come check our poster!
Jun 09, 2020 Our work on the Phase retrieval in high dimensions: Statistical and computational phase transitions is out on arXiv!
Jun 01, 2020 Our paper on the Generalisation error in learning with random features and the hidden manifold model has been accepted at the International Conference on Machine Learning (ICML), which this year will be held online, between the 12th and the 18th of July. Stay tuned and check our video out.
Apr 11, 2020 Our paper on the Exact asymptotics for phase retrieval and compressed sensing with random generative priors has been accepted at the Mathematical and Scientific Machine Learning Conference (MSML2020), which, due to COVID-19, will be held online! Registration is open and free!
Feb 21, 2020 Our new paper on the Generalisation error in learning with random features and the hidden manifold model is out on arXiv! Check it out!
Dec 19, 2019 An extended version of the work we presented at the Deep Inverse NeurIPS workshop is now out on arXiv. Check it out!
Oct 05, 2019 I will be presenting our ongoing work on the algorithmic and statistical thresholds for phase retrieval and compressed sensing in the NeurIPS 2019 workshop Solving inverse problems with deep networks: New architectures, theoretical foundations, and applications.
Sep 04, 2019 Our paper “The spiked matrix model with generative priors” was accepted for publication at NeurIPS 2019. See you in Vancouver!
May 29, 2019 Our new paper “The spiked matrix model with generative priors” is out! Check it on the arXiv.