ICML 2026
Our papers:
- Sharp description of local minima in the loss landscape of high-dimensional two-layer ReLU neural networks
- A Noise Sensitivity Exponent Controls Large Statistical-to-Computational Gaps in Single- and Multi-Index Models (Spotlight)
- A Random Matrix Theory of Masked Self-Supervised Regression
- On the existence of consistent adversarial attacks in high-dimensional linear classification (Spotlight)
were accepted at ICML 2026, two of them as spotlights! Come see the talks and posters in Seoul!