In our recent pre-print, we provide a rigorous random matrix theory analysis of how feature learning impacts generalisation in two-layer neural networks after a single aggressive gradient step.
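To make the setting concrete, here is a minimal NumPy sketch (not the paper's code) of the kind of model in question: a two-layer network whose first-layer weights take one large gradient step, after which the second layer is fit and generalisation is measured on fresh data. The dimensions, the width-scaled learning rate, the ReLU activation, the linear teacher, and the ridge fit of the second layer are all illustrative assumptions rather than the pre-print's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 512, 256, 256                    # samples, input dim, hidden width (assumed sizes)
X = rng.standard_normal((n, d)) / np.sqrt(d)
beta = rng.standard_normal(d)
y = X @ beta                               # assumed linear teacher targets

W = rng.standard_normal((k, d))            # first-layer weights at initialisation
a = rng.standard_normal(k) / np.sqrt(k)    # second-layer weights at initialisation

def features(W, X):
    """Post-activation hidden-layer features, shape (n, k)."""
    return np.maximum(X @ W.T, 0.0)        # ReLU activations

# One full-batch gradient step on the first layer under squared loss.
# The learning rate grows with the width, which is what makes the single
# step "aggressive" enough for the features to move away from initialisation.
eta = np.sqrt(k)                           # assumed large-step scaling
H = features(W, X)
resid = H @ a / np.sqrt(k) - y             # network output minus targets
grad_W = ((resid[:, None] * (X @ W.T > 0)) * a[None, :] / np.sqrt(k)).T @ X / n
W1 = W - eta * grad_W                      # first layer after the single step

# Fit the second layer on the updated features with ridge regression,
# then evaluate generalisation on fresh data from the same teacher.
lam = 1e-2
H1 = features(W1, X)
a_hat = np.linalg.solve(H1.T @ H1 / n + lam * np.eye(k), H1.T @ y / n)

X_test = rng.standard_normal((n, d)) / np.sqrt(d)
y_test = X_test @ beta
test_err = np.mean((features(W1, X_test) @ a_hat - y_test) ** 2)
print(f"test error after one large first-layer step: {test_err:.3f}")
```

Comparing the test error above against the same pipeline with `eta = 0` (i.e. random, untrained features) gives a quick empirical feel for how much the single large step changes generalisation in this toy version of the setting.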