Title: A scalable estimate of the extra-sample prediction error via approximate leave-one-out
We propose ALOλ, a scalable closed-form formula for estimating the extra-sample prediction error of regularized estimators. Our approach builds on existing heuristic arguments to approximate the leave-one-out perturbations. We prove the accuracy of ALOλ in the high-dimensional setting where the number of predictors grows in proportion to the number of observations, and we show how the approach applies to popular non-differentiable regularizers such as the LASSO. Our theoretical findings are illustrated with simulations and with real recordings from spatially sensitive neurons (grid cells) in the medial entorhinal cortex of a rat.
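To give a concrete sense of the leave-one-out idea the abstract refers to, the sketch below shows the special case of ridge regression, where the leave-one-out residuals admit an exact closed form via the hat matrix, so the n model refits of naive cross-validation can be avoided. This is an illustrative example only, not the paper's ALOλ formula (which targets general regularizers such as the LASSO); the function names `loo_ridge_closed_form` and `loo_ridge_brute_force` are hypothetical.

```python
import numpy as np

def loo_ridge_closed_form(X, y, lam):
    """Leave-one-out residuals for ridge regression without refitting.

    Uses the identity e_i = (y_i - yhat_i) / (1 - H_ii), where
    H = X (X'X + lam I)^{-1} X' is the ridge hat matrix. This identity
    is exact for ridge (quadratic loss, quadratic penalty, fixed lam).
    """
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y                      # in-sample residuals
    return resid / (1.0 - np.diag(H))      # closed-form LOO residuals

def loo_ridge_brute_force(X, y, lam):
    """Reference implementation: refit the ridge estimator n times."""
    n, p = X.shape
    out = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        beta = np.linalg.solve(
            X[mask].T @ X[mask] + lam * np.eye(p),
            X[mask].T @ y[mask],
        )
        out[i] = y[i] - X[i] @ beta
    return out

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 10))
y = X @ rng.standard_normal(10) + rng.standard_normal(40)

# The shortcut matches the n refits up to numerical tolerance.
assert np.allclose(loo_ridge_closed_form(X, y, 2.0),
                   loo_ridge_brute_force(X, y, 2.0))
```

For ridge the shortcut is exact (a Sherman-Morrison argument); the contribution of approaches like ALOλ is to extend this kind of single-fit leave-one-out computation to non-quadratic and non-differentiable regularizers, with accuracy guarantees in the proportional high-dimensional regime.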