A key problem in deep learning and computational neuroscience is relating the geometrical properties of neural representations to task performance. Here, we consider this problem for continuous decoding tasks where neural variability may affect task precision. Using methods from statistical mechanics, we study the average-case learning curves for $\varepsilon$-insensitive Support Vector Regression ($\varepsilon$-SVR) and discuss its capacity as a measure of linear decodability. Our analysis reveals a phase transition in the training error at a critical load, capturing the interplay between the tolerance parameter $\varepsilon$ and neural variability. We uncover a double-descent phenomenon in the generalization error and show that $\varepsilon$ acts as a regularizer, both suppressing and shifting the double-descent peaks. Theoretical predictions are validated on both toy models and deep neural networks, extending the theory of Support Vector Machines to continuous tasks with inherent neural variability.
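For concreteness, recall the standard $\varepsilon$-insensitive loss underlying $\varepsilon$-SVR, which penalizes a prediction $\hat{y}$ against a target $y$ only when the deviation exceeds the tolerance $\varepsilon$:
\[
\ell_\varepsilon(y, \hat{y}) = \max\!\bigl(0,\, |y - \hat{y}| - \varepsilon\bigr).
\]
Setting $\varepsilon = 0$ recovers an ordinary absolute-error loss, while $\varepsilon > 0$ creates a tube of zero-loss solutions, which is the mechanism behind the regularizing role of $\varepsilon$ discussed above.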