Many practical prediction algorithms represent inputs in Euclidean space and replace the discrete 0/1 classification loss with a real-valued surrogate loss, effectively reducing classification tasks to stochastic optimization. In this paper, we investigate the expressivity of such reductions in terms of key resources, including dimension and the role of randomness. We establish bounds on the minimum Euclidean dimension $D$ needed to reduce a concept class with VC dimension $d$ to a Stochastic Convex Optimization (SCO) problem in $\mathbb{R}^D$, formally addressing the intuitive interpretation of the VC dimension as the number of parameters needed to learn the class. To achieve this, we develop a generalization of the Borsuk-Ulam Theorem that combines the classical topological approach with convexity considerations. Perhaps surprisingly, we show that, in some cases, the number of parameters $D$ must be exponentially larger than the VC dimension $d$, even if the reduction is only slightly non-trivial. We also present natural classification tasks that can be represented in much smaller dimensions by leveraging randomness, as seen in techniques like random initialization. This result resolves an open question posed by Kamath, Montasser, and Srebro (COLT 2020). Our findings introduce new variants of \emph{dimension complexity} (also known as \emph{sign-rank}), a well-studied parameter in learning and complexity theory. Specifically, we define an approximate version of sign-rank and another variant that captures the minimum dimension required for a reduction to SCO. We also propose several open questions and directions for future research.
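The reduction described in the opening sentence can be made concrete. The sketch below (illustrative only; the data, loss choice, and parameters are assumptions, not taken from the paper) replaces the discrete 0/1 loss with the convex hinge surrogate and minimizes it by stochastic subgradient descent, i.e., it solves a Stochastic Convex Optimization problem in $\mathbb{R}^2$ whose solution also drives the 0/1 error down:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data in R^2, labels in {-1, +1} (illustrative).
n = 200
X = rng.normal(size=(n, 2))
w_true = np.array([1.0, -1.0])
y = np.sign(X @ w_true)

def zero_one_loss(w, X, y):
    """Discrete 0/1 classification loss -- not amenable to gradient methods."""
    return np.mean(np.sign(X @ w) != y)

def hinge_subgradient(w, x, yi):
    """Subgradient of the convex hinge surrogate max(0, 1 - y * <w, x>)."""
    margin = yi * (x @ w)
    return -yi * x if margin < 1 else np.zeros_like(w)

# Stochastic subgradient descent on the surrogate: an SCO problem in R^2.
w = np.zeros(2)
for t in range(2000):
    i = rng.integers(n)
    w -= (0.1 / np.sqrt(t + 1)) * hinge_subgradient(w, X[i], y[i])

print(zero_one_loss(w, X, y))  # small on this separable toy data
```

Here the Euclidean dimension $D$ of the SCO problem is 2; the paper's question is how large $D$ must be, in general, relative to the VC dimension of the concept class being reduced.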