Logistic regression, the Support Vector Machine (SVM), and least squares are well-studied methods in the statistics and computer science communities, with a wide range of practical applications. High-dimensional data arriving in real time make it essential to design online learning algorithms that produce sparse solutions. The seminal work of \hyperlink{cite.langford2009sparse}{Langford, Li, and Zhang (2009)} developed a method to obtain sparsity via truncated gradient descent, showing a near-optimal online regret bound. Based on this method, we develop a quantum sparse online learning algorithm for logistic regression, the SVM, and least squares. Given efficient quantum access to the inputs, we show that a quadratic speedup in the time complexity with respect to the dimension of the problem is achievable, while maintaining a regret of $O(1/\sqrt{T})$, where $T$ is the number of iterations.
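For intuition, the classical truncation step underlying the sparsity mechanism of Langford, Li, and Zhang (2009) can be sketched as follows. This is a minimal illustration, not the quantum algorithm of this paper: the function names and the choice to apply truncation at every step are ours, and in the original method truncation is typically applied only every $K$ iterations.

```python
import numpy as np

def truncate(w, alpha, theta):
    """Componentwise truncation operator (the T1 operator of
    Langford, Li, and Zhang, 2009): coefficients with magnitude at
    most theta are shrunk toward zero by alpha (clipped at zero);
    larger coefficients are left untouched, which yields sparsity."""
    out = w.copy()
    small = np.abs(w) <= theta
    out[small] = np.sign(w[small]) * np.maximum(np.abs(w[small]) - alpha, 0.0)
    return out

def truncated_gradient_step(w, grad, eta, g, theta):
    """One online update: a gradient-descent step followed by
    truncation with shrinkage amount eta * g."""
    return truncate(w - eta * grad, eta * g, theta)
```

Coefficients already near zero are driven exactly to zero, so the iterate stays sparse, while large coefficients are not biased, which is what preserves the $O(1/\sqrt{T})$ regret.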