Learned indexes leverage machine learning models to accelerate query answering in databases, showing impressive practical performance. However, theoretical understanding of these methods remains incomplete. Existing research suggests that learned indexes have superior asymptotic complexity compared to their non-learned counterparts, but these findings have been established under restrictive probabilistic assumptions. Specifically, for a sorted array with $n$ elements, it has been shown that learned indexes can find a key in $O(\log \log n)$ expected time using at most linear space, compared with $O(\log n)$ for non-learned methods. In this work, we prove that $O(1)$ expected time can be achieved with at most linear space, thereby establishing the tightest upper bound so far for the time complexity of an asymptotically optimal learned index. Notably, we use weaker probabilistic assumptions than prior work, so our results generalize previous efforts. Furthermore, we introduce a new measure of statistical complexity for data. This metric admits an information-theoretic interpretation and can be estimated in practice. This characterization deepens the theoretical understanding of learned indexes by helping to explain why some datasets appear particularly challenging for these methods.
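To make the idea concrete, the following is a minimal sketch of the learned-index lookup scheme the abstract describes: fit a model mapping keys to positions in the sorted array, record the model's maximum prediction error, and at query time search only within that error window around the prediction. This is an illustrative toy (a single least-squares linear model), not the paper's construction; the function names and the choice of model are assumptions for the example.

```python
import bisect

def build_learned_index(keys):
    """Fit a linear model key -> position by least squares over a sorted array."""
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    var = sum((k - mean_k) ** 2 for k in keys)
    slope = (sum((k - mean_k) * (p - mean_p) for p, k in enumerate(keys)) / var
             if var else 0.0)
    intercept = mean_p - slope * mean_k
    # The maximum prediction error bounds the local search window.
    err = max(abs(p - (slope * k + intercept)) for p, k in enumerate(keys))
    return slope, intercept, int(err) + 1

def lookup(keys, model, key):
    """Predict the position, then binary-search only inside the error window."""
    slope, intercept, err = model
    pred = int(slope * key + intercept)
    lo = max(0, pred - err)
    hi = min(len(keys), pred + err + 1)
    i = bisect.bisect_left(keys, key, lo, hi)
    return i if i < len(keys) and keys[i] == key else -1

keys = sorted(range(0, 1000, 7))
model = build_learned_index(keys)
assert lookup(keys, model, 49) == keys.index(49)
assert lookup(keys, model, 50) == -1  # absent key
```

The lookup cost is the model evaluation ($O(1)$) plus a binary search over a window whose size is the model's maximum error; the theoretical results concern how small this error can be kept, in expectation, under probabilistic assumptions on the keys.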