Learned indexes leverage machine learning models to accelerate query answering in databases, showing impressive practical performance. However, the theoretical understanding of these methods remains incomplete. Existing research suggests that learned indexes have superior asymptotic complexity compared to their non-learned counterparts, but these findings have been established under restrictive probabilistic assumptions. Specifically, for a sorted array with $n$ elements, it has been shown that learned indexes can find a key in $O(\log \log n)$ expected time using at most linear space, compared with $O(\log n)$ for non-learned methods. In this work, we prove that $O(1)$ expected time can be achieved with at most linear space, thereby establishing the tightest upper bound so far for the time complexity of an asymptotically optimal learned index. Notably, we use weaker probabilistic assumptions than prior research, so our results generalize previous ones. Furthermore, we introduce a new measure of statistical complexity for data. This metric admits an information-theoretic interpretation and can be estimated in practice. This characterization provides further theoretical understanding of learned indexes by helping to explain why some datasets seem particularly challenging for these methods.
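To make the setting concrete, the following is a minimal sketch of a learned index over a sorted array: a simple linear model predicts a key's position, and a bounded local search corrects the prediction. This is an illustration of the general technique only, not the construction analyzed in this work; the class name, the choice of a two-point linear fit, and the error-bound bookkeeping are all assumptions made for the example.

```python
import bisect

class LinearLearnedIndex:
    """Illustrative learned-index sketch (not the paper's construction):
    a linear model maps a key to an approximate position in a sorted
    array, and a binary search within the model's worst-case error
    window finishes the lookup."""

    def __init__(self, keys):
        self.keys = keys  # sorted list of numeric keys
        n = len(keys)
        lo, hi = keys[0], keys[-1]
        # Fit position ~= slope * key + intercept through the endpoints.
        self.slope = (n - 1) / (hi - lo) if hi != lo else 0.0
        self.intercept = -self.slope * lo
        # Record the maximum prediction error over the stored keys,
        # which bounds the local search window at query time.
        self.eps = max(abs(self._predict(k) - i) for i, k in enumerate(keys))

    def _predict(self, key):
        return int(round(self.slope * key + self.intercept))

    def lookup(self, key):
        """Return the index of `key` in the array, or -1 if absent."""
        n = len(self.keys)
        p = self._predict(key)
        lo = max(0, p - self.eps)
        hi = min(n, p + self.eps + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < n and self.keys[i] == key else -1
```

The query cost splits into a constant-time model evaluation plus a search over a window whose width depends on how well the model fits the data, which is exactly the kind of data-dependent quantity the statistical-complexity measure above is meant to capture.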