The emergence of learned indexes has shifted how we think about indexing: an index is treated as a predictive model that estimates a key's position within a dataset, yielding notable gains in key search efficiency and reductions in index size. A significant challenge inherent in learned index modeling, however, is its limited support for update operations, since the underlying model assumes a fixed distribution of records. Prior studies have proposed various approaches to this problem, but they incur high overhead from repeated model retraining. In this paper, we present UpLIF, an adaptive, self-tuning learned index that adjusts its model to accommodate incoming updates, predicts the distribution of updates to improve performance, and optimizes its index structure using reinforcement learning. We also introduce the concept of balanced model adjustment, which determines the model's inherent properties (i.e., its bias and variance) so that these factors can be incorporated into the existing index model without retraining on new data. Our comprehensive experiments show that UpLIF surpasses state-of-the-art indexing solutions, both traditional and ML-based, achieving up to 3.12× higher throughput with up to 1000× less memory usage.
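To make the core idea of a learned index concrete, the sketch below shows the standard pattern the abstract alludes to: a simple model predicts a key's position in a sorted array, and a bounded local search corrects the prediction. This is a minimal illustration of learned indexing in general; the class name and linear model are assumptions for exposition, and UpLIF's actual model adjustment, update prediction, and reinforcement-learning components are not shown.

```python
import bisect

class LinearLearnedIndex:
    """Minimal learned-index sketch: a least-squares linear model maps a
    key to an approximate position; the model's maximum training error
    bounds a local search that finds the exact position."""

    def __init__(self, keys):
        self.keys = sorted(keys)
        n = len(self.keys)
        # Fit position ~ slope * key + intercept by least squares.
        mean_k = sum(self.keys) / n
        mean_p = (n - 1) / 2
        cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(self.keys))
        var = sum((k - mean_k) ** 2 for k in self.keys) or 1.0
        self.slope = cov / var
        self.intercept = mean_p - self.slope * mean_k
        # Worst-case prediction error bounds the correction window.
        self.err = max(abs(self._predict(k) - i)
                       for i, k in enumerate(self.keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        n = len(self.keys)
        pos = min(max(self._predict(key), 0), n - 1)
        # Search only within the error-bounded window around the guess.
        lo, hi = max(0, pos - self.err), min(n, pos + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        return i if i < n and self.keys[i] == key else None

idx = LinearLearnedIndex([3, 8, 21, 34, 55, 89, 144])
```

A static structure like this is what makes updates hard: inserting a key shifts positions and invalidates the fitted model and its error bound, which is the retraining cost that UpLIF's balanced model adjustment is designed to avoid.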