Embedding techniques have become essential components of large databases in the deep learning era. By encoding discrete entities, such as words, items, or graph nodes, into continuous vector spaces, embeddings enable more efficient storage, retrieval, and processing in large databases. In the domain of recommender systems in particular, millions of categorical features are encoded as unique embedding vectors, which facilitates the modeling of similarities and interactions among features. However, the sheer number of embedding vectors can incur significant storage overhead. In this paper, we aim to compress the embedding table through quantization. Since features vary in importance, we seek an appropriate precision for each feature that balances model accuracy against memory usage. To this end, we propose a novel embedding compression method, termed Mixed-Precision Embeddings (MPE). Specifically, to reduce the size of the search space, we first group features by frequency and then search for a precision for each feature group. MPE further learns a probability distribution over precision levels for each feature group, from which the most suitable precision is identified via a specially designed sampling strategy. Extensive experiments on three public datasets demonstrate that MPE significantly outperforms existing embedding compression methods. Remarkably, MPE achieves about a 200x compression ratio on the Criteo dataset without compromising prediction accuracy.
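To make the core idea concrete, the following is a minimal sketch of frequency-grouped mixed-precision quantization of an embedding table. All specifics here (the uniform symmetric quantizer, the group boundaries, and the per-group bit widths) are illustrative assumptions, not the paper's actual search or sampling procedure.

```python
import numpy as np

def quantize(vec, bits):
    """Uniform symmetric quantization of a vector to the given bit width,
    using a per-vector scale; returns the dequantized reconstruction."""
    levels = 2 ** (bits - 1) - 1
    vmax = np.max(np.abs(vec))
    scale = vmax / levels if vmax > 0 else 1.0
    q = np.round(vec / scale).clip(-levels, levels)
    return (q * scale).astype(vec.dtype)

# Hypothetical embedding table: 1000 features with dimension 16.
rng = np.random.default_rng(0)
table = rng.normal(size=(1000, 16)).astype(np.float32)
freq = rng.zipf(2.0, size=1000)  # skewed feature frequencies, as in CTR data

# Group features by frequency, then assign one precision per group:
# rare features get fewer bits, frequent features get more.
boundaries = np.quantile(freq, [0.5, 0.9])
groups = np.digitize(freq, boundaries)      # group index in {0, 1, 2}
bits_for_group = {0: 2, 1: 4, 2: 8}         # illustrative bit widths

compressed = np.stack([quantize(table[i], bits_for_group[groups[i]])
                       for i in range(len(table))])
```

In MPE the per-group precision is not fixed by hand as above but found by learning a distribution over candidate precision levels and sampling from it during training.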