Semantic embeddings representing objects such as images, text, and audio are widely used in machine learning and have spurred the development of vector similarity search methods for retrieving semantically related objects. In this work, we study the sibling task of estimating a sum over all objects in a set, such as the kernel density estimate (KDE) or the normalizing constant of a softmax distribution. While existing solutions provably reduce the sum estimation task to acquiring the $\mathcal{O}(\sqrt{n})$ most similar vectors, where $n$ is the number of objects, we introduce a novel algorithm that requires only the $\mathcal{O}(\log(n))$ most similar vectors. Our approach randomly assigns objects to levels with exponentially decaying probabilities and constructs a vector similarity search data structure for each level. Using the top-$k$ objects from each level, we propose an unbiased estimate of the sum and prove a high-probability relative error bound. We run experiments on OpenImages and Amazon Reviews with a vector similarity search implementation to show that our method achieves lower error in less computational time than existing reductions. We show results on applications in estimating densities, computing softmax denominators, and counting the number of vectors within a ball.
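The level-assignment idea above can be illustrated with a minimal sketch. The paper's actual estimator combines top-$k$ retrievals from every level's search structure; here, purely for intuition, we show only the core ingredient: objects land on level $l$ with probability $2^{-(l+1)}$, and reweighting the sum of weights at a single level by the inverse of that probability already gives an unbiased estimate of the full sum. All names and parameters below are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_levels(n, rng):
    # Each object lands on level l with probability 2^{-(l+1)}
    # (a geometric distribution: flip a fair coin until heads).
    return rng.geometric(0.5, size=n) - 1  # levels 0, 1, 2, ...

def level_estimate(weights, levels, l):
    # Horvitz-Thompson reweighting: an object reaches level l with
    # probability 2^{-(l+1)}, so scaling the level's partial sum by
    # 2^{l+1} makes the estimate unbiased for the full sum.
    return (2 ** (l + 1)) * weights[levels == l].sum()

# Toy data: n "similarity weights" w(q, x_i) for one fixed query q.
n = 1000
weights = rng.uniform(size=n)
true_sum = weights.sum()

# Average the level-0 estimate over many random level assignments;
# the mean concentrates around the true sum.
estimates = [level_estimate(weights, assign_levels(n, rng), 0)
             for _ in range(2000)]
rel_err = abs(np.mean(estimates) - true_sum) / true_sum
print(rel_err)
```

Higher levels hold exponentially fewer objects, which is what lets the full algorithm get away with retrieving only $\mathcal{O}(\log(n))$ nearest vectors overall instead of $\mathcal{O}(\sqrt{n})$.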