Creating an adversary-resilient construction of the Learned Bloom Filter with provable guarantees is an open problem. We define a strong adversarial model for the Learned Bloom Filter that extends an existing adversarial model designed by prior work for the Classical (i.e., not ``Learned'') Bloom Filter, and that considers computationally bounded adversaries running in probabilistic polynomial time (PPT). Using our model, we construct an adversary-resilient variant of the Learned Bloom Filter called the Downtown Bodega Filter. We show that if pseudo-random permutations exist, then an adversary-resilient Learned Bloom Filter can be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We also construct a hybrid adversarial model for the case where only a fraction of the query workload is chosen by an adversary, and we show realistic scenarios in which the Downtown Bodega Filter gives better performance guarantees than alternative approaches in this hybrid model.
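To illustrate the general idea behind resisting adaptive query adversaries, the sketch below wraps an ordinary Bloom filter so that every element is passed through a secret keyed pseudo-random transformation before it touches the filter; an adversary who cannot compute the keyed mapping cannot deliberately craft colliding queries. This is only a minimal illustration, not the paper's construction: the class names are hypothetical, and HMAC-SHA256 is used as a convenient stand-in for the pseudo-random permutation assumed in the abstract (HMAC is a PRF rather than a PRP, which suffices for this sketch).

```python
import hmac
import hashlib


class BloomFilter:
    """A plain Bloom filter over byte strings (illustrative, not optimized)."""

    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _indexes(self, item):
        # Derive k bit positions by hashing the item with k different prefixes.
        for i in range(self.k):
            digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def query(self, item):
        # May return a false positive, but never a false negative.
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))


class KeyedBloomFilter:
    """Applies a secret keyed pseudo-random mapping to each element before
    insertion/lookup, so an adversary without the key cannot predict which
    filter positions a chosen query will probe. Hypothetical sketch; HMAC
    stands in for the pseudo-random permutation assumed in the abstract."""

    def __init__(self, key, m_bits, k_hashes):
        self.key = key
        self.bf = BloomFilter(m_bits, k_hashes)

    def _mask(self, item):
        return hmac.new(self.key, item, hashlib.sha256).digest()

    def add(self, item):
        self.bf.add(self._mask(item))

    def query(self, item):
        return self.bf.query(self._mask(item))
```

In use, the wrapper behaves exactly like the underlying filter from the honest client's perspective: inserted elements are always reported present, and only the key holder can relate an input to the bit positions it sets.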