Fully Homomorphic Encryption (FHE) allows for computation directly on encrypted data and enables privacy-preserving neural inference in the cloud. Prior work has focused on models with dense inputs (e.g., CNNs), with less attention given to those with sparse inputs such as Deep Learning Recommendation Models (DLRMs). These models require encrypted lookups into large embedding tables, which are challenging to implement using FHE's restrictive operators and introduce significant overhead. In this paper, we develop performance optimizations to efficiently support embedding lookups in FHE-based inference pipelines. First, we present an embedding compression technique using client-side digit decomposition that achieves a 56$\times$ speedup over the state of the art. Next, we propose a multi-embedding packing strategy that enables ciphertext SIMD-parallel lookups across multiple tables. Crucially, our goal is not only to retrieve the correct embeddings, but to do so in a way that produces ciphertext outputs in a layout that is directly compatible with downstream encrypted computations server-side. We name our approach HE-LRM and demonstrate end-to-end encrypted DLRM inference. We evaluate HE-LRM on UCI (health prediction) and Criteo (click prediction), achieving inference latencies of 24 and 489 seconds, respectively, on a single CPU thread. Finally, while our evaluation focuses on DLRMs, we investigate and apply our embedding-lookup primitives to other models such as LLMs, which require both batched and single-embedding lookups.
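The intuition behind client-side digit decomposition can be illustrated in plaintext: instead of sending a length-$N$ one-hot selector for an $N$-row table, the client decomposes the index into $d$ base-$b$ digits and sends $d$ one-hot vectors of length $b$ ($d \cdot b$ values total); the server reconstructs each row's selector as a product of per-digit one-hots. The sketch below is a minimal, unencrypted illustration of this idea with hypothetical names (`digits`, `onehot`, `lookup`); it is not the paper's actual FHE implementation, which operates on ciphertexts.

```python
# Plaintext sketch of digit-decomposition embedding lookup.
# All names here are illustrative, not from the HE-LRM codebase.

def digits(index, base, ndigits):
    """Base-`base` digits of `index`, least significant first."""
    out = []
    for _ in range(ndigits):
        out.append(index % base)
        index //= base
    return out

def onehot(value, length):
    return [1 if j == value else 0 for j in range(length)]

def lookup(table, digit_onehots, base):
    """Select a row using only additions and multiplications
    (FHE-friendly): selector[i] = prod_k oh_k[digit_k(i)]."""
    dim = len(table[0])
    ndigits = len(digit_onehots)
    result = [0] * dim
    for i, row in enumerate(table):
        ds = digits(i, base, ndigits)
        sel = 1
        for k in range(ndigits):
            sel *= digit_onehots[k][ds[k]]
        for j in range(dim):
            result[j] += sel * row[j]
    return result

# Example: a 16-row table with base-4 decomposition. The client sends
# 2 one-hots of length 4 (8 values) instead of a length-16 selector.
base = 4
table = [[float(10 * i + j) for j in range(3)] for i in range(16)]
ndigits = 1
while base ** ndigits < len(table):
    ndigits += 1
idx = 13
client_msg = [onehot(d, base) for d in digits(idx, base, ndigits)]
row = lookup(table, client_msg, base)
assert row == table[idx]
```

The savings grow with table size: for a table with $N$ rows, the client's message shrinks from $N$ slots to roughly $b \log_b N$, at the cost of a multiplicative depth of $\log_b N$ on the server side.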