Embeddings play a pivotal role across disciplines, offering compact representations of complex data structures. Randomized methods such as Johnson-Lindenstrauss (JL) provide state-of-the-art, essentially unimprovable theoretical guarantees for achieving such representations. These guarantees are worst-case; in particular, neither the analysis nor the algorithm takes into account any structural information in the data. A natural question follows: must we randomize? Could we instead use an optimization-based approach that works directly with the data? A first answer is no: as we show, the distance-preserving objective of JL has a non-convex landscape over the space of projection matrices, with many bad stationary points. But this is not the final answer. We present a novel method, motivated by diffusion models, that circumvents this fundamental challenge: rather than optimizing directly over the space of projection matrices, we optimize over the larger space of random solution samplers, gradually reducing the variance of the sampler. We show that by moving through this larger space, our method converges to a deterministic (zero-variance) solution, avoiding bad stationary points. The method can also be viewed as an optimization-based approach to derandomization, an idea that we believe applies to many other problems.
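To make the two ingredients concrete, the following is a minimal NumPy sketch, not the paper's algorithm. It implements (1) the pairwise distance-preserving objective over projection matrices P, and (2) a sampler-space descent that optimizes the mean M of an assumed Gaussian sampler P = M + sigma * G while annealing sigma toward zero. The problem sizes, learning rate, sample count, and the geometric annealing schedule are all illustrative assumptions, not values from the source.

```python
# Sketch under assumptions stated above: Gaussian sampler P = M + sigma * G,
# illustrative hyperparameters; not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 40, 100, 20                       # points, ambient dim, target dim (assumed)
X = rng.standard_normal((n, d))             # synthetic data for illustration

# All pairwise difference vectors and their squared norms.
idx_i, idx_j = np.triu_indices(n, 1)
D = X[idx_i] - X[idx_j]                     # shape (m, d), m = n(n-1)/2
sq = np.sum(D**2, axis=1)                   # squared pairwise distances

def distortion(P):
    """Distance-preserving objective: mean squared relative distortion
    of pairwise distances under the projection P."""
    r = np.sum((D @ P.T) ** 2, axis=1) / sq - 1.0
    return np.mean(r**2)

def grad_distortion(P):
    """Analytic gradient of distortion(P) with respect to P."""
    r = np.sum((D @ P.T) ** 2, axis=1) / sq - 1.0
    return (4.0 / len(sq)) * P @ (D.T @ (D * (r / sq)[:, None]))

# Sampler-space descent: draw projections around the mean M, follow the
# gradient averaged over samples, and shrink the sampler's variance each
# step, so that sigma -> 0 yields a deterministic (zero-variance) solution.
M = rng.standard_normal((k, d)) / np.sqrt(k)   # JL-style initialization
sigma, lr, n_samples = 0.5, 0.05, 8            # assumed hyperparameters
for step in range(300):
    g = np.mean(
        [grad_distortion(M + sigma * rng.standard_normal((k, d)))
         for _ in range(n_samples)], axis=0)
    M -= lr * g
    sigma *= 0.98                              # illustrative annealing schedule

print(f"distortion of the deterministic projection M: {distortion(M):.4f}")
```

Note that at sigma = 0 this reduces to plain gradient descent on the non-convex objective over projection matrices; the point of working in the larger sampler space is that the early, high-variance phase smooths the landscape before the variance is annealed away.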