Modular addition is, on its face, a simple operation: given $N$ elements of $\mathbb{Z}_q$, compute their sum modulo $q$. Yet scalable machine learning solutions to this problem remain elusive: prior work trains ML models that sum only $N \le 6$ elements mod $q \le 1000$. Promising applications of ML models in cryptanalysis, which often involve modular arithmetic with large $N$ and $q$, motivate reconsideration of this problem. This work proposes three changes to the modular addition training pipeline: more diverse training data, an angular embedding, and a custom loss function. With these changes, our approach succeeds for $N = 256$, $q = 3329$, a case of interest for cryptographic applications and a significant increase in both $N$ and $q$ over prior work. These techniques also generalize to other modular arithmetic problems, motivating future work.
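One common way to realize an angular embedding, sketched here as an illustration (the paper's exact formulation may differ), is to map each residue $x \in \mathbb{Z}_q$ to a point on the unit circle, so that residues near the "wrap-around" boundary (e.g. $q-1$ and $0$) are embedded close together:

```python
import numpy as np

def angular_embedding(x, q):
    """Map residues mod q to points on the unit circle.

    This removes the artificial discontinuity between q-1 and 0:
    residues that are close mod q map to nearby 2-D points.
    """
    theta = 2 * np.pi * (np.asarray(x) % q) / q
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

# Example with the paper's modulus: 0 and q-1 are adjacent on the circle,
# even though they are maximally far apart as raw integers.
q = 3329
emb = angular_embedding(np.array([0, 1, q - 1]), q)
```

The function name and exact normalization here are illustrative choices, not taken from the paper.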