Post-quantum cryptography (PQC) is crucial for securing data against emerging quantum threats, but its algorithms are computationally complex and difficult to implement efficiently in hardware. In this paper, we explore the potential of Large Language Models (LLMs) to accelerate the hardware-software co-design process for PQC, focusing on the FALCON digital signature scheme. We present a novel framework that leverages LLMs to analyze PQC algorithms, identify performance-critical components, and generate candidate hardware descriptions for FPGA implementation. We also provide the first quantitative comparison between LLM-driven synthesis and conventional HLS-based approaches for low-level compute-intensive kernels in FALCON, showing that human-in-the-loop LLM-generated accelerators can achieve up to a 2.6x speedup in kernel execution time with shorter critical paths, while highlighting trade-offs in resource utilization and power consumption. Our results suggest that LLMs can reduce design effort and development time by automating FPGA accelerator design iterations for PQC algorithms, offering a promising direction for rapid, adaptive PQC accelerator design on FPGAs.