LLM watermarking, which embeds imperceptible yet algorithmically detectable signals in model outputs to identify LLM-generated text, has become crucial for mitigating the potential misuse of large language models. However, the abundance of LLM watermarking algorithms, their intricate mechanisms, and the complexity of evaluation procedures and perspectives make it difficult for researchers and the broader community to experiment with, understand, and assess the latest advancements. To address these issues, we introduce MarkLLM, an open-source toolkit for LLM watermarking. MarkLLM offers a unified and extensible framework for implementing LLM watermarking algorithms, along with user-friendly interfaces that ensure ease of access. It also deepens understanding by supporting automatic visualization of the mechanisms underlying these algorithms. For evaluation, MarkLLM offers a comprehensive suite of 12 tools spanning three perspectives (detectability, robustness, and impact on text quality), together with two types of automated evaluation pipelines. Through MarkLLM, we aim to support researchers while improving public comprehension of and engagement with LLM watermarking technology, fostering consensus and driving further progress in research and application. Our code is available at https://github.com/THU-BPM/MarkLLM.
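To make the unified interface concrete, below is a minimal sketch of loading one watermarking algorithm and running generation plus detection, following the usage pattern documented in the MarkLLM repository (`AutoWatermark`, `TransformersConfig`). The choice of model (`facebook/opt-1.3b`), the KGW algorithm, the generation settings, and the config path are illustrative assumptions rather than requirements, and exact names may differ across toolkit versions.

```python
# A minimal sketch of MarkLLM's unified interface: load a supported
# watermarking algorithm by name (KGW here), then generate and detect
# watermarked text. Model, generation settings, and config path are
# illustrative assumptions, not requirements of the toolkit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from watermark.auto_watermark import AutoWatermark        # unified loader
from utils.transformers_config import TransformersConfig  # wraps model/tokenizer settings

device = "cuda" if torch.cuda.is_available() else "cpu"

# Bundle the underlying language model and its generation parameters.
transformers_config = TransformersConfig(
    model=AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b").to(device),
    tokenizer=AutoTokenizer.from_pretrained("facebook/opt-1.3b"),
    vocab_size=50272,
    device=device,
    max_new_tokens=200,
)

# Every algorithm is loaded by name through the same entry point.
watermark = AutoWatermark.load(
    "KGW",
    algorithm_config="config/KGW.json",
    transformers_config=transformers_config,
)

prompt = "Write a short note about watermarking."
watermarked_text = watermark.generate_watermarked_text(prompt)

# Detection reports whether the watermark is present, with a score.
result = watermark.detect_watermark(watermarked_text)
print(result)
```

Because each algorithm is exposed through the same `AutoWatermark.load` entry point, switching between watermarking schemes reduces to changing the algorithm name and its configuration file.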