LLM watermarking, which embeds imperceptible yet algorithmically detectable signals in model outputs to identify LLM-generated text, has become crucial for mitigating the potential misuse of large language models. However, the abundance of LLM watermarking algorithms, their intricate mechanisms, and the complexity of evaluation procedures and perspectives make it difficult for researchers and the broader community to experiment with, understand, and assess the latest advances. To address these issues, we introduce MarkLLM, an open-source toolkit for LLM watermarking. MarkLLM offers a unified and extensible framework for implementing LLM watermarking algorithms, along with user-friendly interfaces to ensure ease of use. It further aids understanding by supporting automatic visualization of the underlying mechanisms of these algorithms. For evaluation, MarkLLM provides a comprehensive suite of 12 tools spanning three perspectives, together with two types of automated evaluation pipelines. Through MarkLLM, we aim to support researchers while improving the comprehension of, and involvement in, LLM watermarking technology among the general public, fostering consensus and driving further advancements in research and application. Our code is available at https://github.com/THU-BPM/MarkLLM.
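To make the "imperceptible yet algorithmically detectable signal" concrete, the following is a minimal, self-contained sketch of the green-list mechanism behind KGW-style watermarks, one family of algorithms a toolkit like MarkLLM covers. It is illustrative only, not MarkLLM's API: the hash-based partition, the `GAMMA` ratio, and all function names here are assumptions chosen for the example. Generation biases sampling toward a pseudorandom "green" subset of the vocabulary seeded by the previous token; detection runs a one-proportion z-test on how often that subset appears.

```python
import hashlib
import math

GAMMA = 0.5  # fraction of the vocabulary marked "green" at each step (illustrative choice)

def is_green(prev_token: str, token: str) -> bool:
    """Toy pseudorandom green-list partition: hash the (previous token, candidate)
    pair and mark the candidate 'green' with probability GAMMA."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GAMMA

def z_score(tokens: list[str]) -> float:
    """Detection statistic: how far the observed green fraction exceeds
    the GAMMA expected in unwatermarked text (one-proportion z-test)."""
    pairs = list(zip(tokens, tokens[1:]))
    greens = sum(is_green(prev, tok) for prev, tok in pairs)
    n = len(pairs)
    return (greens - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

def generate_watermarked(vocab: list[str], start: str, length: int) -> list[str]:
    """Stand-in for watermarked decoding: at each step, prefer a green token
    (falling back to the first vocabulary entry if none is green)."""
    tokens = [start]
    for _ in range(length):
        nxt = next((t for t in vocab if is_green(tokens[-1], t)), vocab[0])
        tokens.append(nxt)
    return tokens
```

A sequence produced by `generate_watermarked` yields a large z-score (roughly `sqrt(n)` when nearly every step lands on a green token), while natural text hovers near zero, which is what makes the signal algorithmically detectable without being visible to a reader.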