The use of Large Language Models (LLMs) for writing has sparked controversy among both writers and readers. On one hand, writers are concerned that LLMs will deprive them of agency and ownership, and readers are concerned about spending their time on text generated by soulless machines. On the other hand, AI assistance can improve writing as long as writers conform to publisher policies, and as long as readers can be assured that a text has been verified by a human. We argue that a system that captures the provenance of interaction with an LLM can help writers retain their agency, conform to policies, and communicate their use of AI transparently to publishers and readers. We therefore propose HaLLMark, a tool for visualizing the writer's interaction with the LLM. We evaluated HaLLMark with 13 creative writers and found that it helped them retain a sense of control and ownership of the text.