As NLP systems are increasingly deployed at scale, concerns about their potential negative impacts have attracted the attention of the research community, yet discussions of risk have mostly remained at an abstract level, focused on generic AI or NLP applications. We argue that clearer assessments of risks and harms to users, and concrete strategies to mitigate them, become possible when we specialize the analysis to more concrete applications and their plausible users. As an illustration, this paper is grounded in procedural document question answering (ProcDocQA) over cooking recipes, a setting with well-defined risks to users such as injuries or allergic reactions. Our case study shows that an existing language model, applied in "zero-shot" mode, quantitatively answers real-world questions about recipes as well as or better than the humans who have answered those questions on the web. Using a novel questionnaire informed by theoretical work on AI risk, we conduct a risk-oriented error analysis that could then inform the design of a future system deployed with a lower risk of harm and better performance.