An interesting class of commonsense reasoning problems arises when people are faced with natural disasters. To investigate this topic, we present \textsf{RESPONSE}, a human-curated dataset containing 1789 annotated instances featuring 6037 sets of questions designed to assess LLMs' commonsense reasoning in disaster situations across different time frames. The dataset includes problem descriptions, missing resources, time-sensitive solutions, and their justifications, with a subset validated by environmental engineers. Through both automatic metrics and human evaluation, we compare LLM-generated recommendations against human responses. Our findings show that even state-of-the-art models such as GPT-4 achieve only 37\% human-evaluated correctness on immediate response actions, highlighting significant room for improvement in LLMs' ability to perform commonsense reasoning in crisis situations.