Data elicitation from human participants is one of the core data collection strategies used in empirical linguistic research. The number of participants in such studies may vary considerably, ranging from a handful to crowdsourcing scale. While both settings can provide rich and extensive data, they also come with notable disadvantages, such as limited control over participants' attention during task completion, precarious working conditions in crowdsourcing environments, and time-consuming experimental designs. For these reasons, this research asks whether Large Language Models (LLMs) can overcome these obstacles when included in empirical linguistic pipelines. To address this question, two reproduction case studies are conducted: Cruz (2023) and Lombard et al. (2021). The two forced elicitation tasks, originally designed for human participants, are reproduced in the proposed framework using OpenAI's GPT-4o-mini model. Its performance under our zero-shot prompting baseline demonstrates the effectiveness and high versatility of LLMs, which tend to outperform human informants in linguistic tasks. The findings of the second reproduction further highlight the need to explore additional prompting techniques, such as Chain-of-Thought (CoT) prompting, which, in a follow-up experiment, shows closer alignment with human performance on both critical and filler items. Given the limited scale of this study, the performance of LLMs in empirical linguistics, and in future applications in the humanities more broadly, merits further exploration.