To what extent can LLMs be used as part of a cognitive model of language generation? In this paper, we approach this question by exploring a neuro-symbolic implementation of the algorithmic cognitive model of referring expression generation by Dale & Reiter (1995). The symbolic task analysis implements generation as an iterative procedure that scaffolds symbolic and gpt-3.5-turbo-based modules. We compare this implementation to an ablated model and a one-shot LLM-only baseline on the A3DS dataset (Tsvilodub & Franke, 2023). We find that our hybrid approach is cognitively plausible and performs well in complex contexts, while allowing for more open-ended modeling of language generation in a larger domain.
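To make the scaffolding concrete, here is a minimal Python sketch of how a Dale & Reiter (1995) style incremental property-selection step might hand off to an LLM-based verbalization module. The attribute preference order, the toy scene, and the `query_llm` helper are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an incremental referring-expression generator
# in the spirit of Dale & Reiter (1995), with an LLM verbalization step.
# All names (query_llm, ATTRIBUTE_ORDER, the toy scene) are assumptions
# for illustration, not the paper's implementation.

from typing import Callable

# Preference order over attributes, as in the incremental algorithm.
ATTRIBUTE_ORDER = ["shape", "color", "size", "position"]


def select_properties(target: dict, distractors: list[dict]) -> dict:
    """Symbolic module: incrementally select properties of the target
    until every distractor is ruled out (or attributes run out)."""
    selected: dict = {}
    remaining = list(distractors)
    for attr in ATTRIBUTE_ORDER:
        value = target.get(attr)
        if value is None:
            continue
        # Keep the attribute only if it excludes at least one distractor.
        if any(d.get(attr) != value for d in remaining):
            selected[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break
    return selected


def generate_expression(
    target: dict, distractors: list[dict], query_llm: Callable[[str], str]
) -> str:
    """Neural module: verbalize the symbolically selected properties."""
    props = select_properties(target, distractors)
    prompt = (
        "Produce a short natural-language referring expression for an "
        f"object with these properties: {props}"
    )
    return query_llm(prompt)


# Toy usage with a stubbed LLM; in practice query_llm would wrap a real
# API call, e.g. to gpt-3.5-turbo.
if __name__ == "__main__":
    target = {"shape": "cube", "color": "red", "size": "small"}
    distractors = [
        {"shape": "cube", "color": "blue", "size": "small"},
        {"shape": "ball", "color": "red", "size": "large"},
    ]
    fake_llm = lambda prompt: "the red cube"
    print(generate_expression(target, distractors, fake_llm))
```

In this sketch the symbolic module guarantees that the selected property set is distinguishing, while the LLM is constrained to the surface-realization step; the division of labor, not the specific helper functions, is the point of the example.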