Large language models (LLMs) are increasingly reshaping learning paradigms, cognitive processes, and research methodologies across diverse domains. As their adoption expands, effectively integrating LLMs into professional fields and clarifying their role in domain-specific applications have become key challenges for enterprise digital transformation and broader societal development. In the accounting domain, successful integration requires a systematic understanding of LLMs' domain-specific reasoning capabilities. In this study, we introduce the concept of accounting reasoning and propose a set of evaluation criteria grounded in an analysis of the training data characteristics of representative GLM-series models. These criteria establish a foundation for studying accounting-oriented reasoning paradigms and provide benchmarks for assessing and improving model performance. Building on this framework, we evaluate several representative LLMs, including GLM-6B, GLM-130B, GLM-4, and GPT-4, on a range of accounting reasoning tasks. Our experimental results show that prompt engineering strategies yield varying degrees of performance improvement across models, with GPT-4 demonstrating the strongest overall accounting reasoning capability. Nevertheless, the results indicate that current LLMs remain insufficient for real-world accounting applications; in particular, further optimization is required before deployment in enterprise-level accounting scenarios can fully realize the potential value of LLMs in this domain.