Urban systems are managed using complex textual documentation that requires coding and analysis to set requirements and evaluate built-environment performance. This paper contributes to the study of applying large language models (LLMs) to qualitative coding, reducing resource requirements while maintaining reliability comparable to that of human coders. Qualitative coding and assessment face challenges including resource limitations, bias, accuracy, and inconsistency between human evaluators. Here we report the application of LLMs to deductively code 10 case documents for the presence of 17 digital twin characteristics for the management of urban systems. We use two prompting methods to compare the semantic processing of LLMs with human coding efforts: whole-text analysis and text-chunk analysis, using OpenAI's GPT-4o, GPT-4o-mini, and o1-mini models. We found similar trends of internal variability between methods, and the results indicate that LLMs may perform on par with human coders when initialized with specific deductive coding contexts. GPT-4o, o1-mini, and GPT-4o-mini showed significant agreement with human raters when employed with the chunking method. Adding either GPT-4o or GPT-4o-mini as a fourth rater alongside three manual raters yielded statistically significant agreement across all raters, indicating that the analysis of textual documents benefits from LLMs. Our findings reveal nuanced sub-themes of LLM application, suggesting that LLMs follow human memory-based coding processes, where whole-text analysis may introduce multiple meanings. The novel contributions of this paper lie in assessing the performance of OpenAI GPT models and in introducing the chunk-based prompting approach, which addresses context-aggregation bias by preserving localized context.
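The chunk-based prompting approach described above can be sketched as splitting each document into word-bounded segments and querying the model about one characteristic per chunk. This is a minimal illustration, not the paper's implementation: the function names (`chunk_text`, `build_prompt`), the chunk size, and the prompt wording are assumptions.

```python
# Hypothetical sketch of chunk-based deductive coding with an LLM.
# Chunk size, function names, and prompt text are illustrative assumptions.

def chunk_text(text: str, max_words: int = 100) -> list[str]:
    """Split a document into word-bounded chunks, preserving localized context."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def build_prompt(chunk: str, characteristic: str) -> str:
    """Deductive-coding prompt asking about one characteristic in one chunk."""
    return (
        "You are a qualitative coder. Text chunk:\n"
        f"{chunk}\n\n"
        "Does this chunk indicate the digital twin characteristic "
        f"'{characteristic}'? Answer yes or no."
    )

# Each prompt would be sent to a model (e.g. GPT-4o) and the yes/no answers
# aggregated per document; the API call itself is omitted here.
doc = "The city platform ingests live sensor data " * 50  # 350 words
chunks = chunk_text(doc, max_words=100)
prompts = [build_prompt(c, "real-time data integration") for c in chunks]
print(len(chunks), len(prompts))  # → 4 4
```

Keeping each query scoped to a single chunk and a single characteristic is what avoids the context-aggregation bias the abstract attributes to whole-text analysis.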