This study addresses the challenge of analyzing temporal discrepancies in large language models (LLMs) trained on data from different time periods. To enable the automatic exploration of these differences, we propose a novel system that systematically compares the outputs of two LLM versions in response to user-defined queries. The system first generates a hierarchical topic structure rooted in a user-specified keyword, allowing for an organized comparison across topical categories. It then evaluates the text generated by both LLMs to identify differences in vocabulary, information presentation, and underlying themes. This fully automated approach not only streamlines the identification of shifts in public opinion and cultural norms but also deepens our understanding of the adaptability and robustness of machine learning applications in the face of temporal change. By fostering research in continual model adaptation and comparative summarization, this work contributes to the development of more transparent machine learning models capable of capturing the nuances of evolving societal contexts.
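The comparison stage described above can be illustrated with a minimal sketch. The function and example outputs below are hypothetical (the abstract does not specify the actual implementation); the sketch uses simple token-frequency subtraction as a crude proxy for the vocabulary-difference analysis, assuming the two model outputs for a given topic query are already available as strings.

```python
from collections import Counter

def vocabulary_shift(text_a: str, text_b: str, top_n: int = 5):
    """Crude vocabulary comparison between two model outputs.

    Returns the tokens most over-represented in text_a relative to
    text_b, and vice versa, as (token, count) pairs. A stand-in for
    the richer difference analysis the system performs.
    """
    tok_a = Counter(text_a.lower().split())
    tok_b = Counter(text_b.lower().split())
    # Counter subtraction drops tokens whose count is not strictly
    # higher in the left operand, isolating each side's surplus.
    surplus_a = sorted((tok_a - tok_b).items(), key=lambda kv: -kv[1])[:top_n]
    surplus_b = sorted((tok_b - tok_a).items(), key=lambda kv: -kv[1])[:top_n]
    return surplus_a, surplus_b

# Hypothetical outputs from two LLM snapshots for the same topic query.
newer = "remote work is standard and hybrid schedules are common"
older = "remote work is rare and telecommuting is a niche perk"
gained, faded = vocabulary_shift(newer, older)
```

In the full system, a comparison like this would run once per node of the generated topic hierarchy, so that vocabulary shifts can be attributed to specific topical categories rather than reported in aggregate.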