In this study, we propose a structured methodology that uses large language models (LLMs) in a cost-efficient and parsimonious manner, integrating the strengths of scholars and machines while offsetting their respective weaknesses. Our methodology, built on chain-of-thought and few-shot prompting techniques from computer science, extends best practices for co-author teams in qualitative research to human-machine teams in quantitative research. This allows humans to apply abductive reasoning and natural language to interrogate not just what the machine has done but also what the human has done. Our method highlights how scholars can manage the inherent weaknesses of LLMs using careful, low-cost techniques. We demonstrate the methodology by interrogating human-machine rating discrepancies for a sample of 1,934 press releases announcing pharmaceutical alliances (1990-2017).
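To make the prompting strategy concrete, the following is a minimal sketch of a few-shot, chain-of-thought prompt for rating alliance press releases. The rating scale, example texts, reasoning, and `build_prompt` helper are all hypothetical illustrations, not the study's actual materials or prompt wording.

```python
# Minimal sketch of a few-shot, chain-of-thought rating prompt.
# The examples, rating scale, and reasoning below are hypothetical
# placeholders, not the study's actual coding scheme.

FEW_SHOT_EXAMPLES = [
    {
        "press_release": "Firm A and Firm B announce a co-development "
                         "alliance for an oncology compound.",
        "reasoning": "The text states joint development of a specific "
                     "drug candidate, indicating a high-commitment R&D tie.",
        "rating": 3,
    },
    {
        "press_release": "Firm C licenses marketing rights for an "
                         "approved therapy to Firm D.",
        "reasoning": "This is a downstream licensing deal with no joint "
                     "research, indicating a lower-commitment tie.",
        "rating": 1,
    },
]

def build_prompt(new_release: str) -> str:
    """Assemble a few-shot prompt that asks the model to reason
    step by step (chain of thought) before emitting a rating."""
    parts = [
        "Rate the alliance depth of each press release on a 1-3 scale.",
        "Explain your reasoning step by step, then give the rating.\n",
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Press release: {ex['press_release']}")
        parts.append(f"Reasoning: {ex['reasoning']}")
        parts.append(f"Rating: {ex['rating']}\n")
    # End with "Reasoning:" so the model must articulate its chain of
    # thought before it commits to a rating.
    parts.append(f"Press release: {new_release}")
    parts.append("Reasoning:")
    return "\n".join(parts)

prompt = build_prompt("Firm E and Firm F form a research alliance "
                      "targeting a rare-disease indication.")
print(prompt)
```

The assembled prompt would then be sent to an LLM API of the researcher's choice; the model's stated reasoning, not just its final rating, is what enables humans to interrogate rating discrepancies in natural language.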