The spread of media bias is a significant concern as political discourse shapes beliefs and opinions. Addressing this challenge computationally requires improved methods for interpreting news. While large language models (LLMs) can scale classification tasks, concerns remain about their trustworthiness. To advance human-AI collaboration, we investigate the feasibility of using LLMs to classify U.S. news by political ideology and examine their effect on user decision-making. We first compared prompt-engineered GPT models against state-of-the-art supervised machine-learning methods on a public dataset of 34k articles. We then collected 17k news articles and tested GPT-4 predictions accompanied by brief and detailed explanations. In a between-subjects study (N=124), we evaluated how LLM-generated explanations influence human annotation, judgment, and confidence. Results show that AI assistance significantly increases user confidence ($p<.001$), with detailed explanations being more persuasive and more likely to alter decisions. Through thematic analysis, we distill recommendations for designing AI explanations, and we release our dataset for further research.