Generative search systems are increasingly replacing link-based retrieval with AI-generated summaries, yet little is known about how these systems differ in their sources, language, and fidelity to cited material. We examine responses to 11,000 real search queries across four systems -- vanilla GPT, Search GPT, Google AI Overviews, and traditional Google Search -- at three levels: source diversity, linguistic characteristics of the generated summaries, and source-summary fidelity. We find that generative search systems exhibit significant \textit{source-selection} biases in their citations, systematically favoring certain sources over others. Incorporating search also selectively attenuates epistemic markers: hedging in the AI-generated summaries drops by up to 60\% while confidence language is preserved. At the same time, AI summaries further compound these citation biases: Wikipedia and longer sources are disproportionately overrepresented, whereas cited social media content and negatively framed sources are substantially underrepresented. Our findings highlight the potential for \textit{answer bubbles}, in which identical queries yield structurally different information realities across systems, with implications for user trust, source visibility, and the transparency of AI-mediated information access.