Since the rapid expansion of large language models (LLMs), people have increasingly relied on them for information retrieval. Whereas traditional search engines display ranked lists of sources shaped by search engine optimization (SEO), advertising, and personalization, LLMs typically provide a synthesized response that feels singular and authoritative. Both approaches carry risks of bias and omission, but LLMs may amplify these effects by collapsing multiple perspectives into a single answer, reducing users' ability or inclination to compare alternatives. This concentrates power over information in a few LLM vendors, whose systems effectively shape what is remembered and what is overlooked. As a result, certain narratives, individuals, or groups may be disproportionately suppressed, while others are disproportionately elevated. Over time, this creates a new threat: the gradual erasure of those with limited digital presence and the amplification of those already prominent, reshaping collective memory. To address these concerns, this paper introduces the concept of the Right To Be Remembered (RTBR), which encompasses minimizing the risk of AI-driven information omission, embracing the right to fair treatment, and ensuring that generated content is maximally truthful.