Voice-based systems such as Amazon Alexa, Google Assistant, and Apple Siri, along with the growing popularity of OpenAI's ChatGPT and Microsoft's Copilot, serve diverse populations, including visually impaired and low-literacy communities. This reflects a shift in user expectations from traditional search to more interactive question-answering models. However, presenting information effectively through voice-only channels remains challenging because of their linear nature. This limitation can hinder responses to complex queries on controversial topics with multiple perspectives. Failing to present diverse viewpoints may perpetuate or introduce biases and affect user attitudes. Balancing information load while addressing biases is therefore crucial to designing a fair and effective voice-based system. To address this, we (i) review how biases and user attitude changes have been studied in screen-based web search, (ii) address the challenges of studying these changes in voice-based settings such as Spoken Conversational Search (SCS), (iii) outline research questions, and (iv) propose an experimental setup with variables, data, and instruments to explore biases in SCS.