In the age of Large Language Models (LLMs), much work has examined how LLMs provide medication advice and serve as information providers; however, how mothers use these tools for emotional and informational support while avoiding social judgment remains underexplored. In this study, we conducted a 10-day mixed-methods exploratory survey (N=107) to investigate how mothers use LLMs as a non-judgmental resource for emotional support and regulation, as well as situational reassurance. Our findings show that mothers ask LLMs a wide range of childcare questions to reassure themselves and avoid judgment, particularly around childcare decisions, maternal guilt, and late-night caregiving. Open-ended responses further show that mothers feel comfortable with LLMs because they need not weigh social consequences or fear judgment. Although participants use LLMs for quick information or judgment-free reassurance, more than half still value human warmth over LLMs; a significant minority, however, especially those living in joint families, turn to LLMs precisely to avoid human judgment. These findings help frame LLMs as low-risk interaction support rather than a replacement for human support, and highlight the role of social context in shaping emotional technology use.