In the age of Large Language Models (LLMs), substantial work has examined how LLMs provide medication advice and serve as information providers; however, how mothers use these tools for emotional and informational support while avoiding social judgment remains underexplored. This study conducted a 10-day mixed-methods exploratory survey ($N=107$) to investigate how mothers use LLMs as a non-judgmental resource for emotional support, emotion regulation, and situational reassurance. Our findings show that mothers ask LLMs a wide range of childcare questions to reassure themselves and avoid judgment, particularly around childcare decisions, maternal guilt, and late-night caregiving. Open-ended responses further indicate that mothers feel comfortable with LLMs because they need not worry about social consequences or judgment. Although mothers turn to LLMs for quick information or judgment-free reassurance, over half of the participants value human warmth more than LLM interaction; a significant minority, however, especially mothers in joint families, prefer LLMs as a way to avoid human judgment. These findings help frame LLMs as low-risk interaction support rather than a replacement for human support, and highlight the role of social context in shaping emotional technology use.