Social media platforms, especially Facebook parenting groups, have long served as informal support networks for mothers seeking advice and reassurance. However, growing concerns about social judgment, privacy exposure, and unreliable information are changing how mothers seek help. This exploratory mixed-method study examines why mothers are moving from Facebook parenting groups to large language models (LLMs) such as ChatGPT and Gemini. We conducted a cross-sectional online survey of 109 mothers. Results show that 41.3% of participants avoided Facebook parenting groups because they expected judgment from others. Avoidance differed significantly by location and family structure: mothers living in their home country and those in joint families were more likely to avoid Facebook groups. Qualitative findings revealed three themes: social judgment and exposure, LLMs as safe and private spaces, and quick and structured support. Participants described LLMs as immediate, emotionally safe, and reliable alternatives that reduce the social risk of asking for help. Rather than replacing human support, LLMs appear to fill emotional and practical gaps within existing support systems. These findings indicate a shift in maternal digital support and highlight the need to design LLM systems that provide both information and emotional safety.