Positive feedback via likes and awards is central to online governance, yet it remains unclear which attributes of users' posts elicit rewards, and how these vary across authors and communities. To examine this, we combine quasi-experimental causal inference with predictive modeling on 11M posts from 100 subreddits. Controlling for author reputation, timing, and community context, we identify linguistic patterns and stylistic attributes causally linked to rewards: for example, overly complicated language, tentative style, and toxicity reduce rewards. Using this curated feature set, we train models that detect highly upvoted posts with high AUC. Our audit of community guidelines reveals a ``policy-practice gap'': most rules focus primarily on civility and formatting requirements, with little emphasis on the attributes we find to drive positive feedback. These results inform the design of community guidelines, of support interfaces that teach users how to craft desirable contributions, and of moderation workflows that emphasize positive reinforcement over purely punitive enforcement.