Traditional studies of memory for meaningful narratives focus on specific stories and their semantic structures but do not address common quantitative features of recall across different narratives. We introduce a statistical ensemble of random trees to represent narratives as hierarchies of key points, where each node is a compressed representation of its descendant leaves, which are the original narrative segments. Recall is modeled as a process of retrieving information from this hierarchical structure under a working-memory capacity constraint. Our analytical solution aligns with observations from large-scale narrative recall experiments. Specifically, our model explains that (1) average recall length increases sublinearly with narrative length, and (2) individuals summarize increasingly longer narrative segments in each recall sentence. Additionally, the theory predicts that for sufficiently long narratives, a universal, scale-invariant limit emerges, in which the fraction of a narrative summarized by a single recall sentence follows a distribution independent of narrative length.
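To make the tree representation concrete, here is a minimal toy sketch. It is not the paper's actual statistical ensemble or recall dynamics; the random binary partition, the greedy largest-span expansion rule, and the specific parameter values (64 segments, capacity 8) are illustrative assumptions only. It shows the two ingredients the abstract names: a hierarchy whose leaves are narrative segments, and a recall consisting of a bounded number of nodes, each summarizing the segments beneath it.

```python
import random

def build_tree(lo, hi, rng):
    """Random hierarchy over narrative segments lo..hi-1 (the leaves).
    Internal node = (left, right) pair; leaf = a segment index."""
    if hi - lo == 1:
        return lo
    cut = rng.randint(lo + 1, hi - 1)  # random split point
    return (build_tree(lo, cut, rng), build_tree(cut, hi, rng))

def span(node):
    """Number of original segments summarized by this node."""
    return 1 if isinstance(node, int) else span(node[0]) + span(node[1])

def recall_frontier(root, capacity):
    """Toy recall: start from the root (a summary of the whole narrative)
    and repeatedly expand the node covering the most segments, until the
    frontier holds `capacity` items -- one item per recall sentence."""
    frontier = [root]
    while len(frontier) < capacity:
        i = max(range(len(frontier)), key=lambda j: span(frontier[j]))
        if isinstance(frontier[i], int):
            break  # everything left is an original segment
        left, right = frontier.pop(i)
        frontier[i:i] = [left, right]
    return frontier

rng = random.Random(1)
root = build_tree(0, 64, rng)           # a narrative of 64 segments
sentences = recall_frontier(root, 8)    # working-memory capacity of 8
fractions = [span(n) / 64 for n in sentences]
print(len(sentences), sum(fractions))   # 8 sentences partitioning the narrative
```

Because the frontier is always a partition of the leaves, the fractions sum to one; the distribution of these per-sentence fractions is the quantity whose scale-invariant limit the theory predicts.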