Traditional studies of memory for meaningful narratives focus on specific stories and their semantic structures but do not address common quantitative features of recall across different narratives. We introduce a statistical ensemble of random trees to represent narratives as hierarchies of key points, where each node is a compressed representation of its descendant leaves, which are the original narrative segments. Recall is modeled as a process operating on this hierarchical structure under a working-memory capacity constraint. Our analytical solution aligns with observations from large-scale narrative recall experiments. Specifically, our model explains that (1) average recall length increases sublinearly with narrative length, and (2) individuals summarize progressively longer narrative segments in each recall sentence. Additionally, the theory predicts that for sufficiently long narratives, a universal, scale-invariant limit emerges, in which the fraction of a narrative summarized by a single recall sentence follows a distribution independent of narrative length.
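To make the model concrete, the following is a minimal illustrative sketch (not the authors' code) of the two ingredients described above: a random tree whose internal nodes summarize their descendant leaves (the narrative segments), and a recall process that expands nodes only while a working-memory budget allows, emitting one summary sentence per unexpanded subtree. The branching range and budget values are assumptions for illustration.

```python
# Illustrative sketch of a random hierarchical tree over narrative
# segments, with recall limited by a working-memory budget.
# Branching factor and budget values are assumed, not from the paper.
import random

def build_tree(leaves, max_branch=3):
    """Recursively partition segments into a random hierarchy of key points.
    Each node 'covers' (summarizes) its descendant leaves."""
    if len(leaves) <= 1:
        return {"cover": leaves}
    k = random.randint(2, max_branch)
    cuts = sorted(random.sample(range(1, len(leaves)),
                                min(k - 1, len(leaves) - 1)))
    parts = [leaves[i:j] for i, j in zip([0] + cuts, cuts + [len(leaves)])]
    return {"cover": leaves,
            "children": [build_tree(p, max_branch) for p in parts]}

def recall(node, budget):
    """Expand a node while the working-memory budget permits; otherwise
    emit a single recall sentence summarizing the whole subtree."""
    if "children" not in node or budget < len(node["children"]):
        return [node["cover"]]  # one sentence covers these segments
    sentences = []
    for child in node["children"]:
        sentences += recall(child, budget - 1)
    return sentences

# Example: a 20-segment narrative recalled under a small budget.
segments = list(range(20))
tree = build_tree(segments)
sentences = recall(tree, budget=3)
# The emitted sentences partition the narrative in order, and recall
# length is at most the narrative length (typically much shorter).
```

Under this toy dynamics, deeper subtrees are expanded less often, so each recall sentence tends to cover a longer stretch of the narrative as the narrative grows, which is the qualitative behavior the abstract describes.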