We study how abstract representations emerge in a Deep Belief Network (DBN) trained on benchmark datasets. Our analysis targets the principles of learning in the early stages of information processing, starting from the "primordial soup" of the under-sampling regime. As the data is processed by deeper and deeper layers, features are detected and removed, transferring more and more "context-invariant" information to deeper layers. We show that the representation approaches a universal model, the Hierarchical Feature Model (HFM), determined by the principle of maximal relevance. Relevance quantifies the uncertainty about the model of the data, suggesting that "meaning", i.e. syntactic information, is the part of the data that is not yet captured by a model. Our analysis shows that shallow layers are well described by pairwise Ising models, which represent the data in terms of generic, low-order features. We also show that plasticity increases with depth, much as it does in the brain. These findings suggest that DBNs extract from the data a hierarchy of features that is consistent with the principle of maximal relevance.
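As a concrete handle on "relevance": the sketch below, in Python, assumes the empirical estimators common in the relevance literature, where the resolution H[s] is the entropy of the empirical distribution of observed states and the relevance H[K] is the entropy of the distribution of multiplicities (the number of times each state occurs in the sample). The function name and the toy data are illustrative, not taken from the paper.

```python
import numpy as np
from collections import Counter

def resolution_and_relevance(samples):
    """Empirical resolution H[s] and relevance H[K] of a sample of
    discrete states (e.g. binarised hidden-layer activations of a DBN).

    samples: iterable of hashable states (e.g. tuples of 0/1).
    Returns (H_s, H_K) in nats. Illustrative sketch, not the paper's code.
    """
    samples = list(samples)
    N = len(samples)
    counts = Counter(samples)          # k_s: how many times each state occurs
    # Resolution: entropy of the empirical state distribution k_s / N
    p_s = np.array(list(counts.values())) / N
    H_s = -np.sum(p_s * np.log(p_s))
    # Relevance: entropy of the multiplicity distribution,
    # with m_k = number of distinct states observed exactly k times,
    # so that the weights k * m_k / N sum to one
    m = Counter(counts.values())
    p_k = np.array([k * m_k for k, m_k in m.items()]) / N
    H_K = -np.sum(p_k * np.log(p_k))
    return H_s, H_K

# Toy usage: random binary "hidden states" of a small layer,
# deliberately in the under-sampling regime (1000 samples, 256 states)
rng = np.random.default_rng(0)
data = [tuple(rng.integers(0, 2, size=8)) for _ in range(1000)]
print(resolution_and_relevance(data))
```

In this language, the principle of maximal relevance selects, among representations at a given resolution H[s], those whose multiplicity distribution has maximal entropy H[K].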
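For reference, "pairwise Ising model" denotes the maximum-entropy distribution constrained by the first- and second-order statistics of the hidden units. The abstract does not specify a parametrisation, so the generic form is given here as an assumption:

\[
p(\mathbf{s}) \;=\; \frac{1}{Z}\,\exp\Big(\sum_i h_i s_i \;+\; \sum_{i<j} J_{ij}\, s_i s_j\Big),
\]

where the h_i are local fields, the J_ij pairwise couplings, and Z the normalisation. Since the model contains no interactions beyond second order, it can only encode generic, low-order features, which is the sense in which shallow layers are described above.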