How can we find a natural grouping of a large, real-world data set? Clustering requires a balance between abstraction and representation. To identify clusters, we need to abstract away superfluous details of individual objects. But we also need a representation rich enough to capture the key features that members of a group share and that distinguish them from other groups. Each clustering algorithm implements a different trade-off between the two. Classical K-means combines a high level of abstraction (details are simply averaged out) with a very simple representation (every cluster is a Gaussian in the original data space). We will see how subspace and deep clustering approaches support high-dimensional and complex data by allowing richer representations. However, with increasing representational expressiveness comes the need to enforce abstraction explicitly in the objective function, to ensure that the resulting method performs clustering rather than mere representation learning. We will see how current deep clustering methods define and enforce abstraction through centroid-based and density-based clustering losses. Balancing the conflicting goals of abstraction and representation is challenging. Ideas from subspace clustering help: one latent space is learned for the information relevant to clustering, while a second latent space captures all remaining information in the data. The tutorial ends with an outlook on future research in clustering. Future methods will balance abstraction and representation more adaptively to improve performance, energy efficiency, and interpretability. The human brain is very good at clustering and related tasks such as one-shot learning precisely because it automatically finds the sweet spot between abstraction and representation. So there is still much room for improvement.
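The K-means trade-off described above can be made concrete with a minimal sketch of Lloyd's algorithm in plain Python (function and variable names here are illustrative, not from the tutorial):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm: clustering by averaging.

    Representation is minimal (each cluster is just one centroid), and
    abstraction is maximal (all detail of the members is averaged out).
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[j].append(p)
        # Update step: member details are averaged out into a new centroid.
        for j, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster emptied
                centroids[j] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, clusters

# Two well-separated groups: the centroids converge to the group means.
data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cents, _ = kmeans(data, k=2)
```

Note how little of each object survives: the entire model is `k` mean vectors, which is exactly why richer representations are needed before K-means-style losses can handle complex, high-dimensional data.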