Effective and controllable data selection is critical for LLM instruction tuning, especially with massive open-source datasets. Existing approaches primarily rely on instance-level quality scores, or on diversity metrics based on embedding clusters or semantic tags. However, constrained by the flatness of embedding spaces or the coarseness of tags, these approaches overlook fine-grained knowledge and its intrinsic hierarchical dependencies, consequently hindering precise data valuation and knowledge-aligned sampling. To address this challenge, we propose Tree-aware Aligned Global Sampling (TAGS), a unified framework that leverages a knowledge tree built from fine-grained tags, thereby enabling joint control of global quality, diversity, and target alignment. Using an LLM-based tagger, we extract atomic knowledge concepts, which are organized into a global tree through bottom-up hierarchical clustering. After grounding data instances onto this tree, we apply a tree-aware metric to quantify data quality and diversity, facilitating effective sampling. Our controllable sampling strategy maximizes tree-level information gain and enforces leaf-level alignment with target domains via KL divergence. Extensive experiments demonstrate that TAGS significantly outperforms state-of-the-art baselines. Notably, it surpasses the full-dataset model by \textbf{+5.84\%} using only \textbf{5\%} of the data, while our aligned sampling strategy further boosts average performance by \textbf{+4.24\%}.
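The sampling objective described above (maximize tree-level information gain, constrain leaf-level KL divergence to a target distribution) can be sketched as a greedy procedure. This is a minimal illustration, not TAGS itself: the instance format, the leaf-coverage proxy for information gain, and the trade-off weight `lam` are all assumptions introduced here for concreteness.

```python
import math
from collections import Counter

def kl_divergence(p, q, eps=1e-9):
    # KL(p || q) over leaf concepts; eps smooths zero-probability leaves
    return sum(pv * math.log((pv + eps) / (q.get(k, 0.0) + eps))
               for k, pv in p.items())

def leaf_distribution(instances):
    # Normalized leaf-tag frequencies over a set of instances
    counts = Counter(t for inst in instances for t in inst["tags"])
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()} if total else {}

def greedy_select(pool, budget, target_dist, lam=1.0):
    # Greedy selection: reward newly covered leaves (a crude proxy for
    # tree-level information gain) minus lam * KL to the target leaf
    # distribution (the alignment term).
    selected, remaining = [], list(pool)
    for _ in range(min(budget, len(remaining))):
        def score(inst):
            trial = selected + [inst]
            coverage = len({t for i in trial for t in i["tags"]})
            return coverage - lam * kl_divergence(
                leaf_distribution(trial), target_dist)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: align selection toward a "python"/"loops" target domain
pool = [
    {"id": 0, "tags": ["algebra", "proof"]},
    {"id": 1, "tags": ["algebra"]},
    {"id": 2, "tags": ["python", "loops"]},
]
target = {"python": 0.5, "loops": 0.5}
picked = greedy_select(pool, budget=2, target_dist=target)
```

In this toy run the on-target instance (`id` 2) is chosen first because its leaf distribution matches the target exactly; the alignment penalty then arbitrates among the remaining off-target candidates.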