Oversampling is one of the most widely used approaches for addressing imbalanced classification. The core idea is to generate additional minority samples to rebalance the dataset. Most existing methods, such as SMOTE, require converting categorical variables into numerical vectors, which often leads to information loss. Recently, large language model (LLM)-based methods have been introduced to overcome this limitation. However, current LLM-based approaches typically generate minority samples with limited diversity, which reduces robustness and generalizability in downstream classification tasks. To address this gap, we propose a novel LLM-based oversampling method designed to enhance diversity. First, we introduce a sampling strategy that conditions synthetic sample generation on both minority labels and features. Second, we develop a new permutation strategy for fine-tuning pre-trained LLMs. Third, we fine-tune the LLM not only on minority samples but also on interpolated samples to further enrich variability. Extensive experiments on ten tabular datasets demonstrate that our method significantly outperforms eight state-of-the-art baselines, and that the generated synthetic samples are both realistic and diverse. Moreover, we provide a theoretical analysis from an entropy-based perspective, showing that our method encourages diversity in the generated samples.
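To make two of the abstract's ingredients concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): SMOTE-style interpolation between minority rows, and serialization of a tabular row into text with a randomly permuted feature order, as commonly done when fine-tuning LLMs on tabular data. All function names and column names here are illustrative assumptions.

```python
# Hypothetical sketch of (a) interpolating minority samples and
# (b) serializing rows with a random feature permutation for LLM fine-tuning.
import random

def interpolate_minority(x1, x2, numeric_cols):
    """Linearly interpolate numeric features of two minority rows;
    categorical features are copied from a randomly chosen parent."""
    lam = random.random()
    child = {}
    for col in x1:
        if col in numeric_cols:
            child[col] = x1[col] + lam * (x2[col] - x1[col])
        else:
            child[col] = random.choice([x1[col], x2[col]])
    return child

def serialize_row(row, label, label_name="label"):
    """Serialize a row as 'column is value' clauses in a random order,
    so the LLM sees many feature-order permutations during fine-tuning."""
    cols = list(row.keys())
    random.shuffle(cols)
    clauses = [f"{c} is {row[c]}" for c in cols]
    clauses.append(f"{label_name} is {label}")
    return ", ".join(clauses)

# Toy usage with two minority samples.
a = {"age": 34, "income": 52000, "job": "nurse"}
b = {"age": 58, "income": 61000, "job": "teacher"}
child = interpolate_minority(a, b, numeric_cols={"age", "income"})
print(serialize_row(child, label="minority"))
```

Training on such permuted serializations of both original and interpolated minority rows is one plausible way to expose the model to more varied contexts; the paper's actual sampling and permutation strategies may differ.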