Large Language Model (LLM) alignment conventionally relies on supervised fine-tuning or reinforcement learning-based frameworks. These methods typically require labeled or preference datasets and update model weights to align the LLM with a training objective or reward model. Meanwhile, in social sciences such as cross-cultural studies, factor analysis is widely used to uncover the underlying dimensions, or latent variables, that explain observed patterns in survey data. Because these survey-derived measurements are non-differentiable, such alignment methods are infeasible for aligning LLMs with cultural dimensions. To overcome this, we propose a parameter-efficient strategy that combines soft prompt tuning, which freezes the model parameters while modifying the input prompt embeddings, with Differential Evolution (DE), a black-box optimization method for cases where a differentiable objective is unattainable. This strategy ensures alignment consistency without the need for preference data or model parameter updates, significantly improving efficiency and mitigating overfitting. Our method yields substantial improvements in the cultural dimensions of Llama-3-8B-Instruct across multiple regions, outperforming both the naive LLM and the in-context learning (ICL) baseline, and effectively bridges computational models with human cultural nuances.
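To make the optimization loop concrete, the following is a minimal sketch of DE over soft-prompt embeddings with a non-differentiable objective. It is not the authors' released implementation: the prompt-token and embedding sizes are assumed small for illustration, `score_cultural_alignment` is a hypothetical stand-in for the survey-based, factor-analysis-derived measurement, and SciPy's `differential_evolution` is used in place of whatever DE variant the paper actually employs.

```python
# Sketch: evolve soft-prompt embeddings with Differential Evolution while the
# LLM's weights stay frozen. Only the prompt embeddings are optimized.
import numpy as np
from scipy.optimize import differential_evolution

N_TOKENS, EMBED_DIM = 4, 64  # assumed small sizes for illustration


def score_cultural_alignment(soft_prompt: np.ndarray) -> float:
    """Hypothetical black-box fitness: in the real pipeline, prepend
    `soft_prompt` to the frozen LLM's input embeddings, collect its survey
    answers, and measure the distance between the model's factor-analysis
    scores and a target region's scores. Mocked here with a fixed target so
    the sketch runs end to end."""
    target = np.full_like(soft_prompt, 0.1)
    return float(np.mean((soft_prompt - target) ** 2))


def fitness(flat_params: np.ndarray) -> float:
    # DE operates on flat vectors; reshape back to (tokens, embedding dim).
    soft_prompt = flat_params.reshape(N_TOKENS, EMBED_DIM)
    return score_cultural_alignment(soft_prompt)  # lower = better aligned


# Box bounds keep the evolved embeddings in a plausible range; no gradients
# are needed, matching the non-differentiable survey-derived objective.
bounds = [(-1.0, 1.0)] * (N_TOKENS * EMBED_DIM)
result = differential_evolution(
    fitness, bounds, maxiter=50, popsize=15, seed=0, polish=False
)
best_soft_prompt = result.x.reshape(N_TOKENS, EMBED_DIM)
print("best fitness:", result.fun)
```

In this setup the fitness call is the only interface to the model, which is why DE (population-based, gradient-free) can drive the alignment even though the cultural-dimension scores cannot be backpropagated through.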