Instruction tuning is now the default way to train and adapt large language models, but many instruction--input--output pairs are only weakly specified: for a given input, the same output can remain plausible under several alternative instructions. This raises a simple question: \emph{does the instruction uniquely determine the target output?} We propose the \textbf{Task-Specificity Score (TSS)} to quantify how much an instruction matters for predicting its output, by contrasting the true instruction against plausible alternative instructions for the same input. We further introduce \textbf{TSS++}, which uses hard alternatives and a small quality term to mitigate easy-negative effects. Across three instruction datasets (\textsc{Alpaca}, \textsc{Dolly-15k}, \textsc{NI-20}) and three open LLMs (Gemma, Llama, Qwen), we show that selecting task-specific examples improves downstream performance under tight token budgets and complements quality-based filters such as perplexity and IFD.
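One natural instantiation of such a contrastive score is the following sketch (illustrative only, not necessarily the paper's exact definition; the model $p_\theta$, the alternative-instruction set $\{I'_k\}_{k=1}^{K}$, and the averaging form are assumptions):
\[
\mathrm{TSS}(I, x, y) \;=\; \log p_\theta(y \mid I, x) \;-\; \frac{1}{K}\sum_{k=1}^{K} \log p_\theta\!\left(y \mid I'_k, x\right),
\]
where a high score indicates that the true instruction $I$ raises the likelihood of the target output $y$ relative to plausible alternative instructions $I'_k$ for the same input $x$; examples with low scores are those whose outputs remain predictable without the instruction.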