Large Language Models (LLMs) have demonstrated efficacy in a variety of linguistic applications, including question answering and controlled text generation. However, their ability to switch between opposing response styles in professional domains remains underexplored. This study introduces ProSwitch, a novel approach that enables a language model to switch between professional and non-professional answers by tuning and evaluating it under the guidance of domain and style knowledge. ProSwitch unfolds in three phases: LLM-augmented data preparation to collect domain knowledge and QA pairs, instruction tuning to optimize LLMs with multiple levels of knowledge, and comprehensive evaluation to assess both the style discrimination and the reference-based quality of the generated text. Comparative analysis against general and specialized LLMs shows that ProSwitch outperforms baselines in switching between professional and non-professional responses.