A proliferation of Large Language Models (the GPT series, BLOOM, LLaMA, and more) is driving forward novel development of multipurpose AI for a variety of tasks, particularly natural language processing (NLP) tasks. These models demonstrate strong performance across a range of tasks; however, they have shown brittleness when applied to more niche or narrow domains, where hallucinations, or fluent but factually incorrect responses, reduce performance. Given the complex nature of scientific domains, it is prudent to investigate the trade-offs of leveraging off-the-shelf versus more targeted foundation models for scientific applications. In this work, we examine the benefits of in-domain pre-training for a given scientific domain, chemistry, and compare these to open-source, off-the-shelf models with zero-shot and few-shot prompting. Our results show not only that in-domain base models perform reasonably well on in-domain tasks in a zero-shot setting, but also that further adaptation via instruction fine-tuning yields impressive performance on chemistry-specific tasks such as named entity recognition and molecular formula generation.