We present ELTEX (Efficient LLM Token Extraction), a domain-driven framework for generating high-quality synthetic training data in specialized domains. While Large Language Models (LLMs) have shown impressive general capabilities, their performance in specialized domains like cybersecurity remains limited by the scarcity of domain-specific training data. ELTEX addresses this challenge by systematically integrating explicit domain indicator extraction with dynamic prompting to preserve critical domain knowledge throughout the generation process. We demonstrate ELTEX's effectiveness in the context of blockchain-related cyberattack detection, where we fine-tune Gemma-2B using various combinations of real and ELTEX-generated data. Our results show that the ELTEX-enhanced model achieves performance competitive with GPT-4 across both standard classification metrics and uncertainty calibration, while requiring significantly fewer computational resources. We release a curated synthetic dataset of social media texts for cyberattack detection in blockchain. Our work demonstrates that domain-driven synthetic data generation can effectively bridge the performance gap between resource-efficient models and larger architectures in specialized domains.