Large Language Models (LLMs) have demonstrated rapidly growing capabilities in language understanding, generation, and reasoning. Despite their remarkable performance in natural language processing applications, LLMs are susceptible to undesirable and erratic behaviors, including hallucinations, unreliable reasoning, and the generation of harmful content. These flawed behaviors undermine trust in LLMs and pose significant hurdles to their adoption in real-world applications such as legal assistance and medical diagnosis, where precision, reliability, and ethical considerations are paramount. They can also lead to user dissatisfaction, which current approaches assess and capture inadequately. To evaluate users' satisfaction and trust in their interactions with LLMs effectively and transparently, we design and develop LLMChain, a decentralized blockchain-based reputation system that combines automatic evaluation with human feedback to assign contextual reputation scores that accurately reflect an LLM's behavior. LLMChain not only helps users and entities identify the most trustworthy LLM for their specific needs, but also provides LLM developers with valuable information to refine and improve their models. To our knowledge, this is the first blockchain-based distributed framework for sharing and evaluating LLMs. Implemented with emerging tools, LLMChain is evaluated on two benchmark datasets, demonstrating its effectiveness and scalability in assessing seven different LLMs.
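To illustrate the kind of aggregation the abstract describes, the minimal sketch below blends an automatic-evaluation score with human-feedback scores into a single reputation value. The function name, the simple weighted-average scheme, and the weights are illustrative assumptions, not LLMChain's actual scoring formulation.

```python
# Hypothetical sketch of a contextual reputation score that combines
# automatic evaluation with human feedback, in the spirit of LLMChain.
# The weighting scheme here is an assumption for illustration only.

def reputation_score(auto_scores, human_scores, w_auto=0.5, w_human=0.5):
    """Blend mean automatic-evaluation and human-feedback scores in [0, 1]."""
    if not auto_scores and not human_scores:
        return 0.0
    # Average each evidence source separately so one noisy channel
    # cannot dominate the other.
    auto = sum(auto_scores) / len(auto_scores) if auto_scores else 0.0
    human = sum(human_scores) / len(human_scores) if human_scores else 0.0
    return w_auto * auto + w_human * human

# Example: an LLM with strong automatic metrics but mixed user feedback.
score = reputation_score([0.9, 0.8], [0.6, 0.4, 0.8])
```

In a deployed system such scores would be recorded on-chain per interaction context, so that users can compare models per task rather than by a single global rating.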