As large language models (LLMs) are deployed widely, detecting and understanding bias in their outputs is critical. We present LLM BiasScope, a web application for side-by-side comparison of LLM outputs with real-time bias analysis. The system supports multiple providers (Google Gemini, DeepSeek, MiniMax, Mistral, Meituan, Meta Llama) and enables researchers and practitioners to compare models on the same prompts while analyzing bias patterns. LLM BiasScope uses a two-stage bias detection pipeline: sentence-level bias detection followed by bias type classification for biased sentences. The analysis runs automatically on both user prompts and model responses, providing statistics, visualizations, and detailed breakdowns of bias types. The interface displays two models side-by-side with synchronized streaming responses, per-model bias summaries, and a comparison view highlighting differences in bias distributions. The system is built on Next.js with React, integrates Hugging Face inference endpoints for bias detection, and uses the Vercel AI SDK for multi-provider LLM access. Features include real-time streaming, export to JSON/PDF, and interactive visualizations (bar charts, radar charts) for bias analysis. LLM BiasScope is available as an open-source web application, providing a practical tool for bias evaluation and comparative analysis of LLM behaviour.
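The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the `detectBias` and `classifyBiasType` stubs stand in for calls to the Hugging Face inference endpoints, and the keyword heuristics inside them are placeholders for real model predictions.

```typescript
// Hypothetical sketch of a two-stage bias analysis pipeline:
// stage 1 flags biased sentences, stage 2 classifies the bias type
// for flagged sentences only. Both stages are stubbed heuristics
// standing in for remote model inference calls.

type BiasResult = { sentence: string; biased: boolean; biasType?: string };

// Stage 1 stub: binary sentence-level bias detection.
function detectBias(sentence: string): boolean {
  return /\b(always|never|all (men|women))\b/i.test(sentence);
}

// Stage 2 stub: coarse bias-type classification for biased sentences.
function classifyBiasType(sentence: string): string {
  return /\b(men|women)\b/i.test(sentence) ? "gender" : "generalization";
}

// Run the pipeline over a block of text (prompt or model response).
function analyzeText(text: string): BiasResult[] {
  const sentences = text.split(/(?<=[.!?])\s+/).filter(s => s.length > 0);
  return sentences.map(sentence =>
    detectBias(sentence)
      ? { sentence, biased: true, biasType: classifyBiasType(sentence) }
      : { sentence, biased: false }
  );
}

const results = analyzeText("All women love shopping. The sky is blue.");
console.log(results);
```

Keeping stage 2 conditional on stage 1 mirrors the design in the abstract: type classification is only paid for on the (typically small) subset of sentences flagged as biased.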