Assessing the capabilities and limitations of large language models (LLMs) has garnered significant interest, yet evaluations of multiple models in real-world scenarios remain rare. Multilingual evaluation often relies on translated benchmarks, which typically fail to capture the linguistic and cultural nuances of the source language. This study provides an extensive assessment of 24 LLMs on real-world data collected from Indian patients interacting with a medical chatbot in Indian English and four other Indic languages. We employ a uniform Retrieval-Augmented Generation (RAG) framework to generate responses, which are evaluated using both automated techniques and human evaluators on four metrics relevant to our application. We find that models vary significantly in performance and that instruction-tuned Indic models do not always perform well on Indic-language queries. Further, we show empirically that factual correctness is generally lower for responses to Indic queries than for English queries. Finally, our qualitative analysis shows that code-mixed and culturally relevant queries in our dataset pose challenges to the evaluated models.