IR in low-resource languages remains limited by the scarcity of high-quality, task-specific annotated datasets. Manual annotation is expensive and difficult to scale, while using large language models (LLMs) as automated annotators raises concerns about label reliability, bias, and evaluation validity. This work presents a Bangla IR dataset constructed with a BETA-labeling framework that employs multiple LLM annotators drawn from diverse model families. The framework incorporates contextual alignment, consistency checks, and majority agreement, followed by human evaluation to verify label quality. Beyond dataset creation, we examine whether IR datasets from other low-resource languages can be effectively reused through one-hop machine translation. Using LLM-based translation across multiple language pairs, we evaluate meaning preservation and task validity between source and translated datasets. Our experiments reveal substantial variation across languages, reflecting language-dependent biases and inconsistent semantic preservation that directly affect the reliability of cross-lingual dataset reuse. Overall, this study highlights both the potential and the limitations of LLM-assisted dataset creation for low-resource IR. It provides empirical evidence of the risks associated with cross-lingual dataset reuse and offers practical guidance for constructing more reliable benchmarks and evaluation pipelines in low-resource language settings.
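The abstract describes aggregating labels from multiple LLM annotators via majority agreement, with disagreements deferred to human evaluation. The following is a minimal sketch of that aggregation step, not the paper's actual pipeline: the function name, the binary relevance labels, and the agreement threshold are all illustrative assumptions.

```python
from collections import Counter

def aggregate_labels(annotations: dict[str, int], min_agreement: float = 0.5) -> int | None:
    """Return the majority label across LLM annotators, or None if no label
    clears the agreement threshold (i.e., the item is routed to human review).

    Sketch only: names and threshold are assumptions, not the paper's method.
    """
    counts = Counter(annotations.values())
    label, votes = counts.most_common(1)[0]
    if votes / len(annotations) > min_agreement:
        return label
    return None  # no majority agreement: flag for human evaluation

# Example: three annotators from different model families label one
# query-document pair for binary relevance (1 = relevant, 0 = not relevant).
example = {"model_a": 1, "model_b": 1, "model_c": 0}
print(aggregate_labels(example))  # -> 1 (2/3 majority)
```

Under this reading, the human-evaluation stage described in the abstract serves both to audit the majority-agreed labels and to resolve the items the vote leaves undecided.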