Recent advancements in language technology and Artificial Intelligence have resulted in numerous Language Models being proposed to perform various tasks in the legal domain, ranging from predicting judgments to generating summaries. Despite their immense potential, these models have been shown to learn and exhibit societal biases and to make unfair predictions. In this study, we explore the ability of Large Language Models (LLMs) to perform legal tasks in the Indian landscape when social factors are involved. We present a novel metric, the $\beta$-weighted \textit{Legal Safety Score} ($LSS_{\beta}$), which encapsulates both the fairness and the accuracy of an LLM. We assess an LLM's safety by considering its performance on the \textit{Binary Statutory Reasoning} task and the fairness it exhibits with respect to various axes of social disparity in Indian society. Task performance and fairness scores of the LLaMA and LLaMA-2 models indicate that the proposed $LSS_{\beta}$ metric can effectively determine a model's readiness for safe use in the legal sector. We also propose finetuning pipelines, utilising specialised legal datasets, as a potential method to mitigate bias and improve model safety. Finetuning the LLaMA and LLaMA-2 models increases their $LSS_{\beta}$, improving their usability in the Indian legal domain. Our code is publicly released.
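As an illustrative sketch only (the precise definition is given in the body of the paper), one natural instantiation of a $\beta$-weighted combination of an accuracy term $LSS_{acc}$ and a fairness term $LSS_{fair}$, both assumed here to lie in $[0, 1]$, is an $F_{\beta}$-style weighted harmonic mean:
$$LSS_{\beta} = \frac{(1 + \beta^{2}) \cdot LSS_{acc} \cdot LSS_{fair}}{\beta^{2} \cdot LSS_{acc} + LSS_{fair}}$$
Under this form, $\beta > 1$ places greater weight on the fairness term and $\beta < 1$ on the accuracy term, with the score degrading sharply if either component is low. The symbols $LSS_{acc}$ and $LSS_{fair}$ are hypothetical notation introduced for illustration, not notation taken from the abstract.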