Large Language Models (LLMs) have demonstrated remarkable potential across diverse domains, yet their application in the legal sector, particularly in low-resource contexts, remains limited. This study addresses the challenges of adapting LLMs to the Palestinian legal domain, where political instability, fragmented legal frameworks, and limited AI resources hinder effective machine-learning applications. We present a fine-tuned model based on a quantized version of Llama-3.2-1B-Instruct, trained on a synthetic dataset derived from Palestinian legal texts. By pairing a smaller-scale model with strategically generated question-answer pairs, we achieve a cost-effective, locally sustainable solution that provides accurate and contextually relevant legal guidance. Our experiments demonstrate promising performance across various query types, ranging from yes/no questions and narrative explanations to complex legal differentiations, while highlighting areas for improvement, such as handling calculation-based inquiries and structured list formatting. This work provides a pathway for deploying AI-driven legal assistance tools tailored to the needs of resource-constrained environments.