This report is the system description of the BeeManc team for the shared task Plain Language Adaptation of Biomedical Abstracts (PLABA) 2024. It contains two sections corresponding to the two sub-tasks of PLABA 2024. In Task 1, we applied fine-tuned RoBERTa-Base models to identify and classify difficult terms, jargon, and acronyms in biomedical abstracts, and reported F1 scores. Due to time constraints, we did not complete the replacement task. In Task 2, we leveraged LLaMA-3.1-70B-Instruct and GPT-4o with one-shot prompts to perform abstract adaptation, and reported scores in BLEU, SARI, BERTScore, LENS, and SALSA. In the official PLABA-2024 evaluation of Tasks 1A and 1B, our \textbf{much smaller fine-tuned RoBERTa-Base} model ranked 3rd and 2nd on the two sub-tasks respectively, and \textbf{1st on averaged F1 scores across the two tasks} among the 9 evaluated systems. Our LLaMA-3.1-70B-Instruct model achieved the \textbf{highest Completeness} score on Task 2. We share our fine-tuned models and related resources at \url{https://github.com/HECTA-UoM/PLABA2024}.