This report describes the system of the BeeManc team for the shared task Plain Language Adaptation of Biomedical Abstracts (PLABA) 2024. It contains two sections corresponding to the two sub-tasks of PLABA 2024. In Task 1, we applied fine-tuned RoBERTa-Base models to identify and classify difficult terms, jargon, and acronyms in biomedical abstracts, and we report F1 scores; due to time constraints, we did not complete the replacement task. In Task 2, we leveraged Llama-3.1-70B-Instruct and GPT-4o with one-shot prompts to complete the abstract adaptation, and we report scores in BLEU, SARI, BERTScore, LENS, and SALSA. In the official PLABA-2024 evaluation of Tasks 1A and 1B, our \textbf{much smaller fine-tuned RoBERTa-Base} model ranked 3rd and 2nd, respectively, on the two sub-tasks, and \textbf{1st on the averaged F1 score across the two tasks} among the 9 evaluated systems. We share our fine-tuned models and related resources at \url{https://github.com/HECTA-UoM/PLABA2024}