This paper describes the approach of the UniBuc - NLP team in tackling SemEval 2024 Task 8: Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection. We explored transformer-based and hybrid deep learning architectures. For subtask B, our transformer-based model achieved a strong \textbf{second-place} finish out of $77$ teams with an accuracy of \textbf{86.95\%}, demonstrating the architecture's suitability for this task. However, our models overfit on subtask A, which could potentially be mitigated with less fine-tuning and an increased maximum sequence length. For subtask C (token-level classification), our hybrid model overfit during training, hindering its ability to detect transitions between human-written and machine-generated text.