Text compression shrinks textual data while preserving essential information, easing constraints on storage, bandwidth, and computation. Despite the growing volume of English text data in communication, the integration of lossless compression techniques with transformer-based text decompression has received little attention. The primary barrier to advancing text compression and restoration lies in optimizing transformer-based approaches with efficient pre-processing and in integrating lossless compression algorithms, which remained unresolved in prior attempts. Here, we propose a transformer-based method named RejuvenateForme for text decompression, addressing these issues through a new pre-processing technique and a lossless compression method. Our meticulous pre-processing technique, incorporating the Lempel-Ziv-Welch algorithm, achieves compression ratios of 12.57, 13.38, and 11.42 on the BookCorpus, EN-DE, and EN-FR corpora, demonstrating state-of-the-art compression ratios compared to other deep learning and traditional approaches. Furthermore, RejuvenateForme achieves BLEU scores of 27.31, 25.78, and 50.45 on the EN-DE, EN-FR, and BookCorpus corpora, showcasing its comprehensive efficacy. In comparison, the pre-trained T5-Small exhibits better performance than prior state-of-the-art models.
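For context on the lossless back end, the following is a minimal, illustrative sketch of textbook Lempel-Ziv-Welch compression in Python. It is not the paper's actual pre-processing pipeline; the function name and the 2-byte-per-code ratio estimate in the demo are assumptions made for illustration only.

```python
def lzw_compress(text: str) -> list[int]:
    """Textbook LZW: emit dictionary indices for the longest known prefix.

    Illustrative sketch only; not RejuvenateForme's pre-processing.
    """
    # Initialize the dictionary with all single-character strings.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    codes = []
    for ch in text:
        candidate = current + ch
        if candidate in dictionary:
            # Extend the current match while it is still in the dictionary.
            current = candidate
        else:
            # Emit the code for the longest match, then learn the new string.
            codes.append(dictionary[current])
            dictionary[candidate] = next_code
            next_code += 1
            current = ch
    if current:
        codes.append(dictionary[current])
    return codes


if __name__ == "__main__":
    sample = "to be or not to be, that is the question. " * 50
    codes = lzw_compress(sample)
    # Compression ratio = original bytes / compressed bytes,
    # assuming a fixed 2-byte encoding per output code (an assumption
    # that holds while the dictionary stays under 65,536 entries).
    ratio = len(sample.encode("utf-8")) / (2 * len(codes))
    print(f"{len(codes)} codes, compression ratio = {ratio:.2f}")
```

Because LZW learns recurring substrings on the fly, highly repetitive text compresses well; the ratios a pipeline achieves therefore depend heavily on how the pre-processing step normalizes the input before the dictionary coder runs.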