Telecom services are central to everyday life in modern societies. The availability of numerous online forums and discussion platforms enables telecom providers to improve their services by exploring customers' views and learning about the common issues they face. Natural Language Processing (NLP) tools can be used to process the free text collected. One way of working with such data is to represent text as numerical vectors using one of the many word embedding models based on neural networks. This research uses a novel dataset of telecom customers' reviews to perform an extensive study of how different word embedding algorithms affect the text classification process. Several state-of-the-art word embedding techniques are considered, including BERT, Word2Vec and Doc2Vec, coupled with several classification algorithms. The important issue of feature engineering and dimensionality reduction is addressed, and several PCA-based approaches are explored. Moreover, the energy consumption of the different word embeddings is investigated. The findings show that some word embedding models lead to consistently better text classifiers in terms of precision, recall and F1-score. In particular, for the more challenging classification tasks, BERT combined with PCA achieved the highest performance metrics. Furthermore, our proposed PCA approach of combining word vectors using the first principal component shows clear advantages in performance over the traditional approach of taking the average.
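The abstract only names the PCA-based aggregation idea at a high level, so the following is a minimal, illustrative sketch rather than the authors' exact method. It contrasts the traditional mean-pooling of a document's word vectors with an assumed interpretation of the proposed approach: taking the first principal component of the document's word-vector matrix as the document representation. The function names (mean_pool, first_pc_pool), the 300-dimensional embeddings and the random toy data are hypothetical placeholders.

import numpy as np
from sklearn.decomposition import PCA

def mean_pool(word_vectors: np.ndarray) -> np.ndarray:
    # Baseline: average the document's word vectors (shape: n_words x dim).
    return word_vectors.mean(axis=0)

def first_pc_pool(word_vectors: np.ndarray) -> np.ndarray:
    # Assumed PCA variant: represent the document by the first principal
    # component of its word-vector matrix, i.e. the direction of maximal
    # variance across the document's word embeddings.
    pca = PCA(n_components=1)
    pca.fit(word_vectors)          # rows = words, columns = embedding dimensions
    return pca.components_[0]      # a single vector in embedding space

# Toy usage: random 300-dimensional vectors standing in for Word2Vec outputs.
rng = np.random.default_rng(0)
doc = rng.normal(size=(12, 300))   # 12 tokens, 300-dimensional embeddings
doc_vec_mean = mean_pool(doc)
doc_vec_pc1 = first_pc_pool(doc)
print(doc_vec_mean.shape, doc_vec_pc1.shape)   # (300,) (300,)

Either pooled vector can then be fed to a downstream classifier; the study compares such document representations across several embedding models and classifiers.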