Using NLP to analyze authentic learner language helps build automated assessment and feedback tools. It also offers new and extensive insights into the development of second language production. However, there is a lack of research explicitly combining these aspects. This study aimed to classify Estonian proficiency examination writings (levels A2-C1), on the assumption that careful feature selection can lead to more explainable and generalizable machine learning models for language testing. Various linguistic properties of the training data were analyzed to identify relevant proficiency predictors associated with increasing complexity and correctness rather than with the writing task. The resulting lexical, morphological, surface, and error features were used to train classification models, which were compared with models that also allowed for other features. The pre-selected features yielded similar test accuracy but reduced variation in the classification of different text types. The best classifiers achieved an accuracy of around 0.9. Additional evaluation on an earlier exam sample revealed that the writings have become more complex over a 7-10-year period, while accuracy still reached 0.8 with some feature sets. The resulting models have been implemented in the writing evaluation module of an Estonian open-source language learning environment.