Data Visualization Literacy assessments are typically administered via fixed sets of Data Visualization items, despite substantial heterogeneity in how different people interpret the same visualization. This paper presents and evaluates an approach for predicting Human Interpretation Correctness (P-HIC) of data visualizations, i.e., anticipating whether a specific person will interpret a data visualization correctly before being exposed to it, enabling more personalized assessment and training. We operationalize P-HIC as a binary classification problem using 22 features spanning Human Profile, Human Performance, and Item Difficulty (including ExpertDifficulty and RaschDifficulty). We evaluate three machine-learning models (Logistic Regression, Random Forest, Multilayer Perceptron), with and without feature selection, using a survey in which 1,083 participants answered 32 Data Visualization items (eight data visualizations with four items each), yielding 34,656 item responses. Performance is assessed via ten-times-repeated ten-fold cross-validation on each of the 32 item-specific datasets, using AUC and Cohen's kappa. Logistic Regression with feature selection is the best-performing approach, reaching a median AUC of 0.72 and a median kappa of 0.32. Feature analyses show RaschDifficulty as the dominant predictor, followed by experts' ratings and prior correctness (PercCorrect), whose relevance increases across sessions. Profile information contributed little to P-HIC. Our results support the feasibility of anticipating misinterpretations of data visualizations and motivate the runtime selection of data visualization items tailored to an audience, thereby improving the efficiency of Data Visualization Literacy assessment and targeted training.
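The evaluation protocol described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses synthetic data in place of one item-specific P-HIC dataset (1,083 rows, 22 features, a binary correctness label), scikit-learn's `SelectKBest` as a stand-in feature-selection step, and a ten-times-repeated stratified ten-fold cross-validation scored by AUC and Cohen's kappa.

```python
# Hedged sketch: repeated 10-fold CV of Logistic Regression with feature
# selection on synthetic data standing in for one item-specific dataset.
# All concrete choices (k=10 features, f_classif, scaling) are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 1,083 participants x 22 features, binary label
# (1 = item interpreted correctly, 0 = misinterpreted).
X, y = make_classification(n_samples=1083, n_features=22,
                           n_informative=6, random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),        # feature-selection step
    LogisticRegression(max_iter=1000),   # best model family in the paper
)

# Ten-times-repeated, stratified ten-fold cross-validation (100 folds total).
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
aucs, kappas = [], []
for train_idx, test_idx in cv.split(X, y):
    pipe.fit(X[train_idx], y[train_idx])
    proba = pipe.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], proba))
    kappas.append(cohen_kappa_score(y[test_idx], proba >= 0.5))

print(f"median AUC: {np.median(aucs):.2f}, "
      f"median kappa: {np.median(kappas):.2f}")
```

In the paper this loop would be run once per item, producing the 32 per-item score distributions whose medians (AUC 0.72, kappa 0.32 for the best configuration) are reported above; the synthetic data here yields arbitrary scores and only demonstrates the procedure.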