In the rapidly evolving field of information visualization, rigorous evaluation is essential for validating new techniques, understanding user interactions, and demonstrating the effectiveness and usability of visualizations. Faithful evaluations provide valuable insights into how users interact with and perceive a system, enabling designers to identify potential weaknesses and make informed decisions about design choices and improvements. However, an emerging trend of multiple evaluations within a single research paper raises critical questions about the sustainability, feasibility, and methodological rigor of such an approach. New researchers and students, influenced by this trend, may believe that multiple evaluations are necessary for a study regardless of its contribution type. Yet the number of evaluations in a study should depend on its contributions and merits, not on a trend of including multiple evaluations to strengthen a paper. So, how many evaluations are enough? This is a situational question and cannot be answered formulaically. Our objective is to summarize current trends and patterns in order to assess the distribution of evaluation methods across different paper contribution types. In this paper, we identify this trend through a non-exhaustive literature survey of evaluation patterns in 214 papers from the two most recent VIS issues of IEEE TVCG (2023 and 2024). We then discuss various evaluation strategy patterns in the information visualization field to guide practical choices, and describe how this paper opens avenues for further discussion.