We present Umwelt, an authoring environment for interactive multimodal data representations. In contrast to prior approaches, which center the visual modality, Umwelt treats visualization, sonification, and textual description as coequal representations: they are all derived from a shared abstract data model, such that no modality is prioritized over the others. To simplify specification, Umwelt evaluates a set of heuristics to generate default multimodal representations that express a dataset's functional relationships. To support smoothly moving between representations, Umwelt maintains a shared query predicate that is reified across all modalities -- for instance, navigating the textual description also highlights the visualization and filters the sonification. In a study with 5 blind / low-vision expert users, we found that Umwelt's multimodal representations afforded complementary overview and detailed perspectives on a dataset, allowing participants to fluidly shift between task- and representation-oriented ways of thinking.
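To make the shared-query-predicate idea concrete, the following is a minimal TypeScript sketch under stated assumptions: every name here (QueryPredicate, Modality, SharedQueryState, and so on) is a hypothetical illustration, not Umwelt's actual API. The key point it demonstrates is that a single predicate is set once and each modality interprets the same selection in its own terms, e.g. the visualization highlights while the sonification filters, as in the example above.

```typescript
// Hypothetical sketch of a shared query predicate reified across modalities.
// None of these names are from Umwelt's implementation.

// A predicate selects a data subset, e.g. { field: "year", range: [2000, 2010] }.
type QueryPredicate = {
  field: string;
  range?: [number, number];
  equals?: string | number;
};

type Datum = Record<string, string | number>;

function matches(d: Datum, p: QueryPredicate): boolean {
  const v = d[p.field];
  if (p.equals !== undefined) return v === p.equals;
  if (p.range && typeof v === "number") return v >= p.range[0] && v <= p.range[1];
  return false;
}

// Each modality consumes the same predicate in its own way.
interface Modality {
  update(data: Datum[], predicate: QueryPredicate | null): void;
}

class Visualization implements Modality {
  update(data: Datum[], predicate: QueryPredicate | null): void {
    // Highlight the marks whose underlying data satisfy the predicate.
    const selected = predicate ? data.filter((d) => matches(d, predicate)) : data;
    console.log(`visualization: highlighting ${selected.length} marks`);
  }
}

class Sonification implements Modality {
  update(data: Datum[], predicate: QueryPredicate | null): void {
    // Restrict the audio sequence to the selected subset.
    const selected = predicate ? data.filter((d) => matches(d, predicate)) : data;
    console.log(`sonification: playing ${selected.length} tones`);
  }
}

class TextualDescription implements Modality {
  update(data: Datum[], predicate: QueryPredicate | null): void {
    // Regenerate the summary over the selected rows.
    const selected = predicate ? data.filter((d) => matches(d, predicate)) : data;
    console.log(`description: summarizing ${selected.length} rows`);
  }
}

// The shared state: setting the predicate once re-renders every modality,
// so navigating one representation keeps the others in sync.
class SharedQueryState {
  constructor(private data: Datum[], private modalities: Modality[]) {}
  setPredicate(p: QueryPredicate | null): void {
    for (const m of this.modalities) m.update(this.data, p);
  }
}

// Usage: interaction in any one modality routes through the shared state.
const state = new SharedQueryState(
  [{ year: 2005, price: 12 }, { year: 2012, price: 30 }],
  [new Visualization(), new Sonification(), new TextualDescription()],
);
state.setPredicate({ field: "year", range: [2000, 2010] });
```

The design choice this sketch captures is that the predicate is the single source of truth: modalities never talk to each other directly, which is what lets no one representation be privileged over the others.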