Qualitative data analysis is labor-intensive, yet the privacy risks associated with commercial Large Language Models (LLMs) often preclude their use in sensitive research. To address this, we introduce ChatQDA, an on-device framework powered by open-source LLMs designed for privacy-preserving open coding. Our mixed-methods user study reveals that while participants rated the system highly for usability and perceived efficiency, they exhibited "conditional trust", valuing the tool for surface-level extraction while questioning its interpretive nuance and consistency. Furthermore, despite the technical security of local deployment, participants reported epistemic uncertainty regarding data protection, suggesting that invisible security measures are insufficient to foster trust. We conclude with design recommendations for local-first analysis tools that prioritize verifiable privacy and methodological rigor.