Organizations use asynchronous AI interview systems to manage large applicant pools efficiently, enabling quick and uniform evaluations. However, concerns remain about their impact on user agency and the lack of personalization applicants experience with these systems. Although efforts have been made to humanize the interview process, users' expectations often go unmet, especially when measured against the promises these systems make. To examine how applicants perceive and experience these tools, particularly in the context of their growing familiarity with large language models (LLMs), we conducted a two-phase study. The first phase involved an analysis of 11 subreddit discussions on interview experiences with asynchronous AI interviewers, followed by a semi-structured interview study with 17 participants. Qualitative analysis revealed key issues, most notably mismatched expectations amplified by organizational rhetoric and by applicants' expectations shaped through prior experiences with LLMs. These factors shaped participants' sense of agency and trust, often leading to workarounds and deceptive practices. In the follow-up study, we designed an interface with two features, response variants and feedback variants, and evaluated it across six groups (N = 180, 30 participants each) to assess whether these features support users' sense of agency, competence, and relatedness. Our analysis suggests that even subtle design changes can enhance user autonomy and that carefully designed feedback can provide meaningful support in high-stakes interview contexts.