Given large language models' (LLMs) increasing integration into workplace software, it is important to examine how biases in these models may affect workers. For example, stylistic biases in the language suggested by LLMs may cause feelings of alienation and impose additional labor on individuals or groups whose style does not match. We examine how such writer-style bias impacts inclusion, control, and ownership over the work when co-writing with LLMs. In an online experiment, participants wrote hypothetical job promotion requests using either hesitant or self-assured autocomplete suggestions from an LLM and reported their subsequent perceptions. We found that the style of the AI suggestions did not impact perceived inclusion. However, individuals with higher perceived inclusion did perceive greater agency and ownership, an effect that was stronger among participants of minoritized genders. Feelings of inclusion also mitigated the loss of control and agency associated with accepting more AI suggestions.