Integrating cognitive ergonomics with LLMs is crucial for improving safety, reliability, and user satisfaction in human-AI interactions. Current LLM designs often lack this integration, resulting in systems that may not fully align with human cognitive capabilities and limitations. This oversight exacerbates biases in LLM outputs and leads to suboptimal user experiences due to inconsistent application of user-centered design principles. Researchers are increasingly leveraging NLP, particularly LLMs, to model and understand human behavior across social sciences, psychology, psychiatry, health, and neuroscience. Our position paper explores the need to integrate cognitive ergonomics into LLM design, providing a comprehensive framework and practical guidelines for ethical development. By addressing these challenges, we aim to advance safer, more reliable, and ethically sound human-AI interactions.