Integrating cognitive ergonomics into LLM design is crucial for improving safety, reliability, and user satisfaction in human-AI interactions. Current LLM designs often lack this integration, resulting in systems that may not fully align with human cognitive capabilities and limitations. This oversight exacerbates biases in LLM outputs and, through inconsistent application of user-centered design principles, leads to suboptimal user experiences. Researchers are increasingly leveraging NLP, particularly LLMs, to model and understand human behavior across the social sciences, psychology, psychiatry, health, and neuroscience. This position paper argues for integrating cognitive ergonomics into LLM design, providing a comprehensive framework and practical guidelines for ethical development. By addressing these challenges, we aim to advance safer, more reliable, and ethically sound human-AI interactions.