Competency modeling is widely used in human resource management to select, develop, and evaluate talent. However, traditional expert-driven approaches rely heavily on manual analysis of large volumes of interview transcripts, making them costly, prone to randomness and ambiguity, and difficult to reproduce. This study proposes a new competency modeling process built on large language models (LLMs). Instead of merely automating isolated steps, we reconstruct the workflow by decomposing expert practices into structured computational components. Specifically, we leverage LLMs to extract behavioral and psychological descriptions from raw textual data and map them to predefined competency libraries via embedding-based similarity. We further introduce a learnable parameter that adaptively integrates the different information sources, enabling the model to determine the relative importance of behavioral and psychological signals. To address the long-standing challenge of validation, we develop an offline evaluation procedure that allows systematic model selection without requiring additional large-scale data collection. Empirical results from a real-world implementation at a software outsourcing company demonstrate strong predictive validity, cross-library consistency, and structural robustness. Overall, our framework transforms competency modeling from a largely qualitative, expert-dependent practice into a transparent, data-driven, and evaluable analytical process.
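The core mechanism described above, mapping extracted descriptions to a competency library by embedding similarity and fusing the behavioral and psychological signals with a learnable weight, could be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the random stand-in embeddings, and the sigmoid gating of the fusion parameter are all assumptions.

```python
import numpy as np

def cosine_sim(query, library):
    """Cosine similarity of one embedding (d,) against a library (k, d) -> (k,)."""
    q = query / np.linalg.norm(query)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    return lib @ q

def competency_scores(behavioral_emb, psychological_emb, library_embs, raw_alpha):
    """Blend behavioral and psychological similarity signals.

    raw_alpha is the learnable fusion parameter; a sigmoid keeps the
    effective weight in (0, 1), so the score is a convex combination.
    """
    w = 1.0 / (1.0 + np.exp(-raw_alpha))  # sigmoid gate (assumed, not from the paper)
    s_beh = cosine_sim(behavioral_emb, library_embs)
    s_psy = cosine_sim(psychological_emb, library_embs)
    return w * s_beh + (1.0 - w) * s_psy

# Hypothetical stand-ins: in practice these would be LLM-derived text embeddings.
rng = np.random.default_rng(0)
d, k = 8, 5                       # embedding dimension, number of competencies
beh = rng.normal(size=d)          # embedding of an extracted behavioral description
psy = rng.normal(size=d)          # embedding of an extracted psychological description
lib = rng.normal(size=(k, d))     # embeddings of the predefined competency library

scores = competency_scores(beh, psy, lib, raw_alpha=0.0)  # raw_alpha=0 -> equal weight
best_match = int(np.argmax(scores))  # index of the best-matching competency
```

In a full pipeline, `raw_alpha` would be fit during the offline evaluation stage, e.g. by gradient descent against held-out labels, rather than fixed by hand as here.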