Large language models are increasingly deployed in complex socio-technical systems, exposing the limits of current alignment practice. We take the position that the dominant paradigm of General Alignment, which compresses diverse human values into a single scalar reward, reaches a structural ceiling in settings with conflicting values, plural stakeholders, and irreducible uncertainty. These failures follow from the mathematics and incentives of scalarization, producing \textbf{structural} value flattening, \textbf{normative} representation loss, and \textbf{cognitive} uncertainty blindness. We introduce Edge Alignment as a distinct approach in which systems preserve multi-dimensional value structure, support plural and democratic representation, and incorporate epistemic mechanisms for interaction and clarification. To make this approach practical, we propose seven interdependent pillars organized into three phases. We identify key challenges in data collection, training objectives, and evaluation, and outline complementary technical and governance directions. Taken together, these measures reframe alignment as a lifecycle problem of dynamic normative governance rather than a single-instance optimization task.