In an era of ubiquitous data collection, platform dominance, and AI-mediated governance, the social contract of digital life is increasingly shaped by a few private actors rather than democratic deliberation. This paper advances a dignity-centric Digital Social Contract grounded in data sovereignty, human dignity, and data personalism: the view that personal data are rights-laden emanations of the person and should be protected as a human right, not treated as neutral inputs or tradable commodities. Drawing on social contract theory and interdisciplinary scholarship across law, ethics, economics, computer science, sociology, and political philosophy, we diagnose how datafied infrastructures and surveillance-based business models convert everyday traces into profiles, predictions, and consequential decisions at scale, concentrating informational power and weakening consent, autonomy, and civic trust. We contrast DatAIsm (an extractive paradigm reducing persons to datapoints, optimizing for prediction and control) with HumAIsm, which recenters the human subject and the irreducibility of dignity to mere calculation. We then articulate a governance architecture around six dimensions: (1) technological oversight through Dignity-by-Design, (2) limits to automation and meaningful human control, (3) contextual valuation, redistribution, and incentives, (4) political-institutional legitimacy and multi-actor governance, (5) sociocultural cohesion and the digital commons, and (6) legal-regulatory guarantees. The framework is operationalized through auditable tools (principles, non-negotiable limits, and DbD checklists) aimed at aligning innovation with autonomy, equality, and human flourishing. We conclude by articulating open questions and tensions to foster interdisciplinary debate and guide future research.