Decentralized finance (DeFi) is an integral component of the blockchain ecosystem, enabling a range of financial activities through smart-contract-based protocols. Traditional DeFi governance typically involves manual parameter adjustments by protocol teams or token-holder votes, and is thus prone to human bias and financial risks, undermining the system's integrity and security. While existing efforts aim to establish more adaptive parameter adjustment schemes, there remains a need for a governance model that is both more efficient and resilient to significant market manipulation. In this paper, we introduce "Auto.gov", a learning-based governance framework that employs a deep Q-network (DQN) reinforcement learning (RL) strategy to perform semi-automated, data-driven parameter adjustments. For simulation and testing, we create a DeFi environment with an encoded action-state space modeled on the Aave lending protocol, in which Auto.gov demonstrates the ability to retain funds that would otherwise have been lost to price oracle attacks. In tests with real-world data, Auto.gov outperforms the benchmark approaches by at least 14% and the static baseline model by tenfold on the preset performance metric, protocol profitability. Overall, comprehensive evaluations confirm that Auto.gov is more efficient and effective than traditional governance methods, thereby enhancing the security, profitability, and ultimately the sustainability of DeFi protocols.
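To make the DQN-based governance loop concrete, the following is a minimal, self-contained sketch of the general technique: an epsilon-greedy agent chooses discrete parameter adjustments, stores transitions in a replay buffer, and updates a Q-value approximator toward the temporal-difference target. All names (`DQNAgent`, `n_states`, `n_actions`) and the linear Q-approximator are illustrative assumptions for brevity; they do not reflect Auto.gov's actual architecture, action-state encoding, or reward design.

```python
import random
from collections import deque
import numpy as np

# Hypothetical, simplified DQN-style agent for discrete parameter
# adjustments. A single linear layer stands in for the deep Q-network;
# all hyperparameters here are illustrative, not Auto.gov's.
class DQNAgent:
    def __init__(self, n_states, n_actions, lr=0.01, gamma=0.95,
                 epsilon=1.0, epsilon_decay=0.99, buffer_size=1000):
        self.n_actions = n_actions
        # Linear Q-approximator: Q(s, a) = s @ W, with s a feature vector.
        self.weights = np.zeros((n_states, n_actions))
        self.lr = lr
        self.gamma = gamma
        self.epsilon = epsilon
        self.epsilon_decay = epsilon_decay
        self.replay = deque(maxlen=buffer_size)  # experience replay buffer

    def q_values(self, state):
        return state @ self.weights

    def act(self, state):
        # Epsilon-greedy exploration over discrete adjustment actions.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return int(np.argmax(self.q_values(state)))

    def remember(self, s, a, r, s_next, done):
        self.replay.append((s, a, r, s_next, done))

    def train_step(self, batch_size=32):
        if len(self.replay) < batch_size:
            return
        for s, a, r, s_next, done in random.sample(self.replay, batch_size):
            # TD target: r + gamma * max_a' Q(s', a') for non-terminal states.
            target = r if done else r + self.gamma * np.max(self.q_values(s_next))
            td_error = target - self.q_values(s)[a]
            # Gradient step on the linear approximator for the taken action.
            self.weights[:, a] += self.lr * td_error * s
        self.epsilon *= self.epsilon_decay  # anneal exploration
```

In a governance setting, each action would map to a bounded change of a protocol parameter (e.g. a risk factor), and the reward would encode the chosen performance metric, such as protocol profitability.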