Neural networks are now widely used to solve a broad range of problems. Despite their effectiveness, however, they are often perceived as black boxes that produce answers without explaining their decisions, which raises numerous ethical and legal concerns. The field of explainability addresses this issue: it allows users to grasp a model's decision-making process and to verify the relevance of its outcomes. In this article, we focus on the learning process of a ``time distributed'' convRNN that performs anomaly detection on video data.
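To make the architecture concrete: a ``time distributed'' convRNN applies the same convolutional weights independently to every frame of a video, then feeds the resulting per-frame features to a recurrent layer that models the temporal dimension. The NumPy sketch below illustrates this idea with toy shapes and random weights; the frame size, hidden width, and read-out are illustrative assumptions, not the article's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(frame, kernel):
    """Naive 'valid' 2D convolution of a single-channel frame."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

def time_distributed(frames, kernel):
    """Apply the SAME conv weights to every frame (the 'time distributed' idea)."""
    return np.stack([conv2d_valid(f, kernel) for f in frames])

def rnn_anomaly_score(features, Wx, Wh, w_out):
    """Simple tanh RNN over per-frame features; sigmoid read-out at the last step."""
    h = np.zeros(Wh.shape[0])
    for x in features:
        h = np.tanh(Wx @ x + Wh @ h)
    logit = w_out @ h
    return 1.0 / (1.0 + np.exp(-logit))  # anomaly probability in (0, 1)

# A toy 'video': 5 frames of 8x8 single-channel pixels.
video = rng.normal(size=(5, 8, 8))
kernel = rng.normal(size=(3, 3))

feat_maps = time_distributed(video, kernel)   # shape (5, 6, 6): one map per frame
features = feat_maps.reshape(len(video), -1)  # flatten each frame's feature map

hidden = 4
Wx = rng.normal(size=(hidden, features.shape[1])) * 0.1
Wh = rng.normal(size=(hidden, hidden)) * 0.1
w_out = rng.normal(size=hidden)

score = rnn_anomaly_score(features, Wx, Wh, w_out)
print(feat_maps.shape, 0.0 < score < 1.0)
```

Because the convolution shares its weights across time steps, the per-frame feature extractor is trained once for all frames, and only the recurrent part sees the sequence order; this is the structure whose learning process the article examines.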