Learning, whether natural or artificial, is a process of selection. It starts with a set of candidate options and selects the more successful ones. In the case of machine learning, the selection is based on empirical estimates of the prediction accuracy of candidate prediction rules on some data. Due to the randomness of data sampling, the empirical estimates are inherently noisy, leading to selection under uncertainty. The book provides statistical tools to obtain theoretical guarantees on the outcome of selection under uncertainty. We start with concentration of measure inequalities, which are the main statistical instrument for controlling how much an empirical estimate of the expectation of a function deviates from the true expectation. The book covers a broad range of inequalities, including Markov's, Chebyshev's, Hoeffding's, Bernstein's, Empirical Bernstein's, Unexpected Bernstein's, kl, and split-kl. We then study classical (offline) supervised learning and provide a range of tools for deriving generalization bounds, including Occam's razor, Vapnik-Chervonenkis analysis, and PAC-Bayesian analysis. The latter is further applied to derive generalization guarantees for weighted majority votes. After covering the offline setting, we turn our attention to online learning. We present the space of online learning problems characterized by environmental feedback, environmental resistance, and structural complexity. A common performance measure in online learning is regret, which compares the performance of an algorithm to that of the best prediction rule in hindsight, out of a restricted set of prediction rules. We present tools for deriving regret bounds in stochastic and adversarial environments, and under full information and bandit feedback.
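To make the role of concentration inequalities concrete, the following is a minimal sketch (not from the book) that checks Hoeffding's inequality numerically: for n i.i.d. samples bounded in [0, 1], the probability that the empirical mean deviates from the true mean by at least epsilon is at most 2·exp(−2·n·epsilon²). The Bernoulli parameter, sample size, and number of trials are illustrative choices.

```python
import math
import random

def hoeffding_bound(n, eps):
    # Two-sided Hoeffding bound for i.i.d. random variables in [0, 1]:
    # P(|empirical mean - true mean| >= eps) <= 2 * exp(-2 * n * eps^2)
    return 2.0 * math.exp(-2.0 * n * eps ** 2)

random.seed(0)
n, eps, trials = 100, 0.1, 10000
p = 0.5  # true expectation of a Bernoulli(p) variable

# Estimate how often the empirical mean deviates by at least eps.
violations = 0
for _ in range(trials):
    sample_mean = sum(random.random() < p for _ in range(n)) / n
    if abs(sample_mean - p) >= eps:
        violations += 1

empirical = violations / trials
bound = hoeffding_bound(n, eps)
print(f"observed deviation frequency: {empirical:.4f}")
print(f"Hoeffding bound:              {bound:.4f}")
```

The observed deviation frequency stays below the bound, illustrating that the bound holds but is typically loose; sharper inequalities such as Bernstein's exploit variance information to tighten it.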