A Survey of Modern Research on the Problem of the Quality of Learning Algorithms

This survey covers fundamental ideas of machine learning theory concerning generalization bounds and the theoretical foundations of learning algorithms. Among them are: classical VC theory and structural risk minimization; effective VC dimension and data-dependent bounds; margins; ensembles of algorithms (weighted voting, boosting, and bagging); stability; and cross-validation. A new combinatorial approach to proving nonprobabilistic generalization bounds is considered in more detail.