Stability of Learning Classification Algorithms Based on the Modified Estimates Calculation Model

In this paper we obtain the following theoretical result: there exists a stable learning algorithm $\mathcal{A}$ for the modified model $ABO^{*}$ that guarantees its learnability, in the form of universal empirical generalization, directly from the training sample by minimizing the empirical risk. To obtain this result, the $LOO$ (leave-one-out) stability of the algorithm $\mathcal{A}$ is proved. The algorithm $\mathcal{A}$, described in detail in this article, is a learning procedure with adaptation: it adjusts only the weights of the objects of the training sample, while the remaining parameters of the model stay fixed. This is sufficient to achieve the desired result. The proposed modification of the model $ABO$ is minimal: it excludes only the case where “the point votes for itself”. It is easy to show that when the modified model $ABO^{*}$ is trained only by choosing the shortest elementary logical separators (in particular, dead-end tests), universal empirical generalization also takes place.
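The $LOO$ estimate and the “no self-voting” modification mentioned above can be illustrated schematically. The sketch below uses a hypothetical distance-weighted voting rule as a stand-in for the estimates calculation model $ABO^{*}$; the `exclude` argument mimics the modification that forbids a point from voting for itself. All names and the toy sample are illustrative assumptions, not the paper's actual construction.

```python
# Schematic illustration of the LOO (leave-one-out) estimate for a
# voting-type classifier. The weighted voting rule is a hypothetical
# stand-in for the estimates calculation model ABO*.

def train(sample):
    # "Training" merely stores the objects with unit weights; in the
    # paper's setting only these object weights would be adapted.
    return [(x, y, 1.0) for x, y in sample]

def predict(model, x, exclude=None):
    # Weighted vote of the stored objects; closer objects vote stronger.
    votes = {}
    for i, (xi, yi, wi) in enumerate(model):
        if i == exclude:
            continue  # the ABO* modification: no voting for itself
        d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
        votes[yi] = votes.get(yi, 0.0) + wi / (1.0 + d2)
    return max(votes, key=votes.get)

def loo_error(sample):
    # Fraction of objects misclassified when each object in turn is
    # barred from voting for itself -- the classical LOO estimate.
    model = train(sample)
    wrong = sum(predict(model, x, exclude=i) != y
                for i, (x, y) in enumerate(sample))
    return wrong / len(sample)

sample = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((1.0, 1.0), 1), ((0.9, 1.1), 1)]
print(loo_error(sample))  # 0.0 on this well-separated toy sample
```

A small LOO error of the adapted classifier is the quantity whose stability under removal of a single training object underlies the generalization argument sketched in the abstract.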