Probably approximately correct (PAC) learning is a framework for the mathematical analysis of machine learning. The PAC criterion is that, with high probability, the learner (machine) outputs a hypothesis with high accuracy. In this framework, the learner receives labelled samples and must select a generalization function, i.e., a hypothesis, from a certain class of possible functions. The learner must be able to learn the concept for any chosen approximation ratio, probability of success, and distribution of the samples.
Example: a teacher teaches a student (the learner) from a set of labelled examples, and we ask how closely the student can answer new questions correctly with a high level of confidence.
In any machine learning model, accuracy is the ratio of correct predictions to the total number of input samples, so the PAC framework plays a major role in identifying the hypothesis with the least error for the model.
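As a minimal sketch of the accuracy ratio just described (the function name and example labels below are illustrative, not from the original text):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels:
    correct predictions / total number of input samples."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# 3 of 4 predictions match the true labels
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```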
We want the error probability to be less than 1/2. It is natural that we will not be interested in algorithms that have more than a fifty percent chance of giving an inaccurate answer. The inequality is strict, i.e., error < 1/2, because an error of exactly 1/2 would be useless: such a learner would be no better than generating the output with the toss of a coin.
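To make the accuracy and confidence parameters concrete, the following sketch uses the standard sample-complexity bound for a consistent learner over a finite hypothesis class H: m >= (1/epsilon) * (ln|H| + ln(1/delta)) samples suffice to be, with probability at least 1 - delta, approximately correct (error at most epsilon). This bound and the function below are illustrative additions, not part of the original text:

```python
import math

def pac_sample_complexity(hypothesis_count, epsilon, delta):
    """Samples sufficient for a consistent learner over a finite
    hypothesis class of the given size to output, with probability
    at least 1 - delta, a hypothesis with error at most epsilon:
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Demanding tighter accuracy (smaller epsilon) requires more samples:
print(pac_sample_complexity(1000, 0.1, 0.05))   # 100
print(pac_sample_complexity(1000, 0.01, 0.05))  # 991
```

Note that the sample count grows only logarithmically in the size of the hypothesis class and in 1/delta, but linearly in 1/epsilon.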