Probably approximately correct (PAC) learning is a framework for the mathematical analysis of machine learning. The PAC criterion is that our learner (machine) produces, with high probability, a hypothesis of high accuracy. In this framework, the learner receives samples and must select a generalization function, i.e., a hypothesis, from a certain class of possible functions. The learner must be able to learn the concept for any approximation parameter, any required probability of success, and any distribution of the samples.
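
To make "high accuracy with high probability" concrete, here is the standard formal statement of the criterion (the symbols ε, δ, D, and err are the conventional textbook notation, not defined in the post itself): for every accuracy parameter ε > 0 and confidence parameter δ > 0, given enough i.i.d. samples from any distribution D, the learner must output a hypothesis h satisfying

\[
\Pr\big[\operatorname{err}_{\mathcal{D}}(h) \le \varepsilon\big] \;\ge\; 1 - \delta .
\]

"Approximately correct" refers to the ε, and "probably" refers to the 1 − δ.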


Example: a teacher trains a student (the learner) on a set of labelled examples, and we ask how closely the student can answer new questions correctly, with a high level of confidence.

In any machine learning model, accuracy is the ratio of correct predictions to the total number of input samples, so the PAC framework plays a major role in identifying the hypothesis with the least error for the model.
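
To see how accuracy and the PAC guarantee interact, here is a small simulation (my own sketch, not from the post: the threshold concept, the uniform distribution on [0, 1], the learner's rule, and all function names are assumptions chosen for illustration). It learns a threshold classifier from labelled samples and estimates how often the learned hypothesis is ε-accurate as the sample size m grows:

```python
import random

def true_label(x, theta=0.5):
    # Target concept: label is 1 iff x >= theta.
    return x >= theta

def learn_threshold(sample):
    # Consistent learner: the smallest point labelled 1 becomes the
    # hypothesis threshold (or 1.0 if no positive examples were seen).
    positives = [x for x, y in sample if y]
    return min(positives) if positives else 1.0

def true_error(theta_hat, theta=0.5):
    # Error of hypothesis "x >= theta_hat" under the uniform distribution
    # on [0, 1] is the length of the disagreement interval.
    return abs(theta_hat - theta)

def pac_experiment(m, eps=0.05, trials=2000):
    # Fraction of runs in which the learned hypothesis is eps-accurate.
    good = 0
    for _ in range(trials):
        sample = [(x, true_label(x)) for x in (random.random() for _ in range(m))]
        if true_error(learn_threshold(sample)) <= eps:
            good += 1
    return good / trials

for m in (10, 50, 200):
    print(f"m={m:4d}  P[err <= 0.05] ~ {pac_experiment(m):.3f}")
```

As m grows, the empirical probability of producing an ε-accurate hypothesis approaches 1, i.e., the failure probability δ shrinks: exactly the "probably approximately correct" behaviour.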

We want δ to be less than 1/2. Naturally, we are not interested in algorithms that have a more than fifty percent chance of giving an inaccurate answer. Note the strict inequality, i.e., δ < 1/2: δ = 1/2 would be useless, as such a learner would be no better than generating the output with the toss of a coin.
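
For a sense of how ε and δ trade off against the amount of data, the standard sample-complexity bound for a finite hypothesis class H and a consistent learner (a textbook result, not derived in this post) says the PAC guarantee holds once

\[
m \;\ge\; \frac{1}{\varepsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right),
\]

so demanding twice the confidence (halving δ) costs only an additive (ln 2)/ε extra samples, while demanding twice the accuracy (halving ε) doubles the whole bound.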

By Adarsh Vengarathodi

Hey there! I'm the founder and CEO of MistLayer, Inc. In this role, I oversee the company's overall product direction and development and lead the engineering, product, and infrastructure teams. I have more than 12 years of software, systems, and product development experience, including hardware and software engineering, product management, quality assurance, and support.