This is the thirty-eighth article in a series dedicated to the various aspects of machine learning (ML). Today’s article continues the discussion of computational learning theory, a school of thought dedicated to answering, or even just pondering, some of the most profound, troubling, and profoundly troubling questions in the machine learning field. Specifically, we will take a look at attempts to answer the questions we introduced in the last article.
Our last article mentioned the American school system’s chosen metric for measuring the progress of students’ learning, i.e., test-taking.
We offered the potentially troubling proposition that analyzing any individual student’s progress is inherently thorny, as each student learns in a highly personalized manner (there are students and parents aplenty who posit that there are visual learners and verbal learners, good test takers and poor test takers).
What that means is that there will always be a bit of a mystery for teachers and school administrators as to whether their students are actually learning or not. Yet, figure it out they must, for the point of school is to educate people, and so by necessity methods for measuring the progress of pupils must be devised.
We found our non-human parallel in the AI world, where the field of computational learning theory is dedicated to answering the same fundamental questions that arise in human learning.
This field seeks to answer questions like “How many mistakes does an agent need to make before it learns the ‘right’ way to do something?” and “How much time does an agent need to spend in training in order to figure out the right hypotheses?”
But the big question under which all of these fall is, “How does learning actually occur in machine learning?”
Our last article focused on identifying some of the bigger questions in the field; this article provides an overview of the work being done to answer some of them, along with significant findings.
How Do Researchers Answer These Questions?
What needs to be established first and foremost is the setting in which the agent will be observed, one that ought to reflect the setting it will be in when actually completing its tasks in the real world.
Some settings generate data at random, like on a pizza delivery route where cars can pull out at random from other streets, change lanes, brake, etc.
Certain settings require the agent to experiment on its own to learn, some involve an outside helper feeding it data, and still others combine the two.
So, in order to figure out how an agent learns, one must observe it in a setting that resembles the one it will occupy in the real world.
One way this is done is by using the “probably approximately correct” model, or PAC. Under PAC, an algorithm is shown random, previously unseen data examples and is expected, with some stated probability, to learn from those examples and form concepts that are “approximately correct”, meaning concepts about its actions and environment that closely resemble the specific concepts its developers want it to learn.
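To make “probably” and “approximately” concrete, PAC theory gives sample-complexity bounds. As a minimal sketch, assuming a finite hypothesis class and a learner that always picks a hypothesis consistent with its training data, the standard bound m >= (1/epsilon) * (ln|H| + ln(1/delta)) tells us how many examples suffice. The function name and numbers below are purely illustrative, not from any particular library:

```python
import math

def pac_sample_bound(hypothesis_space_size: int, epsilon: float, delta: float) -> int:
    """Samples sufficient for a consistent learner over a finite hypothesis
    class H to be 'probably approximately correct': with probability at
    least 1 - delta, the learned hypothesis has true error at most epsilon.

    Standard realizable-case bound: m >= (1/epsilon) * (ln|H| + ln(1/delta))
    """
    m = (1.0 / epsilon) * (math.log(hypothesis_space_size) + math.log(1.0 / delta))
    return math.ceil(m)

# Example: a class of 1,000,000 hypotheses, 5% error tolerance,
# 95% confidence (delta = 0.05).
print(pac_sample_bound(10**6, epsilon=0.05, delta=0.05))  # -> 337 samples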
One of the problems with PAC is that the agent typically knows that it will be picking a certain concept from a “concept class,” so that it has a hint about the limits of what it should think about in forming concepts. So, some researchers like to use agnostic learning models where the agent knows absolutely nothing about the type of concepts it should learn.
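For comparison, in the agnostic setting the learner cannot assume that any hypothesis in its class is perfect, and the corresponding Hoeffding-based bound grows with 1/epsilon squared rather than 1/epsilon. Another hedged sketch under the same finite-class assumption:

```python
import math

def agnostic_sample_bound(hypothesis_space_size: int, epsilon: float, delta: float) -> int:
    """Samples sufficient in the agnostic setting, where no hypothesis in H
    is assumed perfect: with probability >= 1 - delta, every hypothesis's
    training error is within epsilon of its true error, so picking the
    best-on-training hypothesis is near-optimal within H.

    Hoeffding-based bound: m >= (1 / (2 * epsilon**2)) * (ln|H| + ln(2/delta))
    """
    m = (1.0 / (2.0 * epsilon**2)) * (math.log(hypothesis_space_size) + math.log(2.0 / delta))
    return math.ceil(m)

print(agnostic_sample_bound(10**6, epsilon=0.05, delta=0.05))  # -> 3501 samples
```

Note how the same error tolerance now demands roughly ten times more data than in the realizable PAC example above, which is one way to quantify the cost of knowing “absolutely nothing” in advance.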
Once the setting is established, researchers can get to work discovering how many mistakes an algorithm can suffer before turning out a good hypothesis.
Generally, it has been found that the more complicated a setting, the more costly it is for an agent to learn, from a time and computation standpoint.
But many settings are complicated, so this general postulation does not quite satisfy developers. As a result, learning models called “mistake bound models” have been developed, so that agents of a certain type (e.g., pizza delivery bots) can be analyzed to find the approximate number of mistakes such an agent will make in its type of environment before figuring out the “right” way to think and act.
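A classic illustration of a mistake bound is the halving algorithm: if the correct concept is one of a finite set of candidates, then predicting by majority vote and discarding every candidate that disagrees with each revealed answer guarantees at most log2(N) mistakes for N candidates. The sketch below is illustrative only; the toy hypotheses and data stream are invented for the example:

```python
def halving_algorithm(hypotheses, stream):
    """Halving algorithm: a classic mistake-bound learner.

    `hypotheses` is a finite list of candidate functions, at least one of
    which labels every example correctly (the realizability assumption).
    `stream` yields (x, true_label) pairs. At each step we predict by
    majority vote of the surviving hypotheses, then discard every
    hypothesis that disagrees with the revealed label. Each mistake
    eliminates at least half the candidates, so total mistakes are at
    most log2(len(hypotheses)).
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, label in stream:
        votes = [h(x) for h in version_space]
        prediction = max(set(votes), key=votes.count)  # majority vote
        if prediction != label:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        version_space = [h for h in version_space if h(x) == label]
    return mistakes

# Toy example: learn which single feature index determines the label.
hypotheses = [lambda x, i=i: x[i] for i in range(4)]
stream = [((0, 1, 0, 1), 1), ((1, 1, 0, 0), 1),
          ((0, 0, 1, 0), 0), ((1, 0, 1, 1), 0)]
print(halving_algorithm(hypotheses, stream))  # at most log2(4) = 2 mistakes
```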
Since there is such a diversity of AI agents, tasks, and settings, it is hard to pin down exact, all-purpose principles of machine learning. Here, the human-machine learning parallel is especially apt: just as no two human students learn in exactly the same way, an algorithm steering a pizza delivery bot will learn very differently from one powering a medical diagnosis tool.
Summary
Overall, there are many different types of machine learning agents, and just as many types of settings in which they work. Still, learning models have been created to uncover some general rules about how these different types of agents learn and how they perform in certain settings.