How do Humans Learn?

There are many techniques by which humans learn. They provide different ways of developing algorithms in the brain, which correlate input data patterns with outcomes. Here we illustrate three human learning techniques and show how they can translate into machine learning.

  1. Directed Learning

This is a simple method of providing exact step-by-step instructions to perform a given task, with little scope for deviating from the prescribed procedure. This is typically used for training workers on traditional assembly lines to execute a specific task precisely and identically every time. The only learning that happens is in how to follow a procedure exactly. This technique does not provide any means to deal with situations for which no instructions are provided.

This is how most classic computer programs work. A program is essentially a precise sequence of steps that a computer must blindly execute, exactly as designed by a human programmer. Crashes occur when unforeseen situations arise, because the computer has never been taught how to deal with them. The first generation of AI systems was a victim of this strictly logical approach. Human programmers had essentially codified their interpretation of all possible data patterns into IF-THEN-ELSE logic, just as most computer programs are written. The AI machine itself was not capable of drawing its own conclusions, as it had never really learned to deal with an un-programmed situation.
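To make the contrast concrete, here is a minimal sketch of that kind of hand-coded IF-THEN-ELSE logic. The animal features and rules are invented purely for illustration, not taken from any real system; the point is that every rule is written by the programmer, and an input covered by no rule simply cannot be handled.

```python
# A minimal sketch of "directed learning" in code: all the logic is
# hand-written by a programmer. Feature names and rules are hypothetical.

def classify_animal(has_whiskers: bool, sound: str) -> str:
    # Hand-coded IF-THEN-ELSE rules: the machine draws no conclusions of its own.
    if has_whiskers and sound == "meow":
        return "cat"
    elif sound == "bark":
        return "dog"
    else:
        # An un-programmed situation: the system has no way to cope,
        # which is why first-generation AI systems were so brittle.
        raise ValueError("No rule covers this input")

print(classify_animal(True, "meow"))   # -> "cat"
# classify_animal(False, "moo")        # -> ValueError: no rule covers this input
```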

  2. Assisted Learning (a.k.a. Supervised Learning)

This is how a good teacher teaches students. The focus of teaching is not just the exact subject matter, but also learning how to learn and how to understand the general topic. Teaching is done through illustrative examples: the teacher presents a topic using many typical examples or cases, which enables students to develop for themselves the logic that connects the input data with its corresponding outcome in each example. To learn how to recognize cats, a child is shown hundreds of pictures of cats, along with pictures of other objects and animals that may look like a cat, and is told in each case which one is a cat and which is not. The child looks at each picture and develops, on their own, the algorithm for recognizing a cat, with no instructions or descriptions provided. Similarly, a radiologist learns how to detect specific abnormal medical conditions (like a tumor) by studying a large number of labeled images, and develops their own algorithm to spot an anomaly within seconds.

This is how our brain learns. It is able to spot information encoded in data patterns and to convert it into conclusions and decisions. This ability becomes very focused and precise as the number and diversity of cases experienced grows, making decision-making automatic, almost without conscious thought. This is real learning. Through more and more practice, people can deal with most situations, however ambiguous or unpredictable, purely through self-developed and trained algorithms. This is how we learn languages, mathematics, music, tennis, skiing and so on. The instructor starts us off with illustrative cases, which provide data patterns and outcomes. Our brain then develops the necessary algorithm by observing the correlation between subtleties in the data patterns and the respective outcomes, allowing us to deal with most new situations.
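The same idea can be sketched in code. In the toy example below, a small "cat vs. not cat" data set with two made-up features is fed to a scikit-learn classifier; the decision rules are then learned from the labeled examples rather than written by hand. Real systems, of course, learn from thousands or millions of examples.

```python
# A minimal supervised-learning sketch using scikit-learn.
# The features and labeled examples below are invented for illustration.

from sklearn.tree import DecisionTreeClassifier

# Each row is one labeled example: [pointy_ears (0/1), whisker_length_cm]
X = [[1, 6.0], [1, 5.5], [0, 0.5], [0, 7.0], [1, 0.3]]
y = ["cat", "cat", "not cat", "not cat", "not cat"]  # labels supplied by the "teacher"

# The model builds its own decision rules from the examples,
# instead of a human writing the IF-THEN-ELSE logic by hand.
model = DecisionTreeClassifier().fit(X, y)

print(model.predict([[1, 5.8]]))   # likely -> ['cat']
```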

This way of learning is also how IBM Watson works. To make Watson the best expert at diagnosing cancer, it is fed millions of past cancer cases, with details on symptoms, recommended treatments and observed outcomes. These cases provide Watson with patterns of symptoms and treatments leading to outcomes. In addition, it is fed all the research papers on oncology, which provide further patterns based on the latest worldwide understanding of cancer diagnosis and treatment. The neural networks within Watson develop generic algorithms based on the patterns in this massive amount of information. This is how it learns to diagnose cancer and recommend treatments, and it has become an immense help to doctors, guiding them in medical decisions. Google uses a similar technique for automatic translation, Facebook for face recognition, and Amazon for generating product recommendations. Thanks to machine learning, these machines are able to recognize and understand subtle patterns in images, sounds and body language much more consistently than humans.

  3. Self-Learning (a.k.a. Unsupervised Learning)

As children we all learned a lot just by observing other people. In traditional India, an enlightened teacher (a guru) taught his students not through words or instructions, but by assigning them tasks or roles which he believed would lead the students to develop the right insights and skills for the next step in their education and growth. Learning then becomes a discovery that is self-motivated and self-directed, based on one's own observations and conclusions, without the teacher's bias.

A lot of learning happens without a teacher. By repeatedly observing an activity we figure out the correlation between what someone does and its outcome. For example, just by watching games like football/soccer we can learn how the game is played: what situations lead to a goal or a foul, how a corner kick or a penalty is taken, and so on. After watching hundreds of games we are even able to predict various outcomes before they happen, because we have learned the patterns in the game that lead to a given outcome. Machines are not yet capable of this kind of open-ended self-learning; however, it is one of the most heavily researched areas, waiting for a breakthrough.
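One narrow form of unsupervised learning that machines can already perform is clustering: the algorithm is handed unlabeled observations and must discover groupings on its own. The sketch below, using scikit-learn's k-means on invented 2-D points, is only a small illustration of the idea, far simpler than the open-ended self-learning described above.

```python
# A minimal unsupervised-learning sketch: k-means receives points with no
# labels at all and groups them by itself. The data is invented for illustration.

from sklearn.cluster import KMeans

# Unlabeled observations: no "teacher" tells the algorithm what each point is.
points = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one natural group
          [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]   # another natural group

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

# The model has discovered two groups on its own, a little like spotting
# recurring patterns in a game just by watching it.
print(kmeans.labels_)   # e.g. [0 0 0 1 1 1] (cluster numbering may vary)
```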

Assisted learning is currently the most common approach to machine learning, using deep learning techniques (to be described next). However, it needs a huge volume of labeled data sets, which are expensive to produce, to train the AI neural networks. In contrast, unlabeled data is readily available and cheap. Self-learning from free-flowing unlabeled data is the next big challenge of AI.

Humans are often able to learn from very limited data; often a couple of examples are enough for us to create the necessary algorithms in our brains. Maybe this is a result of our evolution, which has sharpened our learning process from limited data and made it very efficient. It has also been observed that it takes a smaller volume of training data to teach a machine learning system that has already been trained for some other task, possibly because the previous learning helps it master the new task more quickly (the transfer-learning sketch below illustrates the idea). This may mean that if we put fresh neural networks through a "basic schooling" program, they might learn other tasks faster, just like humans. That might explain why we all go through lessons in music, mathematics, biology, history, languages and so on at school, irrespective of the profession we eventually practice: it possibly primes our brains to learn everything else faster. This gives hope that machines will someday be able to learn new tasks from very limited data.

Professor Andrew Blake, Research Director at the Alan Turing Institute in the UK, says: “We now have to distinguish between two kinds of data — there’s raw data and labelled data. [Labelled] data comes at a high price. Whereas the unlabeled data which is just your experience streaming in through your eyes as you run through the world… and somehow you still benefit from that, so there’s this very interesting kind of partnership between the labelled data — which is not in great supply, and it’s very expensive to get — and the unlabeled data which is copious and streaming in all the time. And so this is something which I think is going to be the big challenge for AI and machine learning in the next decade — how do we make the best use of a very limited supply of expensively labelled data?”
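As a rough sketch of the transfer-learning observation above, one possible approach (using PyTorch and torchvision; the small labeled data set referred to in the comments is a hypothetical placeholder) is to reuse a network pre-trained on a large generic task and train only a small new output layer on the new, narrow task.

```python
# A rough transfer-learning sketch (torchvision >= 0.13 API): reuse a network
# pre-trained on ImageNet, freeze its layers, and train only a new final layer.

import torch
import torch.nn as nn
from torchvision import models

# Load a network whose earlier layers already learned generic visual features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers: their "basic schooling" is kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh one for the new task (2 classes here).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new layer is trained, so far fewer labeled examples are needed.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# for images, labels in small_loader:   # hypothetical small labeled data set
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```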
