What Is Transfer Learning?
Transfer learning is, in simple terms, the transfer of knowledge from a trained model to another model. The idea is to use the knowledge acquired for one task to solve a related task: we reuse a pre-trained model on a new problem, which lets us train our model on a much smaller dataset. Transfer learning is popular because real-world problems typically do not come with the millions of labeled data points needed to train complex models from scratch.
Transfer learning is mostly used in computer vision and natural language processing tasks.
HOW IT WORKS
In computer vision, for example, the initial layers of a neural network usually detect edges and the shapes of objects, while more complex features are detected by later layers. Transfer learning takes advantage of this: we transfer the weights of the initial layers from a trained model and retrain only the later layers. This lets us leverage the labeled data of the task the model was originally trained on.
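As a rough illustration, here is a minimal Keras sketch of this idea; the architecture and the choice of which layers to freeze are made up for the example, not taken from any particular model:

```python
import tensorflow as tf

# Pretend `base_model` is a network that was already trained on a large dataset.
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Freeze the early convolutional layers: their weights (the edge/shape
# detectors) are kept as-is, and only the later layers are updated
# when we train on the new task.
for layer in base_model.layers[:4]:
    layer.trainable = False

base_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```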
WHY USE IT?
The main advantages of transfer learning are saving training time and getting better network performance without needing a lot of data.
APPROACHES TO TRANSFER LEARNING
1. TRAINING A MODEL TO REUSE IT
Assume you need to solve task A but do not have enough data to train a model accurately. In that case you may find a related task B for which plenty of data is readily available. Train a deep neural network on task B and reuse the resulting model to solve task A through transfer learning. Whether you need the whole model or only a few layers depends on the problem you are trying to solve.
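A minimal sketch of this approach, with made-up layer sizes and class counts: train a small network on a data-rich task B, then reuse everything except its output layer as the starting point for task A.

```python
import tensorflow as tf

# Step 1: train on task B (assume x_b, y_b hold plenty of labeled data).
model_b = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(20, activation="softmax"),  # task B has 20 classes
])
model_b.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model_b.fit(x_b, y_b, epochs=10)

# Step 2: reuse every layer except the output layer for task A (say, 5 classes).
model_a = tf.keras.Sequential(model_b.layers[:-1] + [
    tf.keras.layers.Dense(5, activation="softmax"),
])

# Optionally freeze the reused layers if task A has very little data.
for layer in model_a.layers[:-1]:
    layer.trainable = False

model_a.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model_a.fit(x_a, y_a, epochs=10)
```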
2. USING A PRE-TRAINED MODEL
Another way is to use an already available pre-trained model; there are many such models out there. Which layers of the model you reuse depends on the problem you are solving.
Keras, for example, provides many pre-trained models that can be used for transfer learning, prediction, feature extraction, and fine-tuning; you can find them in the Keras Applications module. Many researchers and organizations also release trained models.
This type of transfer learning is very common in deep learning.
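Here is a minimal sketch of this approach using one of the pre-trained models that ship with Keras. VGG16 and the 3-class head are illustrative choices only, not a recommendation from the post:

```python
import tensorflow as tf

# Load VGG16 trained on ImageNet, without its original classification head.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # keep the pre-trained weights fixed

# Add a small new head for the new task (here, a hypothetical 3-class problem).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_images, train_labels, epochs=5)  # your own dataset
```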
3. FEATURE EXTRACTION
Another approach is to use deep learning to find the best representation of the problem, that is, its most important features. This is known as representation learning, and it can often result in much better performance than hand-designed representations.
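A minimal sketch of feature extraction, assuming scikit-learn is available and using placeholder data: a pre-trained network is used purely to compute features, and a simple classifier is trained on top of them.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

# Pre-trained ResNet50 as a fixed feature extractor (no classification head).
extractor = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3),
)

# Placeholder data: replace with your own images (n, 224, 224, 3) and labels (n,).
images = np.random.rand(8, 224, 224, 3).astype("float32") * 255.0
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

# Extract features once, then train a simple classifier on them.
features = extractor.predict(
    tf.keras.applications.resnet50.preprocess_input(images)
)
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```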

Thanks for reading. You can also check out our post on Reinforcement Learning.