Datatechvibe Explains: One-shot Learning


Machine learning models are often marketed as “data-hungry”, creating a hurdle for small businesses. One-shot learning is the answer.

Have you heard of the famous ten-times rule of thumb in machine learning (ML)? The amount of training data should be roughly ten times the number of features. If you are working on an algorithm to distinguish dog images from cat images based on 1,000 parameters, you need 10,000 images to train the model.

Machine learning is data hungry, but collecting such an enormous volume of information is not always feasible. Even tech giants such as Microsoft, Amazon, and Facebook struggle with it; by that measure, smaller companies are often priced out entirely. One-shot learning aims to overcome this challenge.

Decoding the data debate

Consider developing a facial recognition model for an international airport. Millions of people pass through, and identifying each one would require billions of data points; accumulating such a vast dataset is nearly impossible. Moreover, hosting that much facial data on a centralised server would be a privacy nightmare.

This is where one-shot learning comes in. Rather than treating the task as a classification problem, one-shot learning turns it into a difference-evaluation problem. Let’s see what that means.

Most computer vision applications today use a deep learning model called the convolutional neural network (CNN) for image classification. When a CNN processes an image, it encodes the image’s features into a series of numerical values and uses those values to decide which class the image belongs to.
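As a rough sketch of that pipeline (not a real CNN, which learns its convolutional filters from labelled data), the two stages can be mimicked in a few lines, with fixed random projections standing in for the trained layers:

```python
import numpy as np

def softmax(z):
    # Turn raw class scores into probabilities.
    e = np.exp(z - z.max())
    return e / e.sum()

# Fixed random projections stand in for a trained CNN's layers;
# a real network would learn these weights from labelled images.
rng = np.random.default_rng(0)
W_features = rng.normal(size=(16, 28 * 28))  # image -> 16 feature values
W_classes = rng.normal(size=(2, 16))         # features -> 2 class scores

def classify(image):
    features = W_features @ image.ravel()    # the "series of numerical values"
    probs = softmax(W_classes @ features)    # one probability per class
    return ["dog", "cat"][int(np.argmax(probs))]
```

The part that matters for one-shot learning is the intermediate feature vector: instead of feeding it to a classifier over fixed classes, a one-shot system reuses it to compare images directly.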

With one-shot learning, the model needs only two images: the photo on the individual’s passport and a shot of the person standing in front of the camera. The network compares the two images and returns a distance score. If the images show the same person, the score falls below a predetermined threshold; if not, it comes out higher.
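A minimal sketch of that comparison step, with a fixed random projection standing in for the trained encoder (the function names and the threshold value here are illustrative, not a real system’s):

```python
import numpy as np

def embed(image, weights):
    # Stand-in for a trained CNN encoder: project the flattened image
    # to a fixed-length feature vector. A real Siamese network would
    # use learned convolutional layers here.
    return weights @ image.ravel()

def verify(passport_img, camera_img, weights, threshold=1.0):
    """Return True if the two images appear to show the same person.

    A Siamese setup runs both images through the SAME encoder and
    compares the embeddings: a small distance means a likely match.
    """
    distance = np.linalg.norm(embed(passport_img, weights) -
                              embed(camera_img, weights))
    return bool(distance < threshold)
```

Identical inputs yield a distance of zero and pass the threshold; in practice the encoder is trained so that images of the same person land close together in embedding space while images of different people land far apart.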

The field of ML has advanced; however, to date, it has been out of reach for most businesses and some industries. On the other hand, one-shot learning provides multiple benefits:

  • It holds the potential to mainstream machine learning adoption across all industries. By enabling machines to learn from just one data point, one-shot learning promises to get around restrictions for businesses that can’t spend much time and money on machine learning. ML models can be adjusted via one-shot learning without a constant stream of training data upfront.
  • One-shot learning lowers the entry barrier for machine learning. What once required cutting-edge tooling is now possible with less money and fewer resources, without large specialist teams or prohibitively expensive hardware. Even users with little programming experience can rapidly build and train their own machine learning models.
  • One-shot learning opens up new opportunities in industries like technology and finance, where complex teams have historically been needed for this functionality. Let’s explore one-shot learning and see what it can accomplish in the real world.

The method is not new and has real-life use cases. Researchers have used one-shot learning in drug discovery, a field with extremely little data. In one paper, the authors developed an offline signature verification method using one-shot learning, which is particularly useful to banks and other public and private organisations.

In 2020, Ilia Sucholutsky and Matthias Schonlau of the University of Waterloo published a study on “less than one”-shot learning, which aims to build models that can recognise more object classes than they have training examples for. The results of this line of work could fundamentally alter the machine learning field. Applications exist, but there is no sure-shot formula yet.

Not a silver bullet

Undoubtedly, one-shot learning can profoundly impact a wide range of business applications, from sentence completion, translation, labelling, and 3D object reconstruction to object recognition and classification. The ability to build useful models from far less data, with fewer tools and resources, is a game-changer.

However, the technology does have drawbacks. Each Siamese neural network – the twin-branch architecture typically used for one-shot learning – is only effective for the specific goal it was designed for. A network built for facial recognition cannot be reused for other one-shot tasks, such as determining whether two images contain the same car or building. Siamese networks are also sensitive to visual alterations: if one subject is pictured wearing a hat, a scarf, or spectacles while the other is not, accuracy may suffer significantly.

Further, Siamese networks are more computationally intensive than other CNNs, since every training step requires two forward passes, one per input image. Memory requirements also rise significantly.

To conclude, one-shot learning is an exciting research area, though more work is needed to mature the field. Eventually, it could help deliver “machine learning for all” – the true goal of AI.
