
AI Using Small Data Sets – Possible or Recipe for Disaster?
October 28, 2020 News


In the technology world, it is well understood that artificial intelligence cannot function without data. There are even famous tech sayings coined by various IT entities, such as “There’s no AI without IA (Information Architecture)” and “Data is the lifeblood of AI”, emphasising the importance of data to AI. This is true because an AI system, and its subsets such as machine learning, is only as good as the data you feed it.

Typically, you need a large data set to get AI models working properly and accurately, because the more data you have, the more your models can learn. This also means high computational cost and long training times, and often an AI model needs far more data points than the number of distinct results it can produce.

Training AI on small data sets with current established techniques could be likened to conducting a survey with a tiny sample size: the results may end up being misleading, inaccurate and, on the whole, unreliable.

However, a paper published by researchers at the University of Waterloo in Ontario suggests that training on much smaller data sets while maintaining nearly the same accuracy is possible, in what they call ‘less than one’-shot learning. The method is inspired by the “few-shot learning” technique, in which a model must learn a new class given only a small number of samples from that class.

‘Less than one’-shot learning, also called LO-shot learning, takes this to the extreme: the model must learn new classes from fewer examples than there are classes. Consider that a typical AI model might need a million samples to learn to recognise the handwritten letters of an alphabet; the number of classes it outputs is vastly smaller than the number of inputs it consumes. In LO-shot learning, that relationship is inverted: models must learn N new classes given only M < N examples (where M is the number of training samples).
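To make the idea concrete, here is a minimal Python sketch in the spirit of the paper’s soft-label prototype k-nearest-neighbour classifier: two labelled points separate three classes. The point positions and label mixtures below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of the LO-shot idea: M = 2 labelled points carry soft
# labels over N = 3 classes, so a distance-weighted classifier can
# separate more classes than it has examples. Positions and label
# mixtures are invented for illustration.

prototypes = np.array([[0.0], [1.0]])   # two 1-D example points
soft_labels = np.array([
    [0.6, 0.4, 0.0],   # point at x=0: mostly class 0, partly class 1
    [0.0, 0.4, 0.6],   # point at x=1: mostly class 2, partly class 1
])

def predict(x):
    """Blend the points' soft labels by inverse distance,
    then pick the most probable class."""
    dists = np.linalg.norm(prototypes - x, axis=1) + 1e-9  # avoid div by zero
    weights = 1.0 / dists
    scores = weights @ soft_labels      # weighted sum of label distributions
    return int(np.argmax(scores))

for x in (-0.2, 0.5, 1.2):
    print(x, "->", predict(np.array([x])))
# Inputs near 0 get class 0, inputs near 1 get class 2, and the midpoint,
# where the two "partly class 1" components reinforce each other, gets
# class 1: three classes learned from two examples.
```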

This is achievable with the help of soft labels, in which the labels of multiple classes are blended together for a single data point before it is fed into an AI model. To give an example, the researchers explained how AI models were trained to read handwritten digits. “If you think about the digit 3, it kind of also looks like the digit 8 but nothing like the digit 7,” says Ilia Sucholutsky, a PhD student at Waterloo and lead author of the paper. “Soft labels try to capture these shared features. So instead of telling the machine, ‘This image is the digit 3,’ we say, ‘This image is 60% the digit 3, 30% the digit 8, and 10% the digit 0.’”
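As a rough illustration of how such a soft label enters training, the sketch below builds the blended target from the quote and scores a model’s prediction against it with a standard cross-entropy loss. The function names and the random logits are placeholders, not the paper’s code.

```python
import numpy as np

# A soft label is just a probability distribution over classes. Here we
# build the "60% digit 3, 30% digit 8, 10% digit 0" target from the quote
# and measure a model's output against it with cross-entropy.

def softmax(logits):
    z = logits - logits.max()           # subtract max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

def cross_entropy(predicted, target):
    # With a soft target, the loss rewards spreading probability across
    # every class the label marks as similar, not just the single top one.
    return -np.sum(target * np.log(predicted + 1e-12))

soft_target = np.zeros(10)
soft_target[3], soft_target[8], soft_target[0] = 0.6, 0.3, 0.1

logits = np.random.randn(10)            # stand-in for a network's raw output
loss = cross_entropy(softmax(logits), soft_target)
print(f"soft-label cross-entropy: {loss:.3f}")
```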

Such innovation in machine learning enables efficiency, as models would require only small data sets. However, there are still concerns, since with current techniques you would initially need a big data set before distilling it down for LO-shot learning. As the paper notes: “Eager learning models like deep neural networks would benefit more from the ability to learn directly from a small number of real samples to enable their usage in settings where little training data is available. This remains a major open challenge in LO-shot learning.”

