
CHALLENGES OF DEEP LEARNING

Deep learning is often compared to the brains of humans and animals. But recent years have shown that artificial neural networks, the main component of deep learning models, lack the efficiency, flexibility, and versatility of their biological counterparts.

Bengio, Hinton and LeCun acknowledge these shortcomings in their article. "Supervised learning, while successful in a wide range of tasks, typically requires a large amount of human-labeled data. Similarly, when reinforcement learning is based only on rewards, it requires a very large number of interactions," they write.


Supervised learning is a popular category of machine learning algorithms in which a model is presented with labeled examples, such as a list of images and their corresponding contents. The model is trained to find recurring patterns across examples that share the same label. It then uses the learned patterns to associate new examples with the correct labels. Supervised learning is especially useful for problems where labeled examples are abundantly available.
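
To make that loop concrete, here is a minimal sketch in Python, using scikit-learn and its bundled digits dataset (the dataset and model are illustrative assumptions, not anything from the article):

```python
# A minimal supervised-learning loop: fit a classifier on labeled samples,
# then use the learned patterns to label samples it has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # labeled images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=2000)   # a simple linear classifier
model.fit(X_train, y_train)                 # learn patterns from labeled examples

# Associate new, unseen samples with the correct labels.
print("held-out accuracy:", model.score(X_test, y_test))
```

The held-out accuracy is exactly the "associate new examples with the correct labels" step: the model never saw those samples' labels during training.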

Reinforcement learning is another branch of machine learning in which an "agent" learns to maximize "rewards" in an environment. An environment can be as simple as a tic-tac-toe board, where an AI player is rewarded for lining up three Xs or Os, or as complex as an urban setting, where a self-driving car is rewarded for avoiding collisions and obeying traffic rules. The agent starts by taking random actions. As it receives feedback from its environment, it finds sequences of actions that yield better rewards.
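
That trial-and-error loop can be sketched with tabular Q-learning on a made-up six-cell corridor, simpler than tic-tac-toe (the environment, reward, and hyperparameters below are all illustrative assumptions):

```python
# A minimal tabular Q-learning sketch: an agent on a six-cell corridor is
# rewarded only for reaching the rightmost cell. It starts with random
# actions and gradually learns which action sequences pay off.
import random

N_STATES = 6
ACTIONS = (-1, +1)                      # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(500):
    s = 0                               # each episode starts at the leftmost cell
    while s != N_STATES - 1:
        # Explore a random action sometimes; otherwise exploit the best-known one.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy in every non-terminal state is "step right".
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```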

In either case, as the scientists acknowledge, machine learning models require enormous labor. Labeled datasets are hard to come by, especially in specialized domains that have no public, open-source datasets, which means they need the slow and expensive work of human annotators. And complicated reinforcement learning models require vast computational resources to run a huge number of training episodes, which makes them available only to a few very wealthy AI labs and technology companies.


Bengio, Hinton and LeCun also agree that existing deep learning systems are still limited in the scope of problems they can solve. They perform well on specialized tasks but are "often brittle outside of the narrow domain they have been trained on." Often, small changes, such as a few modified pixels in an image or a very slight alteration of the rules of the environment, can cause deep learning systems to fail.
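
One way to see this brittleness in miniature is with a linear classifier, where the smallest label-flipping perturbation can be computed in closed form. This is a simplified stand-in for the deep networks the authors discuss, not their method, and the binary digits task is an illustrative assumption:

```python
# A sketch of input fragility: a small nudge along the model's weight
# vector is enough to flip its prediction, the linear analogue of a
# "few modified pixels" breaking a deep network.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
mask = digits.target < 2                  # binary task: digit 0 vs digit 1
X, y = digits.data[mask], digits.target[mask]

model = LogisticRegression(max_iter=2000).fit(X, y)

x = X[0]                                  # a sample the model classifies correctly
w = model.coef_[0]
margin = model.decision_function([x])[0]  # signed score; negative means class 0

# Smallest move that crosses the decision boundary: step along w just far
# enough for the signed score to tip to the other side.
delta = (-margin + 0.01) * w / np.linalg.norm(w) ** 2

print("original prediction: ", model.predict([x])[0])
print("perturbed prediction:", model.predict([x + delta])[0])
print("perturbation norm:   ", round(float(np.linalg.norm(delta)), 3))
```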

The fragility of deep learning systems is largely due to machine learning models being based on the "independent and identically distributed" (i.i.d.) assumption, which supposes that real-world data has the same distribution as the training data. The i.i.d. assumption also supposes that observations do not affect each other (for example, coin flips or die rolls are independent of one another).
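
A quick synthetic experiment shows what breaking this assumption costs; the Gaussian data, the amount of shift, and the model are all illustrative assumptions, not from the article:

```python
# When the test distribution matches the training distribution (i.i.d.),
# the classifier scores well; when the test data shifts, accuracy degrades.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two Gaussian classes; a nonzero shift moves the data off-distribution."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = sample(500)             # training distribution
model = LogisticRegression().fit(X_train, y_train)

X_iid, y_iid = sample(500)                 # same distribution: i.i.d. holds
X_shift, y_shift = sample(500, shift=1.5)  # shifted distribution: i.i.d. breaks

print("i.i.d. test accuracy: ", model.score(X_iid, y_iid))
print("shifted test accuracy:", model.score(X_shift, y_shift))
```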


"Since the early days, machine learning theorists have focused on the iid conjecture," the scientists wrote. Unfortunately, this is not a realistic assumption in the real world," he writes.

Real-world settings are constantly changing due to different factors, many of which are almost impossible to represent without causal models. Intelligent agents must constantly observe and learn from their environment and other agents, and adapt their behavior to changes.

"The performance of today's best AI systems tends to take a hit when they go from lab to field," the scientists write.

The i.i.d. assumption becomes even more fragile when applied to areas such as computer vision and natural language processing, where the agent has to deal with high-entropy environments. Currently, many researchers and companies are trying to push past the limits of deep learning by training neural networks on more data, hoping that larger datasets will cover a wider distribution and reduce the likelihood of failure in the real world.

