Machine learning is a critical part of modern business and research. Using algorithms and neural networks, machine learning helps computer systems improve their performance. These systems build models from sample data and use them to make decisions without being explicitly programmed.
Neural networks trace back to 1943, when the mathematician Walter Pitts and the neurophysiologist Warren McCulloch wrote a paper on neurons and how they function. They then decided to model a neuron with an electrical circuit, and the neural network was born.
Further research followed with the Turing test, in which a computer had to convince a human that it, too, was human. Decades later, Google DeepMind's AlphaGo defeated the world's number one player at Go, an ancient board game so complex that it is said to have more possible board configurations than there are atoms in the universe.
Undoubtedly, AI and machine learning have progressed to heights one could scarcely have fathomed, and this progress has largely been driven by computational power.
But as the drive to shrink classical computer chips continues, transistors are approaching the smallest molecular sizes possible. Machine learning can therefore no longer rely on raw computational power alone to create more powerful models. This is why machine learning is now turning to compositional learning.
What is compositional learning?
The idea behind compositional learning is that no single model can do every task. When deep neural networks were used for one narrow task, such as classifying an image as a cat or a dog, or recognizing cancer, they performed extremely well. The catch is that such a model can perform only that one task at a time.
As AI technology and its applications become more complex, a single neural network must grow ever larger, and with more neurons come more complications. At some point, this kind of growth hits a dead end.
If, instead, you combine several neural networks, each performing one segment of the complete task, the model as a whole tends to perform better, even on intricate tasks, while keeping its computing footprint reasonable.
When a task is broken into fragments handled by several neural networks, each network can specialize in a certain field. The relationship is similar to asking the prime minister to make every decision alone versus with support from the secretaries of defense, health, labor, and other departments.
Here’s a simple example:
A chatbot is developed to help grow a restaurant’s business by engaging with customers: making reservations, answering questions about the menu, or having general chitchat. The conversation breaks naturally into three sections – chitchat, information retrieval, and action taking.
Instead of having one single model that takes in the customer’s message and produces a response, we can opt for a more distributed system.
Single model: current message history > general neural network (encoder – decoder) > response
Compositional model: current message history > task distributor neural network > specialized network (chitchat, information retrieval, or action) > response
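The compositional pipeline above can be sketched in a few lines of Python. This is a toy illustration only: in a real system the distributor and the three specialists would each be trained neural networks, whereas here simple keyword rules stand in for them, and all names (`task_distributor`, `chitchat_model`, and so on) are hypothetical, not from any library.

```python
def chitchat_model(message):
    # Specialist 1: handles general conversation.
    return "Glad to chat! How can I help with your visit?"

def info_retrieval_model(message):
    # Specialist 2: answers questions about the menu.
    return "Today's menu includes pasta, grilled salmon, and a vegan bowl."

def action_model(message):
    # Specialist 3: carries out tasks such as making a reservation.
    return "Your table has been reserved."

def task_distributor(message):
    # Stand-in for the distributor network: routes each message
    # to the specialist best suited to handle it.
    text = message.lower()
    if "reserv" in text or "book" in text:
        return action_model
    if "menu" in text or "dish" in text:
        return info_retrieval_model
    return chitchat_model

def respond(message):
    # Full pipeline: message > task distributor > specialist > response
    return task_distributor(message)(message)

print(respond("Can I book a table for two?"))  # routed to action_model
print(respond("What's on the menu tonight?"))  # routed to info_retrieval_model
```

Each specialist stays small and focused, and new capabilities can be added by plugging another specialist into the distributor rather than retraining one giant network.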
With the help of a distributed model, you get two benefits: each sub-network can specialize in its own segment of the task, and the system as a whole stays within a reasonable computing budget.
Encoder-decoder networks and GANs (generative models) are themselves composed of multiple networks and are sometimes described as compositional. In the current context, however, each still counts as a single model: compositional learning as described here combines several such models into a larger, more effective system, a composition of compositions.
One of the major reasons compositional learning works so well may be that our own brains are compositional in nature. That said, compositional learning is considerably more difficult to design than standard modeling.
However, we have yet to see what AI has in store for compositional learning.
The post Everything You Need to Know About Compositional Learning in Machine Learning appeared first on Brainstormingbox.