Apr 11, 2022 • 6M

Robo-teachers? Fundamentals of AI for educators

Part 5 – Deep Learning

Great Teachers Matter

Neural Networks and Deep Learning

Geoffrey Hinton, who came from a family of well-known academics, wanted from his teenage years to become a professor and to study Artificial Intelligence (AI). Even when research stalled during the AI Winter (a period of reduced funding and interest in artificial intelligence research), Hinton believed that Rosenblatt’s idea of the neural network was the right path to follow.

In 1972, Hinton began a PhD in Artificial Intelligence at the University of Edinburgh. AI was not even considered a proper scientific subject at that time, and many people thought that Hinton was wasting his time and talent by researching it.


The adverse remarks only spurred Hinton to work harder, and he was confident that he would succeed in this field. He knew that the main hindrance to Artificial Intelligence was computing power. He also knew time was on his side: Moore’s Law predicted that the number of components on a chip would double about every 18 months.

Soon, Hinton started developing core theories on neural networks. Eventually, his theories came to be known as Deep Learning.

In 1986, Hinton co-wrote a paper with David Rumelhart and Ronald J. Williams titled Learning Representations by Backpropagating Errors, in which they set out the key procedure for using backpropagation to train neural networks. Backpropagation significantly improved the accuracy of predictions and of visual recognition. Hinton’s pioneering work also built on the achievements of numerous other researchers who kept their faith in neural networks.
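For readers curious about what backpropagation actually does, here is a minimal sketch: a tiny one-hidden-layer network learns the XOR function by repeatedly passing the error backwards through its layers and nudging the weights by gradient descent. The network size, learning rate and iteration count are illustrative choices, not values from the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: compute the network's current predictions
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error from output back to input
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

loss = float(((out - y) ** 2).mean())
predictions = (out > 0.5).astype(int)
print(predictions.ravel())  # with successful training, this matches XOR
```

The "deep" in deep learning simply refers to stacking more of these trainable layers; backpropagation is what makes training all of them at once feasible.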

Geoffrey Hinton’s achievement influenced a number of researchers to come up with their own innovations in the field of AI.

Some of the most important achievements during this period were the following:

1.      Kunihiko Fukushima, a Japanese computer scientist, developed the neocognitron in 1980. Modelled on the visual cortex of animals, the neocognitron could recognise visual patterns and became the basis of convolutional neural networks.

2.      John Hopfield developed Hopfield networks, a form of recurrent neural network, in 1982.

3.      In 1989, Yann LeCun combined convolutional networks with backpropagation. The approach was later used to read handwritten checks.

4.       In 1989, Chris Watkins published his PhD thesis, Learning from Delayed Rewards, in which he described Q-learning, a major advance in reinforcement learning.

5.       Building on his success with backpropagation, Yann LeCun and co-authors published Gradient-Based Learning Applied to Document Recognition in 1998, explaining how gradient-descent algorithms could be used to train neural networks.
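Watkins's Q-learning, mentioned above, can be sketched in a few lines. In this toy example an agent in a five-state corridor learns, purely from a delayed reward at the goal, which action each state should take. The corridor world, learning rate and discount factor are illustrative choices, not details from the thesis.

```python
import random

# A tiny corridor world: states 0..4, actions left/right,
# reward 1.0 only for reaching the goal state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # 0 = left, 1 = right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for _ in range(500):  # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2, r, done = step(s, ACTIONS[a])
        # Q-learning update: learn the value of delayed rewards
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy: 1 means "move right" (towards the goal)
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(GOAL)]
print(policy)
```

The key idea is that the reward at the goal gradually "flows backwards" through the Q-values, so even the starting state learns that moving right is valuable.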

Technological drivers of modern AI

New conceptual approaches, theories and models fuelled growth and development in the field of artificial intelligence. Besides these, several other important technological drivers helped AI grow. They included:

Explosive growth in datasets: The rise of the internet became a major factor, as it allowed researchers to assemble the massive datasets needed for the development of AI.

Infrastructure: Google has played a significant role in the development of AI for the past 15 years. As web indexing grew at a staggering rate, Google came up with creative approaches to building scalable systems. This aggressive approach produced innovations in commodity server clusters, virtualisation and open-source software. In 2011, Google launched the ‘Google Brain’ project, based on Deep Learning, making Google one of the early adopters of the technology.

Graphics processing units (GPUs): This chip technology, pioneered by Nvidia, was originally developed to render high-speed graphics in games. Its architecture suits AI because GPUs use parallel processing, which for these workloads is many times faster than traditional CPUs. Nowadays, most deep learning research is done on GPUs.

All these factors contributed to the growth of AI, and these factors will continue to drive further growth in the development and adoption of AI in the years to come.

Now put on your thinking hats and think about the following questions for a couple of minutes.

As a teacher, how would you describe the term "deep learning" to your students?

Can you think of the ways in which Rosenblatt’s idea influenced Geoffrey Hinton?

How would you describe Geoffrey Hinton’s contributions to the development of neural networks and deep learning?

Write down your thoughts and discuss them with your students, children and colleagues. Listen to their views and compare them with your own, noting how similar or different they are.

Thank you for listening. Subscribe to The Scando Review on thescandoreview.com.

Happy Teaching!