Computers and machines are going to take over most jobs – an argument that is gaining currency around the world. There are heated debates about what such a world would look like for humans. After all, jobs are central to the identities of most people on the planet.
With rapid developments in artificial intelligence (AI)-based technologies, we are beginning to see advances in autonomous vehicles – self-driving cars and trucks – threatening the livelihoods of drivers around the world. Robots have already become central features of factories and manufacturing plants, and robotics is becoming a common feature in surgery as well. There is also speculation that intelligent systems will soon replace human doctors, lawyers and accountants.
While most professions and professionals are seen to be on the verge of being ‘disrupted’ by advanced technologies, there is one that seems to evade this trend – teaching and teachers. This may be because most people have an intuitive feeling that teaching and learning are deeply social, and, therefore, deeply human endeavours. This is perhaps why education is still pretty much dominated by schools, colleges and universities as we currently know them – filled with human beings interacting with each other.
However, with exponential advances in machine learning, deep learning, and areas like ‘affective computing’, emerging AI-based technologies are prompting a re-evaluation of this long-held belief that education is the sole preserve of humans. Before we can meaningfully and critically explore the implications of AI-based technologies for teachers and the teaching profession, it is important to gain at least a foundational understanding not only of the technology but also of its history and evolution.
AI is not something new; it has a long and chequered history. Science fiction writers and Hollywood blockbuster movies have played important roles in seeding the public’s imagination about AI, much of which remains far removed from the realities of what the technology can accomplish. Consequently, most people have little to no understanding of what AI is or of what its possibilities and limitations are. Simply saying “AI is artificial intelligence” obviously does not explain anything. The word ‘artificial’ makes sense, given that the discussion is about machines and not human beings. The word ‘intelligence’, however, is more complex to understand.
Intelligence can mean different things to different people, and the list of attributes can be a long one. For instance, intelligence can be seen as the ability to obtain and process new information – that is, the ability to learn. It can also include the ability to reason and understand. At the same time, the ability to validate a given piece of information as true or false, to see relationships between different sets of information, to infer meaning, and to separate fact from fiction are all important aspects of intelligence.
According to Howard Gardner, the well-known Harvard psychologist, humans do not possess a singular type of intelligence. Instead, according to his theory of multiple intelligences, human beings rely on a wide range of intelligences to carry out various tasks. Gardner defines intelligence as a computational capacity – a capacity to process a certain kind of information – that originates in human biology and human psychology. Accordingly, humans have certain kinds of intelligences, whereas birds and computers feature other kinds of computational capacities.
Intelligence, therefore, is not one thing. Rather, it is more useful (and probably more accurate) to view it as a process. When we begin to define intelligence as a process, we can see how it relates to computer systems, because processing, as it happens, is something computers can be very good at, depending on the quality of the data they are provided with and the quality of the algorithms defined to achieve a certain goal. However, access to vast amounts of data and good-quality algorithms does not, by itself, make a computer intelligent, at least in the human sense. This is because the computer would still rely on a mechanical-mathematical process to manipulate data. In other words, it understands very little. Similarly, its ability to distinguish between truth and mistruth is also severely limited. As of now, there is no computer that can successfully and sustainably replicate all the different characteristics of human intelligence.
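The point about mechanical-mathematical processing can be made concrete with a deliberately naive sketch. The keyword list and function below are invented purely for illustration; they are not from any real system. The program mechanically matches symbols, so it treats a true statement and a false one about the same topic identically – it cannot tell fact from fiction.

```python
# A toy "truth checker" that only matches keywords, with no understanding.
# The trusted-word list is a made-up assumption for this illustration.
FACT_KEYWORDS = {"sun", "water", "earth"}

def looks_true(statement: str) -> bool:
    """Mechanically flag a statement as 'true' if it mentions a trusted word."""
    words = set(statement.lower().split())
    return bool(words & FACT_KEYWORDS)

print(looks_true("the sun rises in the east"))  # True
print(looks_true("the sun rises in the west"))  # also True: same keyword,
                                                # so the program cannot
                                                # separate fact from fiction
```

Both statements pass the check because the processing is purely symbolic: the program manipulates data according to a rule, but it does not understand what the sentences mean.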
As noted above, Howard Gardner emphasises the difference between the computational capacities of humans and those of birds, animals and even computers. This is an important, and often little-understood, point.
For Gardner, intelligence comprises abilities that are rooted in human biology and psychology. It is, therefore, important to understand that the intelligence we talk about in AI is quite different from – and sometimes has little to do with – human intelligence.
Rather than focusing on a single definition, it is perhaps more useful to categorise AI into four different buckets based on its ability to think and act. The ability to think and act can be further broken down into human-like thinking, human-like acting, rational thinking, and rational acting. This implies that there is a discernible difference between human and rational processes. Before we get into the difference between human and rational processes, let us first look at the four categories or four ways of looking at AI.
Let us begin with the category ‘human-like thinking’. When a computer thinks like a human, it performs tasks that typically require aspects of human intelligence (as opposed to following mechanical procedures). For example, a computer that can think like a human should be able to drive a car, given that driving a car involves a lot more than simply following traffic rules; it also involves intuitively grasping what is happening on the road and understanding how other drivers are behaving, all of which are key characteristics of human intelligence.
Cognitive modelling approaches such as introspection, psychological testing, and brain imaging are used to determine whether a computer program thinks like a human. Once a model of human thinking is created, computer programs can be written to simulate that model. However, given the complexity, uncertainty, and high degree of variability in human thinking, modelling it is a challenge in itself, let alone representing it accurately in a programming language. This category of AI is often used in fields like psychology and neuroscience as part of research to better understand the human brain and thought processes.
In part 2 of this series of articles on AI, we will look into human-like acting, rational thinking, and rational acting. We will also explore the difference between rational and human processes and look at a useful typology of AI.
Now put on your thinking caps and think about the following questions for a couple of minutes.
What does the term artificial intelligence mean to you?
How would you describe artificial intelligence to your students?
In your opinion, what would be the impact of artificial intelligence on the future of the education system?
Write down your thoughts and discuss them with your students, children and colleagues. Listen to their views and compare them with your own, noting how similar or different they are.
Thank you for listening. Subscribe to The Scando Review on thescandoreview.com.