Different types of AI
An AI that acts like a human is what most fascinates people, and it is therefore one of the favourite subjects of science-fiction writers and Hollywood producers. For an AI to be judged as human-like, it must pass a test known as the Turing Test, named after the British mathematician, computer scientist and code breaker Alan Turing, who is often referred to as the father of AI.
The Turing Test is a game that involves three players: two humans and a computer. One of the humans acts as an evaluator, asking the other two players open-ended questions with the goal of determining which one is the human. If the evaluator cannot reliably distinguish the human player from the computer, then the computer is presumed to be “intelligent”.
This type of AI is used in natural language processing, knowledge representation, automated reasoning, and machine learning. An example of this type of AI implementation is the popular chatbot Kuki (formerly known as Mitsuku). The Kuki chatbot has exchanged over a billion messages with an estimated 25 million human end-users on the web, social media, and various messaging applications. It also holds the record for winning the Loebner Prize (a Turing Test competition) five times.
Rationality (or rational behaviour) is based on a set of prescribed and expected behavioural guidelines, which are, in turn, based on typically identified human behaviours in various contexts. A human is considered rational when he or she behaves in line with such guidelines and expectations. A computer is considered to be thinking rationally when it can decide how to interact within a given environment based on a guide of expected behaviours. This forms the starting point to solving specific problems. The starting point can then be modified as feedback is collected during the process of actually solving the problem. In a similar way, a computer is considered to be acting rationally when it is able to use a guide of pre-recorded human actions in specific contexts to interact in a given environment.
Just as with rational thought, rational action also depends on an in-principle solution, which may not correspond to reality. However, rational acts provide a starting point from which a system can begin to negotiate reality to arrive at a specific goal.
Rational thinking and rational acting differ from human thinking and human acting in their outcomes. As noted, rational outcomes depend on a rulebook, assuming the rulebook is correct. Human outcomes are far more complex and uncertain in nature, as they are influenced by human instincts and intuition, which are highly variable. Driving is an example. Driving by the letter of the law (rather than in its spirit) might turn out to be a very frustrating experience and may not even lead you to your goal, especially because the other drivers on the road are not following every bit of the law with absolute precision. Humans drive intuitively rather than mechanically because the real world requires negotiating uncertainties and unforeseen circumstances, things that make it more chaotic than can be represented and accommodated by precise rules. This is why self-driving vehicles need to act humanly as opposed to rationally.
According to Arend Hintze, a professor who conducts research in general-purpose AI, neuro-evolution, evolutionary game theory, and data analytics at Dalarna University in Sweden, AI should go beyond just teaching machines to learn. He has developed a useful typology that differentiates four types of AI.
Reactive machines
In Hintze’s typology, the first type of AI is the ‘reactive machine’. Reactive machines are the most basic type of AI: they have no memory and do not use data from past experiences to make decisions. An example is IBM’s Deep Blue, which defeated the chess grandmaster Garry Kasparov in the 1990s. Another example is Google’s AlphaGo, which has beaten the world’s top Go players. These AI systems have no understanding of the dynamic world we inhabit. Instead, they act at every given moment based on the data available; in other words, they simply react. They do not store previous experiences and are therefore incapable of drawing on history. When faced with problems, they compute solutions again and again rather than relying on heuristics (in psychology, heuristics are simple, efficient rules, learned or inculcated by evolutionary processes). These systems also cannot interactively participate in the world.
Limited memory machines
The second type of AI in Hintze’s typology comprises machines with limited memory. Machines in this category are able to look into the past but still fall short of being able to build accurate representations of the world. When such a machine encounters a situation it has seen before, it can rely on experience to reduce reaction time and free up resources for making new decisions. This is roughly the current state of the art in AI, aspects of which are already being deployed in self-driving cars.
Understanding machines (Theory of Mind)
Machines of the third type are far more advanced: they can build representations not only of the world but also of other actors in it. In psychology, the ability to understand other people, and the awareness that other people have thoughts and emotions of their own, is called the ‘theory of mind’. This type of AI is feasible to some extent today but is not ready for large-scale commercial use.
If AI systems are going to move among us, they will have to understand that each of us has our own expectations, and to adjust their behaviour accordingly. For self-driving cars to become fully autonomous, this level of AI must be fully developed and available for civilian use, because an autonomous vehicle should not only reach its destination but also, along the way, navigate the realities of traffic by understanding the behaviour and actions of the other drivers on the road.
Self-aware machines
Finally, the most advanced type of AI system, according to Hintze, will be able to build accurate representations not only of the world and of other actors but also of itself. This is about consciousness, and such machines may be referred to as self-aware machines. This involves a leap in complexity: the statement “I want this” involves a lower order of intelligence than the statement “I know I want this”, which requires an intelligence capable of knowing itself. As of now, this is pure science fiction.
Now put on your thinking hats and consider the following questions for a couple of minutes. Arend Hintze developed a useful typology differentiating four types of AI. Can you recall the differences between these four types?
In your opinion, which type of AI is closest to human thinking?
Do you think that self-aware machines will only remain as a fiction?
Write down your thoughts and discuss them with your students, children and your colleagues. Listen to their views and compare them with your own. As you listen to others, note how similar or different your views are to others’.
Thank you for listening. Subscribe to The Scando Review on thescandoreview.com.