AI and Development – Reality, Controversies, Responses

Regional
2022
Artificial intelligence appears in our news feeds nearly every day, accompanied by a multiplicity of narratives and expectations that are generally hyperbolic - either excited or fearful - and rarely nuanced. With this in mind, it is critical to unpack the reality of what artificial intelligence is and is not, and the implications for our collective future. Artificial intelligence is “the science and engineering of making intelligent machines”, where “intelligence is the computational part of the ability to achieve goals in the world”, according to John McCarthy, who convened the first-ever gathering on AI in 1956 (McCarthy 2007). Today, artificial intelligence is the science and engineering of computer systems in which “intelligence” means the ability to perform tasks such as visual perception, speech recognition, language translation, and certain types of decision-making.

Since that first AI gathering in 1956, significant advances in AI have been made, although more slowly than early pioneers predicted. It was only in the 2010s that advances in machine learning — a particular approach to AI — started to have real-world impact. Specifically, the introduction of deep learning (LeCun, Bengio, and Hinton 2015), enabled by increasing computational power and data availability, has propelled advances in AI. In the 2020s, machine learning algorithms are at the core of many prevalent technologies. They power search engine results, personalize news feeds, enable chatbot conversations, compose music, make medical diagnoses, produce efficient engineering designs, enable real-time facial recognition and surveillance, and inform life-altering decisions about who is eligible for a job interview, a bank loan, or even parole. The progress made in AI over more than 60 years becomes clear when we compare an early AI program called Eliza to GPT-3 (Generative Pre-trained Transformer), released in 2020 (Brown et al. 2020).
Eliza, programmed in the 1960s, was the first chatbot to use early natural language processing to simulate a psychotherapist. The relatively simple system worked by applying rules that flipped patients’ statements around into questions, like this:

Patient: I am feeling stressed out.
Eliza: Do you believe it is normal to be feeling stressed out?

GPT-3, by contrast, does not use a predetermined set of rules. Instead, it is a deep learning language model trained on massive datasets of hundreds of billions of words scraped from sources such as Wikipedia and large-scale web crawls. GPT-3 learns the statistical relationships within these datasets and uses that learning to generate new outputs from new inputs.

The chasm between Eliza and GPT-3 demonstrates the key shift that has taken place in AI development: we have moved from approaches based on explicit rules to those based on machine learning. This is a critical leap for real-world applications because, in many contexts, it is impossible to spell out explicitly the rules to be followed. For example, how could one possibly write rules to account for all the permutations of visually recognizing a person or navigating a car through a city? AI is not taught how to do these things with a set of rules; it learns from experience codified in data.

Despite the incredible power of machine learning, AI models are still narrow in the sense that they can be applied only to the task for which they are trained; they “break” when applied to another task. An AI algorithm trained to play chess cannot play Go, and the GPT-3 language model can play neither chess nor Go. Furthermore, these models do not understand in any meaningful sense that they are playing chess, nor do they understand the meaning of a sentence in a chatbot conversation. The model simply calculates candidate moves or utterances and chooses the one with the highest probability of being correct.
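The rule-based approach behind Eliza can be illustrated with a few lines of code. The sketch below is a minimal, hypothetical reconstruction of the flipping technique described above — the patterns and wording are illustrative and are not taken from Eliza’s actual script:

```python
import re

# Illustrative rules: each pair maps a statement pattern to a question
# template. "{}" is filled with the captured remainder of the statement.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE),
     "Do you believe it is normal to be {}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE),
     "Why do you feel {}?"),
]

def reply(statement: str) -> str:
    """Flip a patient's statement into a question using the first
    matching rule; fall back to a generic prompt otherwise."""
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please tell me more."

print(reply("I am feeling stressed out."))
# → Do you believe it is normal to be feeling stressed out?
```

Every behaviour of such a system must be anticipated and hand-written by its programmer — precisely the limitation that machine learning approaches like GPT-3 escape by learning patterns from data instead.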
In short, these algorithms lack a general, humanlike, adaptive intelligence that would enable them to learn and apply learning across domains, situations, and problems. While researchers are actively exploring and developing AI capable of adaptive, general intelligence, such advances fall outside the scope of AI in development at the time of writing.
Research Type
Public policy and ethics
Organisation(s)
International Development Research Centre (IDRC)
Authors
Matthew Smith, Ruhiya Kristine Seward