Appendix B: What Is Artificial Intelligence?

Philip L. Frana, Associate Professor of Interdisciplinary Liberal Studies & Independent Scholars, James Madison University

Artificial intelligence (AI) is a rapidly evolving field rooted in computing and the cognitive sciences. As an academic field, AI involves research on intelligent agents that perceive and respond to environments, whether cyberspace or the physical world, to achieve specific goals. Chatbot assistants such as Siri and Alexa are examples of intelligent agents, as are the sensor- and actuator-based systems built into Roomba vacuum cleaners and Tesla cars. More generally, AI encompasses both real and fictional efforts to imitate human (and animal) intelligence and creativity with machines and code.

Human behaviors and characteristics of interest to AI researchers may involve pattern recognition, problem-solving and decision-making, learning and knowledge representation, communication, and emotions. Some advances in AI are stunning enough to garner millions of views on social media, but for every Boston Dynamics robot performing synchronized gymnastics or Disney Stuntronics Spider-Man doing spectacular acrobatic tricks, there are dozens of AI applications (recommendation and search engines, banking and investment software, shopping and pricing bots) that are so commonplace that we hardly remark upon their near-magical effectiveness anymore. In the popular imagination, “AI is whatever hasn't been done yet”; AI for most people is digital pixie dust.

Dreams of thinking machines are as old as civilization. Hesiod, writing around 700 BC, tells the tale of the lethal autonomous robot Talos, who protected Crete by tossing giant rocks at enemy ships. Three centuries later, a group of spirit movement machines (bhuta vahana yanta) were, according to legend, forged to protect the relics of the Buddha. In medieval times, Roger Bacon purportedly created a talking bronze head that, like Siri or Alexa, could answer queries. One of the earliest English-language accounts of machines with human-like intelligence is Samuel Butler's 1872 utopian novel Erewhon, whose machines are conscious and able to self-replicate. In the 20th century, fictional depictions of artificial intelligence found homes in the stories of Isaac Asimov, Philip K. Dick, and William Gibson. Hollywood is also enamored of sentient computers, producing classic films such as 2001: A Space Odyssey (1968), Blade Runner (1982), and The Terminator (1984), among many others. Themes that are abundant in fiction about AI include authenticity, personhood, companionship, loneliness, dystopia, and immortality.

The origins of artificial intelligence as actual science are interdisciplinary. One source of artificial intelligence ideas is cybernetics, which sought to understand the role of mammalian neural pathways and connections that produce homeostasis and intelligent control. In the 1940s, the Teleological Society and Macy Conferences formed to tackle problems important to understanding human physiology, creating servomechanisms for use in factories and weapon systems, and envisioning superintelligent “giant brains.” These organizations incubated the ideas of several pioneers important to the development of AI, including John von Neumann, Warren McCulloch, Walter Pitts, and Claude Shannon. Cybernetic and connectionist models that showed how biological organisms self-regulate, interact with the environment, and achieve goals continue to inspire pathbreaking efforts in system theory, artificial neural networks, and artificial intelligence.

A second wellspring of ideas about AI derives from cognitive psychology. Experimental psychologists seeking to move away from behaviorism invented the computational theory of mind in the 1950s, and this movement is now called the Cognitive Revolution. The computational theory of mind combines the information theory work of Claude Shannon; Alan Turing's conception of mental activity as computation; Allen Newell and Herbert Simon's information processing models of human perception, memory, communication, and problem solving; and Noam Chomsky's generative linguistics. Cognitive psychology tackles a number of problems in human and artificial intelligence, including recognition, attention, memory, and psycholinguistics.

A third source for AI is rule-based and symbolic representations of problems, also known as good old-fashioned AI (GOFAI). In addition to Newell and Simon, other active proponents of this approach include Marvin Minsky, John McCarthy (who coined the term artificial intelligence), and Edward Feigenbaum. GOFAI in the latter half of the 20th century nurtured a broad range of knowledge-based expert systems that emulated human decision-making. AI systems were created for several academic fields and commercial applications. DENDRAL was designed to detect and identify complex organic molecules, potentially useful on automated NASA planetary missions. MYCIN diagnosed and recommended therapies for blood infections. INTERNIST-I encoded the expertise of a doctor of internal medicine. The Cyc project to create an expert system for “common sense” has spanned almost four decades. While a few AI developers still assert that an expert system might eventually approach the versatility of a human thinker, most now think that artificial neural networks and deep learning, or some combination of neural and symbolic approaches, have the greatest potential to approach the artificial general intelligence (AGI) found in speculative fiction.

Machine learning is often referred to as artificial intelligence but is actually a particularly productive subfield focused on using computer algorithms to build systems that autonomously learn from data and/or experience. Machine learning systems learn gradually, much in the way humans are thought to learn. The goal, notes AI pioneer Arthur Samuel, is to give machines “the ability to learn without being explicitly programmed.” Computer scientist Pedro Domingos defines “five tribes” within the subfield of machine learning: symbolists (inspired by logic and induction), connectionists (inspired by neural networks), evolutionaries (genetic development and transformation), Bayesians (statistics and probability), and analogizers (psychology and optimization). Machine learning platforms have improved medical care, facial and speech recognition, predictive analytics, warehouse management and transportation logistics, and many other workflows and tasks.

Work in machine learning is today divided into three broad types: supervised, unsupervised, and reinforcement. Supervised learning algorithms depend on labeled training data provided by human specialists. Here the machine learning model trains on input examples to classify, assess, or make predictions about similar new data. An example is using samples of spam email to design a spam filtering system. Unsupervised machine learning algorithms search for interesting patterns or structure in unlabeled datasets. The objective here is difficult to achieve, as the model is asked to provide valuable insights without labeled examples to guide it. An example might involve detecting and differentiating between groups of customers that have not otherwise been identified. Reinforcement learning depends on intelligent agents that interact directly with the environment to achieve rewards or attain goals through trial, error, and feedback. Reinforcement learning is widely used to help train AIs to play games and drive automobiles.
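
The supervised spam-filter example above can be sketched in a few lines of code. This is a minimal illustration under simplifying assumptions: a four-message training set and a crude word-count score stand in for the large labeled corpora and statistical models (such as naive Bayes) that real filters use.

```python
# Toy supervised learning: classify messages as "spam" or "ham" after
# training on a small, hand-labeled dataset (an illustrative assumption).
from collections import Counter

def train(labeled_messages):
    """Count how often each word appears in spam vs. ham messages."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in labeled_messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Label new text by which class its words appeared in more often."""
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

training_data = [
    ("win a free prize now", "spam"),
    ("free money claim your prize", "spam"),
    ("meeting moved to tuesday", "ham"),
    ("lunch on tuesday works for me", "ham"),
]

model = train(training_data)
print(classify(model, "claim your free prize"))        # spam-like words dominate
print(classify(model, "are we still on for tuesday"))  # ham-like words dominate
```

The key point is that the human supplies the labels; the system merely generalizes from them to new, unseen messages.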

Deep learning is a subfield of machine learning inspired by the structures and functions of the human brain. Resurgent interest in multilayer artificial neural networks (ANNs) and deep learning is producing exciting advances in speech recognition and natural language processing (NLP), computer vision and image recognition, neuromorphic computing, sustainability science, bioinformatics, and smart devices and vehicles. Deep learning powers the top machine translation engines (SYSTRAN, Google Translate, Microsoft Translator), imaging technologies (DeepFace, CheXNet, StyleGAN), environmental monitoring systems (Green Horizons, Wildbook, PAWS), and computational creativity applications (Deep Dream, MuseNet, WaveNet).
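
At its core, a multilayer neural network is a stack of layers, each computing weighted sums of its inputs and passing them through a nonlinearity. The sketch below is purely illustrative: the weights and biases are arbitrary fixed numbers chosen for the example, whereas a real deep network learns millions of such parameters from data.

```python
# Minimal feed-forward network: two stacked layers of weighted sums,
# each followed by a sigmoid nonlinearity. Weights are illustrative
# placeholders, not learned values.
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias per neuron."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def network(inputs):
    # Hidden layer: 2 inputs -> 2 neurons; output layer: 2 -> 1.
    hidden = layer(inputs, weights=[[2.0, -1.0], [-1.5, 2.5]], biases=[0.5, -0.5])
    output = layer(hidden, weights=[[1.0, 1.0]], biases=[-1.0])
    return output[0]

score = network([0.8, 0.2])
print(round(score, 3))  # a value between 0 and 1
```

"Deep" learning simply stacks many more such layers, allowing each to build on the features detected by the one before it.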

Today, much interdisciplinary work in artificial intelligence occurs in university and corporate computer science laboratories and within the fuzzy boundaries of cognitive science. Cognitive science is a multidisciplinary venture propelled by researchers in artificial intelligence, android science, biological information processing, computational neuroscience, cognitive psychology, human and animal cognition, linguistics and anthropology, neurology, and philosophy of consciousness. Notable 21st century computer scientists straddling multiple disciplines in cognitive science are Demis Hassabis (DeepMind), Geoffrey Hinton (Google Brain), and Fei-Fei Li (Stanford HAI).

Artificial intelligence research is grounded, and in many ways held accountable, by the hard questions of ethics and consciousness in philosophy. The ethics of AI extend back to the Three Laws of Robotics offered up in Isaac Asimov's short story “Runaround” (1942). The three laws still attract conversation but have largely been supplanted by other issues, especially the “black box”* of AI decision-making and questions of machine autonomy and human complacency. Instances of algorithmic bias and discrimination are common and growing. Problems of algorithmic accountability and governance are partially addressed in the European Union's General Data Protection Regulation (GDPR) and “Ethics Guidelines for Trustworthy AI” (2018), and in the proposed Artificial Intelligence Act (2021). Elsewhere, governments and corporations are creating directorates to recommend adoption of new policy frameworks, with varying levels of success. Explainable AI (XAI) refers to multiple approaches and design choices that reduce the potential for bias while making the inner workings of AI models transparent to human observers. Prominent organizations advocating for equitable and accountable artificial intelligence include the Algorithmic Justice League, the Partnership on AI (Amazon, Facebook, IBM, Google, and Microsoft), and the Global Partnership on AI. The goal of AI for Good's global summits is to identify artificial intelligence solutions that accelerate progress toward the United Nations Sustainable Development Goals.

Other professional organizations are also looking closely at the moral conduct of machines and their designers. AI autonomy in motor vehicles, autonomous weapons systems, and caregiver robots opens up a host of new opportunities and threats. SAE International (formerly the Society of Automotive Engineers) defines six levels of driving automation. At level 0, the human driver is in full control of all responses to the environment and emergent threats. At level 1, the human driver is assisted by an automated system providing either longitudinal (adaptive cruise control) or lateral (lane centering) control. At level 2, the automated system provides combined steering, braking, and acceleration support. At level 3, the automated system drives under limited conditions, and the human is not driving until the system requests that the human retake control; an example of level 3 support is a “traffic jam chauffeur.” At level 4, the automated system never requires the human to assume control but operates only under specific conditions; an example here is a “local driverless taxi.” At level 5, the vehicle is capable of driving itself under all conditions and without a human being present. Level 2 is the highest level of autonomy available with General Motors' Super Cruise, Nissan's ProPILOT, or Tesla's Autopilot. As of 2021, no mass-market cars had yet reached level 3, 4, or 5 autonomy.

Lethal autonomous weapons systems (LAWS) are similarly divided into levels of AI autonomy. Human-in-the-loop weapons select their targets and destroy them only under direct human authority. Human-on-the-loop weapons are monitored but largely free to deliver force autonomously; overriding such a system may demand a hair-trigger response from its human supervisor. Human-out-of-the-loop weapon systems identify, target, and destroy enemies without any human oversight. Examples of powerful AI weapons are the U.S. Navy's MK 15 Phalanx CIWS (“sea-whiz”) and the Israeli IAI Harpy “suicide” drone. The Harpy is categorized as a loitering munition that autonomously flies over an area until it finds a target to attack; the Harpy has aroused concern that it violates the laws of war.

More constructively, caregiver robots provide aid as assistants and companions to vulnerable populations, such as children, the disabled, the mentally ill, and the elderly. AI caregiver technology is available or being tested in many countries but is most common in Japan, where cultural acceptance and an aging population have stimulated sales of plush robotic baby harp seals (Paro), robotic companion dogs (AIBO), and autonomous humanoid patient-lifting robots (Robear).

Roboethicists are engaged in understanding the moral conduct of human creators of artificially intelligent robots. The Foundation for Responsible Robotics and the European Robotics Research Network (EURON) recognize the importance of human accountability in the development of AI systems. Other experts are thinking about full-fledged robot ethics and moral machines; one goal is implanting artificial ethical capability into every autonomous machine.

AI will undoubtedly disrupt the nature and future of work around the world, threatening to throw millions of retail sales workers and managers, accountants and bookkeepers, factory workers, and journalists out of work. Conversely, some experts in the trucking industry predict that a persistent driver shortage will trigger a full-scale switch to autonomy before 2030; there are today more than 3.5 million truck drivers in the United States alone.

The impact on life and work will be even greater if advances are made in quantum artificial intelligence (QAI) and superintelligence. Linking quantum processors to AI could make possible the autonomous management of traffic across an entire city, make pharmaceutical discovery far less costly and arduous, or render the digital encryption of military secrets obsolete. Respected authorities such as Nick Bostrom, Ray Kurzweil, and Murray Shanahan warn that a Technological Singularity facilitated by ultra-intelligent AIs could wreak havoc on human civilization, perhaps even precipitating an extinction event. On the other hand, an exponentially growing artificial intelligence could just as easily bring about an end to catastrophic climate change, overpopulation, or cycles of intergenerational poverty.

Note

  1. *   Black-box AI is any artificial intelligence system that produces results without an accountable, transparent explanation of how those results were obtained. Black-box AI makes biased data, unsuitable modeling techniques, and incorrect decision-making more difficult, if not impossible, to detect.