
The Era of AI: Blessing or Threat?

ARTICLE 1
No, the Experts Don't Think Superintelligent AI is a Threat to Humanity
By Oren Etzioni

If you believe everything you read, you are probably quite worried about the prospect of a superintelligent, killer AI. The Guardian, a British newspaper, warned recently that "we're like children playing with a bomb," and a recent Newsweek headline reads, "Artificial Intelligence Is Coming, and It Could Wipe Us Out." Numerous such headlines, fueled by comments from the likes of Elon Musk and Stephen Hawking, are strongly influenced by the work of one man: Professor Nick Bostrom, author of the philosophical treatise Superintelligence: Paths, Dangers, Strategies.

Bostrom is an Oxford philosopher, but quantitative assessment of risks is the province of actuarial science. He may be dubbed the world's first prominent "actuarial philosopher," though the term seems an oxymoron given that philosophy is an arena for conceptual arguments, while risk assessment is a data-driven statistical exercise.

So what do the data say? Bostrom aggregates the results of four different surveys of groups such as participants in a conference called "Philosophy and Theory of AI," held in 2011 in Thessaloniki, Greece, and members of the Greek Association for Artificial Intelligence (he does not provide response rates or the phrasing of questions, and he does not account for the reliance on data collected in Greece). His findings are presented as probabilities that human-level AI will be attained by a certain time:

By 2022: 10 percent.
By 2040: 50 percent.
By 2075: 90 percent.

This aggregate of four surveys is the main source of data on the advent of human-level intelligence in over 300 pages of philosophical arguments, fables, and metaphors.

To get a more accurate assessment of the opinion of leading researchers in the field, I turned to the Fellows of the American Association for Artificial Intelligence, a group of researchers who are recognized as having made significant, sustained contributions to the field. In early March 2016, AAAI sent out an anonymous survey on my behalf, posing the following question to 193 fellows: "In his book, Nick Bostrom has defined Superintelligence as 'an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.' When do you think we will achieve Superintelligence?" Over the next week or so, 80 fellows responded (a 41 percent response rate), and their responses are summarized below:

When Will Superintelligence Arrive? A survey of AI researchers shows they think it's still a long way off.
In the next 10 years: 0 percent
In the next 10-25 years: 7.5 percent
In more than 25 years: 67.5 percent
Never: 25 percent
(Chart: MIT Technology Review)
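As a quick check on the arithmetic behind these figures, the short Python sketch below simply recombines the numbers reported above; it is illustrative only, and it assumes the 80 responses split exactly along the published percentages.

    # Illustrative check of the survey figures quoted above.
    # Assumption: the 80 responses split exactly along the published percentages.
    invited = 193    # AAAI fellows who received the March 2016 survey
    responded = 80   # fellows who answered within about a week

    print(f"Response rate: {responded / invited:.0%}")  # ~41%

    # Published answer shares for "When do you think we will achieve Superintelligence?"
    shares = {
        "In the next 10 years": 0.000,
        "In the next 10-25 years": 0.075,
        "In more than 25 years": 0.675,
        "Never": 0.250,
    }

    beyond_horizon = shares["In more than 25 years"] + shares["Never"]
    print(f"More than 25 years away, or never: {beyond_horizon:.1%}")  # 92.5%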
In essence, according to 92.5 percent of the respondents, superintelligence is beyond the foreseeable horizon. This interpretation is also supported by written comments shared by the fellows. Even though the survey was anonymous, 44 fellows chose to identify themselves, including Geoff Hinton (deep-learning luminary), Ed Feigenbaum (Stanford, Turing Award winner), Rodney Brooks (leading roboticist), and Peter Norvig (Google). The respondents also shared several comments, including the following:

"Way, way, way more than 25 years. Centuries most likely. But not never."

"We're competing with millions of years' evolution of the human brain. We can write single-purpose programs that can compete with humans, and sometimes excel, but the world is not neatly compartmentalized into single-problem questions."

"Nick Bostrom is a professional scare monger. His Institute's role is to find existential threats to humanity. He sees them everywhere. I am tempted to refer to him as the 'Donald Trump' of AI."

Surveys do, of course, have limited scientific value. They are notoriously sensitive to question phrasing, selection of respondents, etc. However, they are the one source of data that Bostrom himself turned to. Another methodology would be to extrapolate from the current state of AI to the future. However, this is difficult because we do not have a quantitative measurement of the current state of AI relative to human-level intelligence. We have achieved superintelligence in board games like chess and Go (see "Google's AI Masters Go a Decade Earlier than Expected"), and yet our programs fail to score above 60 percent on eighth-grade science tests, as the Allen Institute's research has shown (see "The Best AI Program Still Flunks an Eighth Grade Science Test"), or above 48 percent in disambiguating simple sentences (see "Tougher Turing Test Exposes Chatbots' Stupidity").

There are many valid concerns about AI, from its impact on jobs to its uses in autonomous weapons systems and even to the potential risk of superintelligence. However, predictions that superintelligence is on the foreseeable horizon are not supported by the available data. Moreover, doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more.

ARTICLE 2
Yes, We Are Worried About the Existential Risk of Artificial Intelligence
By Allan Dafoe and Stuart Russell

Oren Etzioni, a well-known AI researcher, complains about news coverage of potential long-term risks arising from future success in AI research (see "No, Experts Don't Think Superintelligent AI is a Threat to Humanity"). After pointing the finger squarely at Oxford philosopher Nick Bostrom and his recent book, Superintelligence, Etzioni complains that Bostrom's "main source of data on the advent of human-level intelligence" consists of surveys of the opinions of AI researchers. He then surveys the opinions of AI researchers, arguing that his results refute Bostrom's.

It's important to understand that Etzioni is not even addressing the reason Superintelligence has had the impact he decries: its clear explanation of why superintelligent AI may have arbitrarily negative consequences and why it's important to begin addressing the issue well in advance. Bostrom does not base his case on predictions that superhuman AI systems are imminent. He writes, "It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur."

Thus, in our view, Etzioni's article distracts the reader from the core argument of the book and directs an ad hominem attack against Bostrom under the pretext of disputing his survey results. We feel it is necessary to correct the record. One of us (Russell) even contributed to Etzioni's survey, only to see his response completely misconstrued. In fact, as our detailed analysis shows, Etzioni's survey results are entirely consistent with the ones Bostrom cites. How, then, does Etzioni reach his novel conclusion? By designing a survey instrument that is inferior to Bostrom's and then misinterpreting the results.
The subtitle of the article reads, "If you ask the people who should really know, you'll find that few believe AI is a threat to humanity." So the reader is led to believe that Etzioni asked this question of the people who should really know, while Bostrom did not. In fact, the opposite is true: Bostrom did ask people who should really know, but Etzioni did not ask anyone at all.

Bostrom surveyed the top 100 most cited AI researchers. More than half of the respondents said they believe there is a substantial (at least 15 percent) chance that the effect of human-level machine intelligence on humanity will be "on balance bad" or "extremely bad (existential catastrophe)."

Etzioni's survey, unlike Bostrom's, did not ask any questions about a threat to humanity. Instead, it asked a single question about when we will achieve superintelligence. As Bostrom's data would have already predicted, somewhat more than half (67.5 percent) of Etzioni's respondents plumped for "more than 25 years" to achieve superintelligence; after all, more than half of Bostrom's respondents gave dates beyond 25 years for a mere 50 percent probability of achieving mere human-level intelligence. One of us (Russell) responded to Etzioni's survey with "more than 25 years," and Bostrom himself writes, of his own surveys, "My own view is that the median numbers reported in the expert survey do not have enough probability mass on later arrival dates."

Now, having designed a survey where respondents could be expected to choose "more than 25 years," Etzioni springs his trap: he asserts that 25 years is "beyond the foreseeable horizon" and thereby deduces that neither Russell nor indeed Bostrom himself believes that superintelligent AI is a threat to humanity. This will come as a surprise to Russell and Bostrom, and presumably to many other respondents in the survey. (Indeed, Etzioni's headline could just as easily have been "75 percent of experts think superintelligent AI is inevitable.")

Should we ignore catastrophic risks simply because most experts think they are more than 25 years away? By Etzioni's logic, we should also ignore the catastrophic risks of climate change and castigate those who bring them up.

Contrary to the views of Etzioni and some others in the AI community, pointing to long-term risks from AI is not equivalent to claiming that superintelligent AI and its accompanying risks are "imminent." The list of those who have pointed to the risks includes such luminaries as Alan Turing, Norbert Wiener, I. J. Good, and Marvin Minsky. Even Oren Etzioni has acknowledged these challenges. To our knowledge, none of them ever asserted that superintelligent AI was imminent. Nor, as noted above, did Bostrom in Superintelligence.

Etzioni then repeats the dubious argument that "doom-and-gloom predictions often fail to consider the potential benefits of AI in preventing medical errors, reducing car accidents, and more." The argument does not even apply to Bostrom, who predicts that success in controlling AI will result in "a compassionate and jubilant use of humanity's cosmic endowment." The argument is also nonsense. It's like arguing that nuclear engineers who analyze the possibility of meltdowns in nuclear power stations are "failing to consider the potential benefits" of cheap electricity, and that because nuclear power stations might one day generate really cheap electricity, we should neither mention, nor work on preventing, the possibility of a meltdown.
Our experience with Chernobyl suggests it may be unwise to claim that a powerful technology entails no risks. It may also be unwise to claim that a powerful technology will never come to fruition. On September 11, 1933, Lord Rutherford, perhaps the world's most eminent nuclear physicist, described the prospect of extracting energy from atoms as nothing but "moonshine." Less than 24 hours later, Leo Szilard invented the neutron-induced nuclear chain reaction; detailed designs for nuclear reactors and nuclear weapons followed a few years later. Surely it is better to anticipate human ingenuity than to underestimate it, better to acknowledge the risks than to deny them.

Many prominent AI experts have recognized the possibility that AI presents an existential risk. Contrary to misrepresentations in the media, this risk need not arise from spontaneous malevolent consciousness. Rather, the risk arises from the unpredictability and potential irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives. This problem was stated clearly by Norbert Wiener in 1960, and we still have not solved it. We invite the reader to support the ongoing efforts to do so.