A new survey finds that more than a third of scientists doing AI research believe that decisions made by AIs could trigger a debacle as bad as or worse than nuclear war. The research highlights growing concern that AI could have unintended dangerous effects, even as it promises many benefits. The way to stop computers from harming people? “Develop principles for safe and trustworthy AI,” Michael Huth, the head of the Department of Computing at Imperial College London and a senior researcher at the AI company Xayn, told Lifewire in an email interview. “This is already happening for deep learning applications, such as image classification, where one can verify, in principle, whether small input distortions can make the AI model misclassify a commercial airplane as an attacking fighter jet.”
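The kind of check Huth describes is often called adversarial robustness testing. The sketch below shows the simplest empirical version of the idea, not the formal verification Huth alludes to: nudge an input by a tiny gradient-guided amount (the fast gradient sign method) and see whether the model's predicted label flips. The PyTorch model, the epsilon value, and the helper name are illustrative assumptions, not details from the survey or the interview.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Hypothetical setup: a pretrained ImageNet classifier stands in for
# whatever model is being audited.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

def prediction_flips(image: torch.Tensor, epsilon: float = 0.01) -> bool:
    """Check whether an epsilon-sized FGSM perturbation changes the label.

    `image` is assumed to be a (3, 224, 224) tensor with values in [0, 1];
    `epsilon` bounds the per-pixel distortion.
    """
    x = image.clone().unsqueeze(0).requires_grad_(True)
    logits = model(x)
    original = logits.argmax(dim=1)

    # The gradient of the loss with respect to the input points toward
    # misclassification; step a tiny amount in that direction.
    loss = F.cross_entropy(logits, original)
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

    with torch.no_grad():
        perturbed = model(x_adv).argmax(dim=1)
    return perturbed.item() != original.item()

# Stand-in input: a random "image." A real audit would sweep epsilon
# over many real images (say, photos of airplanes).
if prediction_flips(torch.rand(3, 224, 224)):
    print("Not robust: a small distortion changed the prediction.")
else:
    print("Prediction stable under this particular perturbation.")
```

A single passing check like this proves little; robustness tools used in practice run many such attacks, or prove bounds over all perturbations within epsilon.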
General Intelligence?
The new research aimed to uncover opinions on whether AI will achieve artificial general intelligence (AGI), the ability of an AI to think similarly to a human, and the impact AI will have on human society. The authors surveyed scientists who had co-authored at least two computational linguistics publications. “The community in aggregate knows that it’s a controversial issue, and now we can know that we know that it’s controversial,” the team wrote in their paper.

Among the findings: 58 percent of respondents agreed that AGI should be an important concern for natural language processing, while 57 percent agreed that recent research had driven us toward AGI.

The hazards of AI are real, contends Huth. “The biggest threat comes from control of critical infrastructures, such as water supplies, hospitals, military defense systems, and the ability to discover biological weapons and to then use them on enemies,” he said.

As AI becomes increasingly sophisticated, it is used in more aspects of everyday life, products, and services, Huth said. It is also becoming more autonomous, operating without human monitoring or decision-making. “This can lead to unforeseen consequences,” he added. “For example, autonomous fighter jets may take an initiative that escalates a conflict and causes a huge loss of life. Another example is how AI can be used to boost hacker malware to penetrate critical systems faster and deeper, exposing nation states to attacks with a severity equal to that of conventional warfare.”

Another way AI could harm humans is through accidents, said Daniel Wu, a researcher at the Stanford AI Lab and cofounder of the Stanford Trustworthy AI Institute, which focuses on ways to make AI safe. In the infamous case of COMPAS, an algorithm used to inform criminal sentencing was shown to base its recommendations on biased and discriminatory factors.

“Imagine an AI tasked with manufacturing paperclips,” Wu said. “Now imagine that AI is given much more power than it needs. Suddenly, entire parts of our society and economy are repurposed into self-propagating paperclip factories. An AI doomsday is less likely to be an evil overlord and more likely to be a ‘paperclip optimizer.’ That’s what we’re working to prevent.”

AI researcher Manuel Alfonseca told Lifewire in an email that one dangerous scenario is if AI programs used in war get out of control. “As it would be difficult to test them in vitro, their use could lead to catastrophe,” he added. To prevent such scenarios, Alfonseca said that AI programs should be carefully designed and tested before being put into use. “And the data used to ‘teach’ the programs should be carefully selected,” he added.
The Power for Good
It’s not all grim news when it comes to the future of AI, as researchers say the field has the potential to make many positive contributions.

Huth said that using AI to monitor people’s health through sensor readings, mobility data, and other methods “will have tremendous benefits to the individual’s health and public health—with opportunities of cost savings to the taxpayer.” Such AI can be designed and deployed with privacy in mind “while still having the general benefits to the broader public.”

Wu predicted that specialized AI will help us navigate the internet, drive our cars, and trade our stocks. “AI should be treated as a tool; at the end of the day, it will be humans building a better world,” he said.