Professors Want to Keep Robots from Getting Out of Control
Experts from around the country, including professors from Harvard University and MIT who study the future of artificial intelligence and its potential impacts on the human race, are signing a pledge to keep robotic advancements from going the way of Will Smith’s I, Robot.
In an open letter, members of The Future of Life Institute (FLI), a Cambridge-based, volunteer-led organization made up of some of the area’s top computer scientists, AI researchers, physicists, and statisticians, called for support to ensure that research into making smart machines “do what we want them to do” remains a top priority as technology rapidly improves.
“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” the letter states. “Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”
The group, cofounded by Max Tegmark, a professor of physics at MIT, and Viktoriya Krakovna, a PhD student in statistics at Harvard, outlined those “pitfalls” in a supplement to the letter dubbed the “research priorities mission.” The supplement breaks down FLI’s goals as an organization, including the importance of maintaining human control when constructing autonomous weapons and machines.
The “priorities mission” raises the concern that we “could one day lose control of AI systems via the rise of super-intelligences that do not act in accordance with human wishes,” and that such powerful systems would threaten humanity, citing Stanford University’s recent “One Hundred Year Study on Artificial Intelligence.”
But the group isn’t solely talking about the potential of Terminator-like machines scouring the streets in search of world domination. Its research has a timelier, more realistic aim: addressing more immediate advances in autonomous weapons, cars, and other gadgets people are already experimenting with.
“A lot of the potential issues are not necessarily embodied in autonomous robots getting out of control—there’s also the potential problem where you might have some type of program, which isn’t necessarily malicious in any way, that could just have a goal programmed in the wrong way,” said Krakovna. “In past years, research has primarily focused on developing AI capabilities. And given how quickly they are developing now, it’s important to work on and really focus on the safety aspects and control aspects.”
Other areas of research FLI is looking to crack into include how machines intersect with laws and ethics, privacy risks, and the impacts robots may have on people’s jobs. “We are trying to get a large number of researchers to agree that there is important work to be done, and important research that goes into making AI robots beneficial in the future,” said Krakovna. “We would like to encourage research in these directions outlined in the ‘research priorities’ document.”
The letter’s reach extends far beyond local researchers as well. Already, the pledge has been signed by the likes of Tesla Motors CEO Elon Musk; Jaan Tallinn, cofounder of Skype; and renowned physicist Stephen Hawking.
“We believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today,” the letter states.
You can read the full scope of the group’s research initiatives below: