Former Google executive describes 4 main dangers of artificial intelligence
The California Senate introduced a bill last week that would require Amazon (AMZN) to reveal details of the productivity-tracking algorithm used in its warehouses. Meanwhile, Facebook (FB) faced criticism this week over a Wall Street Journal report revealing that the company knew its Instagram platform made some teenage girls feel worse about themselves.
These developments are not so much a backlash against Big Tech itself as against its algorithms, which use artificial intelligence (AI) to tailor content to individual users or monitor employees.
In a new interview, AI expert Kai-Fu Lee, who has worked as an executive at Google (GOOG, GOOGL), Apple (AAPL), and Microsoft (MSFT), explained the four main dangers posed by AI technology amid its ongoing boom: externalities, personal data risks, the inability to explain consequential choices, and warfare.
“The biggest danger is autonomous weapons,” he said.
“This is when AI can be trained to kill, and more specifically to murder,” added Lee, co-author of a new book titled “AI 2041: Ten Visions for Our Future.” “Imagine a drone that can fly on its own and search for specific people, either with facial recognition or with cellular signals or whatever.”
“It changes the future of war”
A ban on autonomous weapons has received support from 30 countries, although an in-depth report commissioned by Congress advised the United States to oppose a ban because it could prevent the country from using weapons already in its possession.
In 2015, prominent tech figures like Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, along with thousands of AI researchers, signed an open letter calling for a ban on such weapons.
Autonomous weapons will transform warfare, because their affordability and precision will make it easy to wreak havoc and nearly impossible to identify the perpetrator, Lee said.
“I think this changes the future of terrorism, because terrorists no longer risk losing their lives to do something bad,” he said. “It also allows a terrorist group to use 10,000 of these drones to carry out something as terrible as genocide.”
“It changes the future of war,” he added. “We have to figure out how to ban or regulate it.”
The second significant risk posed by artificial intelligence is the unintended negative consequence that results when AI pursues a single goal to the exclusion of other concerns, Lee said.

“Externalities happen when we tell AI to do something, and it’s so good at doing that thing that it forgets, or in fact ignores, other externalities or negative impacts it can cause,” he said.
“So when YouTube keeps sending us videos that we’re most likely to click on, not only is that no accident, but it also potentially sends me very negative or very one-sided views that could shape my thinking,” he added.
According to the Wall Street Journal report, internal Facebook research from 2019 found that Instagram worsened body image issues for one in three teenage girls who used the app and had experienced such issues.
Teens attributed increased rates of anxiety and depression to Instagram, according to Facebook’s internal research, the Journal reported. For its part, Facebook is testing a way to prompt users to take a break from Instagram, two of its researchers told the Journal. The researchers also noted that some of their studies involved a small number of users, and that in some cases the causality of their results was unclear.
Kai-Fu Lee has been at the center of AI development for decades, since he helped develop speech recognition and automated speech technology as a doctoral student at Carnegie Mellon University.
Since 2009, he has been CEO of Sinovation Ventures, a technology-focused venture capital firm in China with more than $2.5 billion in assets under management.
Speaking to Yahoo Finance, Lee cited a final pair of AI dangers: vulnerable personal data and an inability to explain the consequential decisions made by the technology.
Decisions made by AI are especially crucial in life-and-death situations, such as the thought experiment known as the trolley problem, in which a decision-maker must choose whether to divert a runaway trolley headed toward several people in its path, at the cost of killing fewer people on an alternate track, Lee said.
“Can AI explain to us why it made the decisions it made?” he said. “In key things like driving autonomous vehicles, the trolley problem, medical decision-making, surgeries.”
“It’s getting serious,” he added.