%% Last Updated: - [[2021-03-03]] %%

Author: [[futureoflife.org]]
Full Title: Benefits & Risks of Artificial Intelligence - Future of Life Institute
URL: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

### Highlights first synced by [[Readwise]] [[2020-09-17]]

- Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task - the long-term goal of many researchers is to create general AI (AGI or strong AI).
- Another short-term challenge is preventing a devastating arms race in lethal autonomous weapons.
- As pointed out by I.J. Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion leaving human intellect far behind.
- Experts think two scenarios most likely:
	- The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. These weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation.
	- The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult.

> The concern about advanced AI isn’t malevolence but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we have a problem.

> People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control? ^ea5f23

> In such a poll of the AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was by year 2045, but some researchers guessed hundreds of years or more.

- Stuart Russell, author of the standard AI textbook
- The consciousness misconception is related to the myth that machines can’t have goals.