Source: TED Ideas

By Evan Butler

The technological singularity is the approaching point at which artificial intelligence will evolve so fast that predicting the future becomes impossible. There are three types of AI. First, Artificial Narrow Intelligence (ANI), which specializes in one area, like playing chess. Second, Artificial General Intelligence (AGI), which can do almost anything intellectual that a human can do: it can think about a whole spectrum of ideas at different depths and plan just like a human. AGI is not yet a reality, but it may be soon. Third, Artificial Super Intelligence (ASI), which would be millions or billions of times more intelligent than all of humanity and would open one of two doors: extinction or immortality. ASI is self-teaching and makes improvements to itself that compound exponentially; instead of learning particular functions or ideas like the previous types of AI, its algorithms program it to learn. Between one and ten years after its creation, a self-improving AI could be as smart as a human, and an hour after that it could be 100 times smarter. Futurist Ray Kurzweil says that "It's stupid to think that humans could control something millions of times smarter than them."
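To see why that kind of self-improvement explodes so quickly, here is a toy calculation (a minimal sketch in Python; the starting capability and the assumption that each improvement cycle doubles capability are invented for illustration, not drawn from any real system):

    # Toy model of recursive self-improvement: each cycle, the system
    # uses its current capability to improve itself, so gains compound.
    capability = 1.0               # hypothetical baseline: 1.0 = human level
    for cycle in range(1, 11):
        capability *= 2            # hypothetical: each cycle doubles capability
        print(f"cycle {cycle}: {capability:.0f}x human level")
    # Ten doublings already yield ~1,000x human level, which is why the jump
    # from human-level to far-beyond-human is claimed to happen so fast.

Even with a modest per-cycle gain, compounding produces the runaway curve described above.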

If ASI is "good," almost anything is possible. Reversing global warming, curing diseases, and colonizing the galaxy could all become possible with ASI. Immortality could be achieved by inserting billions of nano-robots into the body to make constant repairs, or by uploading our consciousness to the internet. If ASI is malevolent, it could destroy the human race simply because humans have the power to turn it off. Bill Gates, Stephen Hawking, and Elon Musk all believe that super AI is our biggest existential threat.

Neuroscientist Sam Harris says that "it's not that machines will become spontaneously malevolent and rise up against us, but the slightest deviation of our goals from theirs could mean our destruction." Think about our relationship with ants. Humans don't go out of their way to destroy all ants, but when the ants' goals get in the way of our own, we destroy them. We picture an intelligence spectrum with humans and ants at opposite ends. In reality, the gap between humans and ants is infinitesimal compared to the gap between humans and a super AI billions of times more intelligent than we are.

Sam Harris illustrates the power of super AI with another thought experiment: "suppose we built an AI as smart as a team of MIT researchers." Electronic circuits process information about one million times faster than biochemical ones, which means the machine would think one million times faster than the humans who built it. Left running for a week, it would churn through a million weeks of human-level intellectual work, roughly 20,000 years. Harris believes two things are true: first, super AI is inevitable and necessary for solving the world's problems; and second, we had better figure out how to control it before it becomes smarter than us.
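The arithmetic behind that 20,000-year figure is easy to check (a back-of-the-envelope sketch in Python; the one-million speedup factor is Harris's assumption):

    # Check Harris's figure: a machine thinking 1,000,000x faster than
    # its builders, left running for one week of wall-clock time.
    SPEEDUP = 1_000_000        # assumed electronic-vs-biochemical speed ratio
    WEEKS_PER_YEAR = 52.18     # average weeks in a calendar year

    human_weeks = 1 * SPEEDUP              # one machine-week of thought
    human_years = human_weeks / WEEKS_PER_YEAR
    print(f"{human_years:,.0f} years")     # ~19,165, i.e. roughly 20,000 years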

How do we create ASI without signing humanity's death warrant? Philosopher Nick Bostrom says, "we need to create AI that learns our values and is motivated to carry out things humans would approve of." To understand the control problem, think about chimps: why haven't chimps taken over the world? Because we are more intelligent and plan around them; if a chimp gets rowdy, its keeper may feed it a banana laced with sedatives. Bostrom's warning is that, next to ASI, we are the chimps. He says, "maybe the technology it gives us to solve the world's energy crisis is really a biochemical agent that will exterminate all humans." It's hard to outsmart an optimizing information-processing machine that is a billion times smarter than you. One thing is certain: super AI will be humanity's last invention, for better or for worse.
