On Our Minds: Nick Bostrom on Computer Superintelligence

January 4, 2015

Oxford University philosopher Nick Bostrom offers a cautionary take on artificial intelligence in his new book, Superintelligence: Paths, Dangers, Strategies. In it, he imagines what could happen if computers ever become smarter than humans. He tells Steve Paulson that the consequences could be catastrophic unless we start thinking about the problem now.

Guest(s): Nick Bostrom

Comments

I am so tired of people worrying about AI taking over the world. Here's the thing: computers are just another tool. They don't WANT anything. Just as we have created powerful machines and weapons that kill people, we may reach a point where computers kill people, but it will be because of improper use by a human, either accidental (the way people are killed by farm machinery if they aren't careful enough) or malicious (as when someone plants a bomb). It will NOT be because computers decided on their own that they don't like us. Even in the scenario where someone programs a computer to maximize production of paper clips: first of all, we provide the raw materials, so it needs us; and second, if its software was written so that it decided killing humans helped it reach its goal, that's a failure of the programmer, who neglected to include "Don't kill humans" as one of the goals.
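To make that last point concrete, here is a minimal toy sketch (all names and actions are hypothetical, invented purely for illustration) of the difference between an objective maximized with no limits and the same objective with the omitted goal added as an explicit constraint:

```python
# Toy sketch: an unconstrained maximizer versus one whose objective
# includes an explicit safety constraint. Illustrates the point that
# a harmful choice reflects a missing goal in the specification, not
# any volition on the machine's part.

def unconstrained_plan(actions, paperclips_made):
    # Picks whichever action yields the most paper clips, regardless
    # of side effects -- no action is ever off limits.
    return max(actions, key=paperclips_made)

def constrained_plan(actions, paperclips_made, harms_humans):
    # Same maximization, but actions failing the safety predicate are
    # filtered out before the optimizer ever considers them.
    safe = [a for a in actions if not harms_humans(a)]
    return max(safe, key=paperclips_made) if safe else None

if __name__ == "__main__":
    actions = ["mine_ore", "recycle_scrap", "strip_city_for_metal"]
    output = {"mine_ore": 10, "recycle_scrap": 7, "strip_city_for_metal": 100}

    # Without the constraint, the highest-yield action wins.
    print(unconstrained_plan(actions, output.get))   # strip_city_for_metal

    # With "don't harm humans" written in, the harmful action is excluded.
    print(constrained_plan(actions, output.get,
                           lambda a: a == "strip_city_for_metal"))  # mine_ore
```

Either way the program just optimizes what it was given; the difference between the two outcomes is entirely in what the programmer wrote down.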

I don't know. Hallway thermostats are already smarter than most journalists and life goes on, precariously.