The Quest for the Master Algorithm

November 29, 2015


Machines that program themselves are all around us and they get smarter every day. Computer scientist Pedro Domingos says there's now a race to create the one algorithm to rule them all. But are you ready for the master algorithm that can tell a machine how to learn anything?



Comments

This intellectual sludge is the creation of sad, lonely CS majors who've learned to gain attention from popular media by tricking credulous yokel journalists into thinking that these techniques are new and represent the advent of some novel interaction between people and machines. In fact, they're old and mundane.

Journalists have come to love to say the word "algorithm" the way small children love to tell fart jokes. "Formula" is just as good but insufficiently other-worldly (the "other world" being the one with educated people in it) to entice the rubes.

"Algorithms" have been in use for thousands of years; a cookbook author writing "salt and pepper to taste" in a recipe is specifying one.

It turns out that susceptibility to this sort of carnival barking is inversely proportional to a journalist's distance from the M.I.T. Official P.R. Department ...er, "Media Lab".

Pedro Domingos is being naive about the threat from superhuman AI. He makes silly arguments for why "these fears are overblown":

1) Most AI researchers are not worried. Well, duh. Of course they're not going to admit that their livelihood will produce an existential threat to humanity. As Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!" And despite that, there are lots of very respected AI researchers who are very worried.

2) "They are confusing being intelligent with being human." No, they are not. The fact that you think they are is proof that you didn't understand what they said.

3) AI will not be human. Yeah, that's exactly what's so scary. It will NOT have human values.

4) AI will not be conscious. Even if true (debatable), so what? You don't need to be conscious to be dangerous. Is smallpox conscious?

5) AI will not have a will of its own. So what? It will have a goal function, and that will almost certainly not be compatible with your own. A factory robot will still take your head off if you get in its way, regardless of whether it has "will" (whatever that is). Its goal function is to move its welder to the assigned position, and your head integrity does not enter into that goal in any way.

6) We can just design them to solve the problems we set them. That's EXACTLY the problem. We will give them flawed goals, which they will flawlessly execute. See the factory robot example above. It still applies, no matter how intelligent the robot. In fact, the more intelligent the robot, the more dangerous it is. Here Mr. Domingos falls into his own trap; he assumes that intelligent robots will be human enough to include not-bashing-heads in their own goal function.
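The factory-robot argument in points 5 and 6 can be made concrete with a minimal sketch in Python. This is a hypothetical illustration, not anything from the broadcast: the robot's entire goal function rewards only reaching the assigned welding position, and an obstacle (your head) simply never appears in that goal, so the planner routes straight through it.

```python
# Hypothetical sketch of a "flawed goal, flawlessly executed":
# a 1-D welder controller whose objective is only distance to target.

def goal(position, target):
    """The robot's entire objective: distance to the welding target."""
    return abs(target - position)

def plan_path(start, target, step=1):
    """Greedily minimize the goal. Nothing else exists, as far as it knows."""
    path = [start]
    pos = start
    while pos != target:
        pos += step if target > pos else -step
        path.append(pos)
    return path

path = plan_path(start=0, target=10)
head_position = 5  # your head happens to sit on the welder's route

# The plan passes straight through the obstacle: the goal function is
# optimized perfectly, and that is exactly the problem. No "will",
# no consciousness, no malice required.
assert head_position in path
```

Making the robot smarter only makes it better at minimizing `goal`; head safety improves only if someone remembers to put a head-safety term into the objective.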

It's obvious that Mr. Domingos is unqualified to answer the question about whether super-human AIs are dangerous.