Engineering Intelligence

Friday, April 18, 2014


Despite the technological leaps made in the realm of artificial intelligence, people often object to the idea that the minds of machines can ever replicate the minds of humans. But for engineers, the proof is in the processing. Brooke talks with Stanford lecturer and entrepreneur Jerry Kaplan about how the people who make robots view the field of AI. 


Jerry Kaplan

Hosted by:

Brooke Gladstone

Comments [11]

Gary M Washburn from Vermont

I was gratified to learn that Turing had a far more modest notion of machine intelligence than his latter-day fans. But there is a far more robust claim that can be made on behalf of being human. There is a way in which the Turing test, or its latter-day version the Chinese Room, can be easily beaten by the human. But there is something that needs to be recognized that cognitivists stubbornly refuse to see: the neglected qualifier.

There is no more 'cognitive dissonance' than the propositional form of predication so dogmatically clung to by today's AI crowd, unless it is the enigmatic 'if...then'. (If there is any if about it there is no if about it, and if there is no if about it it's pretty iffy!) There is a contrariety in which the greatest extremity, if the least term, of rigor is neither contradictory nor complementary. As we speak we engage that contrariety against conventional terms, and in that engagement we are no more contradicting than complementing each other in differing with convention.

The ramifying of this limit to the 'cognitive assonance' of logical forms is a long and arcane road, but suffice it here to point out that the crux of the matter is the difference between the quantifier and the qualifier in the standard form of predication. The upshot is this: when we lose count, we begin to think. This achievement is what makes us human. When a machine loses count it undergoes the mechanical equivalent of madness: it crashes. That difference beats the Turing test hands down.

Aug. 10 2014 02:40 PM
Will Caxton from Noosphere

"In a way where we sublimate our needs to the needs of, of these machines ..." "Sublimate" is not the right word; I imagine Mr Kaplan probably means "subordinate". This topic is much too big to treat adequately in one radio show. At present, it isn't even important; it's theoretical, but it may be important some day. We can start by making a scale of conscious awareness, from zero up to human. Most of us agree that it's morally wrong to treat humans as if they were inanimate objects, because humans are capable of subjective experiences like physical pain and emotional pain, and humans have their own, independent goals and desires. Other biological organisms range somewhere between zero and human on the scale of consciousness (as far as we know). Right now, all machines are at or very close to zero, but that may not be true forever. Some day, there might be machines with a non-zero degree of conscious awareness, in which case abusing or enslaving those machines would be morally wrong. I know there are people working on the problem of recognizing machine intelligence and what to do in case it happens; I hope they have some good answers by then.

"... they will want to accomplish certain things. And if the best way to do that is to nag you or to cry at you or to put on a sad face, they might very well do that." Here Mr Kaplan seems to contradict himself. If machines aren't conscious, they can't "want" anything, except in a metaphorical sense, like a virus "wants" to reproduce. On the other hand, to date I've never encountered a virus able to put on a sad face. If machines are able to manipulate us, maybe they deserve to succeed.

Apr. 26 2014 09:13 PM
Will Caxton

"I can say, oh, Siri is thinking about my question, and you’re not confused about whether Siri is some kind of humanlike intelligence. It’s simply the use of the word." I don't agree that this represents any change in language; it's just a metaphor, like "shade-loving plants" or "sometimes my car decides it just doesn't want to start."

Apr. 26 2014 06:22 PM
Gary Bartlett

Larry from San Diego complains that "The test Turing actually proposed was for a man and a computer to try to convince someone they were a woman." This seems to be incorrect; and at best, it is certainly not the transparent fact that Larry presents it as being.

The original article itself ("Computing Machinery and Intelligence", 1950) leaves the matter unclear. Turing first describes the original imitation game: an interrogator tries to distinguish a man from a woman, and the man tries to make the interrogator believe that he is the woman. Turing then asks, "What will happen when a machine takes the part of [the man] in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" So given that, Larry's interpretation can look plausible. However, there is no further (relevant) mention of women in the article, which is a little odd if indeed Turing intended to be saying that the machine had to specifically impersonate a woman, not just a human.

While some have argued that Turing did indeed intend the test to be as Larry says, recent historical evidence presented by Jack Copeland ("The Turing Test", 2000, in the journal Minds & Machines) seems to conclusively show that he did not. In a BBC radio broadcast in 1951, Turing said the aim of the test is to determine whether a machine can "imitate the brain"; and in another broadcast in 1952, that "the machine has to try and pretend to be a man"! (I'd imagine that the latter is just the standard sexist language that was, and is, common; saying 'man' where one probably really means 'human'.)

Apr. 25 2014 06:19 PM
Leslie Walden from Detroit

Jerry Kaplan said, "Squirrels haven't evolved to deal with 2000 pound piles of metal that approach them at 60 miles an hour, and that's why they get run over." In my lifetime (since 1935), squirrels have evolved to live better with automobiles.
When I was five I began observing everything I could about driving. Squirrels, I noticed, when threatened by an automobile would attempt to return to the safety of the last tree they had been in, even when this was not the best way to survive. This happened in about seven cases out of eight.
In the 1960s, while I was teaching my son to drive, I observed that squirrels had improved in this respect and almost half would choose the better route to safety.
Now it appears that more than half run in the direction that takes them out of harm's way.
I have never seen squirrels that seemed to be giving or taking instruction, so I have to believe that this change is the result of natural selection -- the survival of the smarter.

Apr. 23 2014 10:03 PM
Rory Johnston from Hollywood CA

About that absurd 47% figure, are you not aware that people have been saying for many decades that computers are going to put most of us out of work? Yet it doesn't happen. See, for a start, the book "Hal's Legacy." Pundits constantly underestimate the complexity of human occupations.

Apr. 21 2014 01:43 AM
Jerry Kaplan from California

Hi folks, I'm 'the guest'! A few comments...

To Janet Baker: I realize the segment as broadcast didn't suggest any solutions to the pitfalls of the coming age of intelligent machines, but I definitely think we can - and must - implement solutions. The short story (bad news) is that our current economic system is in the process of running amok right now. The people who have the capital (intellectual and monetary) to build these systems are getting richer faster at other people's expense. I am quite confident about this because I see it up close, living in the world of Silicon Valley startups and venture capital.

The short story (good news) is that there are ways to tinker with the free market system to tilt things in a different direction without excessive government interference, redistributing wealth, or dampening innovation and motivation. The trick is to recognize that our overall level of wealth is likely to double in the next 40 years or less, just as it did in the previous 40 years. We need to create tax incentives that encourage corporations to distribute the ownership of their stock as widely as possible. In conjunction with a few other proposals I've cooked up, the system itself will take care of the inequality problem, by broadening the class of people who own the newly created assets.

This is a little hard to explain in a short blog comment, but I've just written a book about it, and hopefully can get it into print in the next 9-12 months or so. (Publishing is a long, slow process.) I hope you can be patient with me! :)

To Benshababo: I believe the 16 TB of RAM was the total across Watson's 90 servers, but that's not where the 'knowledge' was stored. From Wikipedia: "Although Watson was not connected to the Internet during the game, it contained 200 million pages of structured and unstructured content consuming four terabytes of disk storage, including the full text of Wikipedia." They likely loaded this into RAM at runtime and used the rest of the memory to represent the search space of potential answers as they were being evaluated.

To JessieHenshaw: Ha! My book agent (see above) says it's very refreshing to get a manuscript from someone so optimistic about the future. I haven't had the pleasure of reading your ideas (yet), but I think it's fair to say that the driving principle of our economic system is that (to paraphrase Gordon Gekko) growth is good. We grow our way out of debt, inflate our way out of commitments, etc. Sustainability may be a central principle of nature and ecologists, but as far as I can tell capitalists haven't gotten the memo yet.

While I have the keyboard, it was nothing short of amazing how Brooke and her producer managed to shoehorn some key points from an hour-long interview into this short segment without altering the meaning. My wife says they managed to make me sound a lot more articulate than I actually am. ;)

Thanks for all the thoughtful comments!

Apr. 21 2014 01:02 AM
Janet Baker from Chicago

This interview raised some interesting points about the effect of this most recent development in capitalism, AI, on workers--its devastating effect on them. The guest announced that 'Marx was right,' and went on to say that the future looked very dim for humans, not only for those on the lower end of the earning and training spectrum, but also for those at the higher end--doctors and intellectual workers of all kinds. Still, the guest had no solution to suggest, and the discussion returned to whether AI in itself was good or bad, right or wrong.
But it is not the fault of AI or machines in general, nor of the 'industrial age' or 'industrialism.' We do not have to choose between the help that machines offer and our own survival. Hilaire Belloc makes this point in The Servile State, a slim economic history of Europe.
If the industrial revolution had risen, as it inevitably would have, in a society that had not given up its sense of obedience to authority and morals, the material fruits of industrialism could have been shared among the entire population, because people were the owners of their communities, not anyone's employees to be put 'out of work' at will. If the industrial revolution had risen in the economic environment that prevailed before the so-called Reformation, its effects could have been positive. The technological advances would have been folded into the manufacturing enterprises organized by the guilds and cooperatively owned by an entire community of independent contractors, not employees.
At the beginning of the sixteenth century, virtually all of Europe's citizens were fully enfranchised and in possession both of their own land and the means to work it. Expensive means of production--the mills, for example--were usually owned cooperatively. The same cooperatives (called guilds) trained new workers and enforced regulations. A key point: nobody was allowed to get very rich, you had to pass on work if you'd earned enough, and your land could not be sold; it had to be passed on to the family. But still, you could not be evicted, you enjoyed a lot of leisure time, and mead was plentiful. So life was good! It had taken Europe fifteen hundred years to transform universal slavery into fully endowed and enfranchised free men, but it had been done. There is no reason to believe that machines could not have been incorporated into that peaceful, low-competition infrastructure, just as all prior advances had been.
Machines do not make the evil--men make the evil, in this case the capitalistic/Protestant revolution, which made men, money, and land into commodities. Within two generations of the rebellion, land ownership in Britain, for example, had fallen to 50%, with former owners thrown into day-to-day employment, and it's been downhill ever since.

The question is not is AI bad or good, dangerous or helpful. The question is who owns the proceeds.

Apr. 20 2014 03:36 PM
BenShababo from New York City

The guest says that Watson had 4 TB of memory and makes the comparison that today a 4 TB hard drive costs somewhere on the order of $100. He's trying to make a point about the rate at which computing power per dollar improves. But this is not a valid comparison.

According to the Watson Wikipedia entry, Watson had 16 TB of RAM. RAM is a different type of memory than a hard drive, on which you store data long-term. I don't know how much it would cost to buy 4 TB of RAM, but as a comparison, 250x less RAM - 16 GB - costs around $100-200 today. As another comparison, if you wanted a 16 GB hard drive (long-term storage) today, you'd probably buy it on a keychain.

While it is true that computing power has increased the possibilities of machine learning, and that it improves quickly, the guest's comparison greatly overestimates the rate.
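The size of the overestimate can be sketched with a quick back-of-the-envelope calculation. The prices are assumptions taken from the figures quoted in this thread (circa 2014): roughly $100 for a 4 TB hard drive and roughly $150 for 16 GB of RAM.

```python
# Rough check of the RAM-vs-disk comparison, using the (assumed)
# prices quoted above: ~$100 for 4 TB of disk, ~$150 for 16 GB of RAM.

disk_price, disk_gb = 100, 4000   # ~$100 for a 4 TB hard drive
ram_price, ram_gb = 150, 16       # ~$150 for 16 GB of RAM

disk_per_gb = disk_price / disk_gb   # ~$0.025 per GB of disk
ram_per_gb = ram_price / ram_gb      # ~$9.38 per GB of RAM

# Pricing Watson's RAM as if it were hard-drive space understates
# its cost per gigabyte by roughly this factor.
ratio = ram_per_gb / disk_per_gb
print(round(ratio))  # → 375
```

On these assumed prices, RAM comes out a few hundred times more expensive per gigabyte than disk, which is the gap the comment above is pointing at.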

Apr. 20 2014 11:06 AM

Good heavens, Brooke!
You should talk to someone with the hard goods and a FAR more positive view of what AI can be. It really does not need to be the "black hole" of machines pumping money for machines. Yes, "machines pumping money," whether for machines or for oligarchs, is indeed the insanely brutal future that lies directly ahead of us... but in life "something else" often happens to change course in midstream.

The "course change" for just this kind of jam is what nature frequently displays, as an ingeniously counter-intuitive way of solving. It's by changing the orientation of whole systems (as even Keynes envisioned would be needed to survive our end of exponential growth) from using its wealth to "build more and more" to using its wealth to "care for what it built".

The fact is that we understand that principle perfectly, but only for our own personal lives. We could equally apply it to our world at large, but we act as if "clueless" about how to "do it" for "the system of all our systems"...

There are quite direct paths available for people who look at the problem that way. I discuss several in my journal, Reading Nature's Signals. You're just not talking to the right people!


Apr. 20 2014 10:33 AM
Larry from San Diego

I hate to nitpick, but people misstating the Turing test is a particular pet peeve of mine. The test Turing actually proposed was for a man and a computer each to try to convince someone they were a woman. There is no more reason to think a computer would be better at being human than a human than there is to think a human would be better at being a squirrel than a squirrel.

You weren't, as is often done, calling on Turing as an authority to argue a point, so it's not clear that stating Turing's challenge incorrectly altered the listener's takeaway from the story. However, I do think that "On the Media" wants the media to do a better job of getting the science right.

Apr. 19 2014 06:49 PM
