I just returned from attending a talk by Marvin Minsky, where I got to ask him a question afterwards. Chalk up another famous person that I got to meet! :)
His talk was interesting: it was about how AI (Artificial Intelligence) stopped making progress in 1980. I must say, I agreed with a lot of his criticisms of AI - I too believe we have been going the wrong way for a while. He stated that most AI research focuses on statistics and thereby loses much of its relevance, which has been the reasoning behind my approach to the Netflix Prize - statistics rest on underlying assumptions that might not hold in the space of predicting intelligent behavior.
My understanding of his belief is that Common Knowledge is what separates AI from humans, and this is where I started to disagree with him in a major way. He talked about some projects that have accumulated millions of pieces of Common Knowledge, and once they have it all codified... AI! Well, that is simplifying what he said, but it was his focus. I do not think this is the way to go at all; we need to have machines learn in unsupervised ways, not based on human understanding of the problem.
So, I thought of a question to ask him. I asked, "What research is being done to use AI to find knowledge beyond what humans currently know?" He said he didn't understand the question, so I rephrased it a bit and asked again, but got the same response. Then he said something to the effect of, "We do not even understand how a 4 year old thinks, how can we have AI beyond that? I think there is no limit to AI, so at some point we will have machines that can do more." Thus proving my point - his talk was bogus.
He and I share a ton of Common Knowledge, even beyond the basics like "a chair is for sitting on" and "you can pull with a string but not push with it": I have read some of his works, so we have read a few of the same things beyond common experience. He even has a paper on his website about Alien Intelligence (which I find to be factually incorrect), yet with all of that in common, my question didn't evoke any of that shared knowledge. We had more in common than either of us will ever have with a computer, no matter how many trivial pieces of Common Knowledge are entered into its database, and yet he didn't understand what I was really asking. Still, even though he didn't know what I meant, I learned a lot about him from his answer.
His view of AI is actually Artificial Human Intelligence, not AI. He thought we needed to create a simulation of a 4 year old, or at least understand one. What about an Alien Intelligence? Beyond that, look at all we can accomplish as humans without that understanding of how a 4 year old's mind works. Can we not have a machine that is Intelligent in some areas of human endeavor without being intelligent in all? Does it have to know how to fix a car as well as do chemistry? That was the essence of my question: what does he believe defines AI? And that is the question he answered for me without really knowing it.
I think we will only be able to judge that we have created AI when it can tell us something that we do not already know; otherwise it is just a database, just like Google.
So much to say, but I will conclude with this: his paper on Alien Intelligence. He believes that we could communicate with aliens no matter how far ahead of us they are. What about dolphins? We cannot communicate with them. Chimps? Gorillas? Let's go lower: ants - what can we say to them? Can we hear their conversations? They obviously communicate. Any aliens we meet will be so far beyond us that it would be like us talking to worms. What chance do we have of understanding any of their world? None. None at all.
The Edward