Monday, 2 March 2015

A.I. don't think so.

The Singularity - that point where machine intelligence surpasses human intelligence - is widely assumed to be imminent. My pal Alice recently brought this article to my attention. The author writes in a chatty, digressive style with amusing graphs and GIFs, but essentially his argument appears to be this:

1. The information processing abilities of computers have increased and will continue to increase.

2. This increase in information processing ability will one day equate to human intelligence.

3. A machine functioning at this level will be able to improve itself so that it surpasses human intelligence.

I think we can all agree that (1) is a sound proposition, but (2) has some fundamental flaws that I will address today, and (3) has some fundamental flaws that I might address in the future if this post generates any interest or I am stuck for an idea.

When it comes to computations, computers have the edge. When the IBM computer Deep Blue played chess grandmaster Garry Kasparov, it did not intuit the beauty of the game as any six-year-old might. Nor did it win by calculating the best move logically. The computer was programmed with the knowledge of millions of games and their outcomes and processed these - choosing moves that in identical situations had led to victory for other players. An impressive enough feat of processing power - but it bears no relation to how humans think, even about something as trivial as chess. Kasparov might as well have been playing a committee of a hundred grandmasters with access to all the books. He was convinced that he could have beaten Deep Blue in a rematch by taking the games in less well documented directions, but IBM had their headlines and their increased stock value, and their research into chess computing ceased. Kasparov continued to play and became involved in progressive Russian politics. Deep Blue was dismantled - half of it is on display at the National Museum of American History.

A game I found in the White Lion that had no rules with it.

An output from a computation can only be random or determined by its program. So a simple chess program might make a random move to start with, then make moves determined by its program. A key part of human intelligence is creativity. I remember as a child in chess club getting bored with playing chess and inventing new games to play with the pieces, ascribing them new moves that increased or decreased their powers. What if bishops moved like queens? What if you make one move, then I make two, then you make three, and so on? A computer program cannot do this. It is limited by its nature. This creative output of human intelligence is neither random nor determined by programming - I had, after all, been programmed with the rules of chess (if not with the discipline to apply them for the full hour) - and these are the only two options available to the computer.
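The "random or determined" point can be made concrete with a toy sketch. This is not a real chess engine, and the names here (`OPENING_BOOK`, `choose_move`) are invented for illustration: the program either looks up a move its code already dictates, or falls back to random choice - there is no third option available to it.

```python
import random

# Hypothetical lookup table: position -> move that "worked before".
# Everything in it was put there by the programmer.
OPENING_BOOK = {
    "start": "e4",
    "e4 e5": "Nf3",
}

def choose_move(position, legal_moves, rng=random):
    """Return a move for the given position.

    If the position is in the book, the output is fully determined
    by the program; otherwise it is random. No creativity either way.
    """
    if position in OPENING_BOOK:
        return OPENING_BOOK[position]       # determined by the program
    return rng.choice(sorted(legal_moves))  # random

print(choose_move("start", {"e4", "d4"}))    # always "e4" - determined
print(choose_move("unknown", {"a3", "h3"}))  # random: "a3" or "h3"
```

What this program can never do is what the bored child in chess club did: change the rules of the game it was given.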

The problem of creative endeavour in humans is not understood. We know it happens, and we experience it whenever we paint a watercolour, compose a song or engage in conversation. We have no explanation for it. To design computer programs that mimic or produce this function will require radical new ideas that explain how it works in humans. It may be a problem that human minds are incapable of solving. The futurologists who predict an imminent Singularity seem to think that these 'hard' problems of consciousness and creativity will themselves be solved by the increasingly intelligent machines. But I don't buy it. An increase in computing power alone will not give birth to human-level machine intelligence, and that is the minimum requirement we must demand if machines are going to solve our problems for us.

I'm off to play the calculation engine at my favourite chess website. Hopefully, it will not tip the board over if it starts to lose.

