Thursday, 15 September 2016

The Reason for Tilt

Black's banal moves 85-89 and 93 prompted a discussion between commentators Andrew Jackson and Kim Myungwan about bots going “on tilt” when they start losing.

Certainly it looked for all the world as if AlphaGo had suddenly lost her head, as the efficacy of her moves plummeted from the lofty standard of her earlier play to a level of naivety that kyu players would scoff at.

What could cause such an apparent mental breakdown?

Well, it wasn't a mental breakdown; AlphaGo is quixotic all the time, even when winning, even when playing jaw-dropping tesuji like she did in game 1.  Her detachment from reality only becomes apparent when we can see the error of her ways.

To see why bots sometimes seem to lose their heads, let's imagine a bot playing black, with 10 possible moves at each turn - call them b1,...,b10. For each of these, white has ten choices w1,...,w10, and so forth.

Now, suppose that for one of them - say, b4 - 9 out of the 10 white replies lead to wins for black. If white replies at random, the probability of b4 being a winning move is 90%.

Now suppose the winning probabilities are lower for all the other possible first moves. So b4 looks good.

Only one reply to b4 - let's say w3 - leads to a white win. But what if w3 were an obvious move? What if anyone with half a brain would play w3 in response to b4?
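The gap between these two evaluations can be sketched in a few lines of code. The numbers are the toy ones from the example above (b4 with ten replies, of which only w3 refutes it); the contrast is between averaging over all replies, as a naive Monte Carlo estimate does, and assuming the opponent plays his best reply, as minimax does.

```python
# Toy position: black has played b4; white has ten replies w1..w10.
# Values are from black's perspective: 1 = black wins, 0 = white wins.
# Only w3 refutes b4; the other nine replies lose for white.
replies = {f"w{i}": 1 for i in range(1, 11)}
replies["w3"] = 0  # the one refutation

# A uniform-random playout averages over all replies,
# so b4 looks like a 90% win for black.
naive_estimate = sum(replies.values()) / len(replies)

# An opponent with half a brain plays the reply that is worst
# for black, so against best play b4 wins 0% of the time.
minimax_value = min(replies.values())

print(naive_estimate)  # 0.9
print(minimax_value)   # 0
```

The 90% figure is a statement about a random opponent, not about the position.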

Lee Sedol played the obvious (forced) 84 in response to black 83, and 86 in answer to 85, and so on. We could all see that black cannot save his stones there - but AlphaGo couldn't, because she knows nothing about Go except how to count the final score to see who won. Her policy network operates like a set of autonomic stimulus-response reflexes learned from gazillions of trials; she has no overview, no plan, no commonsense.

MCTS bots sometimes choose moves that could only work if the opponent played tenuki. But tenuki at the first move of a path through the tree is just one case: in a lookahead sequence 20 moves long, the opponent moves ten times, so there are ten opportunities for him to tenuki - ten chances for the bot to win against a blind man.  And there are usually dozens, maybe hundreds, of other places the opponent could play instead of the obvious forced move.
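Those ten opportunities compound. A toy calculation, with hypothetical numbers (say fifty legal alternatives at each of white's ten turns), shows why a purely random playout almost never refutes such a line:

```python
# Hypothetical numbers: at each of white's ten turns in a 20-move line,
# white has 50 legal moves but only one forced reply refutes black.
p_forced = 1 / 50
turns = 10

# A uniform-random playout refutes the line only if it happens to
# play the forced reply at every one of the ten turns.
p_refuted = p_forced ** turns  # (1/50)**10, vanishingly small

# So the playout scores the line as a black win almost every time.
p_black_wins_in_playout = 1 - p_refuted
print(p_black_wins_in_playout)  # 1.0 (to floating-point precision)
```

Against a random opponent the line really is nearly a sure win; against Lee Sedol it is nearly a sure loss.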

But MCTS doesn't know when a move is forcing; it doesn't know that no opponent with his wits about him would tenuki on any of those ten occasions. It only knows that it has a 90% chance of winning if you are too dumb to make the obvious replies. It doesn't know that you are not that dumb. All it knows are the statistics.

Commonsense Go does know; it tests the viability of move candidates before proposing them.

I wonder: if AlphaGo used Commonsense Go instead of her policy network, would she play better by not going on tilt?