28 July 2019

A deeper, too long look into Lc0 v. Stockfish

In the previous post I took a look at the odd beginning and ending of a game between Lc0 and Stockfish. At the end white is clearly superior, so the b5 blunder doesn’t really matter.

Let’s scroll through the whole game to spot other moves that even an amateur could classify as mistakes and, allegedly, improve on.

24 July 2019

Chess engines dance on opening

I was taking a look at how chess engines do their magic, and the answer is: through brute force, more or less. Recently (so to speak) new roads have opened up, but in any case the strongest “classical” engines do not attempt to imitate human players: that approach proved unfeasible, at least if the purpose is to have a very strong computer player.
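
Just to make “brute force” concrete, here is a minimal sketch of the classical skeleton: a plain negamax search with alpha-beta pruning over a toy material evaluation. The python-chess library and the piece values are my own illustrative choices, not something from the engines discussed here; real “classical” engines pile an enormous amount of refinement on top of this, but the shape is the same.

    import chess

    # Toy material values; real engines use far richer evaluations.
    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def evaluate(board):
        """Material balance from the point of view of the side to move."""
        score = 0
        for piece_type, value in PIECE_VALUES.items():
            score += value * len(board.pieces(piece_type, chess.WHITE))
            score -= value * len(board.pieces(piece_type, chess.BLACK))
        return score if board.turn == chess.WHITE else -score

    def negamax(board, depth, alpha, beta):
        """Exhaustive search with alpha-beta pruning: the brute-force core."""
        if depth == 0 or board.is_game_over():
            return evaluate(board)
        best = -10_000
        for move in board.legal_moves:
            board.push(move)
            best = max(best, -negamax(board, depth - 1, -beta, -alpha))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:   # the opponent already has a better option: prune
                break
        return best

    def best_move(board, depth=3):
        best, best_score = None, -10_000
        for move in board.legal_moves:
            board.push(move)
            score = -negamax(board, depth - 1, -10_000, 10_000)
            board.pop()
            if score > best_score:
                best, best_score = move, score
        return best

    board = chess.Board()
    print(board.san(best_move(board)))   # prints whatever move the toy search prefers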

Even if a good chess engine is far stronger than the common amateur chess player, it doesn’t mean that its moves always make sense.

I’ve found that games between chess engines can be particularly interesting because the engines’ motives can look totally obscure. The absence of a human plan for at least one color can make the engines dance pointlessly, even if we are talking about chess engines that can make Kasparov’s life hard.

Pointlessly… one could object: how would you know, being less than an amateur chess player? Let me rephrase: apparently pointlessly. But sometimes it is clear.

Take a look at this rapid game, Lc0 vs Stockfish.

Lc0 is one of those chess engines which use a new approach, namely a neural network1.
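
Roughly, and only to give an idea of the interface (this is a toy illustration of the policy/value idea, nothing close to Lc0’s real architecture or search): the network maps a position to a value and to a prior over the legal moves, and the search is guided by those priors instead of a handcrafted evaluation. In the sketch below the “network” is just random numbers, to show the shape of the interface.

    import random
    import chess

    def toy_network(board):
        """Stand-in for a trained net: returns (value, priors over legal moves)."""
        moves = list(board.legal_moves)
        weights = [random.random() for _ in moves]      # pretend "policy head"
        total = sum(weights)
        priors = {m: w / total for m, w in zip(moves, weights)}
        value = random.uniform(-1.0, 1.0)               # pretend "value head"
        return value, priors

    board = chess.Board()
    value, priors = toy_network(board)
    best_by_prior = max(priors, key=priors.get)
    print(f"value={value:+.2f}, highest-prior move: {board.san(best_by_prior)}")
    # A real engine like Lc0 grows a Monte Carlo search tree around the
    # high-prior moves rather than trusting the raw prior alone.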

In that game, after 4 moves the situation is this:

[Chessboard diagram: the position after move 4]

That is, the initial position, except that white has lost its f2 pawn. Does this make any sense? Would a human opponent have allowed this?

According to Shredder’s online opening database2, in the ECO A02 opening (Bird’s Opening) black doesn’t have ♞h6 among its options. According to this one3, instead, there are 14 games with that move. Chess.com also has it, classified as Bird’s Opening, Horsefly Defense (ECO A03), and it counts 13 games (as of today)4.
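
Incidentally, this sort of lookup is easy to reproduce locally. A hedged sketch with python-chess and a Polyglot opening book (“book.bin” is a placeholder filename of my own, not any of the databases linked above):

    import chess
    import chess.polyglot

    board = chess.Board()
    board.push_san("f4")                 # Bird's Opening

    # "book.bin" is a placeholder: any Polyglot-format opening book would do.
    # entry.move as an attribute assumes a recent python-chess (1.x).
    with chess.polyglot.open_reader("book.bin") as reader:
        replies = {board.san(entry.move): entry.weight
                   for entry in reader.find_all(board)}

    print(replies)             # move -> weight, depending entirely on the book
    print("Nh6" in replies)    # does this book know the Horsefly Defense at all?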

After 1. … ♞h6, white offers its pawn by pushing it forward: 2. f5. This is odd, and I claim that it doesn’t make any sense and that you don’t need to be a super engine or a GM to see it5. Stockfish grabs the pawn, of course: 2. … ♞×f5.

Now lc0 plays 3. ♘f3, which is “obviously” ok. Stockfish moves its knight back to h6, and this is ok too, as far as I can tell.

But then lc0, instead of continuing to develop its pieces, moves the knight back: 4. ♘g1. This does not make any sense. I, a less-than-amateur player, consider it a blunder.

However, ok, lc0 has its neural network; who knows what’s going on inside its head.

What about Stockfish? It could gain a tempo. Instead it chooses to play 4. … ♞g8, as if to give white back what is rightfully its own. At least black has gained a pawn…
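
For what it’s worth, the claim that we are back to the starting position minus white’s f-pawn is easy to check, for example with python-chess:

    import chess

    board = chess.Board()
    # 1. f4 Nh6 2. f5 Nxf5 3. Nf3 Nh6 4. Ng1 Ng8
    for san in ["f4", "Nh6", "f5", "Nxf5", "Nf3", "Nh6", "Ng1", "Ng8"]:
        board.push_san(san)

    reference = chess.Board()
    reference.remove_piece_at(chess.F2)   # starting position without white's f2 pawn

    print(board.board_fen() == reference.board_fen())   # True: same piece placement
    print(board.fen())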

But this lost pawn doesn’t steal the win from lc0: after 117 moves and a ply, Stockfish blunders and lc0 mates.

  1. f7 b5??
  2. f8=Q#

Now, I can’t be sure about the best move, but it can’t be b5.

[Chessboard diagram: the position after white’s f7, before black’s blunder]

This

  1. … ♝×e6

at least avoids the imminent checkmate. Black’s fate can’t be changed at that point, and I don’t think the engine has a make-the-agony-short algorithm, so b5 is a blunder, even if anything else (as far as I can see) wouldn’t have changed the result.


  1. Nonetheless, I still classify this approach as brute force, even if most of it was done in advance and elsewhere.↩︎

  2. In battles between chess engines there can be rules that exclude the use of opening databases, and this could explain the odd beginning.↩︎

  3. Click on the explore link. Note that the site requires a fee to be used fully.↩︎

  4. White wins 30.8%, draw 23.1%, black wins 46.1%.↩︎

  5. According to chess.com and those 13 games, white can continue with ♘f3 or ♘h3 (the history of those games, hardly a meaningful statistic, says white can win with these), or with b3 or e4 (among those 13 games white lost whenever it played one of these last two moves).↩︎