25 October 2020

Better Than Us (humans)

Watching Better Than Us, or Better Than Humans, that is, Лучше, чем люди in Russian. Nowadays it seems to me that it’s rather easy to be better than humans: it is enough not to be human. Hence even a robot can be better, even a dumb, basic one.

Like all TV shows, this one too needs to inject splinters of stupidity and inexplicable blindness into its characters in order to build suspense, of a sort, and to keep plots and counter-plots from being uncovered too soon, which would shorten the show by reducing the number of maybe-possible scenes.

Like the following spoiler. Possible spoiler alert for those who are enjoying the show from the beginning: maybe it’s better if you don’t read this.

17 September 2019

An unfinished chess engine playing against itself

I am looking into the fascinating, hard world of computer chess programming. This is a resource surely known by everyone who shares this interest:

I have currently started four projects in four different languages: C, Perl6, GNU AWK (yes!) and Ada (of course). My purpose isn’t to write strong chess engines: that would be a demanding and absorbing aim, and moreover there are already many open source chess engines written in several languages (not Perl6 or AWK, though, and just one in Ada, AdaChess); one of the strongest, if not the strongest, is Stockfish. Competing in this serious business would be more than simply hard; it would also be pointless.

So far the only project that begins to “play” is the one written in Perl6.

Let’s see its first checkmate against itself.
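For context, the skeleton of a self-play loop is simple. Here is a sketch in Python using the python-chess library (just an illustration of the idea, not my Perl6 engine’s code), with random move selection standing in for an actual search:

```python
import random
import chess  # third-party: the python-chess library

random.seed(42)  # make the random game reproducible

board = chess.Board()
while not board.is_game_over():
    # A real engine would search here; we just pick any legal move.
    move = random.choice(list(board.legal_moves))
    board.push(move)

# How the game ended: result and termination reason (mate, draw, ...).
print(board.result(), board.outcome().termination.name)
```

The loop always terminates because `is_game_over()` also covers automatic draws (stalemate, fivefold repetition, the seventy-five-move rule, insufficient material).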

28 July 2019

A deeper, too long look into Lc0 v. Stockfish

In the previous post I took a look at the odd beginning and ending of a game between Lc0 and Stockfish. At the end white is clearly superior, so the b5 blunder doesn’t really matter.

Let’s scroll through the whole game to spot other moves that even an amateur could classify as mistakes and, allegedly, could improve on.

24 July 2019

Chess engines dance on opening

I was taking a look at how chess engines do their magic, and the answer is: through brute force, more or less. Recently (so to speak) new roads have been explored, but the strongest “classical” engines do not attempt to imitate human players: that approach proved unfeasible, provided the purpose is to have a very strong computer player.
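The brute-force idea in miniature: search every line to the end and pick the move that forces the best outcome. A toy sketch in Python (my own illustration, not any real engine’s code) on the game of Nim, whose whole tree is small enough to search exhaustively:

```python
# Plain minimax on the toy game of Nim: players alternately take 1-3
# sticks, and whoever takes the last stick wins. Real engines add
# alpha-beta pruning, evaluation functions and much more, but the
# skeleton is this same exhaustive recursion over the game tree.

def side_to_move_wins(sticks):
    """True if the player to move wins with perfect play."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return False
    # Winning means having at least one move that leaves the
    # opponent in a losing position.
    return any(not side_to_move_wins(sticks - take)
               for take in (1, 2, 3) if take <= sticks)

# Nim theory says the side to move loses exactly when sticks % 4 == 0.
for n in range(1, 9):
    print(n, side_to_move_wins(n))
```

Chess works the same way in principle; it is only the astronomical size of its tree that forces engines to prune, approximate, and stop the recursion early with an evaluation function.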

Even if a good chess engine is far stronger than the common amateur chess player, that doesn’t mean its moves always make sense.

I’ve found that games between chess engines can be particularly interesting, because the engines’ motives can look totally obscure. The absence of a human plan for at least one color can make the engines dance pointlessly, even when we are talking about chess engines that can make Kasparov’s life hard.

Pointlessly… one can argue: how would you know, being less than an amateur chess player yourself? Let me rephrase: pointlessly, apparently. But sometimes it is clear.

Take a look at this rapid game, Lc0 vs Stockfish.

Lc0 is one of those chess engines which use a new approach, namely a neural network1.

In that game, after 4 moves the situation is this:

[Board diagram: position after move 4]

That is, the initial position, except that white has lost its f2 pawn. Does this make any sense? Would a human opponent have allowed this?

According to Shredder’s online opening database2, in the ECO A02 opening (Bird’s Opening) black doesn’t have ♞h6 among its possibilities. According to this one3, instead, there are 14 games with that move. Chess.com too classifies it as Bird’s Opening, Horsefly Defense (ECO A03), and it counts 13 games (as of today)4.

After 1. … ♞h6, white offers its pawn by pushing it forward: 2. f5. This is odd, and I claim that it doesn’t make any sense and that you don’t need to be a super engine or a GM to see it5. Stockfish grabs the pawn, of course: 2. … ♞×f5.

Now Lc0 plays 3. ♘f3, which is “obviously” fine. Stockfish moves its knight back to h6, and this is fine too, as far as I can tell.

But then Lc0, instead of continuing to develop its pieces, moves the knight back: 4. ♘g1. This does not make any sense. I, a less-than-amateur player, consider this a blunder.

However, fine: Lc0 has its neural network; who knows what’s going on inside its head.

What about Stockfish? It could gain a tempo. Instead, it chooses to play 4. … ♞g8, as if to give white back what was white’s by right; at least black has gained a pawn…

But this lost pawn doesn’t steal the win from Lc0 after 117 moves and a ply. In fact Stockfish blunders and Lc0 mates.

  1. f7 b5??
  2. f8=Q#

Now, I can’t be sure about the best move, but it can’t be b5.

[Board diagram: the position where black played b5]

This

  1. … ♝×e6

at least avoids the imminent checkmate. Black’s destiny can’t be changed at that point, but I don’t think the engine has a make-the-agony-short algorithm, so b5 is a blunder, even if anything else (as far as I can see) wouldn’t have changed the result.
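Out of curiosity, this kind of claim can be checked mechanically. A sketch with the python-chess library, on a minimal constructed position showing the same f8=Q# pattern (not the actual game position, whose FEN I don’t have here):

```python
import chess  # third-party: the python-chess library

# A tiny constructed position: white pawn on f7 ready to promote,
# white king on g6 boxing in the black king on h8.
board = chess.Board("7k/5P2/6K1/8/8/8/8/8 w - - 0 1")
board.push_san("f8=Q")        # promote: the new queen checks along the 8th rank
print(board.is_checkmate())   # the king has no flight square left
```

The same approach, fed the real position and a real engine behind it, is how one would verify that ♝×e6 postpones the mate while b5 allows it immediately.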


  1. Nonetheless, I still classify this approach as brute force, even if most of the work was done in advance and elsewhere.↩︎

  2. In battles between chess engines there can be rules that exclude the use of opening databases, and this could explain this odd beginning.↩︎

  3. Click on the explore link. Anyway, the site requires a fee to be used fully.↩︎

  4. White wins 30.8%, draw 23.1%, black wins 46.1%.↩︎

  5. According to chess.com and those 13 games, white can continue with ♘f3, ♘h3 (and the history of those games, not an interesting statistic indeed, says white can win with these), b3, or e4 (among those 13 games white has lost when it played these last two moves).↩︎

31 March 2019

On Salvation (TV series)

Here it is, a post just to begin this new year by saying this blog is dead… but it isn't, really!

I've just finished the second season of Salvation.

According to Wikipedia, Salvation is an “American suspense drama television series”. One thing is beyond doubt: it's American, and by “American” I don't mean Canada, Estados Unidos Mexicanos, or any country of South America (the continent). I mean just this:

In fact it is full of all the classic USA rhetoric, stereotypes, characters, and so forth. Almost everything has been seen before; there is abuse of (cheap) suspense and engagement techniques, like when a character acts in such a dumb way just to cause a problem to be fixed…1

Nonetheless, everything is also well packaged (as often happens with this kind of show) and the pace is good; still, I had to skip over filler moments and scenes, among which I include the ones that are supposed to give psychological complexity to the characters or spice up the conflicts a bit, but which to me were boring commonplaces or annoying, overloaded ideological speeches/“ruminations”.

Beware: there's a big SPOILER ALERT here… but in between there are other spoiler alerts (with details you can choose to reveal).

29 May 2016

On a few episodes of Numb3rs

I'm watching Numb3rs. As usual in this kind of show, you can find things that look unsound and unrealistic.

You must never forget it is a TV show, after all, even if it pretends to portray the reality of math, how real smart guys are, or whatever.

I've taken notes on a few episodes since season three. Now I dump them here (just to prove this blog isn't dead as Heaven on a Saturday night).

About errors in general, it's also funny/interesting to read other resources, e.g.

Despite these things, the mathematics and the topics they hint at are real, and there are so many things worth looking at, too many to all be known by a single man like Charlie Eppes. E.g. see the following link:

4 May 2015

String around the rod

I've decided to feed my blogs with puzzles, sometimes. There are already two examples packed into one post in the other blog. I will use this blog to post about puzzles when I decide not to show how to tackle them with a computer language as a tool.

Usually you can find a solution online that is written better, explained better, shown better, and so on. This is why these posts should still be read as pointless ramblings, and why I consider it relevant to also show whatever could be described as suboptimal paths or even, maybe, wrong solutions. Be warned: don't be fooled!

So, in this post the problem is Sunday puzzle n. 28, which «stumped 96% of America's top math students»: String around a rod.
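The classic trick for this kind of puzzle is to unroll the rod's lateral surface into a rectangle, so that each turn of string becomes the hypotenuse of a right triangle. A quick check of the idea in Python, using what I recall as the puzzle's numbers (four complete turns around a rod 12 cm long with a 4 cm circumference; treat these figures as assumptions, the exact ones are in the puzzle itself):

```python
import math

turns = 4            # complete turns of string (assumed)
length = 12.0        # rod length in cm (assumed)
circumference = 4.0  # rod circumference in cm (assumed)

# Unrolled, each turn is the hypotenuse of a right triangle whose legs
# are the circumference and an equal share of the rod's length.
per_turn = math.hypot(circumference, length / turns)
string_length = turns * per_turn
print(string_length)  # → 20.0
```

With these numbers each triangle is the 3-4-5 one, which is probably why the puzzle was built this way.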