28 July 2019

A deeper, too long look into Lc0 v. Stockfish

In the previous post I took a look at the odd beginning and ending of a game between Lc0 and Stockfish. At the end white is clearly superior, so the b5 blunder doesn’t really matter.

Let’s scroll through the whole game and spot other moves that even an amateur could classify as mistakes and could have played better — allegedly.

24 July 2019

Chess engines dance on opening

I was taking a look at how chess engines do their magic, and the answer is: through brute force, more or less. Recently (so to speak) new roads have opened up to be explored, but anyway the strongest “classical” engines do not attempt to imitate human players: that approach proved to be unfeasible — provided the purpose is to have a very strong computer player.
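The brute-force core of a classical engine is, at heart, a minimax search with alpha-beta pruning. Here is a minimal, game-agnostic sketch in Python; the move generator, move application, and evaluation function are placeholders I made up for illustration, not real chess code:

```python
# Minimal alpha-beta minimax sketch. `moves`, `apply_move` and `evaluate`
# are placeholder callables standing in for a real move generator and a
# real position evaluator; a chess engine's strength comes from running
# this loop millions of times per second with a much smarter evaluation.

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for m in ms:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False,
                                       moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # beta cutoff: the opponent would avoid this line
                break
        return best
    else:
        best = float("inf")
        for m in ms:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True,
                                       moves, apply_move, evaluate))
            beta = min(beta, best)
            if beta <= alpha:  # alpha cutoff
                break
        return best

# Toy demo: states are numbers, a "move" adds 1 or 2, the evaluation is
# the number itself; at depth 2 the maximizer can reach 3 at best.
best = alphabeta(0, 2, float("-inf"), float("inf"), True,
                 lambda s: [1, 2], lambda s, m: s + m, lambda s: s)
```

A real engine piles move ordering, transposition tables, quiescence search and much more on top of this skeleton; the pruning only cuts branches that provably cannot change the result.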

Even if a good chess engine is far stronger than the common amateur chess player, that doesn’t mean its moves always make sense.

I’ve found that games between chess engines can be particularly interesting because the engines’ motives can look totally obscure. The absence of a human plan for at least one color can make the engines dance pointlessly, even when we are talking about chess engines that could make Kasparov’s life hard.

Pointlessly… one can argue: how do you know, if you are less than an amateur chess player? Let me rephrase: pointlessly, apparently. But sometimes it is clear.

Take a look at this rapid game, Lc0 vs Stockfish.

Lc0 is one of those chess engines that use a new approach, namely a neural network1.

In that game, after 4 moves the situation is this:

[chessboard diagram]

That is, the initial position, except that white has lost its f2 pawn. Does this make any sense? Would a human opponent have allowed this?

According to this Shredder’s online opening database2, in the ECO A02 opening (Bird’s Opening) black doesn’t have ♞h6 among its possibilities. According to this one3, instead, there are 14 games with that move. Chess.com too classifies it as Bird’s Opening, Horsefly Defense (ECO A03), and counts 13 games (as of today)4.

After 1. … ♞h6, white offers its pawn by pushing it forward: 2. f5. This is odd, and I claim it doesn’t make any sense; you don’t need to be a super engine or a GM to see it5. Stockfish grabs the pawn, of course: 2. … ♞×f5.

Now Lc0 plays 3. ♘f3, which is “obviously” ok. Stockfish moves its knight back to h6, and this is ok too, as far as I can tell.

But then Lc0, instead of continuing to develop its pieces, moves the knight back: 4. ♘g1. This does not make any sense. I, less than an amateur player, consider this a blunder.

However, ok, Lc0 has its neural network; who knows what’s going on inside its head.

What about Stockfish? It could gain a tempo. Instead, it chooses to play 4. … ♞g8, as if to give white back what is his by right — at least black has gained a pawn…

But this lost pawn doesn’t steal the win from Lc0: after 117 moves and a ply, Stockfish blunders and Lc0 mates.

  1. f7 b5??
  2. f8=Q#

Now, I can’t be sure about the best move, but it can’t be b5.

[chessboard diagram]

This

  1. … ♝×e6

at least avoids the imminent checkmate. Black’s destiny can’t be changed at that point, but I don’t think the engine has a make-the-agony-short algorithm; thus b5 is a blunder, even if anything else (as far as I can see) wouldn’t have changed the result.
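For what it’s worth, classical engines usually do encode something like a shortest-mate preference: a forced mate is scored as a large constant minus the distance to the mate in plies, so the winning side prefers the quickest mate and the losing side, minimizing, drags the game out as long as it can. A minimal sketch of that common convention (the constant and the names are mine for illustration, not Stockfish’s actual code):

```python
# Common mate-score convention in classical chess engines. The constant
# and function names here are illustrative only.

MATE_VALUE = 100_000  # far above any material-based evaluation score


def mate_score(plies_to_mate):
    """Score, from the winner's viewpoint, of a forced mate that many plies away."""
    return MATE_VALUE - plies_to_mate


def is_mate_score(score):
    """True if a score encodes a forced mate (assuming mates within 1000 plies)."""
    return abs(score) >= MATE_VALUE - 1000

# Because shorter mates score higher, a maximizing winner picks the
# quickest mate, while a minimizing loser picks the longest defense.
```

Whether that mechanism kicked in here, and how the black blunder slipped through anyway, I can’t say.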


  1. Nonetheless, I still classify this approach as brute force, even if most of it was done in advance and elsewhere.↩︎

  2. In battles between chess engines there can be rules that exclude the use of opening databases, and this could explain this odd beginning.↩︎

  3. Click on the explore link. Anyway, the site requires a fee to be used fully.↩︎

  4. White wins 30.8%, draw 23.1%, black wins 46.1%.↩︎

  5. According to chess.com and those 13 games, the game can continue with ♘f3, ♘h3 (and the history of those games — not an interesting statistic, indeed — says white can win), b3, or e4 (among those 13 games, white lost when it played these last two moves).↩︎

31 March 2019

On Salvation (TV series)

Here it is: a post just to begin this new year by saying this blog is dead… but it isn't, really!

I've just finished the second season of Salvation.

According to Wikipedia, Salvation is an “American suspense drama television series”. One thing is beyond doubt: it's American, and by “American” I don't mean Canada, the Estados Unidos Mexicanos, or any of the South American (the continent) countries. I mean just this:

In fact it is full of all the classic USA rhetoric, stereotypes, characters, and so forth. Almost everything has already been seen; there's abuse of (cheap) suspense and engagement techniques — like when a character acts in such a dumb way just to create a problem to be fixed…1

Nonetheless, everything is also well packaged (as often happens with this kind of show) and they keep a good pace; still, I had to skip over filler moments or scenes, among which I include scenes that supposedly should give psychological complexity to the characters or spice up conflicts a bit, but to me were boring commonplaces or annoying, overloaded ideological speeches/“ruminations”.

Beware: there's a big SPOILER ALERT here… but in between there are other spoiler alerts (with details you can choose to reveal).

29 May 2016

On few episodes of Numb3rs

I'm watching Numb3rs. As usual with this kind of show, you can find things that look unsound and unrealistic.

You must never forget it is a TV show, after all, even if it pretends to portray the reality of math, how real smart guys are, or whatever.

I've taken notes on a few episodes since season three. Now I dump them here (just to prove this blog isn't as dead as Heaven on a Saturday night).

About errors in general, it's also fun/interesting to read other resources, e.g.

Despite these things, the mathematics and the topics they hint at are real, and there are so many things — too many to all be known by a single man like Charlie Eppes — worth looking at. E.g. see the following link:

4 May 2015

String around the rod

I've decided to feed my blogs with puzzles, sometimes. There are already two examples packed into one post on the other blog. I will use this blog to post about puzzles when I decide not to show how to tackle them with a computer language as a tool.

Usually you can find a solution online which is written better, explained better, presented better, and so on. This is why these posts must still be read as pointless ramblings, and why I consider it relevant to also show whatever could be described as suboptimal paths or even, maybe, wrong solutions. Be warned: don't be fooled!

So, in this post the problem is Sunday puzzle n. 28, which «stumped 96% of America's top math students»: String around a rod.

25 June 2014

Ramblings about ignorance (and knowledge?)

Time to scatter dots of futile digital ink on this blog too. I was asking myself about ignorance.

At the moment, I am at the point where I imagine there exist at least two kinds of ignorance.

To ignore something can mean to lack specific knowledge about a specific topic. That knowledge would make us able to talk about the matter and to recognize whether someone, pretending to be an expert, talks about it with true competence. We can even chime in, without making the audience think we are just lambasting reasonlessly and incompetently1 — that is, in some circumstances, trolling.

When you are aware of the fact that you lack specific knowledge, you can switch to a sort of “learning mode” when it seems to you that a speaker shows competence about the matter. This happens more likely when he is a recognised authority2; but it can also happen because we trust his claims about his own competence, or because we already believe that a specific piece of knowledge and/or argument is the right one… but this brings us into the second kind of ignorance, so let's ignore the latter observation for now.

This is usually how we learn actively: we know we lack specific knowledge, then we go and gather information, read books, watch documentaries, experiment, seek out experiences that can teach us something about the topic we want to learn about, attend conferences, talk to people who, supposedly, know about the subject, and so on. (All these things, and more I have not mentioned, imply trust in the sources and, in the end, give us a certain level of knowledge about the topic.)

Doing so, we are “filling” a hole.

This is the good kind of ignorance, since we are aware of it and of its weakness, so we have no special reason to stick with what we already believe we know. We may already have a vague knowledge about something, weakly tied to something else we know; but the important thing to consider is what we rationally think about that vague knowledge: we are aware it is valueless, and in fact we are well prepared to replace it (to fill the gap), under appropriate conditions3.

Imagine it as if there were a hole with enough room that you can pour good things4 into it, and these good things will bring the bad things to the surface and break their weak bonds with our consolidated knowledge, if there is any bond at all… thus we can wipe out the “bad things” easily.

While the first kind of ignorance can be imagined as a hole ready to be “filled”, the second kind of ignorance is rather identical to actual knowledge, but it is wrong knowledge, of course (highlighted with a red opaque circle in the picture).

Therefore it is hard to change: it is strongly bound to other knowledge, which in turn can be “true” or “false”, but what really matters is that it is (deeply) believed to be true — and then everyone saying otherwise must be wrong, or a liar.

This is the kind of ignorance that is going to be problematic sooner or later. You can't fix it simply, unless you are an authority stronger than the whole “net” of authorities which contributed to the fundamental grounds that make the subject believe he (or she) has a good knowledge of the topic. Moreover, since he thinks he competently masters the matter, when he recognizes someone who lacks that knowledge and is at the same time well prepared to learn and accept, rather than refuse, his spots, he will pose as a teacher and an authority on the matter — the more incompetent he really is, the more likely he is to act like this, deceiving every potential receiver.

There is a higher probability that noxious fake knowledge spreads and settles among similar “low-level” people, incapable of discriminating between sincere but incompetent, insincere but competent or incompetent, and both sincere and competent teachers — and of course they lack any method to check, even approximately, the knowledge they are absorbing.

In general, knowledge spreads more easily among culturally matching people who trust each other and recognize themselves in the other — as if they saw their own image in a mirror. Thus clusters of “opposite” knowledge5 emerge; especially in the case of an average low cultural level, different clusters are not able to “compare” their knowledge about a topic on rational grounds, so they are not prepared to change their minds (unless something abruptly discards a critical number of pillars).

An external observer — by which I mean an observer who is not an advocate of a particular point of view, and hence of the specific knowledge that supports it — finds it hard to pick the set of clusters which is closest to a hypothetical ideal right knowledge, provided he has a way to get an idea about it… He can, however, apply a method6 to come up with a judgement worth listening to… This implies two facts: 1) the external observer has a method… and 2) other people are able to acknowledge that he has one, and hence that his “opinion” about the “investigated knowledge” is valid; the next logical step should be this: they take his conclusion into account and change their minds! But this rarely happens.

To make it short…

Final thought: the context the “new knowledge” is presented in, and the previous good knowledge, are important, since the context can make it easier or harder to recognize knowledge as bad, and since bad knowledge links with more difficulty to previous correct knowledge (because it would produce dissonance), so the “subject” would resist wrong knowledge more easily — but then the opposite is also true… does this mean that ignorant humans with a huge load of bad knowledge7 are doomed to become even worse?

How can they be saved?


People interested in the tools I make in support of the bulls### I write here can clone (or just take a look at) the github repository for this blog.


  1. But we need to assume the audience itself is able to realize whether a lambaster is as competent as the speaker, or competent enough to say whatever he is saying. This may not be the case, unfortunately. Since a “low-level” audience could be the worst judge of the competence of a speaker, it can easily be fooled by an incompetent lambaster into believing that he has some interesting point that needs an answer or a counterargument. When the audience itself is not competent enough, its resistance to wrong arguments by incompetent lambasters can depend not on the argument itself, but on its trust (or even faith) in the speaker's competence, or on how strongly it is already convinced of the argument. I am talking about arguments that need specific knowledge to be understood correctly.↩︎

  2. Namely, a recognized expert on the matter. But the sociology of mass media suggests that we can wrongly believe someone has some kind of competence to talk about a topic because he is an expert about something else, or because he or she is a “very” important person — famous, and invited to talk about the topic in a (topical) talk show or similar… Thus we are inclined to value what he or she says as if he or she were competent.↩︎

  3. Why can we already have some kind of knowledge, then? We can suppose that we “create” for ourselves “knowledge” about “objects of the world”, pushed by internal or external motivations (or both), as needed to form or judge opinions we were exposed to or think we may be exposed to. We “create” this knowledge and tend to keep it in an uncertain state; experiences, events, … or direct attempts to acquire the knowledge can deflate/suppress or inflate/fix the constellation of “spots” or “nodes” that define that knowledge, and can weaken or strengthen their links with other “spots”, maybe part of consolidated knowledge, sets of known facts, … Time can weaken and deflate the “spots” as well, especially if they have few connections with consolidated knowledge, or none at all. Moreover, created or imported “spots” are more likely to survive if we are somehow able to link them with other existing spots.↩︎

  4. Indeed, in general, there is no a priori reason to be sure that we are pouring in good things: we could be deceived into believing something, so that we are poisoning our knowledge — and maybe discarding good intuitive knowledge instead!↩︎

  5. I am supposing we can call one the right knowledge and the other the wrong one, but we are oversimplifying: e.g. they could both be wrong, yet have a non-empty intersection with the set of ideal right knowledge… Indeed, why should we think there are just two options? There is in fact a “sea”, and only a subset has a big enough intersection with the set of the ideal right knowledge. But the hard part could be figuring out which ones have this property…↩︎

  6. The way science acquires knowledge is in fact a method to check if that knowledge is reliable. But the topics I am thinking about when writing this are not exactly in the realm of what the scientific method can investigate easily.

  7. And of course totally lacking any smart method to check new knowledge, bad or good: dribbles of scientific method, logical reasoning… things like these.↩︎