Time to scatter dots of futile digital ink on this blog too. I was asking myself about ignorance.
At the moment, I am at the point where I imagine there exist at least two kinds of ignorance.
To be ignorant of something can mean to lack specific knowledge about a specific topic. That knowledge would enable us to talk about the matter and to recognize whether someone, pretending to be an expert, talks about it with true competence. We could even chime in without making the audience think we are just lambasting baselessly and incompetently[1] — that is, in some circumstances, trolling.
When you are aware that you lack specific knowledge, you can switch to a sort of “learning mode” when a speaker seems to show competence about the matter — this is more likely to happen when he is a recognized authority[2]; but it can also happen because we trust his claims about his own competence, or because we already believe that a specific piece of knowledge and/or argument is the right one… but this brings us to the second kind of ignorance, so let's set the latter observation aside for now.
This is usually how we learn actively: we know we lack specific knowledge, so we go and gather information, read books, watch documentaries, experiment, seek out experiences that can teach us something about the topic we want to learn about, attend conferences, talk to people who supposedly know the subject, and so on. (All these things, and more I have not mentioned, imply trust in the sources and, in the end, they give us a certain level of knowledge about the topic.)
Doing so, we are “filling” a hole.
This is the good kind of ignorance: since we are aware of it and of its weakness, we have no special reason to stick with what we already believe we know. We may, though, already have some vague knowledge about the subject, weakly tied to something else we know. The important thing to consider is what we rationally think about our vague knowledge: we are aware it has little value, and in fact we are quite prepared to replace it (to fill the gap) under appropriate conditions[3].
Imagine it as a hole with enough room that you can pour good things[4] into it; these good things will bring the bad things to the surface and break their weak bonds with our consolidated knowledge, if there is any bond at all… thus we can wipe out the “bad things” easily.
While the first kind of ignorance can be imagined as a hole ready to be “filled”, the second kind looks just like actual knowledge, except that it is wrong knowledge, of course (highlighted with a red opaque circle in the picture).
Therefore it is hard to change: it is strongly bound to other knowledge, which in turn can be “true” or “false”, but what really matters is that it is (deeply) believed to be true — so anyone who says otherwise must be wrong, or a liar.
This is the kind of ignorance that becomes problematic sooner or later. You can't fix it easily, unless you are an authority stronger than the whole “net” of authorities which contributed to the foundations that make the subject believe he (or she) has a good knowledge of the topic. Moreover, since he thinks he competently masters the matter, when he recognizes someone who lacks that knowledge and is at the same time well prepared to learn and accept, rather than refuse, his “spots”, he will pose as a teacher and an authority on the matter — and the more incompetent he really is, the more likely he is to act like this, deceiving every potential receiver.
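To make the metaphor a bit more concrete, here is a toy sketch in Python. It is purely illustrative: the accepts_correction function, its authority parameter, and all the numbers are my own assumptions, not a real cognitive model. The idea is only that the chance of a correct claim replacing an old “spot” shrinks as the spot's bonds with the rest of our knowledge grow stronger.

```python
import random

# A toy sketch of the "spots and links" metaphor (invented numbers,
# not a real cognitive model). Each spot of knowledge has a binding
# strength: the total weight of its links to everything else the
# subject already believes.

def accepts_correction(binding_strength, authority=0.5):
    """Chance that a new, correct claim replaces an old spot.

    Weakly bound spots (the first kind of ignorance: a hole with
    some vague content) are replaced easily; strongly bound wrong
    spots (the second kind) resist unless the source's authority
    outweighs the whole net of beliefs behind them.
    """
    return random.random() < authority / (authority + binding_strength)

random.seed(0)
trials = 10_000
vague = sum(accepts_correction(0.1) for _ in range(trials)) / trials
entrenched = sum(accepts_correction(5.0) for _ in range(trials)) / trials
print(f"vague spot replaced:      {vague:.0%}")       # around 83%
print(f"entrenched spot replaced: {entrenched:.0%}")  # around 9%
```

The point is only qualitative: the same corrective claim that easily fills a hole bounces off a spot that is already woven into the subject's net of beliefs.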
There is a higher probability that noxious fake knowledge spreads and settles among similar “low-level” people, incapable of discriminating between sincere but incompetent, insincere but competent (or incompetent), and both sincere and competent teachers — and of course lacking any method to check, even approximately, the knowledge they are absorbing.
In general, knowledge spreads more easily among culturally matching people who trust each other and recognize themselves in one another — as if they saw their own image in a mirror. Thus clusters of “opposite” knowledge[5] emerge; especially where the average cultural level is low, different clusters are not able to “compare” their knowledge about a topic on rational grounds, so they are not prepared to change their minds (unless something abruptly knocks out a critical number of pillars).
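Again just to play with the idea, here is a minimal sketch in Python of how beliefs can homogenize within clusters of mutually trusting people while the clusters never compare notes. The groups, the copy-from-your-own-kind update rule, and the numbers are all invented for illustration; nothing here is a serious model of opinion dynamics.

```python
import random

# Toy model: agents copy beliefs only from culturally similar agents,
# so each group settles on its own "knowledge", independently of the
# other group and of which belief is actually right.
random.seed(1)
agents = [{"group": g, "belief": random.choice("AB")}
          for g in (0, 1) for _ in range(50)]

for _ in range(20_000):
    a, b = random.sample(agents, 2)
    if a["group"] == b["group"]:   # trust only those who mirror us
        a["belief"] = b["belief"]  # absorb their belief uncritically

# Typically each group ends up (nearly) unanimous, each on its own side.
for g in (0, 1):
    held = [x["belief"] for x in agents if x["group"] == g]
    print(f"group {g}: {held.count('A')} hold A, {held.count('B')} hold B")
```

Nothing in the update rule ever looks at whether A or B is true; unanimity emerges from mirroring alone, which is the point.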
An external observer — by which I mean an observer who is not an advocate of a particular point of view, and hence of the specific knowledge that supports it — finds it hard to pick out the set of clusters which is closer to a hypothetical ideal right knowledge, provided he has a way to form an idea of it… He can, however, apply a method[6] to come up with a judgement worth listening to… This implies two things: 1) the external observer has a method… and 2) other people are able to acknowledge that he has one, and hence that his “opinion” about the “investigated knowledge” is valid; the next logical step should be that they take this conclusion into account and change their minds! But this rarely happens.
To make it short…
Final thought: the context the “new knowledge” is presented in, and the pre-existing good knowledge, are both important, since the context can make it easier or harder to recognize knowledge as bad, and since bad knowledge links less easily to previous correct knowledge (because it would produce dissonance), so the “subject” resists wrong knowledge more easily — but then the opposite is also true… does this mean that ignorant humans with a huge load of bad knowledge[7] are doomed to become even worse?
How can they be saved?
People interested in the tools I make in support of the bulls### I write here can clone (or just take a look at) the GitHub repository for this blog.
[1] But we need to assume the audience itself is able to tell whether a lambaster is as competent as the speaker, or competent enough to say whatever he is saying. This may not be the case, unfortunately. Since a “low-level” audience can be the worst judge of a speaker's competence, it can easily be fooled by an incompetent lambaster into believing that he has some interesting point that needs an answer or a counterargument. When the audience itself is not competent enough, its resistance to wrong arguments by incompetent lambasters may depend not on the argument itself, but on its trust (or even faith) in the speaker's competence, or on how strongly it is already convinced of the argument. I am talking about arguments that require specific knowledge to be understood correctly.
[2] Namely, a recognized expert on the matter. But the sociology of mass media suggests that we can wrongly believe someone is competent to talk about a topic because he is an expert on something else, or because he or she is a “very” important person — famous, say, and invited to talk about the topic on a (topical) talk show or similar… Thus we are inclined to value what he or she says as if he or she were competent.
[3] Why might we already have some kind of knowledge, then? We can suppose that we “create” for ourselves “knowledge” about “objects of the world”, pushed by internal or external motivations (or both), as needed to form or judge opinions we have been exposed to, or think we may be exposed to. We “create” this knowledge and tend to keep it in an uncertain state; experiences, events, … or direct attempts to acquire the knowledge can deflate/suppress or inflate/fix the constellation of “spots” or “nodes” that define that knowledge, and can weaken or strengthen their links with other “spots”, perhaps part of consolidated knowledge, sets of known facts, … Time can weaken and deflate the “spots” as well, especially if they have few connections with consolidated knowledge, or none at all. Moreover, created or imported “spots” are more likely to survive if we are somehow able to link them with other existing spots.
[4] Indeed, in general, there is no a priori reason to be sure that we are pouring in good things: we could be deceived into believing something, thereby poisoning our knowledge — and maybe discarding good intuitive knowledge instead!
[5] I am supposing we can call one the right knowledge and the other the wrong one, but this is an oversimplification: e.g. both could be wrong, yet have a non-empty intersection with the set of ideal right knowledge… Indeed, why should we think there are just two options? There is in fact a “sea” of them, and only a subset has a big enough intersection with the set of ideal right knowledge. But the hard part may be figuring out which ones have this property…
[6] The way science acquires knowledge is in fact a method for checking whether that knowledge is reliable. But the topics I have in mind while writing this are not exactly in the realm of what the scientific method can investigate easily.
[7] And of course totally lacking any smart method to check whether new knowledge is bad or good: bits of scientific method, logical reasoning… things like that.