Articles to 2017-04-07

Zum Seitenende      Übersicht Artikel      Home & Impressum

First the link to this week’s complete list as HTML and as PDF.


What is an author? An author is someone who has taken part in drafting and writing an article, who has proof-read and authorized the final draft, and who is prepared to take full responsibility for any mistakes or misconduct that might turn up later. Should he fail to satisfy any of those three criteria, he is not an author and his appropriate place is in the acknowledgements. Levis et al. boast a list of 148 full authors.


The current answer to any local pain is to flood the whole body and all its nerve cells with some active chemical, inducing possible side effects in all of them. Spahn et al. have devised a way to activate their compound only where it’s actually needed and useful.


This week (behaviourally) modern humanity’s pre-eminence is getting it from both barrels and all sides. First we have surprising learning capacity in bees (Loukola et al.) and then both language (Gärdenfors & Högberg) and advanced social capabilities (Rodríguez-Hidalgo et al.) long before the arrival of anatomically modern humans or Neanderthals.


DeCasien et al. look reasonable and well done, but I have to take it all on trust. Everybody knows that p is pressure and P is power, but it is still spelt out in every diagram caption and table header I’ve come across. Not so when non-statisticians report statistics. They label their table rows "Lambda", "Kappa", and "Delta", and that’s all the explanation we’re ever going to get. Alright, there is "Pagel’s lambda" hidden somewhere in the methods, but that’s still just a longer name. One gets the impression the data were dumped into some statistics package, those values came out, and the authors themselves have no idea what they mean, just a little table telling them the ranges of good versus bad numbers. And if they come out good, they’re published; if not, the parameters are fiddled until they do.
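For readers left equally in the dark: Pagel’s lambda is not a black-box score but a simple scaling of the phylogenetic covariance matrix — the off-diagonal entries (shared evolutionary history between species) are multiplied by lambda while the diagonal stays fixed, so lambda = 1 keeps the full tree structure and lambda = 0 removes all phylogenetic signal. A minimal sketch, with a made-up toy matrix purely for illustration:

```python
import numpy as np

def pagel_lambda_transform(V, lam):
    """Scale the off-diagonal (shared-history) entries of a
    phylogenetic covariance matrix V by lam, leaving the diagonal
    variances untouched."""
    V = np.asarray(V, dtype=float)
    out = lam * V                     # scales everything...
    np.fill_diagonal(out, np.diag(V)) # ...then restore the diagonal
    return out

# Toy covariance matrix for three species, two of them close relatives
# (values invented for this example, not taken from DeCasien et al.):
V = np.array([[1.0, 0.8, 0.2],
              [0.8, 1.0, 0.2],
              [0.2, 0.2, 1.0]])
print(pagel_lambda_transform(V, 0.5))
```

A fitted lambda near 1 thus says the trait tracks the phylogeny closely; near 0, that relatedness explains nothing — which is the kind of one-sentence gloss the paper never gives.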


Without doubt, Shi et al.’s is a very worthwhile and well-done study with reliable results. The problem lies in their interpretation and in ascertaining what their chosen metrics actually mean. According to their abstract and Nature’s editorial they seem to confirm typical liberal prejudices. But do they?

If liberals pick their reading from more general subjects like physics and astronomy, that will be textbooks or summaries aimed at a general audience. Current developments and hard science are found in the more specialised categories, which is what conservatives read.

This is confirmed by what they interpret as the proclivity of conservatives to read applied rather than fundamental science. A look at supplementary figure 3 shows that being cited by patents (their measure of "applied") is nearly synonymous with being cited by other books and articles, i.e. it picks out hard and primary science from the popular and secondary kind.

There is one area that stands out particularly: more than two thirds of all climatology is read by conservatives. Liberals are convinced that they and only they have the truth and that any doubter must be misinformed, but it seems they are mostly content with mainstream television pronouncements, and less than half as many as on the conservative side ever bother to read up on the subject and think for themselves.

The results also state that liberals read the mainstream of science while conservatives stick to the fringes. That is indeed what some of the diagrams in figure 4 show. But the middle of that cluster is not the scientific mainstream, it is the centre of mass of buying habits, i.e. the main area of popular science. If the red dots cluster at the fringe, that is not the fringe of opinion; it may well be the area of academic technical content. It is telling that the authors reran their analysis with the academic publishers taken out, but they don’t report the result with only those left in. (No, I don’t believe that test was not done; it is too simple and too obvious.)

Then there is the claim of a wide versus a narrow range of interests.

“For example, if red books link to a narrow subset of books within a discipline, while an equal number of blue books connect to a large and diverse subset of disciplinary books, those purchasing blue books have exposure to a wider range of science books — and probably a wider range of scientific perspectives — than those purchasing red books.”

First, something they do point out in their explanation but that tends to get lost in summaries: they speak about books, not people. So what they call scientific breadth does not refer to individuals with wide interests as against those with narrow ones, but to certain political books being bought by people with more diverse other interests. And what does "more diverse" actually mean? The real hard science is a small subset; they classify only 10 % of all their registered science books as academic. The more popular side encompasses all the fringes from homeopathy to anthroposophy and gender studies - naturally a wider field.

Both liberals and conservatives accuse each other of ignoring the science and of neglecting critical scientific thinking in favour of following their given preconceptions. It is a valid and relevant question what exactly both sides mean when they speak of science. Shi et al.’s is a novel, powerful and valid method to answer just that question, and like all new and untried methods its first results need to be scrutinized with care.

