It's been a while since the last substantial post – blame deadlines, deadlines, deadlines for that. Having submitted a major delivery this Friday, here's a sort of inhale before I jump right into the next leg of the fall semester triathlon – a major research bid that I'll be heading.
So, this week, the distinguished science journal Nature's online news section published an entertaining piece on what the outcome may be when all researchers, regardless of field, are ranked according to citations – how much their work is referred to by other researchers – using open, automated online resources, such as Google Scholar or its special citation section. Using a service called Scholarometer, Nature had this guy, perhaps surprisingly to many, coming out on top (wonder who he is? – click the pic!):
Strange, isn't it? Not so strange when you consider that they have been using the so-called h-index, a mathematical construct devised to reflect the citation weight (rather than rate) of a scholar: the largest number h such that the scholar has h publications with at least h citations each. (This is the h-index as used in Google Scholar; in the more professionally advanced and commercial Web of Knowledge it is something else, but the purpose is the same.) It thereby rewards having many articles that are each well cited, rather than many citations concentrated in a single publication. They then perform what is referred to as normalisation, adjusting scholars' scores in relation to the size of their respective fields. So, what makes Marx come out on top is that he is a better-cited historian than, e.g., Albert Einstein is a physicist, considering that physics is a very much larger discipline than history. Now, of course, none of this says anything about quality or influence on the progress of research (no more than the Billboard chart does about music) – it merely measures popularity as an object of citation among fellow scholars. In fact, the notion that citation proves anything over and above the fact that others have taken some sort of interest in one's work is highly contestable – said without denying the no doubt important use that citation and citation tracking has in science and research.
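For readers who like to see the mechanics spelled out, here is a minimal sketch in Python of how an h-index is computed from a list of per-publication citation counts. The normalisation step at the end is only an illustration of the general idea of scaling against field size; it is an assumption for the sake of the example, not Scholarometer's actual formula.

```python
def h_index(citations):
    """Largest h such that the author has h publications with at least h citations each."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Toy example: five papers with these citation counts give h = 3,
# since three of them have at least 3 citations each.
print(h_index([10, 5, 3, 2, 1]))  # -> 3

# Illustrative field normalisation (NOT Scholarometer's actual formula):
# dividing by some average h-index for the scholar's field is one way a
# historian and a physicist could be compared despite very different field sizes.
def normalised_h(h, field_average_h):
    return h / field_average_h
```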
But here's the funny thing. Having been pointed to the Scholarometer toy, I of course couldn't resist checking out my own pet fields! So here's what came out when looking at the h-index ranking in bioethics – the field where much of my most weighty specialisation is located (click the image to view a scaled-up version):
I could recognise some names, such as Simo Vehmas, who happens to be a good friend, but several others were completely unfamiliar to me. Now, bioethics broadly conceived is a large field, so it need not be surprising that one doesn't know the names of perfectly decent fellows within it, but the fact that I could not place any of the top four names made me wonder. But then it struck me: wait a second, I do know one of those names, the top one at that – though certainly not in the role of a bioethicist, but as a world-renowned researcher in reproductive genetic medicine and leader of the team that performed the first successful preimplantation genetic diagnosis in the early 1990s. I happened to know this, since I published a book in 1999 on the ethics in the aftermath of this technological advance (available for online reading and download through that link). So Alan H Handyside is a prime medical researcher, which of course is what pushes his h-index to such heights, as may be confirmed by inspecting a Google Scholar search on his name – what makes him the top name in bioethics is, seemingly, merely that someone tagged his name with that disciplinary affiliation.

So what about A Pandiella? The same story, it appears: this is a cell biologist with a no doubt impressive citation count and, I'm certain, many important results up his sleeve. Moving on to R Frydman, it's almost the same story, as the bulk of the publications here are in reproductive biomedicine, but it's more complicated, since there also appears to be another R Frydman, publishing in the field of health policy/economics – yet these persons are treated as one! The next one, J Kimmelman, is likely a similar story, since there is one with a good number of publications clearly in bioethics [retrospective note added after publication of this post: this person, Jonathan Kimmelman, has added a comment below and clarified his affiliation, which is indeed in bioethics] and another publishing in very specialised biomedical science that has attracted vast numbers of citations (I checked some of the respective author affiliations in this case, and they don't seem to match either). Last, before we get to my friend Simo, we have F Olivennes, who again seems to be a purely biomedical researcher in the field of reproductive medicine and embryology, who for some reason has been tagged as belonging to bioethics.
These, then, are the top researchers of my field according to Scholarometer – no wonder I had never heard of them in that role. And, in fact, the problem seems to appear already at the Google Scholar source, for checking the top name of the straight citation ranking for bioethics, we meet this guy – yup, yet another biomedical researcher classified as a bioethicist. Number two is this guy, whoever he is – same story all over again – and only then come some names I'm familiar with and respect in the way one would expect of people ranked at the top of one's field. Just to twist the knife some extra turns, I also did a quick check for medical ethics; same story: this is the top guy, apparently, and this, I hear, is no. three (number two in this ranking actually is a well-known bioethicist who happens to also be a medical researcher, so that kind of animal does exist).
So, what we may conclude is that, for these fields, attempts at measuring citations have been severely corrupted by failures of disciplinary/field classification that swamp the rankings with citation counts of no relevance to the field at all. I haven't looked through the entire publication lists of the people mentioned, but many of them appear to have basically no output belonging to ethics of any sort. They might, of course, have tagged along on a few ethics papers led by others as clinical/scientific experts (which is fine), but this does not make them highly cited bioethicists; it makes them medical researchers whose medical citation counts look impressive in the context of a field normalisation to bioethics rather than medicine. In addition, we have seen an obvious identity problem, where the automated online citation counters are unable to distinguish between people who share a surname and initial – which makes for quite a lot of error, I would say.
But what is the root of the classification errors with regard to field-normalised/specific citation measures? There are several (possibly overlapping) possibilities. One is, of course, that authors misclassify themselves, as may happen in Google Scholar Citations, where you as an author decide what fields you belong to. For example, I could myself have made the strategic choice to pass myself off as belonging to the philosophy of medicine field, which would not exactly be a lie, albeit bending the truth a bit, and with my current total citation count of 408 I would have ended up in a handsome 6th place, rather than the less impressive placings I enjoy as bio- or medical ethicist, or just ethicist. But not all authors are in this system, as you have to actively join it and manage it a bit for it to work (hence your responsibility for how you classify yourself), so the problem might also come from the classification done by the Google Scholar staff; I wouldn't be surprised if several of the strange things described earlier are due to Google's experts confusing "bioethics" with "biometrics" or "biotechnology", for example. The qualifications of this staff for doing what they are doing are completely opaque to me, as I suspect they are to most other scholars, and still many of us – like the team behind Scholarometer – take it rather seriously. Now, with regard to Scholarometer, there may certainly be error sources located there as well, since one may require of an automated tool built for academic purposes that it be checked for serious errors of the sort I have been displaying – which has apparently not occurred to or engaged the team at Indiana University Bloomington responsible for the product.
But wait a second! Wouldn't that mean, sort of, making the automated citation counter, sort of, not automated? Yes indeed, that is what it means! And hence the title of this little peek into the fascinating games sometimes played in the world of academia to no apparent use for anyone. Alas, though, governments and other funders of research are increasingly using bibliometrics and citations as quality indicators to determine the allocation of funds, preferably in as automated a way as possible (partly because of the hype represented by Scholarometer and the article in Nature), and thus fall prey to the sort of weirdness described here. This sad example of pretending to have a working technology when one hasn't is therefore actually putting fellow scholars and researchers at risk of losing funds and other resources, missing out on jobs and promotions, et cetera, for no good reason at all.
My plea to Nature and other journals, to Scholarometer and to Google Scholar is simply this: stop pretending that there's something there that is actually not in evidence. Those who provide these services: make them work as they should, or shut them down. Scholarly media: ignore them until they have something to show for real and not merely for fancy.
See you soon!