It's been a while since the last substantial post – blame deadlines, deadlines, deadlines for that. Having submitted a major delivery this Friday, here's a sort of inhale before I jump right into the next leg of the fall semester triathlon – a major research bid that I'll be heading.
So, this week, the distinguished science journal Nature's online news section published an entertaining piece on what the outcome may be when all researchers, regardless of field, are ranked according to citations – how much their work is referred to by other researchers – using open, automated online resources such as Google Scholar or its special citation section. Using a service called Scholarometer, Nature had this guy, perhaps surprisingly to many, coming out on top (wonder who he is? – click the pic!):
Strange, isn't it? Not when you consider that they have been using the so-called h-index, a mathematical construct devised to reflect the citation weight (rather than rate) of a scholar (that is, the h-index as used in Google Scholar; in the more professionally advanced and commercial Web of Knowledge it is something else, but the purpose is the same), thereby reflecting the value of having many articles with many citations rather than citations concentrated on a single publication. They then perform what is referred to as normalisation, relating different scholars to the size of their respective fields. So, what makes Marx come out on top is that he is a more highly cited historian than, e.g., Albert Einstein is a highly cited physicist, given that physics is a very much larger discipline than history.

Now, of course, none of this says anything about quality or influence on the progress of research (no more than the Billboard chart does about music) – it merely measures popularity as an object of citation among fellow scholars. In fact, the notion that citation proves anything over and above the fact that others have taken some sort of interest in one's work is highly contestable – said without denying the no doubt important use that citation and citation tracking have in science and research.
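For readers who like to see the arithmetic, here is a minimal sketch of the two calculations at play: the h-index itself and a field-normalised variant. The normalisation used below (a scholar's h divided by the average h of scholars tagged with the same field) is my assumption about how a tool of this kind might work, not a reproduction of Scholarometer's actual formula, and every number is invented purely for illustration.

```python
# Minimal sketch: h-index plus an assumed field normalisation.
# All figures below are hypothetical.

def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def normalised_h(scholar_citations, field_h_values):
    """Divide a scholar's h-index by the mean h-index of their tagged field."""
    mean_field_h = sum(field_h_values) / len(field_h_values)
    return h_index(scholar_citations) / mean_field_h

# Hypothetical citation counts per paper:
historian = [900, 450, 300, 120, 80, 40, 15, 9]                     # h = 8
physicist = [4000, 2500, 1800, 900, 600, 300, 200, 150, 120, 100,
             90, 80, 60, 50, 40, 30, 20, 18]                        # h = 18

# If the average h in history were, say, 4 and in physics 25, the historian
# overtakes the physicist once the field normalisation kicks in:
print(normalised_h(historian, [4, 3, 5, 4]))      # 2.0
print(normalised_h(physicist, [25, 20, 30, 25]))  # 0.72
```

The point is simply that a modest raw h-index can beat a much larger one once the field average enters as a denominator – which is why who gets tagged into which field matters so much.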
But here's the funny thing. Having been pointed to the Scholarometer toy, I of course couldn't resist checking out my own pet fields! So here's what came out when looking at the h-index ranking in bioethics – the field where much of my most weighty specialisation is located (click the image to view a scaled up version):
I could recognise some names, such as Simo Vehmas, who happens to be a good friend, but several others were completely unfamiliar to me. Now, bioethics broadly conceived is a large field, so it need not be surprising that one doesn't know the names of perfectly decent fellows within it, but the fact that I could not place any of the top four names made me wonder. But then it struck me: wait a second, I do know one of those names – the top one at that – though certainly not in the role of a bioethicist, but as a world-renowned researcher in reproductive genetic medicine and leader of the team that performed the first successful preimplantation genetic diagnosis in the early 1990s. I happened to know this, since in 1999 I published a book on the ethics debates in the aftermath of this technological advance (available for online reading and download through that link). So Alan H Handyside is a prime medical researcher, which of course is what ups his h-index to such heights, as may be confirmed by inspecting a Google Scholar search on his name – what makes him the top name in bioethics is, seemingly, merely that someone tagged his name with that disciplinary affiliation.

So what about A Pandiella? Same story, it appears: this is a cell biologist with a no doubt impressive citation count and, I'm certain, many important results up his sleeve. Moving on to R Frydman, it's almost the same story, as the bulk of the publications here are in reproductive biomedicine, but it's more complicated, as there also appears to be another R Frydman, publishing in the field of health policy/economics, and these persons are treated as one! The next one, J Kimmelman, is likely to be a similar story, since there is one with a good number of publications clearly in bioethics [retrospective note added after publication of this post: this person, Jonathan Kimmelman, has added a comment below and clarified his affiliation, which is indeed in bioethics] and another publishing in very specialised biomedical science that has attracted vast numbers of citations (I checked some of the respective author affiliations in this case and they don't seem to match either). Last, before we get to my friend Simo, we have F Olivennes, who again seems to be a purely biomedical researcher in the field of reproductive medicine and embryology, and who for some reason has been tagged as belonging to bioethics.
These, then, are the top researchers of my field according to Scholarometer – no wonder I had never heard of them in that role. And, in fact, it seems that the problem arises already at the Google Scholar source, for checking the top name of the straight citation ranking for bioethics, we meet this guy – yup, yet another biomedical researcher classified as a bioethicist. Number two is this guy, whoever he is – same story all over again – and then come some names I'm familiar with and respect in the way one would expect of people ranked at the top of one's field. Just to twist the knife some extra turns, I also did a quick check for medical ethics; same story: this is the top guy, apparently, and this, I hear, is no. three (number two in this ranking actually is a well-known bioethicist who happens also to be a medical researcher, so that kind of animal does exist).
So, what we may conclude is that for these fields, attempts at measuring citations have been severely corrupted by failures of disciplinary/field classification, which swamp the rankings with citation counts of no relevance to the field at all. I haven't looked through the entire publication lists of the people mentioned, but many of them appear to have basically no output belonging to ethics of any sort. They might, of course, have tagged along on a few ethics papers led by others as clinical/scientific experts (which is fine), but this does not make them highly cited bioethicists; it makes them medical researchers whose medical citation counts look impressive in the context of a field-normalisation to bioethics rather than medicine. In addition, we have seen an obvious identity problem, where the automated online citation counters are unable to distinguish between people who share a surname and initial – which makes for quite a lot of error, I would say, as the little sketch below illustrates.
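To make the arithmetic of that identity problem concrete, here is a small, entirely hypothetical sketch: the two authors stand in for any pair sharing a surname and initial, and the citation counts are invented, bearing no relation to any real person's record.

```python
# Hypothetical illustration of the identity problem: merging two different
# researchers into one record inflates the combined h-index beyond either
# individual's. All counts below are invented.

def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return max((rank for rank, count in enumerate(ranked, start=1)
                if count >= rank), default=0)

author_one = [300, 200, 150, 90, 60, 40, 20, 10]   # e.g. a medical researcher, h = 8
author_two = [120, 80, 50, 30, 15, 8]              # e.g. a health economist, h = 6
merged = author_one + author_two                   # what the counter sees as one person

print(h_index(author_one), h_index(author_two), h_index(merged))   # 8 6 12
```

Neither individual record supports the merged figure, yet the merged figure is what ends up being ranked.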
But what is the root of the classification errors with regard to field-normalised/specific citation measures? There are several (possibly overlapping) possibilities. One is, of course, that authors misclassify themselves, as may happen in Google Scholar Citations, where you as an author decide which fields to belong to. For example, I could myself have made the strategic choice to pass myself off as belonging to the philosophy of medicine field – which would not exactly be a lie, albeit bending the truth a bit – and with my current total citation count of 408 ended up in a handsome 6th place, rather than the less impressive placings I enjoy as bio- or medical ethicist or just ethicist. But not all authors are in this system, as you have to actively join it and manage it a bit for it to work (hence your responsibility for how you classify yourself), so the problem might also come from classification done by the Google Scholar staff; I wouldn't be surprised if several of the strange things described earlier are due to Google's experts confusing "bioethics" with "biometrics" or "biotechnical", for example. The qualifications of this staff for doing what they are doing are completely opaque to me, as I suspect they are to most other scholars, and still many of us – like the team behind Scholarometer – take it rather seriously. Now, with regard to Scholarometer, there may certainly be error sources located there as well, since one may require of an academically construed automation tool that it be checked for serious errors of the sort I have been displaying – which has apparently not occurred to, or engaged, the team at Indiana University Bloomington responsible for the product.
But wait a second! Wouldn't that mean, sort of, making the automated citation counter, sort of, not automated? Yes indeed, that is what it means! And hence the title of this little peek into the fascinating games sometimes played in the world of academia to no apparent use for anyone. Alas, though, governments and other funders of research are increasingly using bibliometrics and citations as quality indicators to determine the allocation of funds, preferably in as automated a way as possible (partly because of the hype represented by Scholarometer and the article in Nature), thus falling prey to the sort of weirdness described here. This sad example of pretending to have a technology that works when one hasn't is therefore actually putting fellow scholars and researchers at risk of losing funds and other resources, missing out on jobs and promotions, et cetera, for no good reason at all.
My plea to Nature and other journals, Scholarometer and Google Scholar is simply this: stop pretending that there's something there that is actually not in evidence. Those who provide these services: make them work as they should or shut them down. Scholarly media: ignore them until they have something to show for real and not merely for fancy.
See you soon!
Jonathan Kimmelman here. Thanks for the plug – sort of.
That there are many, many scholars working in bioethics who deserve a higher ranking than me testifies to the fallibility of these metrics – at least in fields like philosophy or bioethics.
But I can testify that Scholarometer did get one thing right: I am indeed a REAL bioethicist! You can learn more about my team's work at: translationalethics.com
Hi Jonathan, and thanks for the connect and heads up! I did mention that at least one of the J Kimmelmans picked out by Google Scholar is someone publishing bioethics papers, but there appear to be other J K's around as well, not distinguished by Google. I'm glad that you see the main point of my post as being about these metrics and automated gadgets and the role they play in our funding world!
By the way, I have added a bit to that particular section to point readers to your comment and to make sure there is no misunderstanding!
Hi Christian, I'm the journalist who wrote the Nature piece.
Most of what you say about errors is entirely correct, but it is all already well explained both in the Scholarometer website FAQ and, briefly (I only had 600 words for the entire story), in the online news piece.
The first source of error is that the Scholarometer team are crowdsourcing the tags used to classify academics in their rankings. If an academic is classified under 'bioethics' it is because somebody searched for that scholar and chose to put them in that category. Of course this introduces errors - but that is how the automated process works. The Scholarometer team allow multiple tags. The alternative - a team pronouncing on the 'correct' classification for tens of thousands of academics - would be mind-bogglingly time-consuming. Thomson Reuters does it this way - but Thomson Reuters charges a lot of money for its products and does not allow the results to be made public. Some also criticize Thomson Reuters' categorizations (it's a never-ending taxonomic game really).
The second source of error is Google Scholar itself, which is built up both automatically, by scraping the web pages of publishers and researchers, and through academics interacting with Google Scholar to correct their personal records.
On this point I think you made an error: I do not believe that there are any 'Google Scholar staff' poring over whether someone should be classified as a bioethicist or not! The only staff there work on the technical side.
Obviously, this automated/crowdsourcing process also introduces errors. Notably, searches often find it hard to tell apart researchers with the same name (also an issue in Thomson Reuters' databases, and one that ORCID, among other unique identifiers, is being introduced to help solve). But Google Scholar is getting better and better.
That is the bet that both the Scholarometer team and Google Scholar have taken: that an automated approach plus refinement through online crowdsourcing (because academics are motivated to correct their own scholarly records) will provide reasonable bibliometrics. Most importantly, these bibliometrics are also public and free.
The alternative is for universities to pay lots of money to companies that will collect and normalize bibliometrics for them. This is, to be honest, what many universities do anyway. But - as the Scholarometer team note - many academics and heads of department may not be aware of their own records, nor that citation metrics differ so radically across different fields and that normalization is important. This corrected h-index is their first attempt to highlight this.
So, take it for what it is: an entertaining story, but one where the metrics probably have errors.
As I say, this is all briefly packed into the Nature story (see the last few paragraphs), but there's a longer explanation on the Scholarometer FAQ.
cheers
Richard.
Thanks a bunch for this elaboration – and point taken re Google Scholar having no skilled staff to do what would take skill to do :) I do understand the lure of the automated approach, and I do appreciate and support the openness ideal. However, my point about having to consider functionality in relation to the real context of researchers still holds. I do understand that the disciplinary affiliation classification of these gadgets is something completely different from what we normally perceive as such a classification, especially when ranking people (you don't become a brilliant exponent of X through having zillions of brilliant and well-cited papers on Y). Alas, this insight is (I know from personal experience) not shared by policy makers or funding agencies, even though they happily use these services to decide the fates of researchers – the recent REF in the UK being just the latest in a long line of examples. This cannot be ignored when one ponders whether or not to present something as a tool for ranking scholars.