Monday 3 January 2011

Deteriorating Bioethics, Plagiarism, Amateurism and Open Access

My research speciality, Bioethics (including medical and health care ethics (+ law and policy), research ethics of the life sciences (+ law and policy), public health ethics, and so on) has been an expanding, successful and increasingly influential international academic field since the early 1980s. While moral philosophers like myself have played an essential part in this process (people like Peter Singer, Michael Tooley, Rosalind Hursthouse, Dan Brock, Ruth Faden, Richard Hare, Judith Jarvis Thomson, Daniel Callahan, Jonathan Glover, Mary Anne Warren, Bonnie Steinbock, Tom Beauchamp and Frances Kamm come to mind), there has always been a consciousness in the field of the importance of having even far-reaching theoretical analyses informed by accurate facts and up-to-date empirical research and theories. With time, more and more bioethics has come to be descriptive and empirical rather than theoretical and normative – applying some presupposed (as a rule not very well worked out) normative standard as a backdrop to investigating phenomena that are supposedly ethically relevant against that background (the endless series of studies of the practice of informed consent against the backdrop of "the four principles" is the most obvious example). Gradually, this development has led to a situation where more and more authors of bioethics papers lack any deeper education or training in the theoretical subtleties of the field (at best they have some sort of biomedical education topped up with a relevant ethics master's or some such, but often not even that). In parallel, more and more bioethics papers are being published not in the renowned specialised journals of the field, but in regular medical or health research journals and in numerous new "open access" journals of uncertain repute.

Now, while I have been hearing more than just a few grumpy comments on this development among my more theoretically inclined colleagues, I am not of the opinion that it is necessarily for the worse. It may be viewed as a natural consequence of the success and relevance of the field that a division of labor is developing, where some concentrate on theoretical detail, analysis and innovation, others do the empirical investigations of relevance against the background of formulated theories, and yet others engage in the tricky activity of moderating the application of bioethics research results in the context of policy making. However – and this is a big but – as in other areas of science and research, such developments are for the better only to the extent that the integrity of the field is not compromised as a result. Such a threat may arise in many ways, one of which is lack of intellectual contact and interaction between the "labor-parts" – for this reason, I'm a big supporter of multidisciplinary team research, where the theoretical, empirical and pragmatic sides of the field are forced to interact in the very process of doing the research. Another source of the threat is that the development brings into bioethics some bad habits and disgraceful ingredients that – alas – have been part of the biomedical research world for a long time. I'm thinking of things such as journals with questionable quality standards and research fraud.

Quite recently, both of these sources of a threat against the integrity of bioethics research have been exemplified in the form of a multi-layer scandal in relation to a paper published in the "open access" journal BMC Medical Ethics. The paper in question, "End-of-life discontinuation of destination therapy with cardiac and ventilatory support medical devices: physician-assisted death or allowing the patient to die?" by Mohamed Y Rady and Joseph L Verheijde, was recently retracted because it was found to repeat substantial passages from a paper by Franklin Miller, Robert Truog and Dan Brock, published in the well-regarded journal Bioethics. At the research ethics blog Retraction Watch, Franklin Miller describes how BMC Medical Ethics agreed to retract the article only after some substantial pressure from the legal department of Bioethics. Udo SchĂĽklenk, editor in chief of Bioethics, expressed his outrage at the attitude of BMC Medical Ethics by cross-posting the Retraction Watch piece on his own Ethx Blog just a few days ago.


As noted in a further comment made at The k2p Blog, one of the main points made by Retraction Watch is that the BMC Medical Ethics retraction note is not honest about what has actually occurred. The note reads:


The authors have voluntarily retracted this article [1] and it is no longer available for online public display because portions of the article are similar to a previous publication [2]. While there was no intention to use pre-existing work without appropriate attribution, the authors nonetheless extend their apologies to Dr. Miller and all others concerned.

So, let me get this straight: the article plagiarised another article, but – then again – it didn't, since the plagiarisers claim that there was no intention to plagiarise. Well, how very odd then, that the same (not) plagiarisers find reason to apologise to the (not) plagiarised authors... eh... for not plagiarising their article, I guess. As The k2p Blog puts it, "[w]hen is plagiarism not plagiarism?", immediately answering:

Apparently when the editor of the journal BMC Medical Ethics finds that a paper published in his own journal has copied large chunks from a different (competing?) Journal.


And, ending the post on a similar note:

...perhaps it is only plagiarism when other Journals copy material published in yours but not when others are copied and published in your Journal?

Amen. Together with Franklin Miller's account of the unwillingness to even acknowledge any reason for retracting Rady's and Verheijde's paper in spite of irrefutable evidence, we here have as good an illustration as we could want of the poverty that may result when "scientific journals" are able to mushroom under the protection of the presently sacred buzz-headings of "open access" and "online". As we all know, of course, this format for a journal is a financial prerequisite for having these sorts of sub-standard periodicals in the first place, at least as long as bioethics does not become tasty prey for the Big Pharma-sponsored "manufactured scientific journal" industry that has been scandalising biomedical scientific publication in recent years.

Further evidence of the dungeon-like quality standards of BMC Medical Ethics is provided by inspecting its website, which prides itself on a shining silver medallion at the top, announcing this fine journal's "unofficial impact factor" to be a handsome 1.93. Who's that when he's at home? you may rightfully ask. The mystery clears when one clicks the shiny little badge – lo and behold:


Do journals published by BioMed Central have Impact Factors and are their citations tracked?
 
Yes; for any journal to have an Impact Factor, however, it must be tracked by Thomson Reuters (ISI) for three years. Although many BioMed Central journals are tracked by Thomson Reuters, others are still relatively new. The tables below show those journals that are already tracked by Thomson Reuters (ISI) and so already have Impact Factors, and journals that are due Impact Factors for which we have calculated their unofficial Impact Factors.

In short, BMC Medical Ethics does in fact not have an impact factor, but the management of the journal apparently hopes that it will one day receive one (to be precise, as transpires further down, some time in 2012). In the meantime, the same management has done its own little dabbling with its pocket calculator, coming up with the nice little "unofficial impact factor" that, as it just so happens, holds out BMC Medical Ethics as a more influential journal in the field than longstanding leading periodicals such as Journal of Medical Ethics, Bioethics, Theoretical Medicine and Bioethics, Health Care Analysis, Cambridge Quarterly of Health Care Ethics and Hastings Center Report. Truly impressive (footwork of a con-artist excuse for a scientific journal editor, that is)!
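
For readers curious what this pocket-calculator exercise actually amounts to: the standard two-year impact factor for a year Y is simply the number of citations received in year Y by items the journal published in Y-1 and Y-2, divided by the number of citable items it published in those two years. Below is a minimal sketch of that arithmetic in Python; the figures are invented for illustration (picked so that they happen to land on 1.93) and are not BMC's actual citation data.

def two_year_impact_factor(citations_in_year, citable_items):
    # Standard two-year impact factor for year Y: citations received in Y
    # to articles published in Y-1 and Y-2, divided by the number of
    # citable items published in Y-1 and Y-2.
    return citations_in_year / float(citable_items)

# Invented figures, purely for illustration (not real journal data):
citations_2009 = 58    # citations in 2009 to the journal's 2007-2008 articles
items_2007_2008 = 30   # citable items the journal published in 2007-2008

print(round(two_year_impact_factor(citations_2009, items_2007_2008), 2))  # -> 1.93

Anyone with access to the underlying citation database can run this sum for any journal; the question pursued below is what, beyond the sum itself, an official impact factor is supposed to certify.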

Now, although – as I said – the "open access" and "online" options opened up by the internet are presumably prerequisites for this sort of scam, it is of course quite possible to have journals that are both open access and online and that meet the quality conditions one would expect of an academic journal. The problem is that, with friends in the trade such as BMC Medical Ethics (and, one is prone to suspect, the whole BioMed Central lot), these journals need no enemies. The only way in which a researcher has any chance of judging the actual quality is to inspect a sizable number of papers and submit them to intense scrutiny. I can vividly hear the bitter laughs rattle the offices and rooms of my colleagues as they ponder this suggestion for what to do with their precious time...

Well, I thought, then maybe I'll do it a bit for them! Because, as revealed by Udo SchĂĽklenk, the retracted paper is in fact not fully retracted – or, to be precise, it is indeed still available through the BMC Medical Ethics server [RETROSPECTIVE NOTE: this has now been taken away, but the paper can still be accessed here], just not to be found by clicking a link in the table of contents. Not that we needed any more evidence of the morass that is BMC Medical Ethics, but there we are!

But anyhow, how about the paper itself? Well, you are all welcome to read it and make up your own minds, but to me this paper illustrates exactly the sort of threat to the integrity of bioethics research due to lack of adequate knowledge and skill mentioned earlier – here allowed to be made public thanks to scam operations such as BMC Medical Ethics. The whole paper is based on the assumption that there is a comprehensive, sharp and self-evidently morally relevant distinction to be made between "active" killing ("physician assisted death") and "allowing" patients to die (without that assumption, the authors would be unable to make the argument they try to make and advance the thesis they want to advance). Recognize that one, perhaps? Isn't it one of those conceptual traps laid in the shallow rhetoric of high school debate contests that a minimally trained and educated bioethicist is supposed to spot and unpack instantly by applying some conceptual analytic ointment? Yes indeed, it is (and, as it happens, this is what Miller, Truog and Brock do in their paper). I'll tell you, Rady and Verheijde wouldn't have passed my basic course on medical ethics or bioethics theory. In spite of one of them waving the flag of an affiliation with a center for biomedical ethics, this is the indisputable work of amateurs whose only accomplishment is to send a stinking cloud of incompetence over my favored field of inquiry.

22 comments:

  1. I think you are a little bit too harsh on BMC here. While I agree with you that they are branching out into every possible field of science (which will lead to poor-quality journals and manuscripts), they do publish decent journals too. In many ways they are not that much different from PLoS ONE, which also accepts basically anything and then lets the reader decide if it is relevant or interesting.

  2. The problem is this: if a journal (or a journal publisher) is not exercising proper quality control, how do you know what to trust? Other than, as I said in the post, by spending rivers of time inspecting the quality yourself (i.e. doing the job that referees and editors are supposed to do). And how often is that going to happen? The only way for BMC or, for that matter, PLoS ONE to protect itself is to make sure that scandals like these do not occur, since unlike other journals they cannot defend themselves by having tried their best to ensure that the material published meets reasonable quality standards.

  3. This has nothing to do with open access. At least BMC did the right thing in retracting the plagiarizing article. A colleague of mine had a similar experience with a prestigious subscription-based epidemiology journal published by Wolters Kluwer. The editor acknowledged that it was plagiarism, but was not willing even to publish an erratum that would cite the paper that had been plagiarized. They were only willing to write to the plagiarizing author supporting the allegations, and "discussing the importance of proper scholarly conduct".

    As for calculating unofficial impact factors, that is easy for anyone subscribing to the ISI service, and the BMC calculations are easily checked. Hence, there is no con-artistry going on. There is another potential explanation for the high unofficial impact factor. By being open access, it is easier for other scientists to get hold of and read the BMC articles. Making scientific work easily available to anyone is the goal of open access journals, and that is a great thing.

  4. Well, unfortunately, since the ISI impact factor is a number based on calculations of citations over a certain period of time, there simply is no such thing as an "unofficial impact factor", other than in the minds of unprofessional editors who – apparently – lack the proper patience to build a solid journal. Together with the obvious lack of professionalism in the case at hand, this signals to me that BMC ME is not a serious venture, but an attempt by some people to take a shortcut to academic status and reputation.

    And although you're quite right that misconduct is to be found also in the case of prestigious, well-established journals, I do insist that OA is a factor here. This for the very reason I stated in the original post – the lack of control over starting up these journals makes it much too easy for unqualified and unprofessional people to construct a castle in the air where they can hold themselves out as important journal editors, editorial board members, and so on. Besides the case at hand, you wouldn't believe the absurd requests I get almost weekly from various OA journals to review or submit papers in areas not even remotely related to my field of expertise. It is obvious that OA is widely infected by incompetence, insincerity and dishonesty.

    As for the positive side to OA, I agree about the dissemination aspect. However, almost all non-OA journals nowadays give the author permission to post a preprint online, so that aspect is already seen to, at least with some delay. Pure OA thus makes a rather marginal difference as regards dissemination and access. Nevertheless, again, I have nothing against OA journals that behave themselves with regard to peer review, quality control, honesty, et cetera as you would expect of an academic journal. It would be great if there could be a way of having online OA journals where the overall quality can be guaranteed. Or an easy way of spotting the bad apples. However, other than the practically impossible method mentioned in the post, I can't think of anything.

  5. The impact factors that ISI calculates and publish are the official ones, but anyone can use the ISI data and exactly the same mathematical formula to calculate an impact factor for journals for which ISI does not calculate and publish them. BMC does this, and calls them "unofficial" impact factors. It is that simple. They can even do this for a journal that ISI does not track, since ISI reports citations to articles in all journals but only from the journals that they track.

    There is no more or less control over starting up OA versus subscription-based journals. Anyone can do either, and there is an enormous number of new subscription-based journals launched every year. In the medical field, there are many subscription journals that are widely infected by incompetence, insincerity and dishonesty. Elsevier, the biggest publisher of subscription-based journals, has even created phony journals paid for by the pharmaceutical industry (http://blog.bioethics.net/2009/05/merck-makes-phony-peerreview-journal/).

    There are good and bad OA journals and there are good and bad subscription journals, and the majority of both OA and subscription-based journals are eager to accept any scientifically sound article. Since most of the most prestigious journals have been around for many decades, and OA is a new thing, most prestigious journals are still subscription-based, but that is slowly changing, with PLoS Biology and PLoS Medicine as two prime examples.

    To generalize about the inherent scientific quality of papers in OA versus subscription-based journals is just as wrong as generalizing about the inherent scientific quality of papers by male versus female scientists. Any systematic differences that may exist are mostly explained by differences in age.

    Martin

  6. Sure, anyone can use ISI data to calculate whatever they want. None of that, however, would be any sort of recognized metric of scientific importance. Using a term like "unofficial impact factor" is an obvious ploy to make people think of the real IF, thereby conveying the impression that this is something more than the editor having played around with numbers in his/her own little way. The way that BMC ME does it, with a little medallion image making it all look like some sort of award, just adds to the insincerity of it all.

    Re. the phony journals: yeah, I mentioned those in my original post. And, as I said, such things are further aspects that bioethics now needs to deal with to an increasing extent. But observe: in these cases of bought traditional journals there need to be big bucks behind them. With the new sort of easily set up online OA journal, you don't need that anymore.

    So, back to OA – as I said, I have nothing against it as such. The problem is this: my colleagues and I don't get those idiotic submission and review requests from any journals other than OA ones. Not from all OA journals, mind you, but it is obvious to me that setting up OA journals/publication consortia online has become a rather large industry in a short time (due, as I said, mainly to how easy and cheap it is to do) and that a sizable proportion of this industry is not serious. The whole business around the plagiarism case in BMC ME that started all of this is an example illustrating this.

  7. It is true that "anyone can use ISI data to calculate whatever they want". In science in general, anyone can use any data to calculate whatever they want. That is not scientific evidence that the calculations are wrong. Regarding the unofficial impact factors, BMC calculates them using exactly the same mathematical formula that ISI uses to calculate its official impact factors. Hence, there is no con-artistry going on, and for you to write that the BMC editor has "played around with numbers in his/her own little way" is "an obvious ploy" and not very ethical.

    As a statistician, I once calculated an unofficial impact factor to convince one of my co-authors that PLoS Medicine was going to be a high-impact journal worthy of our manuscript. Hence, I have done the same simple calculations that both ISI and BMC do.

    Regarding the annoyance of spam, we are on the same page. I get spam from OA journals, subscription-based journals, semi-fake scientific conferences, laboratory equipment manufacturers, European lotteries, pharmaceutical companies, "urologists", former government officials in Nigeria, etc, etc. I hope you realize that most Nigerians, most urologists, most pharmaceutical companies, most laboratory equipment manufacturers, most conference organizers, most subscription-based journals and most OA journals (www.doaj.org) are good people doing good work. The fact that 100% of the emails that you receive from Nigerians are spam does not mean that 100% of Nigerians send spam. The same is true for OA journals.

    Martin

    Disclosure: I am an associate editor for one journal published by BMC, occasionally providing free peer-review service to that journal. I have also reviewed for other BMC journals. I generally prefer publishing in open access journals, as I consider my scientific work to be useful for a wide audience and I want anyone in the world to be able to read it whether or not they have access to a well equipped library.

  8. Thanks for the disclosure, very apt. You once again try to put words in my mouth to the effect that all OA is bad and wrong, which I have never claimed.

    As to the following:

    "In science in general, anyone can use any data to calculate whatever they want. That is not scientific evidence that the calculations are wrong"

    Well, yes. But I have never claimed that the calculations are (mathematically) wrong.

    "Regarding the unofficial impact factors, BMC calculates them using exactly the same mathematical formula that ISI calculate their official impact factors."

    This is false, since ISI calculates the IF based on a much larger set of data.

  9. Well, the main problem with the business model of OA is surely that you make more money by publishing (sorry, uploading to a webserver) more manuscripts. The more papers you upload, the more money you make. That's roughly all I need to know to make up my mind about BMC and its family of 'journals'. No surprise then that the editor of the journal retracted the paper in the bizarre fashion that he did. You don't want to put off future paying customers (advertisers – paying for publication seems akin to forking out for an advertisement). My completely personal view on this matter.

  10. BMC calculates the unofficial impact factor using the same data set that ISI uses and which is available to any subscriber of ISI Web of Science. The only difference is the choice of journals that they calculate the impact factors for. BMC only calculates it for BMC journals, and ISI only for journals that have gone through their selection procedure.

    Udo: Subscription based journals also make more money by publishing more papers, as the subscription fee they can charge is related to the size of the journal. Most OA journals do not charge a publication fee (Peter Suber, Open Access News), while all TA (toll access=subscription based) journals do, so the financial incentive to publish mediocre scientific research is actually less of a problem for the majority of OA journals than it is for TA journals.

    The problem with some TA journals is that they have to publish a certain approximate number of pages/papers in each volume, so that their subscribers get the promised number of issues. If the standard of submitted papers declines, they have to be less stringent in order to keep publishing roughly the same number of papers, which they need to do to justify their ever-increasing subscription fees. OA journals that want to maintain their standards can simply publish fewer papers, as they haven't promised a certain volume to subscribers.

    Martin

  11. "BMC calculates the unofficial impact factor using the same data set that ISI uses /.../BMC only calculates it for BMC journals, and ISI only for journals that has gone through their selection procedure."

    And since that procedure takes some time (in the case at hand we'll have to wait until 2012 if BMC ME is to be believed), it will not be the same data set.

    Moreover, the procedure you refer to is, among other things, about ISI making sure that indexed journals meet a number of quality conditions (interested readers can learn more here: http://conocimiento.incae.edu/ES/centros-academicos-investigacion/pdfs/ISI_JOURNALS.pdf). Thus, the IF signifies not only the citation calculus, but also that the journal in question has been selected through the ISI selection procedure and thereby has been found to meet those quality standards.

    In other words, the "unofficial impact factor" ploy is misleading in two separate ways.

  12. ISI and quality standards? Come on... they are following Revista Romana de Bioetica and Postepy Mikrobiologii among many other "quality" journals.
    And in defense of BMC: I have noticed that their calculated preliminary IFs are often very close to the real IF once the journal is followed by ISI. As remarked above: calculating IFs is very easy with the data from ISI and there is nothing wrong with it.
    I also find the assumption that work in high-IF "prestigious" journals is automatically valid and of high quality intellectually lazy, but I guess that's a different discussion altogether.

  13. Be that as it may. Whatever one may think about the reliability of the ISI quality assurance process, it's still the case that the IF is used by many researchers, universities, funding bodies, other institutions, governments, and so on as a sort of mark of minimal quality of journals. For journals, the IF is therefore an important marketing tool for attracting good papers. BMC ME flagging its home-cooked "unofficial impact factor" is an obvious attempt to convey the impression that such minimal quality has been assured. Which it has not. Therefore, while there need be nothing wrong with calculating the UIF, or with the UIF itself, it is still deceptive and a sign of a lack of seriousness to flag it officially as a marketing device. Taken together with the rest of the features of the scandal that made me write this post in the first place, it builds a case against BMC ME's credibility as an aspiring serious quality journal in the field.

    To my mind, it's BMC ME's problem, not mine or ISI's or other journals/publishers, to bite the bullet, learn the lesson and start behaving as one expects of a journal with such aspirations.

  14. Dear Christian,

    I clearly misunderstood you. You are correct that the 2009 unofficial impact factor that BMC calculated last year is based on different data than the official impact factor that ISI will calculate for 2012. BMC used the same data that ISI would have used if they had chosen to calculate an impact factor for 2009, and they got the same number that ISI would have got. As you point out, ISI also has other “quality conditions” that they use when they decide whether to calculate an official impact factor, and hence, they do not calculate an impact factor for all journals for which it is possible to do so. It is those “quality conditions” that BMC is trying to get around by calculating the impact factor themselves, calling it an “unofficial” impact factor.

    If anyone wants to publish their bioethics papers in a peer-reviewed open access journal, but would only do so if the journal has both a high impact factor and fulfills the ISI "quality conditions", they could try PLoS ONE. That journal has a 2009 official impact factor of 4.315, compared to 4.000 for American Journal of Bioethics and 1.136 for Bioethics. However, regarding both impact factors and the ISI "quality conditions", I tend to agree with the Anonymous commenter who posted on 9 January 2011 17:07. To base publishing decisions on impact factors is kind of silly. It is better to publish where one thinks the paper will reach the largest audience of interested readers.

    Martin

  15. Matthew Cockerill, 11 January 2011 at 17:29

    Christian,
    BioMed Central takes plagiarism very seriously. We use CrossCheck to help our editors spot potential plagiarism but, unfortunately, occasionally things slip through. The article concerned has been retracted, and because it contains infringing content it is no longer available from our website.

    No publisher or journal is immune to the problem of plagiarism, but the broad accusations that you level at BMC Medical Ethics in particular, and at open access journals in general, are simply not justified.

    Your attack on the validity of the unofficial impact factor displayed on the BMC Medical Ethics website is similarly wide of the mark....

  16. Matthew Cockerill, 11 January 2011 at 17:31

    To respond to your specific allegations:

    "In short, BMC Medical Ethics does in fact not have an impact factor, but the management of the journal apparently hopes that it will one day receive one (to be precise, as transpires further down, some time in 2012). "

    The journal has been tracked by Thomson Reuters since 2009, and because of the three year time-frame used for impact factor calculation, the journal will receive its first official impact factor in mid-2012, as Thomson Reuters will be happy to confirm.

    "This [the equivalence of BMC's calculation to ISI's] is false, since ISI calculates the IF based on a much larger set of data."

    As the other commenter notes, in calculating unofficial impact factors BioMed Central uses the same dataset (available via Web of Science), the same algorithm and the same timeframe that is used by Thomson Reuters to calculate official impact factors. The only difference is that Thomson Reuters chooses to only calculate the impact factor for a subset of journals. However, the data allows an equivalent unofficial impact factor to be calculated for any journal, and such calculations can be extremely useful and are done by many researchers, editors and publishers.

    Unofficial impact factors are very reliable. Many publishers including BioMed Central calculate them routinely in order to cross-check the impact factor numbers issued by Thomson Reuters. Every year many official impact factors which have been accidentally miscalculated by Thomson Reuters are corrected as a result of publisher feedback.

  17. Matthew Cockerill, 11 January 2011 at 17:31

    "In the meantime, the same management has done its own little dabbling with its pocket calculator, coming up with the nice little "unofficial impact factor" that, just as so happens, holds out BMC Medical Ethics as a more influential journal in the field than longstanding leading periodals such as Journal of Medical Ethics, Bioethics, Theoretical Medicine and Bioethics, Health Care Analysis, Cambridge Quarterly of Health Care Ethics and Hastings Center Report. Truly impressive (footwork of a con-artist excuse for a scientific journal editor, that is)!"

    The sarcastic tone is unjustified. The unofficial impact factor calculation objectively indicates that, looking at the 2007-2009 timeframe, a typical 2007 or 2008 article published in BMC Medical Ethics went on to be cited in 2009 more often than a typical article in any of the other journals you mention. That is what the impact factor measures - it simply tells us that over this timeframe, the average article in BMC Medical Ethics had more citation impact than the average article in the other journals you mention.

    Because BMC Medical Ethics is a relatively young journal and publishes comparatively few articles, its overall influence may well be smaller than that of older, more established journals. But impact factors are not a measure of aggregate influence – they measure 'per-article citation impact', and on that measure, BMC Medical Ethics is performing very well, quite possibly helped by the increased visibility provided by the open access model.

    Best regards,
    Matt Cockerill,
    Managing Director, BioMed Central

  18. Thanks, Matt, for standing up like this. Like Martin before you, you single this matter out from the rest of the criticism. The "UIF" badge wouldn't have been much more to me than a rather touching little sign, had it not been for the rest of the stuff. But I cannot see that your remarks add much to the arguments already put forward by Martin above. I stand by my responses to those.

    It would actually have been nicer to have you tell us that you intend to swing the BMC ME editor by his ears for not telling it like it is re. the Rady & Verheijde retraction. But I can see that this wish may be reaching for the stars.

  19. I see now that there was more than one comment. Alas, it doesn't change much re. the case at hand. The problem of spotting plagiarism and other sorts of research fraud is of course real, but my post didn't hold that against BMC ME – and I'm happy to hear that you take precautions. The most important thing for a journal's credibility in these matters, however, is how it acts when fraud is detected. The retraction note is still a disgrace of ambiguity and, unfortunately, this makes your assurance that BMC takes plagiarism very seriously sound like words rather than deeds.

  20. I happened to come across this website when doing a search for some information about impact factors. I am a scientist in the biomedical field and pay a lot of attention to journals/publishing etc., as it basically defines my career and success!! I would also just like to add that I am not an editor of any journal and have no predisposition in favour of or against open access journals.

    Christian, your blog and subsequent retorts to comments really do come across as a bit of a rant. You seem to completely fail to understand the concept of impact factors. True, ISI only completes its impact factor analysis after a minimum of 3 years. However, calculating it before then is still possible, accurate and valid. It is simply a question of statistical power. Obviously, over 10 years (and after publishing 1000 manuscripts), a citation index can be assumed to be both correct and of reasonable statistical power. A similar calculation made one year in, after the publication of 100 manuscripts, is no less accurate; however, one may choose to call into question the power of the assessment. Is 100 articles enough? In any case, the mathematics is accurate; one may, however, question the numbers and wish to see more data before an assessment is made.

    Be this as it may, ISI makes no mention of the number of papers that its journals publish, just a time frame (3 years) prior to the awarding of an impact factor. It is plausible that a journal with an official impact factor may have published significantly fewer papers than a journal with an unofficial one. Which impact factor is more valid to you?

  21. Well, I never questioned any of this, so I fail to see how it undercuts my criticism. My point was not about the accuracy of any calculation or what algorithm was used, not even the power issue you mention. The point was that an ISI IF implies much more than just those things, since the IF is attained only after some qualified scrutiny over a period of time. This is what makes the ISI IF interesting in the first place, and why it is used as a measure of quality by many governments and funding bodies (however misguided that may be). This is why it is deceptive for a journal to hold itself out as having an IF, just an "unofficial" one. There simply is no IF before a journal has passed the Thomson Reuters trial threshold.
