Friday 3 June 2011

History, meaning and relevance of "Impact Factor" - Part 2

Returning to an article that has produced some very positive feedback from readers on the other side of the Atlantic, I am today presenting the final part of the history, meaning and relevance of the "Impact Factor", a metric that is unprecedented in its own right. The best way to do so is to jump straight to the questions that are likely to be popping into every single one of our heads. So, with no further ado, let us proceed:

How should citation studies be taken into account by the International Society for Scientometrics and Informetrics? As explained in [1], citation studies should be adjusted to account for two main variables that, curiously, seem more related to electrochemistry and radioactive decay than to anything one would expect: citation density (the average number of citations referenced per source article) and half-life (the number of retrospective years required to find 50% of the cited references). One immediately gets the idea that there is a lot more to constructing the "Impact Factor", and to understanding the consequences of its results, than just looking at the mainstream names in scientific publishing.
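
Since both variables are simple summaries of citation data, here is a minimal Python sketch of how they could be computed. This is my own illustration rather than anything taken from [1], and the numbers are entirely hypothetical:

```python
# Illustrative calculations of the two variables mentioned above.
# citation_counts: number of cited references in each source article of a journal.
# reference_ages: age in years (citing year minus cited year) of every cited reference.

def citation_density(citation_counts):
    """Average number of cited references per source article."""
    return sum(citation_counts) / len(citation_counts)

def cited_half_life(reference_ages):
    """Smallest number of retrospective years accounting for at least
    50% of all cited references."""
    total = len(reference_ages)
    cumulative = 0
    for age in range(max(reference_ages) + 1):
        cumulative += sum(1 for a in reference_ages if a == age)
        if cumulative >= total / 2:
            return age
    return max(reference_ages)

# Hypothetical example: five articles and the ages of a journal's references.
counts = [30, 42, 25, 38, 51]
ages = [0, 1, 1, 2, 2, 2, 3, 3, 4, 5, 6, 8, 10, 12, 15]
print(citation_density(counts))   # 37.2 references per article
print(cited_half_life(ages))      # 3: half the references are at most 3 years old
```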

What do the critics of the "Impact Factor" say? The critics of this methodology tend to point their fingers at generalisations, but a few analysts have laid out their observations and offered opinions on the value of this metric. For example, Tam Cam Ha et al. (2006) warned that "As with other measures of multifaceted phenomena, the transition from qualitative to quantitative measures can produce the drawing of inappropriate conclusions", and believe that the Journal Impact Factor should not be over-interpreted, given the risk of misuse. [2] also offers a quote from Stephen Lock, an emeritus editor of the British Medical Journal (one of the journals that pulled me strongly into science and whose quality I could never resist): "it is remarkable that scientists may rely upon such a non-scientific method for the evaluation of the scientific quality of a paper as the impact factor of the journal in which it is published".

But probably one of the ideas that surprised me the most was the introduction of two diagrams displaying the dynamics of the factors that potentially determine both journal quality and research quality (figures 1 and 2 of [2]). If you spend some time reading through the article, you will also find refined alternative proposals that the authors themselves found in the literature and that should be considered by those responsible for the Journal Impact Factor. From these ideas and the accompanying comments, we can immediately understand that biased evaluations, tailor-made solutions and generalisations will continue to define this discussion, leaving its new paradigms somewhat arbitrary and, at times, unfinished business. Other objections to the Impact Factor relate to the system used to classify and categorise journals, and some editors have stated that they would rather calculate impact merely on the basis of their most-cited papers, thereby avoiding low impact factors.

What is new to the Journal Impact Factor? A new idea has recently been adopted that allows journals to be categorised more precisely. It is a simple formula based on the citation relatedness between two journals, used to define how close they are to each other. To offer the best comprehension possible, I decided to quote Dr Garfield's own words directly: "For example, the journal Controlled Clinical Trials is more closely related to JAMA than at first meets the eye. In a similar fashion, using the relatedness formula one can demonstrate that, in 2004, the New England Journal of Medicine was among the most significant journals to publish cardiovascular research."
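
Dr Garfield does not spell out the formula in the passage above, so the Python sketch below is only my own illustration of the general idea: a size-normalised score of how strongly two journals cite each other. The function, parameters and numbers are all hypothetical and should not be read as the actual relatedness formula:

```python
# Illustrative only: one plausible way to score citation relatedness between
# two journals, normalised by how many papers each published. Hypothetical data.

def relatedness(citations_a_to_b, citations_b_to_a,
                papers_a, papers_b, scale=1_000_000):
    """Symmetric relatedness score: citations exchanged between journals A and B,
    normalised by the product of their paper counts and scaled for readability."""
    exchanged = citations_a_to_b + citations_b_to_a
    return scale * exchanged / (papers_a * papers_b)

# Hypothetical example: A cites B 120 times, B cites A 45 times;
# A published 300 papers and B published 800 in the same window.
print(relatedness(120, 45, 300, 800))  # 687.5
```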

What is Dr Garfield's overall interpretation of this type of evaluation? To stay true to the source, it is best to quote the gentleman once again: "The use of journal impacts in evaluating individuals has its inherent dangers. In an ideal world, evaluators would read each article and make personal judgements... Most individuals do not have the time to read all the relevant articles. Even if they do, their judgement surely would be tempered by observing the comments of those who have cited the work. Online full-text access has made that practical."

Is there any direct competitor to the Journal Impact Factor? Yes, there is. I personally do not know whether we may consider it a strong competitor, or whether it is real competition at all or merely a way of providing a fairer understanding of the value of the numerous science journals available nowadays. Nonetheless, bear in mind that in the near future The Toxicologist Today will be talking about the Thomson Scientific database called Journal Performance Indicators. But that is for another time, as I will be back with some refreshing news within a week. Thank you so very much for reading, and even more for commenting, even if it is on LinkedIn. As usual, I would like to end this article with a smile on YOUR lips, so have a nice weekend with this article and the final image, taken from [3].


[1] Garfield, E. (2006). "The history and meaning of the journal Impact Factor". JAMA, 295(1), pp. 91-93.

[2] Ha, T. C., Tan, S. B., Soo, K. C. (2006). "The Journal Impact Factor: Too much of an Impact?". Annals Academy of Medicine, 35(12), pp. 911-916.

[3] VADLO, Life Sciences Search Engine, http://vadlo.com/cartoons.php?id=14, last visited on the 3rd of June 2011, last updated in 2008.
