Vanna Pistotti

The Impact of the "Impact Factor": A Critical Assessment


Abstract

The Impact Factor is the latest creation in a chain of information-retrieval products derived from the Science Citation Index, a bibliographic database produced by the Institute for Scientific Information in Philadelphia and now owned by Thomson, whose records contain the data needed to calculate it. Eugene Garfield, to whom we should credit the genius of the idea, decided in 1963 to publish a bibliography of scientific publications that included in its records the original list of citations reported in the references of each article. Garfield's original idea was not a method for making a "career" in science but a way of collecting all the articles published on a certain topic, starting from the literature cited in similar papers. Born with the intention of enabling a variety of information professionals - librarians, publishers, editors, authors, information analysts - to access and assess key journal data, it is now used to obtain an apparently objective and quantitative measure of an author's scientific output. The ubiquitous and sometimes misplaced use of journal impact factors for evaluation has caused considerable controversy. Have any alternatives been studied? How can a score count for so much when it is understood by so few and its value is so uncertain? In its defence, worshippers of the impact factor say we have no better alternative. Isn't it time for the scientific community to find one? The radical change brought about by the Web for publishing and searching the scientific literature is transforming the classical scheme of printed library collections and private journal subscriptions. Some alternatives have flourished in recent years: some are interesting and under study by information analysts; others had a brief life and died. Most evaluation committees in developing nations currently base promotions, resource allocations, and awards solely on citation indices and the impact factor, particularly in the medical field.
What is more surprising is that most scientists and peer reviewers seem convinced that this is the best method for assessing scientific quality. "What matters absolutely is the scientific content of a paper, and nothing will substitute for either knowing or reading it" (Brenner, 1995).


