Towards more effective KT: Stepping out of the “Impact Factor” (“IF”) Box

Not everything that can be counted counts and not everything that counts can be counted.

 Albert Einstein

Most researchers still lean towards traditionally accepted approaches to knowledge transfer. The few strategies they prefer are mostly influenced by academic standards and rarely by the need to influence policy. The old catch phrase “publish or perish” makes many a researcher aim first (and frequently only) for the holy grail of publication in high impact journals rather than directly impacting policy or practice within their countries.1

In this article I posit that this impact factor (IF) predilection grossly undervalues research evidence, and I provide arguments to support this case. I also propose ways by which researchers can step out of this “IF” box and engage in practices that extract more value from their research.

The impact factor, its origin, measurement and criticisms

The impact factor is a measure of the frequency with which the average article in a journal has been cited in a particular year. It reflects the average number of citations of recent articles published in a journal and is used as a proxy for that journal’s relative importance or rank. Journals with high impact factors are deemed more prestigious than those with lower ones.

The calculation of impact factor is based on a two-year period and involves dividing the number of times articles were cited by the number of articles that are citable.

A = the number of times articles published in 2008 and 2009 were cited by indexed journals during 2010.

B = the total number of "citable items" published in 2008 and 2009.

A/B = 2010 impact factor 
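The calculation above can be sketched in a few lines of Python. The journal figures used here are purely illustrative, not real citation data:

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor: A / B, where A is the number of citations
    received in the target year to articles published in the two preceding
    years, and B is the number of citable items published in those years."""
    return citations / citable_items

# Hypothetical journal: 400 citations in 2010 to articles published in
# 2008-2009, and 160 citable items published in 2008-2009.
if_2010 = impact_factor(400, 160)
print(if_2010)  # 2.5
```

Note that the division hides every caveat discussed below: it says nothing about which articles attracted the citations, or whether they were cited approvingly or as examples of weak work.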

A few reasons why the “IF” undervalues research impact

  • Only citations within a two-year time frame are considered. The IF counts only those citations that a journal has received in the two preceding years. In disciplines where most citations accrue outside this window, even the academic impact of papers goes unnoticed.
  • The nature of the citation is ignored. As long as a paper in a journal has been cited, it contributes to the journal’s impact factor, even if the cited paper is being credited or criticized, refuted or exemplified as weak.
  • Only journals indexed in the source database are ranked. Thus, journals not indexed in Web of Science (which has more than 12,000 titles) don’t have an impact factor and cannot be compared with indexed journals.
  • Review articles are generally cited more often than other types of articles because the former present a compilation of all earlier research. Thus, journals that publish review articles tend to have a higher impact factor.
  • The data used for Impact Factor calculations are not publicly available. The JIF is a product of Thomson Reuters®, a private company that is not obliged to disclose the underlying data and analytical methods. In general, other groups have not been able to predict or replicate the impact factor reports released by Thomson Reuters.
  • Editors can manipulate their journals’ impact factor in various ways. To increase their JIF, they may publish more review articles, which attract a large number of citations, and stop publishing case reports, which are infrequently cited.
  • The nature of research is such that its impact may not be immediately apparent to the scientific community. Some of the most noteworthy scientific discoveries in history were recognized years later, sometimes even after the lifetime of the contributing researchers. No numerical metric can substitute for actually reading a paper, trying to replicate an experiment, or incorporating its lessons into policy and practice to determine its true worth.

  • The impact factor devalues the ethics of human-subject research, in that it prioritizes journal and researcher prestige over the research communities and participants, who are often told that their participation in research will benefit them. Moreover, the majority of policy makers never consult a journal before making a decision.

Since the IF is premised on the institutional background of neoliberal academia, replacing it with more sophisticated metrics can be a futile approach to determining the value of research and therefore its benefit. It is important to explore other, more relevant indicators for this purpose. Indeed, a democratic discussion of the social value of research assessment would be a better and more valid reflection of the actual value of research. 1, 2, 3

How can researchers make their work count beyond the contribution to impact factor?

  • Package context and stakeholder appropriate research products such as policy briefs, newspaper articles, videos and other KT products.
  • Advocate for a change in the academic culture of measuring academic excellence on the basis of publications alone, to include other media such as online platforms and local media channels.
  • Conduct focused disseminations, share products and evaluate the impact of engagements.
  • Engage policy makers and other decision makers in the conceptualization and all stages in the conduct of research.
  • Broaden the timeline expected for research to show impact to periods long after the project has ended and take note of any delayed impact.
  • Ensure that research evidence does not die a natural death at the end of each project but instead takes on another life within the spheres where it can be utilized.
  • Anticipate, pursue, and ultimately document and share the social impacts of research findings.
  • Develop a passion to see research findings used, and invest time and resources towards that end.

In summary, to support better KT, researchers need to step out of the “IF” box and find more context-appropriate strategies for sharing research evidence, beyond approaches that simply pat a few journals on the back and leave the majority of research beneficiaries (the study participants) out in the cold.

  1. Katchburian E (2008). Publish or perish: a provocation. Sao Paulo Medical Journal, 202-203.
  2. The PLoS Medicine Editors (2006). The impact factor game. PLoS Medicine, 3(6): e291.
  3. Smith L (1981). Citation analysis. Library Trends, 30: 83-106.
  4. Sevinc A (2004). Manipulating impact factor: An unethical issue or an Editor’s choice? Swiss Medical Weekly, 134: 410.
