Apart from reproducibility, p-values, and diversity, there is one more thing that troubles the research community: the impact factor, a measure of the average number of citations that articles in a particular journal received over the preceding two years. Journals with higher impact factors are held in higher regard, and so is the content they carry.
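
For readers unfamiliar with the calculation, the two-year impact factor for a given year is, roughly, the citations received that year to items the journal published in the previous two years, divided by the number of citable items published in those two years. The short Python sketch below only illustrates that arithmetic; the figures and the `two_year_impact_factor` helper are made up for the example.

```python
def two_year_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Rough two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable items
    published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 2,400 citations in 2018 to articles from 2016-2017,
# which together contained 480 citable items -> impact factor of 5.0.
print(two_year_impact_factor(2400, 480))  # 5.0
```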

Results do not come easily, so a researcher can publish only a limited number of articles each year. If none of them finds its feet in high-impact journals, he or she may not be considered for promotion, or may even be struck off for future grants. Like chasing likes on an Instagram post, researchers are sometimes driven to be influencers rather than explorers.

The San Francisco Declaration on Research Assessment (DORA) pledged in 2012 to replace the impact factor with something more impartial. Nearly seven years later, there is still no alternative. The impact factor is so entrenched that devising a replacement has proved a daunting task. When the need for recognition collides with time pressure and even one's livelihood, it often leads people to take the shortcut: cheating.

There is no way to judge 

At the end of March, Duke University announced that it would pay the US government a total of $112.5 million over the use of false data in various research grant applications. The consequences are massive because they affect not only the reputations of the institution and the individuals involved, but also the many other papers that had cited the retracted ones.

When good science no longer guarantees a good career, the incentive to do it fades, and lessons on ethics and punishments lose their force, especially when impact or quick success is valued more than the research itself. The Duke University incident is not one of a kind. Search for "scientific misconduct in recent years" and you will be reading a long list of cases, mostly in the realms of biomedical sciences, chemistry, and the social sciences.

Some scientists are trying to redefine the quality of journal papers using new criteria such as methodology, honesty, error rate, logic, and experimental redundancy. But this proposed measurement is trapped in the same dilemma: the number of times the article was cited still sits at the end of the list.
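
As a purely illustrative sketch of such a multi-criteria measurement (the criteria names, weights, and `composite_quality_score` helper below are hypothetical, not any published metric), one could imagine a weighted score in which citation count remains just one term among several:

```python
def composite_quality_score(scores, weights):
    """Hypothetical weighted average of per-criterion scores (each 0-1)."""
    return sum(weights[k] * scores[k] for k in weights) / sum(weights.values())

# Made-up example paper: note that citations still appear as one criterion.
scores = {"methodology": 0.8, "honesty": 1.0, "error_rate": 0.7,
          "logic": 0.9, "redundancy": 0.6, "citations": 0.4}
weights = {"methodology": 2, "honesty": 2, "error_rate": 1,
           "logic": 1, "redundancy": 1, "citations": 1}
print(round(composite_quality_score(scores, weights), 2))
```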

Lessons learnt 

"Challenging" is not enough to describe the culture. Most of these high-impact journals also sit behind a paywall, which means running them is like maintaining a brand: no one wants to be the antonym of an influencer, if there ever is one. At the recent AIMed Breakfast Briefing in Chicago, Dr. Paul J. Chang, professor and vice-chairman of Radiology Informatics at the University of Chicago, commented on the lack of intuitiveness in some medical technologies.

Given the speed at which technology moves, if a medical innovation must go through many rounds of research and development to determine that it is safe for patients, some aspects of it may already be outdated by the time it is ready to face the world. And that does not include the time required for validation and for ensuring every result is of "high impact".

Surely, one can argue that technology has personalized medicine, so perhaps related research should move away from this "one size fits all" approach. But again, who is going to decide what the best fit is for each case?

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.