Apart from reproducibility, p-values, and diversity, one more thing troubles the research community: the impact factor, a measure of the number of citations that articles in a particular journal have received over a two-year period. Journals with higher impact factors are held in higher regard, and so is the content they publish.
Results do not come easily, so a researcher can publish only a limited number of articles annually. If none of them finds
The San Francisco Declaration on Research Assessment (DORA) has pledged to replace
There is no way to judge
At the end of March, Duke University announced that it would pay the US government a total of $112.5 million for the use of false data in various research grant applications. The consequences are massive: the case damages not only the reputations of the institution and the individuals involved but also the many other papers that cited the retracted ones.
There is little incentive to do good science when it no longer guarantees a good career; lessons on ethics and punishments become obsolete, especially when the need for impact or quick success is valued more than the research and technology themselves. The Duke University incident is not the only one of its kind. Search “scientific misconduct in recent years” and you will find a long list of cases, mostly in biomedical sciences, chemistry, and the social sciences.
There are scientists trying to redefine the quality of journal papers under new criteria such as methodology, honesty, error rate, logic, and experimental redundancy.
“Challenging” is not enough to describe the culture. Most of these high-impact journals also sit behind a paywall, which means running them is like maintaining a brand: no one wants to be the antonym of an influencer, if there ever is one. At the recent AIMed Breakfast Briefing in Chicago, Dr. Paul J. Chang, professor and vice-chairman of Radiology Informatics at the University of Chicago
Looking at the speed of our technology, if a medical innovation must go through many rounds of research and development to determine that it is safe for patients, some aspects of it may already be outdated by the time it is ready to face the world. And that does not include the time required to validate results and ensure each one is of “high impact”.
Surely, one can argue that technology personalizes medicine, so perhaps related research should move away from the “one size fits all” approach. But again, who decides what the best fit is for each patient?
A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.