It all began over a decade ago, when anesthetist John Carlisle and his colleagues were discussing results published by a Japanese researcher named Yoshitaka Fujii. Fujii had performed a series of randomized controlled trials (RCTs) to analyze the effects of medications that prevent nausea and vomiting in patients after surgery.
Carlisle and his fellow anesthetists thought the data in these studies were “too clean” to be true: real-world data would not yield results like those Fujii described. They therefore suspected manipulation. RCTs are essential for statistically verifying that the effects of a new medication or product do not arise by chance and are not driven by patients’ pre-existing conditions. In a proper RCT, participants are randomly assigned to either the experimental group, which receives the new medication or product, or the control group, which receives a placebo.
To expose Fujii’s falsified clinical trials, Carlisle examined the differences between baseline variables (e.g., weight, height) across groups, measured before patients received either the medication or a placebo, and calculated how probable those differences would be under genuine randomization. He looked at the p-values, numbers that denote how likely an observed result is to arise by chance. By combining the p-values from many clinical trials, Carlisle could assess how random the group assignments really were. A combined p-value that is too high means the groups are implausibly well balanced, which is suspicious; a very low combined p-value could instead indicate an error in the randomization.
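The idea can be illustrated with a small simulation. The sketch below is not Carlisle’s exact procedure, just a minimal, assumed version of the same logic: compare a baseline variable (here, patient weight) between the two arms of each trial, collect the p-values, and combine them with Fisher’s method applied to 1 − p, so that p-values clustered suspiciously close to 1 (groups that are “too balanced”) produce a tiny combined p-value. All numbers and group sizes are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def baseline_pvalue(group_a, group_b):
    # Two-sample t-test on a baseline variable (e.g. weight in kg),
    # measured before any treatment is given.
    return stats.ttest_ind(group_a, group_b).pvalue

# Genuinely randomized trials: baseline p-values are roughly uniform on [0, 1].
honest_p = [
    baseline_pvalue(rng.normal(70, 10, 30), rng.normal(70, 10, 30))
    for _ in range(50)
]

# Fabricated, "too clean" trials: the two arms are engineered to match
# almost exactly, pushing every baseline p-value toward 1.
base = rng.normal(70, 10, 30)
clean_p = [
    baseline_pvalue(base, base + rng.normal(0, 0.1, 30))
    for _ in range(50)
]

def too_clean_test(pvals):
    # Fisher's method on (1 - p): a tiny result here means the trials
    # are suspiciously well balanced at baseline.
    flipped = [max(1.0 - p, 1e-300) for p in pvals]  # guard against log(0)
    statistic, combined_p = stats.combine_pvalues(flipped, method="fisher")
    return combined_p
```

Running `too_clean_test` on the two sets gives a far smaller combined p-value for the engineered trials than for the honest ones, which is exactly the kind of signal Carlisle looked for across Fujii’s papers.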
Prevalence of fabricated data
Carlisle’s method is not new, nor is it flawless. It requires the variables to be truly independent in order to work. However, many variables (for example, height and weight) are correlated rather than fully independent. Hence, Carlisle faced harsh scrutiny from fellow researchers and journal editors. Nevertheless, he believes what he does helps protect patients.
Fujii was eventually fired from Toho University, where he then worked, and had 183 of his papers retracted. Carlisle moved on to fact-check eight other journals outside his specialty. Other scientists have followed suit, developing their own methods to check the reliability of published data.
Data is precious, but it does not come easily or naturally. In 2017, data was said to have surpassed oil as the world’s most valuable resource. That is why researchers are eager to obtain data, or rather, data that can demonstrate their new solutions work as intended. Spil.ly, a Berlin
On one hand, artificial data allows researchers to carry out research at a larger scale with smaller budgets. On the other hand, it means the developed product may not be fully understood under realistic conditions, nor can we predict how it will react when it is interacting with
A huge price to pay
Whether it is falsified data or dodgy clinical trials, it is challenging to uphold ethics while developing something new. As mentioned in The Bleeding Edge, an investigative medical documentary released last year, most medical professionals would flag the foreseeable side effects or dangers of a novel product. However, we are looking at a $400 billion medical device industry, and artificial intelligence (AI)–related healthcare is projected to reach $6.6 billion by 2021. Driven by these incentives, researchers tend not to openly disclose their affiliations with MedTech companies or any conflicts of interest.
We need more professionals like John Carlisle. But with lax regulation and a tendency to put down individuals who are overtly critical of change, have we realized that we are gambling away patients’ safety and futures in exchange for a scant relationship with the truth, all in the name of innovation?
A science writer with a data background and an interest in current affairs, culture,