If you have been following us, you may realize how much AIMed values data. We dedicated a recent Breakfast Briefing session to discussing it. We offered tips on making data less intimidating in healthcare professionals' daily workflows. We mentioned a data hub and other novel channels that make data storage and sharing less cumbersome. Knowing that the public are concerned about their privacy and worried they may be identified even through de-identified data, we explained in an article why that may not necessarily be the case.

The AI (artificial intelligence) for Good Summit, held in Geneva, Switzerland this May, shared a similar outlook. Attendees of this annual meeting, convened by the United Nations, heard how AI could speed up drug development, enhance personalized medicine, assist in epidemic prevention and support population health. At the same time, they were told that open sources are scarce and that the healthcare data currently available may not be sufficient to develop good AI and machine learning (ML) tools.

A Data Commons 

This is probably a result of the various regulations that currently govern the use and sharing of data. Privacy laws may have protected our interests, but they may not represent our values, because people do wish to have their health data shared, especially if it is for a good cause. Last June, researchers from Stanford University found that 93% of nearly 800 clinical trial participants were willing to have their data shared with scientists, and 82% of them were very or somewhat likely to share their data with scientists at for-profit companies.

These individuals may not be representative of the wider population. Nevertheless, perhaps it is time to begin a dialogue on a Data Commons, or a channel that promotes transparency in data sharing: something through which individuals can find out how their data is being employed, and through which researchers and healthcare professionals can inform the public how the collected data is being used and where it is being stored.

A lack of trust

Individuals probably view consent for data access for research a little differently from the industry. After all, the latter is not subject to the stringency of an Institutional Review Board (IRB). Even so, some form of public education needs to be in place to keep individuals informed of the underlying tension between privacy and the acquisition of their data to develop new technology-driven tools. Most importantly, there is a need to build trust.

However, the Wellcome Global Monitor, a new survey that interviewed more than 140,000 people across 140 countries worldwide, recently found that only 18% of respondents indicated a high level of trust in scientists. This means we may still have a long way to go.

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.