Dr. Anthony Chang, AIMed Founder, Pediatric Cardiologist, and Chief Intelligence and Innovation Officer at Children’s Hospital of Orange County (CHOC), highlighted earlier: “There are more than 20,000 papers on COVID-19 but we are dying of thirst in an ocean of information. We need AI (artificial intelligence) to help us have actionable insights.”

Indeed, the pandemic continues to affect us in unprecedented ways, with hundreds of related studies published daily, both in peer-reviewed journals and as digital preprints. It probably takes researchers more time to sift through and digest this information than to generate effective remedies against COVID-19.

Moreover, these 20,000-odd and ever-increasing publications do not include those already in the pool before the global health crisis. Papers on the SARS outbreak of 2002 and on MERS a decade later may all be relevant for researchers seeking to better understand coronaviruses and pneumonia.

AI for better search results  

Since searching the literature via traditional means may no longer be efficient, Amalie Trewartha and John Dagdelen, a Post-Doctoral Fellow and a Graduate Student Researcher at the University of California, Berkeley, and their colleagues built COVIDScholar, an AI-driven search engine dedicated solely to COVID-19 related work. It enables researchers to pick out fine-grained details, including the drugs used and research methodologies, and recommends corresponding studies to fellow scientists.

Because researchers spend roughly 23% of their time searching for and reading past journal articles, Trewartha and Dagdelen believe the tool will free up some of that time for data analysis and new discoveries. It can also surface connections that human researchers miss during their searches. To build it, Trewartha, Dagdelen, and the team first created multiple web scrapers to collate new studies within 15 minutes of their first appearance online. They then clean the data, ensuring each paper arrives in its best format.
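The article does not describe the pipeline's code, but the clean-up step can be sketched along these lines; the function names, record fields, and cleaning rules below are illustrative assumptions, not COVIDScholar's actual implementation:

```python
import html
import re

def clean_record(record):
    """Normalize one scraped paper record: unescape HTML entities,
    strip stray markup, and collapse whitespace in the text fields.
    (The "title"/"abstract"/"doi" field names are assumed for the sketch.)"""
    cleaned = {}
    for key in ("title", "abstract"):
        text = html.unescape(record.get(key, ""))  # &amp; -> &
        text = re.sub(r"<[^>]+>", " ", text)       # drop leftover tags
        cleaned[key] = re.sub(r"\s+", " ", text).strip()
    cleaned["doi"] = record.get("doi", "").strip().lower()
    return cleaned

def dedupe(records):
    """Keep only the first record seen for each DOI, so the same paper
    scraped from two sources is stored once."""
    seen, unique = set(), []
    for rec in records:
        if rec["doi"] and rec["doi"] in seen:
            continue
        seen.add(rec["doi"])
        unique.append(rec)
    return unique
```

Preprint servers and journals expose the same paper in inconsistent markup, so normalizing and de-duplicating before indexing is what "receiving the paper in its best format" amounts to in practice.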

Machine learning algorithms then take over to categorize and label the papers and mark their relevance to COVID-19. So far, Trewartha, Dagdelen, and the team have gathered over 60,000 papers. They are now improving COVIDScholar so that it not only suggests linked findings and supports searches across different concepts, but also offers quantitative models to help in the study of targeted topics such as protein reactions.
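As a rough illustration of such categorize-and-label logic, here is a keyword heuristic standing in for the team's actual learned models; the topic names and keyword sets are invented for the example:

```python
import re
from collections import Counter

# Illustrative topics and keyword sets -- assumptions for this sketch,
# not COVIDScholar's real taxonomy.
TOPICS = {
    "treatment": {"drug", "antiviral", "remdesivir", "therapy", "dose"},
    "diagnostics": {"pcr", "assay", "antibody", "swab", "detection"},
    "epidemiology": {"transmission", "outbreak", "spread", "incidence"},
}

def label_paper(abstract, threshold=1):
    """Assign every topic whose keywords appear at least `threshold`
    times in the abstract, most relevant topic first."""
    counts = Counter(re.findall(r"[a-z]+", abstract.lower()))
    labels = [(topic, sum(counts[w] for w in words))
              for topic, words in TOPICS.items()]
    labels = [(t, s) for t, s in labels if s >= threshold]
    return sorted(labels, key=lambda pair: -pair[1])
```

A production system would swap the keyword sets for a trained text classifier, but the interface stays the same: abstract in, ranked topic labels out.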

AI for fact-checking  

Meanwhile, the Allen Institute for AI (AI2), a Seattle-based non-profit research institution, has developed an experimental tool, SciFact, to help researchers distinguish myths from facts. For example, one may enter a claim such as “hypertension may lead to complications in COVID-19 patients”, and the tool will lead the user to an array of papers that either support or oppose it. The tool also shows the abstract of each paper and highlights the sentences that agree or disagree with the stated claim.
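The model behind SciFact is neural, but the support-or-refute idea can be illustrated with a toy word-overlap heuristic; the threshold, stop-word list, and negation cues below are assumptions made for this sketch:

```python
import re

STOPWORDS = {"the", "a", "an", "in", "of", "to", "may", "was", "is", "and"}
NEGATION_CUES = {"no", "not", "never", "without", "unlikely"}

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def stance(claim, sentence, overlap_min=3):
    """Toy rationale labeller: a sentence sharing enough content words
    with the claim counts as supporting it, unless it also carries a
    negation cue, in which case it is read as refuting. Real systems
    such as VeriSci use a trained neural model instead."""
    shared = (words(claim) & words(sentence)) - STOPWORDS
    if len(shared) < overlap_min:
        return "not_enough_info"
    if words(sentence) & NEGATION_CUES:
        return "refutes"
    return "supports"
```

The three-way output (supports, refutes, not enough info) mirrors how fact-verification datasets such as FEVER label claim-evidence pairs; only the scoring mechanism here is simplified.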

SciFact was created using VeriSci, a neural network trained on FEVER (Fact Extraction and VERification), an existing fact-checking dataset compiled from Wikipedia. The researchers also refined the tool with a new dataset of over 1,400 scientific claims and 5,000 abstracts, built by randomly sampling papers published in renowned journals and stored in Semantic Scholar, an open database launched and maintained since 2015.

The researchers asked experts to rewrite the annotations in these papers as scientific claims that are either endorsed or contradicted by the literature. The experts also went through the abstracts of these papers and marked the sentences within them that support or disagree with the claims they wrote. Although SciFact accurately retrieved relevant COVID-19 papers in only 23 out of 36 attempts, the researchers believe it is one of the pioneering tools that can offload some of the fact-checking burden from scientists.

Nevertheless, the creators did note that SciFact is not meant for debunking myths, conspiracy theories, or misinformation circulating on social media platforms. It is, after all, an experimental tool for scientists to weigh claims against facts, and subjective judgement was involved when the experts rewrote some of the scientific claims, so the tool should be used with caution.


Author Bio

Hazel Tang: a science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.