The final open forum discussion of the AIMed Pediatric 2020 virtual conference focused on ethics and inequities. The panelists agreed that it is likely to be more challenging to develop and deploy artificial intelligence (AI) driven tools for children than for adult populations. Sara Gerke, Research Fellow in Medicine, AI, and Law at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School, explained that this is probably because there have not yet been many discussions around AI ethics and children.

The unique ethical challenges of pediatric AI

Gerke said more research is needed to highlight the differences between using AI in children and in adults. She felt that difficulties are likely to arise from informed consent (i.e., parental consent is needed to use data from patients under the age of 18) and from finding effective ways to protect children and their privacy. At the same time, there is a need to think about how consent should transition once the patient turns 18. There are many considerations to work through, and it is important to have an official framework or guidance in place.

Dr. Victor F. Gracia, Founding Director of Trauma Services, Professor of Surgery and Pediatrics at the University of Cincinnati, and Co-Director of the Chest Wall Center, echoed the comment. He added that children are among the most vulnerable groups in our society; even the physicians attending to them need to be more resilient than others to bear the weight of certain decisions. Yet he believes children are perhaps the group most in need of AI, precisely because they are so delicate.

The other concern panelists raised was data quality and availability. Pediatric patients are a relatively heterogeneous cohort. As mentioned in the earlier session on data, there are drastic differences in what is considered "normal" on plain radiography for a 2-year-old, a 12-year-old, and an 18-year-old patient. Infant patients also tend to move quickly, have faster heart rates, and span a range of heights and weights that can affect how certain measurements turn out. As a result, developers are likely to find themselves with many small pools of data, each sitting in its own silo.
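To make the age dependence concrete, here is a minimal sketch (not something presented at the panel) of how a developer might normalize a vital sign against age-specific reference bands before modeling. The `REFERENCE_RANGES` table and `age_adjusted_z` helper are hypothetical, and the numbers are illustrative placeholders rather than clinical reference values.

```python
# Minimal sketch: age-stratified normalization of a pediatric vital sign.
# The reference bands below are illustrative placeholders, NOT clinical values.

REFERENCE_RANGES = {
    # (min_age_years, max_age_years): (mean_heart_rate_bpm, std_dev_bpm)
    (0, 1): (130, 20),    # infants: faster resting heart rates
    (1, 6): (110, 20),
    (6, 12): (95, 15),
    (12, 18): (75, 15),
}

def age_adjusted_z(heart_rate: float, age_years: float) -> float:
    """Convert a raw heart rate into a z-score relative to the reference
    band for the patient's age, so 'normal' means the same thing for a
    2-year-old and an 18-year-old."""
    for (lo, hi), (mean, std) in REFERENCE_RANGES.items():
        if lo <= age_years < hi:
            return (heart_rate - mean) / std
    raise ValueError(f"no pediatric reference band for age {age_years}")

# The same 130 bpm reading is unremarkable for an infant but markedly
# elevated for a teenager:
print(age_adjusted_z(130, 0.5))   # 0.0
print(age_adjusted_z(130, 15.0))  # ~3.7
```

The same raw reading lands in very different places depending on age, which is exactly why pooling pediatric data without age stratification is risky.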

Moreover, Dr. Gracia noted the implicit and cognitive biases that AI systems may carry when they interact with patients, particularly those from ethnic minorities or from lower socio-economic or deprived neighborhoods. Delayed diagnosis and inadequate response to treatment are not uncommon in pediatrics, because some individuals assume children are malingering, or because caregivers are unable to convey the condition their children are in.

Dr. Addison Gearhart, Clinical Fellow in Pediatrics at Boston Children's Hospital, said language can also be a barrier, and information may be lost in translation. Even so, she is confident that as long as the conversation is framed around helping other patients, caregivers are usually very supportive of sharing data. More important, she said, is to involve different stakeholders and to build a data infrastructure or sharing framework that most of them can agree with.

AI may be helpful in early detection of child abuse

An audience member asked the panelists whether they think AI could play a role in the early detection of child abuse, or even in preventing it. Dr. Gearhart replied that the question reminded her of the role of wearables in the outpatient setting: parents, especially first-time parents, are putting bracelets on their babies to monitor vital signs. Perhaps an algorithm could be built in to detect factors associated with abuse and bring these children to the emergency room sooner.

On the other hand, Dr. Gracia cited two research studies: one on the use of natural language processing (NLP) to identify adolescents at risk of suicide, and the other on the use of AI to predict who is at high risk of violence in the Chicago area. He said that even though these studies do not directly address child abuse, they may give developers some insight into how such a tool could be created.

Overall, the panelists agreed this is a challenging topic and that they had no further insights into it. Nevertheless, a quick internet search reveals there has been an effort to use AI to tackle child abuse in the UK. Back in 2014, the Metropolitan Police created the Child Abuse Image Database (CAID), which holds more than 13 million images, with half a million new images added every six months.

UK police knew there was a relentless demand for child sexual abuse images and videos, and that a single abuse case can span terabytes of data stored across smartphones, laptops, and external hard drives. So they decided to use image recognition to speed up the search for abuse images and reduce the manual effort required. The AI matches newly seized images against the CAID database to identify whether a minor was involved and where an image was taken.
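The article does not say how CAID's matching works under the hood, but a common technique for this kind of near-duplicate lookup is perceptual hashing, in which visually similar images produce hashes within a small Hamming distance of each other. Below is a minimal sketch using the open-source Python imagehash library; the file paths, the `known_hashes` index, and the distance threshold are illustrative assumptions, not details of the actual CAID system.

```python
# Minimal sketch of flagging near-duplicates of known images with
# perceptual hashing. This illustrates the general technique only;
# it is NOT how CAID is actually implemented.
from PIL import Image
import imagehash

# Hypothetical index of perceptual hashes for previously catalogued images.
known_hashes = {
    imagehash.phash(Image.open(path))
    for path in ["catalogued/img_001.png", "catalogued/img_002.png"]
}

def is_known(image_path: str, max_distance: int = 5) -> bool:
    """Return True if the image is a near-duplicate of a catalogued one.
    Perceptual hashes change little under resizing or re-compression,
    so a small Hamming distance signals a likely match."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= max_distance for known in known_hashes)

print(is_known("seized/photo.jpg"))
```

Unlike cryptographic hashes, perceptual hashes change only slightly when an image is resized or re-compressed, which is what makes matching re-circulated copies against a database feasible.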

Developers admitted that the tool is still in its early days; they are now working on recognition of knives and guns so that it can also help tackle other crimes. More information about AIMed Pediatric 2020 can be found here. You may also re-visit Day 1 of the virtual event on demand here and Day 2 here.

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.