In a world first, the NHS in England is trialling a new approach to the ethical adoption of AI in healthcare, aiming to eradicate biases in artificial intelligence.

Algorithmic Impact Assessments (AIAs), designed by the Ada Lovelace Institute, will be piloted to help researchers and developers assess the possible risks and biases of AI systems for patients and the public before those systems can access NHS data.

While artificial intelligence has the potential to help health and care workers deliver better care, it could also exacerbate existing health inequalities if concerns such as algorithmic bias are not addressed.

Speaking at the time of the announcement, UK Innovation Minister Lord Kamall said:

“While AI has great potential to transform health and care services, we must tackle biases which have the potential to do further harm to some populations as part of our mission to eradicate health disparities. This pilot once again demonstrates the UK is at the forefront of adopting new technologies in a way that is ethical and patient-centred. By allowing us to proactively address risks and biases in systems which will underpin the health and care of the future, we are ensuring we create a system of healthcare which works for everyone, no matter who you are or where you are from.”

The pilot complements ongoing work by the ethics team at the NHS AI Lab to ensure that datasets for training and testing AI systems are diverse and inclusive. Taken together, these efforts are intended to deliver better health outcomes for everyone, and in particular for minority groups.

To ensure best practices are embedded in future technologies, the NHS will support researchers and developers to engage patients and healthcare professionals at an early stage of AI development when there is greater flexibility to make adjustments and respond to concerns. Supporting patient and public involvement as part of the development process will lead to improvements in patient experience and the clinical integration of AI.

It is hoped that, in the future, AIAs could increase the transparency, accountability and legitimacy of the use of AI in healthcare.

Brhmie Balaram, Head of AI Research & Ethics at the NHS AI Lab, said:

“Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market. The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.”

Following a commission from the NHS AI Lab, the Ada Lovelace Institute has published research that maps out a detailed, step-by-step process for using AIAs in the real world. It is designed to help developers and researchers consider and account for the potential impacts of proposed technologies on people, society and the environment.

Octavia Reeve, Interim Lead, Ada Lovelace Institute, said:

“Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit. We hope that this research will generate further considerations for the use of AIAs in other public and private-sector contexts.”

The NHS AI Lab introduced the AI Ethics Initiative to support research and practical interventions that complement existing efforts to validate, evaluate and regulate AI-driven technologies in health and care, with a focus on countering health inequalities.
