Nothing beats the excitement of designing and building a machine learning model, but it has become more obvious in recent years that the process can be a tedious one too. The call for more inclusive, accountable, and transparent artificial intelligence (AI) has never been louder. Yet, in the absence of a so-called "gold standard", one's AI creation is probably only as democratic, equitable, and honest as everyone else's.
Participatory methods aim to create better AI
Participatory approaches have gained attention in recent years because they bring those who interact with, or are affected by, an algorithm into the design process — for example, involving medical professionals in the development of a sepsis detection tool. Nevertheless, the participatory method has flaws of its own and should not be regarded as a cure-all. Chief among them: not everyone's effort is recognized.
In medicine and healthcare, patients are often unaware that their data are being used to build a machine learning model. They are also kept in the dark when physicians deploy AI for a second opinion on clinical decisions or predictions. Beyond the patients, there are administrators and research assistants who collate, clean, and refine the data before it can be used for model development.
There are also senior advisory or consulting roles that provide the team with valuable technical or business insights. Some of these contributions are not necessarily long term, so those who were involved in the early stages of development are very likely to be forgotten thereafter.
Ultimately, credit may go only to those who built the models. Systematic participation washing is not new: according to New York University sociologist Mona Sloane, it may have permeated the field of machine learning for the past 30 years, while anthropologist Mary Gray has termed this behind-the-scenes labor "ghost work".
Important notes for health executives
Sloane urged the AI community to start practicing "participation as justice", which means establishing a tight-knit, diverse group of individuals who follow the design work from day one, rather than engaging contributors on an ad hoc basis. She believes the concept "has social and political importance, but capitalist market structures make it almost impossible to implement well". Those who have contributed should be compensated for their effort, whether the reward is monetary or non-monetary.
Interested health executives should be aware that AI development and deployment is a long process; it does not stop once the final product ships. In fact, an effective algorithm should evolve with its context over time. At the same time, consent should be part of the design: the machine learning system should ask users for permission before using any of their details, and they should be able to opt out at any time.
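To make the consent requirement concrete, here is a minimal sketch of what a consent-gated data pipeline might look like. The class and function names are illustrative assumptions, not a real library; a production system would persist consent records and log every change for auditability.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Hypothetical registry tracking whether each user permits data use."""
    _consent: dict = field(default_factory=dict)

    def grant(self, user_id: str) -> None:
        self._consent[user_id] = True

    def opt_out(self, user_id: str) -> None:
        # Users can withdraw permission at any time, as the design demands.
        self._consent[user_id] = False

    def allows(self, user_id: str) -> bool:
        # Default to no consent: data use must be explicitly opted into.
        return self._consent.get(user_id, False)

def consented_records(records: list, registry: ConsentRegistry) -> list:
    """Keep only records whose owners currently consent to data use."""
    return [r for r in records if registry.allows(r["user_id"])]

# Example: Bob withdraws consent before the model is trained,
# so his record is excluded from the training set.
registry = ConsentRegistry()
registry.grant("alice")
registry.grant("bob")
registry.opt_out("bob")

records = [{"user_id": "alice", "x": 1}, {"user_id": "bob", "x": 2}]
usable = consented_records(records, registry)
print(usable)
```

The key design choice is that consent is checked at the moment data is used, not only at collection time, so a later opt-out actually removes the record from subsequent training runs.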
Sloane added that "people are more likely to stay engaged in processes over time if they're able to share and gain knowledge, as opposed to having it extracted from them". This is difficult to achieve when AI design is a proprietary process, so she encouraged constant reiteration of the purpose and of what the design team would like to achieve. At the end of the day, learn from past mistakes and do not give in to complexity. Addressing the hard questions — equality, fairness, and representation — right from the start goes a long way.