The "black box" problem occurs when certain forms of artificial intelligence (AI), such as deep learning or neural networks, become too complex to explain, or not transparent enough for one to trace how a solution was derived. According to a recent article written by Katarzyna Szymielewicz, a Polish lawyer, activist, and co-founder and head of the anti-surveillance group Panoptykon Foundation; Daniel Leufer, a philosopher and policy analyst; and Agata Foryciarz, a computer science PhD student at Stanford, AI opacity may also be deliberate: developers may not wish their algorithms to be tampered with, or may want to protect certain trade secrets.

Opacity on purpose

For example, iBorderCtrl (Intelligent Portable Control System) was a project funded by the European Commission’s Horizon 2020 initiative to deliver an AI-driven lie detector for border police within the European Union. The project came under heavy criticism when the European Commission and iBorderCtrl’s developers refused to make public its ethics assessment, which would have shown whether the technology risked discrimination and injustice.

The Council of Europe later recommended human rights impact assessments (HRIAs), which would bar public authorities from acquiring AI systems from companies unwilling to disclose their review processes or the ways in which their algorithms are developed. Even so, the public remains largely in the dark, not knowing when and where some of these AI systems are deployed.

The authors said understanding AI is the first step toward addressing AI opacity. Put simply, AI models are statistical models, and some of the underlying algorithms have been studied since the 1950s. What marks the ongoing AI hype is the explosion of available data brought about by technological advancement. At the same time, the authors stressed that understanding AI does not mean knowing every step of its process.

It is more a matter of understanding “the choices, assumptions and trade-offs made by the people who designed this system – which all shape the behavior of the algorithm”.

Transparency before an algorithm is built

Whether an algorithm is meant for prediction or for decision-making, before it is built, the people behind it have to define its goal. Along the way, they also have to make assumptions and exercise choices in selecting the data used to train, validate, and test the algorithm. None of these steps has anything to do with AI at all. Suppose, for example, that a developer’s goal is to design a model that helps a particular healthcare system select individuals to enroll in a health management program.

US hospitals rolled out something similar. However, the model was later found to favor healthier White patients over Black patients, because patients with high health needs were wrongly assumed to be those who generate higher healthcare costs. In reality, US hospitals tend to spend less on Black patients with the same level of need, so cost turned out to be a poor proxy for need.
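To see how that design choice plays out, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration (the 0.7 spending factor, the synthetic "need" and "cost" variables); it is not the actual hospital model. It trains a regression on cost as a proxy for need, then enrolls the patients with the highest predicted costs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B (hypothetical groups)
need = rng.gamma(2.0, 1.0, n)          # true health need, identical across groups

# The spending gap described in the article: at equal need, less is spent on group B.
spend = np.where(group == 1, 0.7, 1.0)
prior_cost = need * spend + rng.normal(0, 0.1, n)    # last year's cost (the feature)
future_cost = need * spend + rng.normal(0, 0.1, n)   # next year's cost (the label)

# The fateful design choice: train on COST as a proxy for NEED.
model = LinearRegression().fit(prior_cost.reshape(-1, 1), future_cost)
scores = model.predict(prior_cost.reshape(-1, 1))

# Enroll the top 10% of patients by predicted cost.
enrolled = scores >= np.quantile(scores, 0.90)
for g, name in [(0, "group A"), (1, "group B")]:
    m = group == g
    print(f"{name}: enrollment rate {enrolled[m].mean():.1%}, "
          f"mean need of those enrolled {need[m & enrolled].mean():.2f}")
```

On synthetic data like this, the group whose care is under-funded is enrolled at a far lower rate, and those of its members who are enrolled are, on average, sicker than their counterparts: the same pattern the article describes.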

The authors believe these are the trade-offs of AI. As much as we might wish for an algorithm that addresses a real-world problem, not every real-world scenario can be represented in a quantifiable way. Some circumstances have to be simplified, and assumptions have to be made about others. Besides, developers do not always get the kind of data they need to produce the solution they want.

“Even the best model will not perform well if it was trained on datasets that are only remotely connected to the phenomena the system is supposed to predict,” the authors wrote. Because of the many decisions and agendas that come into play before an AI system is created, the “real black box” is most probably hidden behind the algorithm rather than embedded within it. As such, the authors argue that transparency should be practiced right from the beginning, rather than only when something goes wrong or after the AI system has been developed.
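That warning about mismatched training data can be shown in a few lines. The sketch below is hypothetical: a linear "proxy" relationship stands in for the remotely connected training data, and a different, quadratic relationship stands in for the phenomenon the system is actually supposed to predict.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# "Remotely connected" training data: a proxy with its own (linear) relationship.
x_proxy = rng.uniform(0, 1, (2_000, 1))
y_proxy = 3.0 * x_proxy[:, 0] + rng.normal(0, 0.2, 2_000)

# The phenomenon the system is actually supposed to predict behaves differently.
x_real = rng.uniform(0, 1, (2_000, 1))
y_real = 3.0 * x_real[:, 0] ** 2 + rng.normal(0, 0.2, 2_000)

model = LinearRegression().fit(x_proxy, y_proxy)
print(f"R^2 on the proxy data it was trained on: {r2_score(y_proxy, model.predict(x_proxy)):.2f}")
print(f"R^2 on the real phenomenon:              {r2_score(y_real, model.predict(x_real)):.2f}")
```

The model scores well on the data it was trained on and noticeably worse on the real target, and nothing in the code itself signals the mismatch, which is exactly why the choices made before training matter.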

*

Author Bio

Hazel Tang is a science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.