The Artificial Intelligence (AI) Blackbox: Colloquially, we feed a machine enough data and it churns out algorithms that assist us. AI succeeds at spotting patterns that humans fail to notice or take a long time to derive. However, we have little idea of what goes on in between. At the moment, AI is not transparent enough for us to trust it, especially in sensitive, safety-critical fields such as investment or medicine.

To some, the problem is unavoidable because the code that goes into an AI is complex. The “spaghetti code” of the Toyota Camry is a fine example. A decade ago, the vehicle was involved in several life-threatening accidents. It took NASA and outside experts 28 months to untangle the code and establish that the software supporting the driver’s control of the car’s braking had malfunctioned.

Code turns messy because it piles up over the years: new code for new features is added to existing software. If an error in a single line of code can have consequences this severe and take this long to correct, imagine how confusing the code becomes in sophisticated machines with many different functions. The likely result is an AI that no one can understand. Recently, researchers have been trying to open the black box by tracing the pathways an AI takes as it develops its algorithms.

Recent research 

A group of researchers from the Massachusetts Institute of Technology (MIT) Lincoln Laboratory’s Intelligence and Decision Technologies Group has developed a neural network that breaks its reasoning into steps the way a human would. Known as the Transparency by Design Network (TbD-net), the model visually presents its reasoning process as it solves problems involving images. The team believes it is important to make AI transparent so that humans can understand, for instance, how a self-driving car distinguishes a pedestrian from a traffic sign. This helps researchers correct any wrong assumptions the AI has made.
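
To make the idea of a visual reasoning trace concrete, here is a minimal, hypothetical sketch, not TbD-net’s actual code or API: a modular pipeline in which every reasoning step emits an attention mask that a human can inspect. The module names, the toy image and the question are all invented for illustration.

```python
# Hypothetical illustration (not the actual TbD-net code): a modular pipeline
# in which every reasoning step emits an attention mask a human can inspect.
import numpy as np

def attend_color(image, rgb, tol=60):
    """Return a soft mask highlighting pixels close to a target colour."""
    dist = np.linalg.norm(image.astype(float) - np.array(rgb, dtype=float), axis=-1)
    return np.clip(1.0 - dist / tol, 0.0, 1.0)

def attend_region(mask, top_half=True):
    """Refine an incoming mask to the top or bottom half of the image."""
    out = mask.copy()
    h = mask.shape[0] // 2
    if top_half:
        out[h:, :] = 0.0
    else:
        out[:h, :] = 0.0
    return out

def answer_exists(mask, threshold=0.5):
    """Final step: does anything salient remain in the attended region?"""
    return bool((mask > threshold).any())

# Toy "image": a red square in the top-left corner of a 64x64 RGB canvas.
image = np.zeros((64, 64, 3), dtype=np.uint8)
image[5:20, 5:20] = [255, 0, 0]

# Question: "Is there a red object in the top half of the image?"
steps = []
mask = attend_color(image, rgb=(255, 0, 0))   # step 1: attend to red things
steps.append(("attend_red", mask))
mask = attend_region(mask, top_half=True)     # step 2: restrict to the top half
steps.append(("attend_top_half", mask))
answer = answer_exists(mask)                  # step 3: existence check

# The per-step masks are the "visual thought process" a human can audit.
for name, m in steps:
    print(f"{name}: {int((m > 0.5).sum())} attended pixels")
print("answer:", answer)
```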

The Georgia Institute of Technology, in collaboration with Cornell University and the University of Kentucky, has developed an AI agent that uses everyday language to explain its rationale and motivation in real time. The study took the form of a video game in which the AI must make decisions to achieve its goal. Both human confederates and the AI generated rationales to justify each move, and the rationales were then ranked. The researchers believe the study sheds light not only on how people perceive such explanations but also on which styles of rationale they prefer. Ultimately, if AI is to be democratized, it should be accessible to people of all technical backgrounds.
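
As a rough illustration of that protocol, and not the Georgia Tech team’s actual game or code, the sketch below has a toy agent pick a move, emit a plain-language rationale, and then has a simulated rater rank it against a confederate’s rationale. The game, policy and rationale templates are invented.

```python
# Hypothetical sketch of the study setup: an agent picks a move in a toy
# grid game and emits a plain-language rationale; rationales from the agent
# and from human confederates are then ranked by study participants.
import random

def choose_move(player_x, goal_x):
    """Greedy toy policy: step toward the goal column."""
    return "right" if goal_x > player_x else "left"

def agent_rationale(move, player_x, goal_x):
    gap = abs(goal_x - player_x)
    return f"I moved {move} because the goal is {gap} squares in that direction."

player_x, goal_x = 2, 7
move = choose_move(player_x, goal_x)

candidates = {
    "agent": agent_rationale(move, player_x, goal_x),
    "human_confederate": "I went right so I wouldn't get stuck near the left wall.",
}

# In the study, human raters rank the rationales; here we simulate one rater.
ranking = sorted(candidates, key=lambda k: random.random())
print("move:", move)
for rank, source in enumerate(ranking, start=1):
    print(f"{rank}. [{source}] {candidates[source]}")
```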

In a separate research project, scientists from SRI International have developed an explainable AI system that generates output and explains how it arrived at that output. Like the other teams mentioned above, the researchers believe that knowing what goes on inside an AI is key to understanding it.

The hidden challenges

The AI Blackbox makes it extremely challenging for humans to trust a machine. For example, in 2015, a group of researchers at Mount Sinai Hospital in New York created Deep Patient, an AI platform built using data from 700,000 patients. While it was effective at assisting physicians by giving them insights, users were skeptical of its ability to predict the onset of psychiatric disorders such as schizophrenia, something even experienced psychiatrists find hard to detect.
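
The trust problem is easier to see in code. The sketch below uses synthetic data and a plain scikit-learn network, not Mount Sinai’s actual pipeline: a small neural model turns raw patient features into a risk score, and the only artefacts left to inspect afterwards are weight matrices, which carry no clinical meaning a physician could act on.

```python
# Minimal sketch (synthetic data, not the Deep Patient pipeline) of why such
# a model is hard to audit: raw features go in, a risk score comes out, and
# the learned weights in between explain nothing in clinical terms.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic "records": 1000 patients x 50 coded features (labs, diagnoses, ...).
X = rng.normal(size=(1000, 50))
# Synthetic outcome loosely tied to a hidden combination of features.
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=1000) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X, y)

new_patient = rng.normal(size=(1, 50))
risk = model.predict_proba(new_patient)[0, 1]
print(f"predicted risk: {risk:.2f}")

# The only things available for inspection are weight matrices --
# nothing that says *why* this patient scored high or low.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
```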

Healthcare professionals are trained in evidence-based practice. An opaque system like this is not going to gain their trust, as they are concerned about its safety and the possible impact when things go wrong.

As a result, two schools of thought have emerged. One aims to build AI from rules and logic, so that its inner mechanism becomes transparent simply by examining its code. The other is inspired by biology and has the AI learn from observation and experience, like a small child. However, most believe there is no easy way out because, at the end of the day, some thoughts and decisions cannot be explained and may simply be instinctual.
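
A hedged sketch of the first school of thought: when the decision logic is written as explicit rules, the explanation is simply the rule that fired. The scenario and thresholds below are made up purely for illustration.

```python
# Hypothetical rule-based decision (the first school of thought): the
# "explanation" is just the rule that fired. All thresholds are invented.
def screen_investment(volatility, leverage, years_of_data):
    """Transparent screen with a human-readable trace for every outcome."""
    if years_of_data < 3:
        return False, "rejected: fewer than 3 years of historical data"
    if volatility > 0.30:
        return False, "rejected: annualised volatility above 30%"
    if leverage > 2.0:
        return False, "rejected: leverage above 2x"
    return True, "accepted: all rules passed"

decision, reason = screen_investment(volatility=0.35, leverage=1.5, years_of_data=5)
print(decision, "-", reason)
```

A learned model, like the Deep Patient sketch above, offers no such trace, and that is exactly the tension between the two camps.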

Author Bio

Hazel Tang

A science writer with a data background and an interest in current affairs, culture and the arts; a non-med from an (almost) all-med family.