Last week, artificial intelligence (AI) achieved yet another breakthrough. A neural network built by synthetic biologist Jim Collins and his research team at the Massachusetts Institute of Technology (MIT) uncovered a completely novel antibiotic, halicin, from scratch. Although this is not the first time AI has been employed in antibiotic discovery, the system is notable for predicting molecular function without relying on assumptions about how drugs work or on chemical-group labels. This means it can learn patterns that even human experts are blind to.

Researchers trained the neural network on 300 approved antibiotics, 800 natural products from plant, animal and microbial sources, and more than 2,300 molecules with known antibacterial activity, in order to identify compounds that hinder the growth of Escherichia coli (E. coli). They also validated the model in animal tests. Bacterial resistance to antibiotics is a growing concern, as scientists predict resistant infections may kill 10 million people annually within the next three decades. As such, there is great urgency to identify new, more powerful antibiotics to meet that challenge.
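To make the idea concrete, the sketch below shows the general shape of such a supervised screen: train a model on molecules labelled by whether they inhibit E. coli growth, then use it to rank unseen compounds. This is only an illustration, not the MIT team's actual system, which learns features directly from molecular structure; here the fingerprints and labels are random placeholders, and a real pipeline would compute fingerprints from molecular structures with a chemistry toolkit and use measured growth data.

```python
# Illustrative sketch of an antibacterial-activity screen (not the study's model).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_molecules, n_bits = 2300, 1024  # ~2,300 training molecules, 1,024-bit fingerprints
X = rng.integers(0, 2, size=(n_molecules, n_bits)).astype(float)  # placeholder fingerprints
y = rng.integers(0, 2, size=n_molecules)                          # 1 = inhibits E. coli growth

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A small feed-forward network standing in for the deep model used in the study.
model = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# Score a library of unseen compounds and surface the highest-ranked candidates.
library = rng.integers(0, 2, size=(500, n_bits)).astype(float)
scores = model.predict_proba(library)[:, 1]
top_candidates = np.argsort(scores)[::-1][:10]
print("Top-ranked candidate indices:", top_candidates)
```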

What’s wrong with affective computing?

However, such success is not found in every corner of AI research. Those working in affective computing now face a dilemma. On one hand, emotion AI is used commercially to analyze individuals' gestures and facial expressions: to screen job applicants, monitor students in classrooms, and detect potential deceit in courtrooms. On the other hand, there are insufficient peer-reviewed studies and evidence to show that these algorithms will not lead to discrimination or even injustice. Although the emotion AI market is likely to reach $25 billion in three years' time, some researchers are concerned their work may be devalued, given the absence of regulation over its use.

Fundamentally, the debate is over whether emotions can be evaluated and interpreted uniformly, or whether they are subject to contextual and cultural influences. Aleix Martinez, Professor of Electrical and Computer Engineering at The Ohio State University, and his research team built an algorithm to study the relationship between facial muscle movements and a person's emotions, and presented their findings at the annual meeting of the American Association for the Advancement of Science in Seattle on 16 February. Martinez and his team found that, most of the time, facial expressions do not tell the full story of a person's emotions, and that it is dangerous to use them as a basis for determining someone's actions or motives.

Other affective computing researchers are aware of this too. They believe emotion recognition involves more than reading facial expressions, and that related research should encompass posture, gait, and even physiological and biometric information. Unfortunately, most commercial AI on the market only estimates how emotions are perceived by others; it says nothing about a person's internal thoughts and experiences.

How should affective computing be regulated?

Illinois's Artificial Intelligence Video Interview Act took effect on 1 January. The first of its kind in the US, the law requires companies to notify job applicants that AI will be used during interviews, explain how they will be assessed by the technology, and describe how their privacy will be protected. Last November, the Electronic Privacy Information Center (EPIC) filed a complaint asking the Federal Trade Commission to investigate whether the recruiting company HireVue has complied with basic standards in using AI for decision-making.

Some researchers think affective computing echoes lie detectors, which private companies should be barred from using. Even when such tools are employed, companies should disclose to the affected individuals why and how the technology was used and what its limitations are. At the same time, the public should be educated on the differences between research and commercial use of new technologies. What should be discouraged is the unregulated use of inadequately validated AI in commercial settings.

Ultimately, there is a need to ensure sensitivity and uphold a balance of power. Those who use the technology should not feel entitled to neglect the rights of those the AI is being used on.

*

Author Bio

Hazel Tang: A science writer with a data background and an interest in current affairs, culture, and the arts; a non-med from an (almost) all-med family. Follow on Twitter.