Renowned AI expert and Stanford computer science professor Dr. Fei-Fei Li recently recalled the very first research paper she read as a first-year PhD student at Caltech back in 2000. “It was a seminar paper on real-time face detection,” she remembered. “Looking back, when we were discussing it in class, nobody, including myself, would have thought about privacy, fairness, human rights, racial profiling, and all the other important societal topics and consequences that come with it. We were just excited about what computer vision algorithms were capable of.”

Noting how much machine learning has grown over the last two decades and the rise of facial recognition, she adds, “We cannot fault the researchers behind that seminar paper I read back then, because we could not imagine how technology would evolve down the road. But now that we have witnessed all these consequences, we want the future of these technologies to be much more hand-in-hand with considerations around humans.”

Dr. Li rose to prominence after creating ImageNet, a large visual database designed to train visual object recognition software, before joining Google Cloud as its Chief Scientist of AI/ML in 2017, later becoming Vice President. She subsequently co-founded AI4ALL, a nonprofit organization dedicated to boosting diversity and inclusion in the field of AI.

Throughout her meteoric rise, she has maintained a humble sense of responsibility, insisting that the next phase of AI needs to consider all aspects of human society and individual lives. It’s why, when Dr. Li co-founded the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University in 2019, she genuinely wanted it to be an interdisciplinary research and education organization.

“Studies have shown that when diverse groups of people work on any problem, the solution is much more innovative,” explains Dr. Li. “At Stanford, we believe the next generation of students should be bilingual: bilingual in technology and human considerations, because the two disciplines intertwine and touch each other at many points, like a double helix.”

When students participate in HAI initiatives such as the Partnership in AI-Assisted Care (PAC), which leverages AI to solve challenging healthcare problems, Dr. Li has them go to clinics and shadow physicians. Students also attend a weekly meeting with five to six clinicians and have constant conversations with them. “My instructions to the students are: ‘Forget about your algorithms. Just soak in that human experience; look at the human vulnerability and heroism.’ I think this is a small step toward helping students develop that fuller body of knowledge in both technology and the human condition.”

In the words of philosopher Shannon Vallor, there are no independent “machine” values: machine values are human values. So, Dr. Li believes the role of HAI is to create a healthy ecosystem for important dialogues to take place. She hopes young technologists who go on to become C-suite leaders will hire ethicists onto their teams, speak with vulnerable groups, and work with local and national policymakers, so that AI will be a field designed by humans, for humans.

“Our algorithms now are, by and large, black boxes,” she says. “They are not explainable, and their robustness and safety are not parameterized and well understood. This erodes trust, and it is a shaky position we need to solidify.”

As such, she emphasizes that human issues cannot be an afterthought; they need to be baked into the design of technical systems. That starts with where one gets the data, how one annotates and uses it, how one interprets the results, and how one builds in human factors so that people can interact with the system when there is conflict. Ultimately, an ethical framework needs to be embedded in the process, and technologists cannot do it alone; everything demands collaboration.

Dr. Li is quick to add, “If we have that collective scientific curiosity about pushing to create machines that mimic that kind of mental image of human intelligence, while at the same time understanding humans, we can make humanity better in so many ways.”

photo credit: Drew Kelly/Stanford HAI