How can we ensure that intelligent machines have a human centered design?
Experience shows that needfinding (the act of defining problems) requires empathy, a feeling of involvement with a person in need. Empathy is an ability that allows us to comprehend the situations and perspectives of others, both imaginatively and affectively (Rogers 1975). It is rare that a designer will see a problem if she does not relate to the user in some personal or professional way.
In the healthcare design classes I teach at Stanford’s Hasso Plattner Institute of Design, interdisciplinary students from schools across the university come together to learn design thinking methodology. A substantial part of every class is building empathy for our users.
Developing empathy for our users requires research: an investment of time and energy to deeply understand a particular group of people and any peripheral stakeholders involved. I train my students to shadow, interview, and, in some cases such as medical device design, engage in hands-on training. Each of my students comes to design thinking methodology with their own background and perspective.
I depend on student diversity to bring varying perspectives to a particular topic. An engineering student may be more likely to identify an ergonomic need, a humanities student may clearly see educational needs, and a computer science PhD might be sensitive to data gathering and automation needs. Needfinding is therefore a highly subjective activity, influenced by the finder’s current state of being, motivation, and point of view.
As a practicing designer for the past 20 years, I believe the single most important phase of the design process is research and synthesis: the work of defining the problem. (Ideation, prototyping, testing, and iterating are the subsequent phases of the design process.)
When we spend time to gather information and make meaning from the data, we begin to identify with our users, feel emotions around their predicament, and become passionate about helping them solve their problems.
What will human-centered AI value?
I tell my students that if they don’t find themselves unusually curious, saddened, or angered by a problem, to the point where it keeps them up at night, their solutions will probably never see the light of day. Passionate, laser-focused human energy propels meaningful solutions. Having emotions is uniquely human, and emotion is what drives human-centered innovation.
In contrast, an artificially intelligent machine has no emotions to guide its process or motivation. A machine’s understanding is simplistic compared to a human’s. Artificial intelligence (AI) cannot reach human-level creative problem solving without making the intuitive leaps that let us extrapolate from the known to the unknown. Our personal backgrounds, education, and values color the way we synthesize the data we collect during research, and an AI machine has none of these traits.
Today there is much discussion about creating AI that is “aligned with human values.” I would argue that this is impossible, since different people have different values. AI safety pioneer Eliezer Yudkowsky coined the term “friendly AI”: AI whose goals are aligned with ours. In order for an AI to figure out our goals, it must observe what we do and understand why we do it: a tall order for a computer, with myriad opportunities for misunderstanding.
Why people do what they do is tied to their values, which relates to their education level, socio-economic background, religious affiliation and more. Therefore, designing human centered AI will vary widely according to who is doing the needfinding and their perceptions of the users they are designing for. What are the norms of a community? Who are all the stakeholders? What do they need? What do they value?
Keeping teams human-centered and diverse
In a design thinker’s ideal world, machines would be programmed to approach modeling an algorithm much as an ethnographer approaches a new culture: with naïveté and respect. One solution is to teach an AI machine commonsense knowledge, that is, any knowledge commonly shared by individuals from the same society and culture. In this way, we can become more precise about what a machine knows about the group it is built to serve, and we can work to engineer AI that incorporates insights about a group’s specific behavior.
Since humans are wired to be effective communicators and collaborators with other humans, we can try to build AI that understands how humans reason, communicate, and collaborate. But again, the way an AI works will depend on who is programming the system.
This leads to the issue of diversity. I think Carissa Carter, Director of Teaching and Learning at Stanford’s Hasso Plattner Institute of Design, says it best: “Exclusion happens every time the hype of a new technology precedes the maturation of its actual value and applications: A new technology shows promise, the media picks up on it, most people struggle to understand what it is and why it’s relevant to them, and so they give up, leaving the tech in the hands of the tech people. This eliminates the discoveries that come when people from varied disciplines and backgrounds approach projects with their own unique lenses of life, experience and expertise. AI applications will expand exponentially if we break tech’s exclusionary habit.”
This article originally appeared in AIMed Magazine issue 04.
Greenson RR (1960) Empathy and its vicissitudes. Int J Psychoanal 41:418–424
Rogers CR (1975) Empathic: an unappreciated way of being. Psychologist 5:2–5
- Empathy via Design Thinking: Creation of Sense and Knowledge, Eva Köppen and Christoph Meinel
- Life 3.0, Max Tegmark
- Perceiving Needs, Rolf A. Faste
- Human Centered Machine Learning, Jess Holbrook and Josh Lovejoy
Jules Sherman – MFA Design
Jules Sherman has been a product designer for 20 years. She attended RISD (BFA, Industrial Design) and Stanford’s Graduate Design Program (MFA Design), where she concentrated on design-thinking methodology at the Hasso Plattner Institute of Design (d.school). Jules has been a design consultant for the Stanford Medical School for over three years, working with a group of clinicians at Stanford and UCSF to improve safety in labor and delivery.
Jules’s company, Maternal Life LLC, was awarded three grants by the New England Pediatric Device Consortium and the National Capital Consortium for Pediatric Surgical Innovation to fund Primo-Lacto, a closed system for colostrum collection for mothers of preterm infants or of infants who have trouble latching due to other health issues. She is currently selling Primo-Lacto to hospitals in the US and Australia. Since 2013, Jules has designed curriculum and co-taught classes with clinicians focused on infant and maternal healthcare at Stanford’s d.school.