A legend in the robotics industry who began her career at the NASA Jet Propulsion Laboratory believes healthcare is the next frontier for AI-powered robotics… and that it IS possible to build a bionic clinician.
Ayanna Howard, Ph.D., is Professor and Chair of the School of Interactive Computing at the Georgia Institute of Technology. Dr Howard’s career focuses on intelligent technologies that must adapt to and function within a human-centered world.
Her work, which encompasses advancements in artificial intelligence (AI), assistive technologies, and robotics, has resulted in over 200 peer-reviewed publications across a range of projects – from healthcare robots in the home to AI-powered STEM apps for children with diverse learning needs.
In an exclusive interview with AIMed Magazine, Dr Howard said she is confident robotics will solve many of the fundamental problems in healthcare today.
AI Med: What can robotics do for healthcare?
Ayanna Howard: Every patient who comes to a doctor is unique; there really is no norm. An AI-powered robotic system could personalize the interaction: it could learn from a patient, capture outcomes in real time as the patient is, for example, doing rehabilitation, and adapt its protocols accordingly.
Patients getting treatment might see a clinician once a week, but if they’re seeing a robotic system they can interact with it every day and get customization on a daily basis.
The doctor can then get better data on what’s going on in the home environment and adapt the overall protocol at a much finer, personalized resolution.
We don’t have enough clinicians, nurses, and doctors, especially in developing countries, to address the need, and robotics fills that gap if we are really thinking about quality of life for everyone.
AI Med: Will clinicians have robotic assistants in the future?
AH: I do think they will. There is some research I know is being deployed right now in hospitals in terms of pilot studies.
For example: nurses suffer an immense amount of back pain caused by moving patients and turning them over. Robots are being developed and piloted which can lift a patient, allowing a nurse to change the covers without having to do the heavy lifting.
There’s also some research where robots are being deployed in hospitals to fetch supplies. The time a nurse spends going back to the supply room because they forgot something, or they need something, takes away from patient interactions. It makes sense for a robot to do that, to go back and forth as a fetch-and-carry.
I think there’s an immense range of opportunities for robots to work as helpers to other medical professionals as well.
AI Med: Is getting a robot to fetch and carry a very complex computing problem?
AH: The challenge isn’t getting the robot to go from point A to point B – that is almost a solved problem, especially since hospital floor plans aren’t going to change overnight.
The problem is the supply part. If I tell a robot: “I need you to fetch this type of bandage from the supply room”, how does the robot go into the supply room, find the item on the supply list, identify it, and figure out where it is?
That is what we call the last mile – the last and most challenging bit of the interaction – and it is not a solved problem. That’s one of the reasons why Amazon, as much as it has deployed robotics, still has people who do the pick-and-place at the end.
AI Med: Tell us about your work in the healthcare robotics space
AH: I’m working in the in-home space. Children don’t get enough services; they don’t see the clinician enough, for various reasons like access and insurance. So, how do we extend what the clinician provides in the clinic into the home environment?
Children are given exercise prescriptions telling them what they’re supposed to do in the home, and compliance is not good, for various reasons. Robots can enhance compliance with these exercise prescriptions in the home environment.
Firstly, robots can monitor whether the child is doing the exercise, which can be reported back to the clinician. Through robots, we’re also able to provide guidance to help do the exercise correctly. Finally, we can provide that motivating companionship which goes through the process with the child and says: “Yeah! We can do this – let’s do this together!”
Even adults don’t like to exercise and now we’re asking a child to exercise every day in the home. We found that with the robot present it becomes more of a game.
In terms of the outcomes, we’ve seen that kids do improve on a number of parameters, and the amount of time they’re willing to exercise goes up when the technology is present.
AI Med: You work with children with cerebral palsy, and in this issue, we spoke to a brilliant innovator with cerebral palsy who was diagnosed as a child 40 years ago. How will robotics help children with disabilities in the future?
AH: Even from 40 years ago to now, the types of technologies that are out there are amazing. I also work with children who are non-verbal. Your iPad has apps now that can be your voice, whereas before you had to buy a $15,000 piece of equipment and hope to have someone to work with you as well.
When I think about 25 years from now, I think this technology will not only be pervasive, but it will be highly integrated.
If you have a child who has difficulties walking, you could rapid-prototype a prosthetic at a very low price and put it on an infant that’s learning how to crawl. As they grow, you’d just rapid-prototype another one.
I see autonomous cars being able to come to your doorstep; I see robotic assistants being able to prevent dementia patients from becoming confused or lost.
I am confident these goals can be achieved because I see the progress within, not even 40 years, but the last 15 years. And technology grows exponentially. Like any technology, it’s slow to start, but then there’s an inflection point after which it’s much easier to develop, design, and iterate.
And I can see that inflection point coming. Once we get past it, it’s like having a new iPhone every year with brand new capabilities, like facial recognition.
AI Med: You design AI which will allow robots to travel into space and explore Mars. Given the scale of that ambition, could we ask what you see as an ambitious goal for healthcare here on Earth?
AH: To identify a disability, or the potential of a disability, before it even occurs. For example, early Alzheimer’s.
They say that with Alzheimer’s, the brain starts to change before we can even diagnose it. But if we could detect it when there was just a hint, we could put in an intervention and it might never develop.
If you could identify a disability in a child at birth and start doing interventions at that point to give them the ability to meet development milestones – that would be like the holy grail.
AI Med: Do you think that robots can become conscious beings?
AH: You have to give the system the motivation to learn. What is the motivation to learn? There’s some type of objective function. What makes a child want to learn how to walk? What makes a college student want to learn how to code? We’re always learning, our entire lives; there’s something about us that’s curious, and it makes us better.
I think for AI systems to truly be integrated with us, we need to give them the motivation to learn. But what are those motivations? It’s unclear right now.
Maybe it will just be that my robot loves its child and wants to make sure its child has the best possible experience, and that’s why it will think: “I need to learn to help my friend”.
Right now, the way we input learning into robots is like an engineer flipping a switch and saying: “OK, it’s time to learn; you have an objective function and it’s different from the norm” – but there’s no real motivation.
I also think that there must be some aspect of a value system. Now the question is: whose values? That’s still an open question, but I know that there are certain widely accepted values, like the value of life and the right to education. Instilling those values in the systems is a start.
AI Med: How does diversity impact the development of healthcare robotics?
AH: I think of all industries, healthcare is the most relevant with respect to diversity.
Just looking at gender, we now know that some drugs have a negative effect on women because they were mostly tested on men. We’re now better able to detect heart attacks in women, whereas before tests were missing the signals in women. Researchers are now realising this.
If you don’t have a diverse test population, you’re going to design for one type of person. That’s why, especially in healthcare, where it’s about quality of life and better outcomes, it has to be personalised – and how can you design personalised technology if you don’t have the data to learn from in the first place?
One of the issues – and where patients can become involved – is that a lot of the time we don’t have diverse data because people don’t volunteer their data. If all the data comes from a US-centric or European viewpoint, it’s hard to figure out what the differences are if I’m designing a robot for sub-Saharan Africa.
Patients have to be open to understanding that their data helps others, helps level the playing field, and helps make sure that the datasets are diverse, so these robotic and AI systems actually function correctly for everyone.
AI Med: You’ve spoken about how you were inspired at a young age by The Bionic Woman. As you’ve developed your expertise in robotics over the years, have you decided whether building a Bionic Woman would be possible?
AH: Yes, I do, and I say that because the parts already exist or are in development.
We have prosthetics and bionic arms and bionic legs, and some of them are getting much more advanced. And of course we have the pacemaker, so you can think about what’s going on internally too. And if you think about Jaime Sommers, they didn’t replace her brain. Her brain and her heart were still there; they just replaced the external parts.
We have amazing research on artificial skin through which you can touch and feel sensations, because it’s wired up to your brain. You can touch something and feel that it’s cold or hot, but it’s not your skin.
All the components are there; what’s missing is putting it all together in a form factor that is the whole body.
AI Med: But could they build a Bionic Clinician?
AH: I guess the question is more philosophical – where does robotics end and humanity begin? We don’t know that yet; we haven’t pushed that boundary yet because the technology hasn’t arrived at that level.
The bionic clinician: is it just a clinician with robotic hands, robotic arms, and a robotic body? Well then, it’s still a human clinician; it just has bionics to help them do their job better. So, in that regard, it’s just a clinician – it has everything a clinician would have, just bionic. And so, yes, it would be possible.
AIMed is offering 100 clinicians the opportunity to be rebuilt as bionic. To apply, please send us an illustration of how you would look as a Bionic Clinician via email to [email protected]
This article originally appeared in AIMed Magazine issue 05, a deep dive on Robotic (bionic) Technology & Virtual Assistants available for you to access here.