Mohammad Ghassemi, National Scholar of Data and Technology Advancement at the National Institutes of Health and Assistant Professor of Computer Science at Michigan State University, reveals the grand project he’s working on, and why we’re all just like pieces of a mosaic


What initially sparked your interest in AI?

I became, and continue to be, interested in AI because of our human limitations in knowledge acquisition and our ability to retain, search, and understand knowledge at scale. More specifically, there are classes of problems that are not tractable for us to solve because they are too huge, too complex, or both. This is where machines and AI systems can help – they can perform analysis at a scale and level of complexity that we simply cannot.

Human beings have certain physical limitations on the rates at which we can learn new knowledge, and also how quickly we can act on the knowledge we acquire. AI can augment our abilities by taking the things that we know how to do, and doing them significantly faster and at a larger scale. That’s why I became excited and continue to be excited about AI.

Did you ever think of embarking on a completely different career?

Yes, my career’s trajectory has not been very linear. When I was young, I was interested in cinematography, film, and poetry. When I was an undergraduate student, I studied electrical engineering and went to work in the photovoltaics industry. After that, I went to Cambridge, where I was initially admitted for a master’s program related to business. Creativity is ultimately what kept me in science and engineering. If I hadn’t been able to exercise my creativity while I was learning about engineering and computer science, I would have left the field a long time ago.

You’ve been involved in many interesting projects ranging from mental representations and personalized medicine to using speech to determine happiness. Which one’s been the most memorable?

They’ve each been memorable for different reasons. One of the things that makes being a scientist a wonderful occupation is the people. I get to work with many people who are a lot smarter than I am, so I learn from them, grow, and experience new ways of approaching problems all the time.

More generally, I tend to enjoy projects with a downstream impact on people. I think that AI and technology can impact people by liberating us from our circumstances. Fundamentally, technology should be created to make people’s lives easier so that we can focus on the things we love more, and less on the things that are boring and tedious.

Some of the technologies I have developed have helped people make friends, find business partners, and form more meaningful relationships. The outcome of that technology has changed people’s lives. That’s rewarding. That’s why technology should exist. When I see that happen, I feel like I have succeeded.

What are some of your responsibilities as the National Scholar of Data and Technology Advancement at the National Institutes of Health?

I have this wonderful opportunity to work with a whole lot of very talented and smart people inside the National Institutes of Health (NIH), who are working hard to improve the health of the world. We all breathe the same air, so healthcare is important for all human beings collectively.

My role at the NIH is to figure out how we can take the collective knowledge generated by scientists and technologists over the years and learn new things about science that are not implied by each element of knowledge alone. I’m developing tools and techniques using AI to read through, digest and understand the enormous body of scientific work that has been undertaken in the past 40 years to see what higher-order conclusions and learnings we can draw out of that knowledge that one wouldn’t glean by looking at small parts of it in isolation.

Sounds like a really challenging project. So where are you right now?

Fortunately, I am working with some extraordinarily smart and talented people both within and outside the NIH. Collectively, we are exploring ways to take concrete steps closer to a real system that both scientists and everyday people can use to help them digest the knowledge coming from science.

Let me paint a picture. Imagine a knowledge graph that evolves over time. You can start from any year, touch a button, and a graph of knowledge linking all the topics in the scientific universe will appear. You can view the pieces of academic literature that contributed to each connection between nodes, the strength of those connections, and how all of this is evolving to the present day. This is the end vision of what we are trying to accomplish.

Right now, we are focusing on the most important piece, which is the collection of a robust, high-quality dataset that we can use to power an AI system that creates the graphs, and performs reasoning on them.
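To make the idea concrete, here is a toy sketch of the kind of structure being described: topics as nodes, with edge weights that grow as more papers link two topics, viewable at any point in time. This is purely illustrative and not the NIH system; the sample papers, topic lists, and co-occurrence weighting are all assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical paper records: (year, topics discussed together in the paper).
papers = [
    (1985, ["sepsis", "antibiotics"]),
    (1998, ["sepsis", "machine learning"]),
    (2015, ["machine learning", "antibiotics"]),
    (2015, ["sepsis", "machine learning"]),
]

def knowledge_graph(papers, up_to_year):
    """Build a weighted topic graph from all papers up to a given year.

    Edge weight = number of papers linking the two topics, a crude
    stand-in for the strength of a connection.
    """
    edges = defaultdict(int)
    for year, topics in papers:
        if year > up_to_year:
            continue
        # Every pair of topics co-mentioned in a paper becomes an edge.
        for i in range(len(topics)):
            for j in range(i + 1, len(topics)):
                edge = tuple(sorted((topics[i], topics[j])))
                edges[edge] += 1
    return dict(edges)

# Watch a connection appear and strengthen as the literature grows.
print(knowledge_graph(papers, 1990))
print(knowledge_graph(papers, 2020))
```

Replaying the graph at different cut-off years is what gives the "evolving over time" view: the sepsis–machine-learning edge does not exist in 1990 but has weight 2 by 2020.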

Some information is always more accurate or better supported by evidence than other information. So how do you and your team decide what kind of information goes into the graph, and avoid being misled by misinformation?

Great question. It’s important to keep in mind that science is ultimately a discussion among people who are deeply and earnestly interested in certain topics. Such discussions take place in the form of scientific papers. Different scientists provide different forms and levels of evidence in their papers. So, when building a knowledge graph, it is important to distinguish between these different forms of evidence.

We have some initial ideas on how to approach this, but we want to keep this as an open discussion with the community. The collective knowledge of the community may help us arrive at a fairer and more nuanced definition. That’s something we are working on right now.

In short, we do have some ideas on how to differentiate the levels of knowledge reflected within the academic literature, but I think ultimately this will be decided by consulting with and getting input from many scientists to make sure we do it correctly.

What excites you most about the future of AI?

I’m particularly excited about the potential of AI in the healthcare domain. I believe all physicians operate with the utmost integrity and are doing their best to provide care. However, they are still human beings. There are limitations around how much knowledge they can acquire. That’s the first thing that AI might be able to help with.

The second thing that makes AI especially interesting for health is its ability to solve complex problems. When you think about the human body, it’s a phenomenally complicated machine. We have different organ systems, chemical compounds that organs interact with, and there are also forces outside of our body like the air we breathe and the food we eat that are affecting our physiological state. This is complex. Imagine accounting for all of those factors while trying to decide on a treatment plan that works. That is what physicians are doing when they are treating us. They are thinking about a vast number of factors, the interactions between the factors, and how those may affect our responses to treatment. AI could be useful here, not to replace physicians, but to help them manage some of this complexity.

The synergy between machines and human beings is the focus of my scientific laboratory at Michigan State University, where I am a professor. One of my goals is to create this partnership between AI and physicians. We can let AI take some of the burden off their shoulders, empowering them to be more effective. The whole point of having AI is we continue to do the creative stuff while it takes over the boring stuff.

Do you think this synergy between machines and human beings will take place in the coming decade?

I think it already has and it will continue to grow, but it may not happen in the way a science fiction writer would put it and it also may not happen in the way that a pessimist thinks about it. It will be somewhere in the middle. For example, for many years, word processor technology has helped us write more coherent documents by highlighting spelling and grammatical errors. I think that AI can assist us in other areas of life in a similar way – by highlighting potential problems, and helping us fix them. In healthcare for instance, AI may help with scheduling, taking basic measurements, and other routine administrative tasks. But it’s unlikely we will walk into a clinic with just an AI doctor anytime soon.

What concerns you most about the future of AI in medicine?

My concern is the divergence between the perception and the reality of what AI is capable of. AI solves optimization problems, not philosophical ones. For example, AI can look at several complex factors that are changing over time and give us a probabilistic estimate that a patient in a coma has a 20% chance of ever waking up. But AI should not tell us to withdraw care because 20% is a low probability. What are good odds, and not good odds, depends on your perspective. That’s why this is a philosophical question, not an optimization problem. AI doesn’t provide a philosophical stance; it only provides possible solutions to the problems we pose. It can’t always tell us which pathway to take.

Some researchers want to train AI to be creative, philosophical, and even to have feelings. What are your thoughts on that?

If we want to teach machines to be creative, philosophical, or have emotions, it’s up to human beings to provide precise definitions of what creativity, philosophy, and emotions are so that the AI can learn to imitate us.

There are examples of algorithms that are creative in some sense. You may have seen algorithms that generate brand-new faces that don’t belong to anyone.

Chatbots are another example of AI technology that is, again, in some sense creative. But it’s important to keep in mind that even the most convincing of our chatbot algorithms do not understand language in the way that you or I do, even if they appear to. They don’t, for instance, understand that there is a physical universe with laws of nature, where real objects exist, and that human beings have given some of those objects names. That important context is missing. Instead, the chatbot lives in a universe composed entirely of words that can only be understood in terms of other words. They can copy how we speak, mimic our emotions and tell jokes but, just like a very clever parrot, they probably don’t understand the meaning of the words in the way that we do.

What’s the best piece of advice you ever received?

To be open to learning from everyone you meet. This includes people whom you don’t like. Even if you meet a person you dislike or disagree with, you can still treat that interaction as an opportunity for learning or growth.

What advice would you give someone starting their career in AI?

My first piece of advice is to be patient. I think everybody likes to do creative things, and AI has plenty of opportunities for creativity. But before one can arrive at that stage, there is a considerable amount of background they must first learn. It won’t always be fun, but if you persist, you will get through the tunnel and enjoy the creative part of AI one day.

My second piece of advice is to be reflective about what you want, and don’t want, to use AI to accomplish. Many young practitioners of AI are thinking about what they want their AI methods to accomplish, without thinking about the downstream consequences. Do you want to train algorithms that hurt people or help them? Keep that in mind.

Who has been the biggest influence on your career?

There have been so many who have had an impact on me that it’s impossible for me to select just one. I think we’re all like a mosaic on a church window. When we encounter each other, we can take pieces from each other’s mosaics and add them to our own. None of us gets the pieces of our own mosaic in isolation. We get them from the people we have known.

What do you consider your greatest achievement and failure?

There are achievements coming from recognition by professional colleagues. They are humbling and gratifying but they are not the ones that I am proudest of. It’s the little human things that have made me feel proudest.

For example, there is a software platform I created to help build community at universities. I remember there was a student on the platform who sent me a note before he graduated. He said, “I have never met you and I don’t know who you are, all I know is that you created this software and I met some of my greatest friends and had some of the most interesting conversations through what you built. This platform really improved my quality of life and my feeling of connectedness.” That to me is special in a way that awards can’t be, because my work touched someone’s life.

In terms of failures, it’s when I have forgotten my own rule about learning from others. There have been multiple instances where I failed. I will probably continue to fail occasionally, but I will try my best not to.

If you could return to the past, what would you change or do differently?

I think I would tell my younger self to avoid worrying about the future as much and to avoid regretting the past as much. Regret is a penalty you pay now, for a penalty you have already paid in the past. You have already suffered once and by regretting it, you are just repaying that suffering again. That doesn’t mean we shouldn’t regret things, but I believe there’s both a healthy and unhealthy manifestation of regret. It’s okay to fail. It happens. Do your best to learn from it and move on. Don’t regret.

The second thing I would tell my younger self is not to worry too much about the future because you are paying for a problem now that you are going to pay for later anyway. Feeling bad about something that’s about to happen is like paying for it twice.

There have been times in my life when I’ve had my eyes focused too much on the rear-view mirror or too much on the horizon. It’s better to have one’s professional and personal attention grounded in the present.


The opinions and statements shared in this article are those of Mohammad Ghassemi alone, and do not necessarily reflect the view of the NIH, the Department of Health and Human Services, the US Government, or Michigan State University.