
Will Chatbots Dehumanize Healthcare?

Jun '18

Not all chatbots are created equal…

Chatbots, those automated conversation bubbles that pop up on websites and chat with people without a human on the other end, are all the rage. But what will their role be in healthcare? Will they improve care and lower costs, or will they de-personalize the patient experience? Before going any further, I need to give my standard disclaimer for divining the future of healthcare. In the wise words of Ygritte from Game of Thrones: You know nothing… In Westeros, the night is dark and full of terrors. When it comes to predicting what healthcare will look like tomorrow, forecasting is hard and full of errors. With that out of the way, let's talk about some things we do know.

 

We’re not starting from a neutral position. 

Healthcare is arguably already dehumanizing. In the U.S., a patient waits an average of 24 days to see her primary care physician, and when she finally does, she gets a rushed seven minutes. Healthcare today is marked by episodic in-person visits that are inconvenient and stressful. Just because a human is involved doesn't mean the experience is humanizing.

There is enormous pressure to change.  

  1. Cost pressure. Warren Buffett rightly calls healthcare a hungry tapeworm feeding off the belly of the American economy. Healthcare spending, at over 18% of GDP and growing, is not sustainable. Taxpayers, businesses, and increasingly patients bear the burden.
  2. Patient pressure. 85% of all consumer interactions with enterprises will be done without humans by 2020 (Gartner), and healthcare will not be spared. 76% of patients say they prefer access over personal interaction with a healthcare professional (Cisco).
  3. Provider pressure. On top of that, providers are demanding more insight. 70% of doctors say they want patient-generated health data (PGHD) so they can see how patients are doing between visits. Currently, providers get snapshots of how patients are managing chronic illnesses, recovering from acute episodes, and preparing for procedures. We have a 'data desert' in between these snapshots, and healthcare professionals recognize the value of continuous feedback from patients so they can intervene early when patients move from low to rising risk, or from rising to high risk (a minimal sketch of this idea follows the list).
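
To make that last point concrete, here is a minimal sketch, in Python, of flagging a risk-tier transition from periodic check-in data. The scores, cutoffs, and tiers are invented for illustration; they are not clinical values.

    # Hypothetical illustration: cutoffs and tiers are invented, not clinical.
    TIER_ORDER = ["low", "rising", "high"]

    def tier(score: int) -> str:
        """Map a 0-10 check-in score to a risk tier (made-up cutoffs)."""
        if score >= 7:
            return "high"
        if score >= 4:
            return "rising"
        return "low"

    def moved_up(check_in_scores: list[int]) -> bool:
        """Flag a patient whose latest check-in crossed into a higher tier."""
        if len(check_in_scores) < 2:
            return False
        prev, curr = tier(check_in_scores[-2]), tier(check_in_scores[-1])
        return TIER_ORDER.index(curr) > TIER_ORDER.index(prev)

    # Continuous PGHD catches the low -> rising transition between visits:
    print(moved_up([2, 3, 5]))  # True: worth early human attention

The point is that continuous data turns the 'data desert' between visits into a stream where transitions like this can be caught early.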

These pressures make change inevitable, and healthcare organizations are scrambling to figure out "how," not "if," to make this change.

Artificial intelligence may be artificial, but it’s real.

With AI technology projected to add $15.7 trillion to the global economy by 2030 (PwC), you'd be hard-pressed to walk through a conference today without bumping into somebody who isn't actually working in AI but claims to be. Software may be eating the world, but AI is quickly eating software – and chatbots and conversational AI will swallow a significant portion. Given the state of the healthcare experience, the immense pressure for change, and the promise of AI, the question isn't whether chatbots will be used in healthcare, but how.

Will we take the current dehumanizing situation, stir in AI, and make it worse? Or will we figure out how to add AI into the mix as a complement, transforming healthcare from a one-size-fits-all, episodic-visit approach into a system characterized by shared decision-making, personalization, and continuous conversations? Will chatbots become an inferior replacement for healthcare professionals, or will they augment them by checking in with patients in a thoughtful way, helping determine which patients are doing just fine and which need human attention?

 

Which way we go depends on how we answer four key questions:

First, what approach to chatbots will we take – Technology-driven or Problem-driven?

Elon Musk thinks AI poses an existential threat to humanity. Rodney Brooks, former head of MIT's AI Lab, Professor of Robotics, and co-founder of iRobot, thinks it's hard enough to get AI to do what we want it to, so there's no way it's going rogue. Stephen Hawking, may he rest in peace, took the position super-smart people tend to take on issues like these: he said AI would either be the best thing to happen to humanity or the worst. These guys are debating artificial general intelligence, an AI for any problem. This is Technology-driven AI. It's exciting; it's being pursued in the research labs of Big Tech; it's off in the future. Problem-driven AI is task-specific: it starts by asking what problem we're trying to solve and then applies a purpose-built solution. Problem-driven AI is used today around the world to solve a variety of problems. Whereas Tech-driven AI requires new invention, problem-driven AI uses existing invention and just requires the right data.

Healthcare needs problem-driven AI.

 

Second, will we use chatbots to automate or augment?

Vinod Khosla of Khosla Ventures thinks AI will replace 80% of doctors, while Marc Andreessen of Andreessen Horowitz says AI will shift the job of a doctor to a higher level. I like to think of this as Terminator versus Iron Man… Will we try to replace humans in healthcare with artificial intelligence in a ruthless push toward ever-improving efficiency, or will we augment healthcare professionals with AI to improve efficiency, effectiveness, and the patient experience?

I’m a fan of Iron Man.

 

Third, on which use cases will we focus our chatbots?

A recent report by WBR Insights and Conversa revealed significant interest among healthcare executives in automated patient experience technology, largely centered on conversational AI. 79% of U.S. healthcare providers plan to roll out chatbots in the next 24 months and are most interested in using them to improve the patient experience, collect patient-generated health data (PGHD), and support population health management.

 

Another report just released by Accenture dove into the patient perspective: patients want intelligent virtual assistants that can help monitor their health, coach them to improve their health, help with administrative tasks, diagnose symptoms, and identify health risks.

We need to focus where the needs of patients and providers intersect.

 

Fourth, how will we design our chatbots?

While there are many considerations in designing a healthcare chatbot, a few key ones are:

The Interface – will it be text-based, voice, avatar, or robot? Will it be delivered as an app, in a browser, via SMS, or through some other channel?

The Interaction – will it ask for structured responses (e.g. yes/no, multiple choice), chat in natural language, read facial expressions and gestures, or all of the above?

The Role of the Human – will a healthcare professional be involved in some or all of the interactions (e.g. reviewing responses), will healthcare professionals provide training data, or will there be no human involvement at all?

I find a very helpful way to think about the right design for a particular use case is to ask how much intelligence is required and how much humanness is needed. Most people conflate these two variables, but they are two different things and need to be considered independently.

Intelligence is the smarts required to solve a problem. It does not need to solve the problem the way a human does, nor does it necessarily require any other human traits. Humanness, on the other hand, includes the traits that make an interaction feel like one with a human (e.g. empathy).

Making an appointment requires little smarts and not much in the way of humanness, given the transactional nature of the task. Caring for a cancer patient, by contrast, requires high intelligence (there are lots of data-driven decisions) as well as considerable empathy. Reading a radiology scan requires a rich data set to train the AI to an acceptable level of accuracy, but it doesn't require much humanness, since the use case is informing a highly trained radiologist. On the other end of the spectrum, end-of-life care doesn't require many complex, data-driven decisions, but it is very high on the empathy scale.
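
As a rough illustration of treating the two axes independently, here is a small Python sketch. The numeric scores are invented to make the point; they are not measured values.

    # Illustrative only: (intelligence, humanness) scores are invented to show
    # that the two axes vary independently, not empirical measurements.
    use_cases = {
        "appointment scheduling": (2, 2),  # low smarts, low humanness
        "radiology scan reading": (9, 2),  # data-hungry, informs a specialist
        "end-of-life care":       (3, 9),  # few data decisions, high empathy
        "cancer care management": (8, 8),  # data-driven decisions plus empathy
    }

    for name, (intelligence, humanness) in use_cases.items():
        needs = []
        if intelligence >= 7:
            needs.append("rich training data")
        if humanness >= 7:
            needs.append("empathy-centered design")
        print(f"{name}: {', '.join(needs) or 'simple transactional design'}")

Because the two numbers vary independently across use cases, so do the design requirements.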

This framework makes it easy to understand the importance of chatbot design elements.

 

Using these design principles in practice: 

At Conversa, our focus is augmenting care management, which requires both high intelligence and high humanness. This dictated our design, which is characterized by deep integration with the patient record, giving us a thorough understanding of the patient and her clinical situation. We capture patient responses to the chatbot conversation in a structured way, so there is little or no error from trying to infer what a patient intended to say – a downside of current clinical natural language processing (cNLP) technologies. Our structured approach also ensures very high data integrity, minimizing unnecessary escalations to a case manager, nurse navigator, or doctor. We also use self-determination theory, a proven behavioral psychology approach, to inform our conversations – enhancing the humanness of the experience.
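
To show what structured capture can look like in practice, here is a minimal Python sketch. The questions, answer options, and escalation rule are hypothetical; they are not Conversa's actual content or logic.

    # Hypothetical sketch: structured answers need no intent inference and map
    # directly to escalation rules, unlike free-text cNLP.
    CHECK_IN = {
        "How is your pain today?": ["none", "mild", "severe"],
        "Did you take your medication?": ["yes", "no"],
    }

    # Invented rule: any of these (question, answer) pairs triggers follow-up.
    ESCALATE_ON = {
        ("How is your pain today?", "severe"),
        ("Did you take your medication?", "no"),
    }

    def is_valid(responses: dict[str, str]) -> bool:
        """Structured capture: every answer must be one of the offered options."""
        return all(responses.get(q) in opts for q, opts in CHECK_IN.items())

    def needs_human(responses: dict[str, str]) -> bool:
        """True if any structured answer matches an escalation rule."""
        return any((q, a) in ESCALATE_ON for q, a in responses.items())

    responses = {"How is your pain today?": "severe",
                 "Did you take your medication?": "yes"}
    if is_valid(responses) and needs_human(responses):
        print("Escalate to case manager")  # -> Escalate to case manager

Because each answer comes from a fixed set of options, there is nothing to infer: escalation becomes a simple lookup rather than an NLP problem.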

Using an avatar for administrative tasks or urgent care, for example, would be unnecessary, introducing complexity and points of failure by putting high humanness into a situation that doesn't require it. On the flip side, I don't see chatbots being used in palliative care anytime soon: here high humanness is needed, but in my opinion the technology just isn't good enough yet.

 

In summary, chatbots are easy; conversations are hard.

 

It's very easy to build and deploy a chatbot. There are many resources available, and you don't even have to know how to code. However, creating an AI that can have meaningful conversations and improve care, cost, and experience is much more involved. It requires a deep understanding of the specific use cases, what's required and what's not, and a purpose-built solution to address them.

 

Whether chatbots take us into the 21st century or dehumanize healthcare is a choice.

I leave you with two views of the future: Terminator or Iron Man. Which will you choose?

 
