How doctors will work with bots to reduce the barriers between doctors and patients
For patients and caregivers, the ability to chat with clinical professionals through messaging applications is already a breakthrough in convenience, efficiency, and continuity of care. But as with any major technological shift, challenges arise when it comes to the finer details. For instance, how can health care professionals avoid being inundated with minor questions and requests, when the barrier between patient and doctor becomes nearly nonexistent?
This post will highlight how some of these challenges are being met with artificial intelligence and chatbots, and share a practical example of how the "blending" of human and machine responses will allow this breakthrough to truly take hold in the healthcare market.
The seed of opportunity to bring messaging into the clinical setting is starting to bear fruit: a number of companies in the space have successfully brought products to market. Examples include ConversaHealth (digital checkups via messaging) and Sherpaa (concierge medicine via messaging).
In these secure messaging apps the patient or caregiver is able to carry on a conversation with the healthcare professionals involved in their care. This confers a number of advantages over traditional forms of communication. First, messaging app conversations can contain links, images, PDFs and other files. This facilitates the immediate exchange of a wealth of information not possible with voice communication alone. Messaging conversations are also frequently asynchronous, meaning they do not have to occur in real time. For instance, the patient can message a question to the nurse on his care team in the middle of the night, and the nurse can respond the following morning. This all but eliminates "phone tag". Lastly, a messaging conversation can include, or exclude, members of the care team at will. It's no more difficult to transmit information to every care team member at once than it is to contact just one person.
With messaging, communicating with and among a care team becomes as easy as conversing in a Slack channel.
The opportunity for clinical messaging goes beyond human-to-human interaction, however. Consider a simple request from a caregiver, such as “please send me info on hypertension.” Why should a registered nurse take the time to search for a web page, copy the URL and send the link when it’s easy for a machine to discern the intent of the request, and resolve it? Finding these types of efficiencies is possible with what I refer to as “human augmented automatons”— chatbots with human guidance.
In his 2008 book The Innovator’s Prescription, Clayton Christensen posits that by capturing “institutional knowledge” and decentralizing it towards the patient one can achieve efficiencies, lower costs and improve care. For an example of how this might work in a world of clinical messaging with automation, think of a quick screener — an assessment. It’s a series of interconnected questions given to the patient that collects data from which decisions can be made. Often such screenings are periodic and data must be looked at longitudinally.
Carrying out an assessment is time-consuming work for the nurse or clinician. Yet in most cases a machine administers such surveys better than a healthcare professional can: it is consistent, accurate, and never forgetful.
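To make the screener idea concrete, here is a minimal sketch of a machine-administered assessment, assuming a fixed branching question flow. The questions, branch names, and the `Screener` class are my illustration, not taken from any specific product.

```python
# A toy screener: each node holds a question and a map from answers to
# the next node. "flag" means the result should be escalated to a human.
QUESTIONS = {
    "start":    ("How often did you check your blood pressure this week?",
                 {"daily": "readings", "rarely": "reminder"}),
    "readings": ("Was any reading above 140/90?",
                 {"yes": "flag", "no": "done"}),
    "reminder": ("Would you like a daily reminder to check it?",
                 {"yes": "done", "no": "done"}),
}

class Screener:
    """Walks the question graph and records answers for the longitudinal record."""
    def __init__(self):
        self.node = "start"
        self.answers = []                       # (question_id, answer) pairs

    def done(self):
        return self.node in ("done", "flag")

    def ask(self):
        return QUESTIONS[self.node][0]

    def answer(self, text):
        branches = QUESTIONS[self.node][1]
        self.answers.append((self.node, text))
        self.node = branches.get(text, "done")  # unrecognized answer ends the flow

# Simulate one patient's session.
s = Screener()
for reply in ["daily", "yes"]:
    print(s.ask())
    s.answer(reply)
print("escalate to clinician:", s.node == "flag")
```

Because each session appends `(question, answer)` pairs to a record, repeated screenings can be compared longitudinally, and only the flagged sessions need a clinician's attention.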
Clearly, healthcare organizations have no tolerance for the kind of erroneous or inappropriate responses many chatbots have become famous for. The nature of a conversation about health is contextual, collaborative, and carries zero tolerance for absurdity. It’s simply not a place for “AI only” responses. So there needs to be a way to blend automated and human responses with high integrity. But how would this work?
A reasonable chatbot framework should produce a probability value for each prediction. For example:

Input: "what should I do about high blood pressure?"
Intent: BP_info (probability: 0.861)
Using one of several approaches to intent classification, the system computes a probability that the sentence entered by the user expresses a given intent; the entry is then matched, with that level of confidence, to a useful response. Vela uses a neural network classifier to generate an intent probability.
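The post doesn't describe the internals of Vela's neural network classifier, so as a stand-in, here is a toy sketch of intent classification that yields an intent plus a probability: a bag-of-words keyword overlap turned into a distribution with a softmax. The intents and keyword sets are illustrative assumptions.

```python
import math

# Illustrative intents and their keyword sets (assumptions, not Vela's).
INTENT_KEYWORDS = {
    "BP_info":    {"blood", "pressure", "hypertension", "bp"},
    "appt_info":  {"appointment", "schedule", "visit"},
    "med_refill": {"refill", "prescription", "medication"},
}

def classify(message):
    """Return (best_intent, probability) for a user message."""
    words = set(message.lower().replace("?", "").split())
    # raw score = number of intent keywords present in the message
    scores = {intent: len(words & kw) for intent, kw in INTENT_KEYWORDS.items()}
    # softmax turns raw scores into a probability distribution over intents
    exps = {i: math.exp(s) for i, s in scores.items()}
    total = sum(exps.values())
    probs = {i: e / total for i, e in exps.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]

intent, p = classify("what should I do about high blood pressure?")
print(intent, round(p, 3))   # BP_info, with the highest probability of the three
```

A production system would replace the keyword scores with a trained model, but the interface is the same: a sentence in, an intent and a probability out.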
The key to human augmented response is an intent classification probability. Let’s look at a schematic showing the response process:
Noteworthy aspects of this flow include that: the probability threshold can be set to whatever is appropriate for the setting (e.g., off-hours responses vs. daytime responses); when the probability is above the threshold, the human expert need not be involved; and when it falls below the threshold, the exchange generates additional machine learning.
In other words, as the system encounters messages scored below the threshold, the human's decision becomes training data from which the model improves. This further increases efficiency without sacrificing quality.
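The routing logic described above can be sketched in a few lines. The threshold value and the `classify`, `auto_reply`, and `ask_human` hooks are assumptions for illustration; the essential point is the branch on probability and the capture of the human's decision as a labeled example.

```python
THRESHOLD = 0.85          # assumed value; set per deployment/setting
training_queue = []       # (message, human_chosen_intent) pairs for retraining

def handle(message, classify, auto_reply, ask_human):
    """Blend automated and human responses around a probability threshold."""
    intent, prob = classify(message)
    if prob >= THRESHOLD:
        return auto_reply(intent)           # confident: machine answers directly
    # Below threshold: a human expert decides, and the decision is
    # captured as a labeled example to improve the classifier.
    chosen_intent, reply = ask_human(message, suggestion=intent)
    training_queue.append((message, chosen_intent))
    return reply

# Stub hooks to demonstrate the flow (stand-ins for real components).
def classify(msg):
    return ("BP_info", 0.92 if "pressure" in msg else 0.40)

def auto_reply(intent):
    return f"[auto] link for {intent}"

def ask_human(msg, suggestion):
    return ("other", "[nurse] let me help with that")

print(handle("question about blood pressure", classify, auto_reply, ask_human))
print(handle("something unusual", classify, auto_reply, ask_human))
print(len(training_queue), "example queued for retraining")
```

Every below-threshold message costs a human a decision once, then feeds the model so similar messages can clear the threshold later.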
Using this process flow, a healthcare organization can initially set an artificially high threshold and review the probability scores for each suggestion over a period of time. In this way a threshold can be set that provides for an acceptable tolerance, adjusted for different settings such as after-hours emergency contact, information-only inquiries, and specific tasks. This maximizes efficiency while still improving access and collaboration.
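The review period above can be turned into a simple calibration step. Assuming the organization logged each suggestion's probability alongside whether a human reviewer approved it, the lowest threshold that stays within an error tolerance can be read off the log. The data and the tolerance value here are made up for illustration.

```python
def pick_threshold(logged, max_error_rate):
    """Lowest threshold whose auto-sent answers would have stayed
    within the error tolerance during the review period."""
    best = 1.0   # start fully conservative: everything goes to a human
    for t in sorted({p for p, _ in logged}, reverse=True):
        auto = [approved for p, approved in logged if p >= t]
        if auto.count(False) / len(auto) <= max_error_rate:
            best = t          # safe to lower the threshold this far
        else:
            break             # lowering further admits too many errors
    return best

# (probability, whether the human reviewer approved the suggestion)
logged = [(0.95, True), (0.91, True), (0.88, True),
          (0.84, False), (0.79, True), (0.60, False)]

print(pick_threshold(logged, max_error_rate=0.0))   # 0.88 for this log
```

Running this separately over logs from different settings (after-hours emergencies, information-only inquiries, specific tasks) yields a per-setting threshold, each tuned to that setting's tolerance.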
By blending human expert messaging with automated responses, we improve efficiency without sacrificing quality. In a setting like healthcare, it's crucial to drive quality of care and efficiency at the same time.
Institutional knowledge in the hands of patients and their caregivers is empowering, and secure messaging apps are increasingly the most effective and most efficient medium to carry out this decentralization.