Health care bots are only as good as the data and doctors they learn from

June 23, 2018 | Big Data

The number of tech companies pursuing health care seems to have reached an all-time high: Google, Amazon, Apple, and IBM all want to change health care using artificial intelligence. IBM has even rebranded its health offering as “Watson Health — Cognitive Healthcare Solutions.” Although technologies from these giants show great promise, the question remains whether effective health care AI already exists or is still a dream.

As a physician, I believe that to understand what counts as artificially intelligent in health care, you first have to define what it means to be intelligent in health care. Consider the Turing test, which judges a machine by whether its conversation is indistinguishable from a human’s.

Joshua Batson, a writer for Wired magazine, has mused about an alternative to the Turing test, one in which the machine doesn’t just seem like a person, but like an intelligent person. Think of it this way: if you asked a random person about symptoms you were experiencing, they’d likely reply, “I have no idea. You should ask your doctor.” A bot supplying that response would certainly be indistinguishable from a human — but we expect a little more than that.

The challenge of health care AI

Health is hard, and that makes AI in health care especially hard. Interpretation, empathy, and knowledge all have unique challenges in health care AI.

To date, interpretation is where much of the technology investment has gone. Whether the input arrives by touchscreen or voice, natural language processing (NLP) has seen enormous investment, including Amazon’s Comprehend, IBM’s Natural Language Understanding, and Google Cloud Natural Language. But even though health has plenty of its own interpretation challenges, they are no greater in this sector than in other domains.

Similarly, while empathy has to be pitched just right for the emotionally charged field of health care, bots are equally challenged to strike that tone in retail customer service, legal services, or childcare advice.

That leaves knowledge, and the knowledge needed to be a successful conversational bot is where health care diverges greatly from other fields. We can divide that knowledge into two major categories: What do you know about the individual? And what do you know about medicine in general that will be most useful in their individual case?

If a person is diabetic and has high cholesterol, for example, then existing data tells us that this person’s risk of having a heart attack is higher and that aggressive blood sugar and diet control can significantly lower that risk. That combines with general medical knowledge: multiple randomized controlled trials have found that diabetics with uncontrolled blood sugar and high cholesterol are twice as likely as others to have a cardiac event.
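To make the two categories concrete, here is a minimal sketch of how a bot might combine profile knowledge (the individual’s record) with domain knowledge (the trial evidence above). The field names, thresholds, and baseline figure are illustrative assumptions, not clinical guidance.

```python
# Illustrative only: combining profile knowledge (what we know about the person)
# with domain knowledge (what trials tell us about medicine in general).
BASELINE_CARDIAC_RISK = 0.05  # assumed baseline risk, purely for illustration

def cardiac_risk(profile: dict) -> float:
    """Apply the expert rule cited above: uncontrolled blood sugar plus high
    cholesterol in a diabetic roughly doubles the risk of a cardiac event."""
    risk = BASELINE_CARDIAC_RISK
    if (profile.get("diabetic")
            and profile.get("hba1c", 0) > 7.0        # assumed "uncontrolled" threshold
            and profile.get("ldl_mg_dl", 0) > 160):  # assumed "high cholesterol" threshold
        risk *= 2  # domain knowledge: roughly twice the risk
    return risk

patient = {"diabetic": True, "hba1c": 8.2, "ldl_mg_dl": 180}  # profile knowledge
print(f"Estimated cardiac risk: {cardiac_risk(patient):.0%}")  # -> 10%
```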

What is good enough?

There are two approaches to creating an algorithm that delivers a customized message: humans can write it based on their domain knowledge, or computers can derive it from patterns observed in data — i.e., machine learning. With a perfect profile and perfect domain knowledge, humans or machines could create the perfect algorithm. Combined with good interpretation and empathy, you would have the ideal, artificially intelligent conversation. In other words, you’d have created the perfect doctor.
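As a sketch of the contrast, the snippet below shows both routes to the same kind of decision: a rule written by a human expert and a rule learned from data. The features, thresholds, and toy training data are all invented for illustration, and the sketch assumes scikit-learn is available.

```python
# Two routes to a customized-message algorithm: an expert-written rule versus a
# machine-learned one. Data and thresholds are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# 1) Human-defined: an expert encodes domain knowledge directly as a rule.
def expert_flags_risk(hba1c: float, ldl: float) -> bool:
    return hba1c > 7.0 and ldl > 160

# 2) Machine-defined: a model derives the rule from observed (synthetic) data.
#    Each row is [hba1c, ldl]; each label marks whether a cardiac event occurred.
X = np.array([[6.1, 120], [8.5, 190], [7.8, 175], [5.9, 110], [9.0, 200], [6.4, 130]])
y = np.array([0, 1, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

print(expert_flags_risk(8.2, 180))              # True
print(model.predict_proba([[8.2, 180]])[0, 1])  # learned probability of an event
```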

The problem comes when the profile or domain knowledge is less than perfect (which it always is), and you have to determine when it is “good enough.”

The answer to “When is that knowledge good enough?” comes down to the strength of your profile knowledge and the strength of your domain knowledge. You can make up a shortfall in one with the other, but inevitably you’re left with something very human: a judgment call on when the profile and domain knowledge are sufficient.

Luckily for us, rich, structured health data is more prevalent than ever before, but making that data actionable takes informatics and computationally intensive processes that few companies are prepared for. As a result, many companies have turned to deriving that information through pattern analysis or machine learning. And where you have key gaps in your knowledge — like environmental data — you can simply ask the patient.
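Filling a gap by asking can be as simple as folding a prompted question into the conversation. A minimal sketch, with hypothetical field names and question text:

```python
# A minimal sketch of filling a profile gap by asking the patient directly.
# The field names and question text are hypothetical.
def fill_gap(profile: dict, field: str, question: str) -> dict:
    if not profile.get(field):
        # In a real bot this would be a chat turn rather than console input.
        profile[field] = input(question + " ")
    return profile

profile = {"diabetic": True, "recent_travel": None}
profile = fill_gap(profile, "recent_travel", "Have you traveled in the last two weeks?")
```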

Beyond Alexa and Siri, companies building new “conversational AI” are filling these gaps in health care. Conversational AI can take our health care experience from a traditional, episodic one to a more insightful, collaborative, and continuous one. For example, conversational AI can build out consumer profiles from native clinical and consumer data to answer difficult questions very quickly, like “Is this person on heart medication?” or “Is this person taking any medications that could complicate their condition?”
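A question like that reduces to a lookup against a structured profile. Here is a minimal sketch; the profile fields and drug names are hypothetical examples.

```python
# Answering "Is this person on heart medication?" from a structured profile.
# The profile fields and drug names are hypothetical examples.
PROFILE = {
    "medications": ["lisinopril", "metformin"],
    "conditions": ["hypertension", "type 2 diabetes"],
}

HEART_MEDS = {"lisinopril", "metoprolol", "atorvastatin"}  # illustrative subset

def on_heart_medication(profile: dict) -> bool:
    return any(med in HEART_MEDS for med in profile["medications"])

print(on_heart_medication(PROFILE))  # True: lisinopril is on the illustrative list
```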

Only recently has the technology been able to build profiles this in-depth on the fly. It comes closer to that perfect doctor, knowing not only everything about your health history, but how all of it connects across combinations of characteristics. Now, organizations are beginning to use that profile knowledge to derive engagement points and better characterize some of the “softer” attributes of an individual, like self-esteem, literacy, or other factors that dictate their level of engagement.

Think about all of the knowledge that medical professionals have derived from centuries of research. In 2016 alone, Research America estimated, the U.S. spent $171.8 billion on medical research. But how do we capture all of that knowledge, and how could we use it in conversational systems? Very little of it exists in a standardized, machine-readable form, which is why we’ve developed so many rules-based or expert systems over the years.

It’s also why there’s a lot of new investment in deriving domain knowledge from large data sets. Google DeepMind’s partnership with the U.K.’s National Health Service is a great example: by combining the NHS’s rich data on diagnoses, outcomes, medications, test results, and other information, DeepMind can use AI to derive patterns that will help it predict an individual’s outcome. But do we have to wait for large, prospective data analyses to derive medical knowledge, or can we start with what we know today?

Putting data points to work

Expert-defined and machine-defined knowledge will have to be balanced in the near term. We must start with the structured data that is available, then ask the patient for what we don’t know, and derive additional knowledge from observed patterns. Domain knowledge should start with expert consensus and grow from there as patterns emerge in the data.

Knowing one particular data point about an individual can make the biggest difference in being able to read their situation. That’s when you’ll start getting questions that may seem to make no sense whatsoever to you, but make all the sense in the world to the machine. Imagine a conversation like this:

BOT: I noticed you were in Charlotte last week. By any chance, did you happen to eat at Larry’s Restaurant on 5th Street?

USER: Uh, yes, I did actually.

BOT: Well, that could explain your stomach problems. There has been a Salmonella outbreak reported from that location. I’ve ordered Amoxicillin and it should be to you shortly. Make sure to take it for the full 10 days. The drug Cipro is normally the first line therapy, but it would potentially interact badly with your Glyburide. I’ll check back in daily to see how you’re doing.

But while we wait for machines to detect those patterns, the knowledge that is already out there should not be overlooked, even if putting it to work takes a lot of informatics and computation. I’d like to think the perfect AI doctor is just around the corner. But my guess is that those who take a “good enough” approach today will be the ones who get there first. After all, for the many people who don’t have access to adequate care today, and for all that we’re spending on health care, we don’t yet have a health care system that is “good enough.”

Dr. Phil Marshall is the cofounder and chief product officer at Conversa Health, a conversation platform for the health care sector.

Source: Big Data – VentureBeat
