CareMore, Sentrian's remote monitoring pilot combines machine learning, human insight

By Jonah Comstock
04:03 pm

Aliso Viejo, California-based Sentrian has been piloting its remote patient monitoring program with COPD patients at Anthem subsidiary CareMore for about six months, Sentrian founder and Chief Medical Officer Jack Kriendler told attendees at the American Telemedicine Association conference in Minneapolis this week. The results are encouraging.

[Ed note: To learn more about CareMore's remote patient monitoring programs and other patient generated health data initiatives, be sure to attend MobiHealthNews 2016 in San Francisco next month. CareMore CEO Sachin Jain is one of our keynotes.]

The Sentrian Remote Patient Intelligence platform uses biosensors to monitor patients remotely, and the company uses machine learning to customize the alert parameters for each patient. With the initial parameters, set by CareMore physicians, the system predicted 60 to 70 percent of COPD hospitalizations, but with a high number of false positives, Kriendler said. After six months of machine learning, it had arrived at an algorithm that detected 88 percent of hospitalizations five days in advance, with only a 3 percent rate of false positives.

"I’m not saying those levels will maintain," Kriendler said. "There will be fluctuations and there’s always a risk of overfitting, but we had hopeful, modest expectations that machine intelligence, driven by clinical reasoning in the beginning, was going to supersede even a team of the world’s greatest experts and it turns out machine intelligence is getting it right. We’re very happy with how it’s going."

Kriendler and Dr. David Ramirez, Chief Quality Officer at CareMore, both spoke about how Sentrian combines machine learning with human insights. 

"We use a very specific kind of neural network called a branching forest algorithm that we limit to being only four branches deep, which means that the rules that are constructed can’t be hyperdimensional," Kriendler explained. "'If this happens and that happens and this doesn’t happen' -- that’s the maximum we’ll go, because human brains just can’t interpret it [beyond that]. If we start to do hyperdimensional stuff we become a Class 2 medical device, we have to file a 510(k) every time we want to change the algorithm. These guys if they want to iterate the algorithm every week they can. So we specifically restrain the complexity of the neural network to human understanding."
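The depth-capped "branching forest" Kriendler describes sounds, in conventional terms, like a random forest whose trees are limited to four levels, so every learned rule reads as at most four chained conditions. Sentrian's actual implementation is not public; the sketch below is only an illustration of that idea using scikit-learn, with made-up feature names and synthetic data:

```python
# Hypothetical sketch of a human-auditable forest: cap tree depth at 4 so
# every rule is at most "if this and that and this and that". Feature names
# and data are invented for illustration; this is not Sentrian's model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

rng = np.random.default_rng(0)
# Toy biosensor readings: oxygen saturation, heart rate, respiratory rate
X = rng.normal(size=(500, 3))
# Toy label standing in for "hospitalized within five days"
y = ((X[:, 0] < -0.5) & (X[:, 2] > 0.3)).astype(int)

# max_depth=4 is the interpretability constraint described in the quote
forest = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=0)
forest.fit(X, y)

# Any single tree can be printed as plain if/then rules for clinical review
print(export_text(forest.estimators_[0],
                  feature_names=["spo2", "heart_rate", "resp_rate"]))
```

Because each tree stays shallow, a clinician can read the printed rules directly, which is the trade-off the quote describes: less model complexity in exchange for human oversight and lighter regulatory burden.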

Humans are the second pair of eyes on everything the algorithm does, Kriendler said. Human judgment is also important when it comes to getting patients to actually use the sensors, so the algorithm can have good data, Ramirez added.

"You work out the economics and ergonomics -- as in, how well can patients actually use the devices you want to give them?" Ramirez said. "After three devices the drop-off in compliance is catastrophic, so you really have to minimize the amount, and then there is a careful and methodical and well-scripted way of getting people to understand what it is they’re getting into and continue to use it reliably on a convenient basis. And a lot of that stuff is not to do with technology, it’s to do with good communication."

