MIT researchers' AI model detects COVID-19 by listening to coughs

The tool was "100% accurate" when spotting cases among asymptomatic individuals, the researchers wrote, and could be deployed as a low-cost prescreener to support diagnostic testing efforts.
By Dave Muoio

MIT researchers have developed an artificial intelligence tool that listens to a person's coughing to determine whether they may have COVID-19, regardless of whether they are symptomatic, according to research published last week in IEEE Open Journal of Engineering in Medicine and Biology.

To build it, the researchers solicited audio recordings of people coughing, along with accompanying information about their condition, through an open online website. The effort yielded a dataset of more than 70,000 recordings averaging three coughs per subject, including an estimated 2,660 subjects with a positive case to date.

Using these COVID-19 cough recordings and an equal number of COVID-19 negative samples randomly selected from the dataset (n = 5,320), the researchers developed, trained and validated a convolutional neural network-based model that listens for specific acoustic biomarkers related to muscular degradation, vocal cord changes, sentiment or mood changes, and changes in the lungs or respiratory tract.
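The balancing step described above — keeping every positive recording and drawing an equal number of negatives at random — can be sketched in a few lines. This is an illustrative reconstruction, not the researchers' code; the `make_balanced_subset` helper and its tuple-based data format are assumptions for the example.

```python
import random

def make_balanced_subset(recordings, seed=0):
    """Keep all positive recordings and randomly sample an equal
    number of negatives, mirroring the paper's balanced split
    (n = 5,320: 2,660 positives plus 2,660 sampled negatives).

    `recordings` is a list of (recording_id, label) tuples, where
    label is True for a COVID-19-positive subject.
    """
    positives = [r for r in recordings if r[1]]
    negatives = [r for r in recordings if not r[1]]
    rng = random.Random(seed)  # fixed seed for a reproducible draw
    sampled_negatives = rng.sample(negatives, k=len(positives))
    return positives + sampled_negatives
```

Balancing the classes this way keeps the model from simply learning the dataset's heavy skew toward negative samples.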

Based on the testing, the researchers said their tool discriminated COVID-19-positive participants with 97.1% accuracy, 98.5% sensitivity and 94.2% specificity. Of particular note, the model achieved 100% accuracy when detecting coughs from asymptomatic positive cases.
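For readers unfamiliar with these three metrics, they fall out of a standard confusion matrix. The sketch below shows the definitions; the counts passed in are hypothetical numbers chosen for illustration (they happen to reproduce the reported sensitivity and specificity, but they are not the paper's raw data).

```python
def screening_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from confusion-matrix counts:
    tp = true positives, fn = false negatives,
    tn = true negatives, fp = false positives.
    """
    accuracy = (tp + tn) / (tp + fn + tn + fp)  # fraction of all calls that were right
    sensitivity = tp / (tp + fn)                # true-positive rate: positives caught
    specificity = tn / (tn + fp)                # true-negative rate: negatives cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts for illustration only:
acc, sens, spec = screening_metrics(tp=985, fn=15, tn=942, fp=58)
print(f"accuracy={acc:.1%} sensitivity={sens:.1%} specificity={spec:.1%}")
```

For a prescreener, high sensitivity is the priority: a missed positive (false negative) walks away untested, while a false positive merely triggers a confirmatory diagnostic test.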

WHY IT MATTERS

The research team envisions their AI tool as a low-cost COVID-19 prescreener that could be deployed in settings where comprehensive diagnostic testing is unavailable or unable to scale for entire populations.

"This noninvasive, free, real-time prescreening tool may prove to have a great potential to complement current efforts to contain the disease in low-infected areas, as well as to mitigate the impact in highly-infected areas, where unconscious asymptomatics may spread the virus," the researchers wrote. "We contend the MIT Open Voice approach presented has great potential to work in parallel with healthcare systems to augment current approaches to manage the spread of the pandemic."

The team also noted that it's continuing to refine its model by incorporating hospital data from Mount Sinai, as well as from other providers located in Mexico and Italy. Additionally, the team wrote in the paper that it has "reached an agreement with a Fortune 100 company to demonstrate the value of our tool as part of their COVID-19 management practices."

THE LARGER TREND

The MIT team had previously been developing its vocal biomarker model for use in diagnosing respiratory conditions and Alzheimer's disease – the latter of which they noted used "the exact same biomarkers, ... suggesting that perhaps, in addition to temperature, pressure or pulse, there are some higher-level biomarkers that can sufficiently diagnose conditions across specialties once thought mostly disconnected."

Still, these researchers aren't the first to employ vocal biomarkers for COVID-19. Back in July, Sonde Health rolled out an app that listens to voice recordings and, combined with user-reported symptom information, indicates whether the user may have a respiratory condition. Designed as a COVID-19 screener for employers, the tool was slated for deployment with a 5,000-employee customer in August. Over the summer the company also acquired NeuroLex Laboratories, a fellow voice-based platform, to strengthen its voice-sample dataset.
