When it comes to mental health apps, validation is a hot topic. But studying those apps can pose some hefty challenges. A recent meta-analysis published online in the Journal of Affective Disorders found that studies of apps for depressive symptoms had around a 50% dropout rate after accounting for publication bias.
However, researchers discovered that the rate was lower when the apps included a human connection, like feedback and mood monitoring.
“[T]his analysis offers benchmarks on what can be expected in terms of study retention in mental health mobile app trials to help guide investigators in conducting a priori power analyses and set a standard upon which to improve as research practices in this nascent field continue to develop,” the authors wrote.
Researchers found that the pooled dropout rate was 26.2%. However, after the team adjusted for publication bias, this figure rose to 47.8%. The analysis found that dropout rates did not differ significantly between the control and intervention groups.
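These benchmarks matter in practice because an a priori power analysis must recruit enough participants to offset expected attrition. As a minimal sketch (not the authors' method), a target analyzable sample size can be inflated by dividing by the expected retention rate; the `inflate_sample_size` helper and the per-arm target of 100 are illustrative assumptions:

```python
import math

def inflate_sample_size(n_effective: int, dropout_rate: float) -> int:
    """Return the number of participants to recruit so that roughly
    n_effective remain after the expected dropout rate."""
    if not 0 <= dropout_rate < 1:
        raise ValueError("dropout_rate must be in [0, 1)")
    return math.ceil(n_effective / (1 - dropout_rate))

# Using the meta-analysis benchmarks for a hypothetical 100-per-arm trial:
print(inflate_sample_size(100, 0.262))  # pooled dropout rate -> 136
print(inflate_sample_size(100, 0.478))  # bias-adjusted rate  -> 192
```

The gap between the two figures illustrates the study's point: whether a trial budgets for the pooled or the bias-adjusted dropout rate changes its required recruitment substantially.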
When a real person provided feedback through the app, the dropout rate was 11.74%. When there was no feedback from a real person, the rate was 33.96%.
“High dropout rates present a threat to the validity of RCTs of mental health apps,” authors of the study wrote. “Strategies to improve retention may include providing human feedback, and enabling in-app mood monitoring. However, it [is] critical to consider bias when interpreting results of apps for depressive symptoms, especially given the strong indication of publication bias, and the higher attrition in larger studies.”
HOW IT WAS DONE
Researchers conducted a systematic search of several healthcare databases, employing a keyword search algorithm to scan the documents.
To be included, studies had to be published in English in peer-reviewed journals. All of the studies in the meta-analysis were randomized controlled trials, all used a mobile app to treat depressive symptoms, and all reported retention at post-treatment assessment.
Researchers included a total of 18 independent studies. In these, a total of 22 apps were used in active treatment conditions.
Historically, mental health apps have fallen short when it comes to validation. Last year a study in Nature Digital Medicine found that a majority of the apps studied do not provide evidence or peer-reviewed studies to back up their products.
Over the last year stakeholders have been pushing for more studies to validate these apps.
“A fun fact is that every day at least 200 apps get added on the app store. Just in the mental health space there are 1,500 apps,” Dr. Ramya Palacholla, research scientist at Partners HealthCare Pivot Labs, said at a Partners HealthCare Pivot Labs event last year. “We are so passionate about validating solutions because when there are so many different apps — 1,500 different apps that claim to do the same thing — then what separates me from the others?”
However, this study highlights some of the potential pitfalls when studying apps such as these.
“Our results suggest that realizing this promise must be considered through the potential lens of strong publication bias and the underlying realities of dropout inherent in clinical intervention studies,” researchers wrote. “However, the digital health field is uniquely suited to rapid adaptations and adjustments, meaning that progress and new solutions are always on the horizon.”