Do mental health apps work?
Researchers at the University of Wisconsin-Madison have spent years making sure that their meditation app, called the Healthy Minds Program, passes clinical muster and delivers positive outcomes. Designing studies to test the app’s efficacy led Simon Goldberg, an assistant professor at UW, to confront the mountain of thousands of studies of different mobile mental health tools, including apps, text-message based support, and other interventions.
Researchers had taken the time to synthesize some of the studies, but it was hard, even for someone steeped in the science like Goldberg, to draw definitive conclusions about what works and what doesn’t. So Goldberg teamed up with a few other researchers and took a step back to see if they could put order to the work collected in these meta-analyses — a kind of deep meditation on the existing research inspired by UW’s meditation app.
The meta-review, published on Tuesday in PLOS Digital Health, examined 14 meta-analyses that focused specifically on randomized controlled trials of mental health interventions, including treatments for depression, anxiety, and smoking cessation. In total, the review included 145 trials that enrolled nearly 50,000 patients. The review found universal shortcomings in study design, leading the researchers to write that they "failed to find convincing evidence in support of any mobile phone-based intervention on any outcome."
It’s a provocative claim that hints at the work ahead for an industry garnering billions of dollars in investment to develop products that can help people more easily manage health conditions from their phones. Researchers and entrepreneurs are hoping to collect enough evidence to prove to health care policymakers and the public that their interventions work.
But it’s also a sign of how nascent the industry still is, and how even scientists and companies committed to rigorous evaluation are still sorting out what a good trial of an app looks like. Goldberg told STAT that though the evidence isn’t strong yet, better studies will surely emerge for some of the more promising interventions.
“I would bet the farm that if you wait five years and people keep running these trials, there will be convincing evidence,” he said.
To analyze the pool of studies, the researchers identified 34 different combinations of study criteria: the population targeted, the type of intervention, the control, and the outcome sought. The researchers gathered the effect sizes for these studies and then graded the evidence based on a number of factors, such as the statistical significance of the results and the consistency of findings across studies.
That none of the interventions managed to show “convincing evidence” reflects that they weren’t able to satisfy a high standard, including an absence of publication bias, or the tendency to publish only favorable results, which was rarely assessed.
Eight of the interventions, however, were found to hit a slightly lower bar of having “highly suggestive” evidence — though that came with the caveat that the effect of the interventions and the strength of the evidence both “tended to diminish as comparison conditions became more rigorous,” the authors wrote.
None of those eight used control treatments that were designed to be therapeutic. Only one type of treatment, text message support for smoking cessation, was compared to an active control, meaning the comparison group received something to occupy their time and attention.
“For me, this suggests that mobile phone-based interventions might not be uniquely effective, but still are effective relative to nothing or non-therapeutic interventions,” said Goldberg. “Given the scalability of these interventions, that’s still good news.”
Rajani Sadasivam, a professor at the University of Massachusetts Chan Medical School who develops text-based services to help people quit smoking, agreed that researchers need to consider cost and reach. For example, a face-to-face, 45-minute counseling session to help someone quit smoking would almost certainly work better than a text-based intervention, but the text-based intervention will be easier for far more people to access.
Lisa Marsch, the director of the Dartmouth Center for Technology and Behavioral Health, told STAT that the new meta-review highlights the limitations of existing literature, including that researchers often don’t dig into variables that can impact outcomes and rarely reported adverse effects.
She said one of the downsides is that meta-analyses tend to group together interventions that may be quite different from each other, like a mobile app that delivers a “potent therapeutic approach” like cognitive behavioral therapy and another that provides inspirational messages or tips. Each might have different levels of clinical impact.
“Lumping them together into categories such as smartphone apps or text messaging apps loses sight of this,” she said. “This is quite distinct from something like a medication which… is more invariable across trials.” Another limitation of the study, as noted by Goldberg and colleagues, is that the paper leaves out evidence that hadn’t been examined in past meta-analyses.
Despite the somewhat dismal conclusions about convincing evidence and active controls, Goldberg sees the study’s findings as a positive sign of where the research is headed.
“Given how recent apps are in human history, there’s a ton of research on them, and there’s evidence that they’re yielding benefits,” he said. “To me, it’s super encouraging.”
The toll that depression, anxiety, psychosis and related disorders have taken on society is immeasurable. One can, of course, quote the statistics: About 46 million U.S. adults experience a mental disorder in any given year; 18% of adults experience an anxiety disorder; and 7% have had at least one major depressive episode in the past year.
While these figures are disturbing, the full impact of mental disease really strikes an emotional chord when you know friends, family or business associates who battle these problems. These disorders wreck lives by destroying promising careers, ending marriages, fueling domestic abuse and much more. Unfortunately, with so many individuals suffering from psychiatric problems, it is virtually impossible for every affected patient to receive the one-on-one professional care they need. There just aren’t enough mental health professionals available.
With that in mind, many patients are turning to online and mobile health (mHealth) resources for help. Similarly, several psychiatric experts are taking a closer look at these digital tools to evaluate their potential value, as well as any possible adverse effects.
The evidence on mental health apps and chatbots is mixed. Kathleen Fitzpatrick, Ph.D., with the Department of Psychiatry and Behavioral Sciences at the Stanford School of Medicine, and her associates have studied a chatbot called Woebot, a conversational, text-based agent that uses the principles of cognitive behavioral therapy (CBT) to assist users experiencing depressive symptoms. They performed a randomized controlled trial (RCT) to compare the chatbot to a control group who had access to an ebook on mental health.
They found that the program delivered measurable improvements on the nine-item Patient Health Questionnaire, the seven-item Generalized Anxiety Disorder scale and the Positive and Negative Affect Scale among college students. It is important to note, however, that these students had not been officially diagnosed with depression or anxiety.
Woebot uses natural language processing to create a conversational “counselor” that makes users comfortable revealing some of their most personal thoughts. Many patients with depressive symptoms apparently like this approach because they are too embarrassed to discuss these issues with a real human being. It is estimated that as many as three out of four college students do not get the medical services they need, and it is not because the services are unavailable or expensive. One survey suggests approximately 70% of teenagers are interested in mobile apps to help them monitor and manage psychiatric problems.
A more extensive review of the research on mental health chatbots evaluated 10 studies and concluded: “Overall, potential for conversational agents in psychiatric use was reported to be high across all studies. In particular, conversational agents showed potential for benefit in psychoeducation and self-adherence. In addition, satisfaction rating of chatbots was high across all studies, suggesting that they would be an effective and enjoyable tool in psychiatric treatment.” But on a more cautionary note, the analysis also found that, “given the heterogeneity of the reviewed studies, further research with standardized outcomes reporting is required to more thoroughly examine the effectiveness of conversational agents.”
One of the challenges for clinicians, patients, and consumers seeking mobile apps that address mental health is that many rely on unproven therapy techniques. Woebot is based on the principles of CBT, one of the few evidence-based approaches to mental health that has actually generated verifiable results. However, a content analysis of apps that are intended for patients with clinical depression is less than encouraging. It reviewed 117 programs that focused on depression and found only 12 of them (10.3%) offered CBT or behavioral activation, another evidence-based treatment approach. The analysis found that those apps that focused on CBT concentrated on education about depression; an explanation of how the CBT approach works; depression rating; monitoring cognitions, emotions, physical sensations and behaviors; conceptualization; and behavioral and cognitive techniques. The core ingredients of behavioral activation also include education about depression and explanation of the model, as well as “depression rating, activity monitoring, giving each activity a rating for pleasure, giving each activity a rating for mastery, activity scheduling of pleasant behaviors and activity scheduling of avoided behaviors.”
Among the many mobile apps targeting the mental health community, Deprexis stands out. It too uses CBT, as well as other well-documented approaches to mental health. It generates simulated conversations that last up to an hour.
A meta-analysis found that this digital “therapist” was moderately effective in relieving depressive symptoms. The evidence supporting Deprexis was strong enough to prompt Great Britain’s relatively conservative National Institute for Health and Care Excellence (NICE) to recommend that the National Health Service conduct an in-depth evaluation of the app and online program because it may serve as an “effective alternative therapy for adults with mild to moderate depression.”
A second meta-analysis, performed by Mary Rodgers, Ph.D., M.S., and her colleagues at the University of Michigan’s Department of Internal Medicine, found that for every four depressed patients using Deprexis, one recovered, compared to those who didn’t use the program. Critics still believe these findings should be duplicated in real-world clinical settings, as opposed to the conditions of a research protocol.
While Deprexis has attracted much attention among mental health professionals, it’s by no means the only mobile app that holds promise. Joseph Firth, Ph.D., an Australian investigator with Western Sydney University, Campbelltown, and his associates analyzed 18 RCTs covering 22 mobile mental health apps and found that, overall, the apps significantly reduced depressive symptoms with a moderate positive effect size when compared to inactive control participants. An inactive participant refers to someone who didn’t receive any treatment during a given study; an active control, by contrast, either used an app that was not designed to treat depression, was treated in person or was engaged in other activities. Firth and his colleagues found less of a difference between patients using a mental health app and those in an active control group.
A closer look at this meta-analysis, including several subgroup analyses, reveals important insights. The RCTs that looked at smartphone apps lasted from four to 24 weeks, and depressive symptoms were measured using a variety of well-documented tools, including the Depression Anxiety Stress Scale, the Patient Health Questionnaire (PHQ-9) and the Beck Depression Inventory II scale. The meta-analysis also found that the mobile apps were only effective in users who had self-reported mild-to-moderate depression. They had no significant impact on patients with major depression, bipolar disorder or anxiety disorders.
Some of the subgroup analyses were unexpected. Apps that did not involve any in-person feedback generated statistically significant moderately positive effects, while those that did include human feedback did not. Apps that delivered their content entirely through a mobile device appear to have been more effective than those that were not self-contained, though the difference was just short of reaching statistical significance. Finally, those apps that offered cognitive training had less of an impact on users than those that focused more generally on
An independent meta-analysis, conducted by Jesse Wright, M.D., Ph.D., with the University of Louisville School of Medicine in Kentucky, and his associates specifically evaluated computer-assisted CBT. They looked at 40 RCTs and found a moderately large treatment effect. They also noted that apps in which patients were supported by a clinician or other professional were more effective than apps that were used by patients without any outside help.
A third meta-analysis that looked at the value of mental health apps, performed by German investigators, included 12 studies; it too found that the programs improved depressive symptoms. It’s important to point out, however, that none of the clinical trials that were evaluated lasted for more than 12 weeks, so it’s not possible to know whether these apps have any long-term impact. One unique aspect of this meta-analysis is that it also looked at health professionals’ attitudes toward mental health apps. The survey of 72 clinicians found that: “Significant differences were found between the level of technology experience and how much the healthcare professional would consider the use of mobile applications in clinical practice. Survey participants reported openness towards therapeutic app use but very little knowledge and experience in the field.”
Although RCTs are considered the gold standard to determine the effectiveness of any medical treatment, that doesn’t mean we should disregard less rigorous types of evidence. A mobile app called Mobilyze, for instance, which was developed by the Center for Behavioral Intervention Technologies at Northwestern Medicine, may have merit according to a small controlled clinical trial in which seven patients with major depression saw significant improvements in depressive and anxiety symptoms at the end of eight weeks. Also encouraging was the fact that the patients no longer met the criteria for depression spelled out in the Patient Health Questionnaire (PHQ-9).
Don’t Ignore the Therapeutic Relationship
There may be significant research support for mental health apps, but clinicians and the public need to keep in mind that a mobile app can never replace the therapeutic relationship that develops between a mental health professional and their patients. While standalone apps may benefit the casual user experiencing mild depressive symptoms, they are unlikely to substitute for the kind of trusting, respectful alliance that develops in a doctor/patient relationship.
John Torous, M.D., an international authority on digital psychiatry and director of the digital psychiatry division in the Department of Psychiatry at Beth Israel Deaconess Medical Center in Boston, points out: “The digital therapeutic relationship for mobile health has been ignored because it is often invisible to those building apps. App developers create impressive apps that are marketed to clinicians or patients as a discrete tool….While there isn’t a consensus definition of the therapeutic relationship, elements include mutual trust, alliance, respect, empathy and positive regard between the patient and clinician. The current therapeutic relationship is developed in the brick-and-mortar clinic and strengthened through future in-person clinical visits. The nature of the clinical workflow, appointments, electronic medical records, liability and billing reinforces the predominance of face-to-face therapeutic relationships.”
Torous emphasizes the importance of reframing mental health apps so that they can serve as complementary tools to strengthen the bond between therapist and patient rather than destroy it by encouraging patients to go it alone with an independent computer program — no matter how lifelike and empathetic its conversational interface may seem.
Professional Criteria Can Help Evaluate Apps
Clinicians in need of assistance as they sift through the many mental health apps can turn to the American Psychiatric Association for a set of guidelines. APA has published a list of five criteria to help users choose an app that fits their needs and those of their patient population.
While APA does not recommend specific phone apps, its five-step evaluation process can help narrow down one’s choices. The association recommends first collecting background information and then assessing the app’s risks, including the program’s potential to compromise patient privacy. Then review the evidence on effectiveness, evaluate its ease of use and finally evaluate its clinical integration — including interoperability — and how well it strengthens the therapist/patient relationship.
APA explains, “the last step in the model is interoperability. This is the topmost level, as the ability to share data only matters if this is an app that you and the patient want to use… if it is safe and secure… has some evidence base … and is easy to use…. The reason why interoperability becomes important in this model is because apps should not fragment care and the patient and psychiatrist should be able to share and discuss data or feedback from the app as appropriate. In some cases, the ability for apps to share data may not be relevant. For other apps, however (e.g., mood trackers and medication management), ensuring that such data can be easily shared and accessed by those who need to see it is an important factor to consider.”
Paul Cerrato has more than 30 years of experience working in healthcare as a clinician, educator and medical editor. He has written extensively on clinical medicine, electronic health records, protected health information security, practice management and clinical decision support. He has served as editor of Information Week Healthcare, executive editor of Contemporary OB/GYN, senior editor of RN Magazine, and contributing writer/editor for the Yale University School of Medicine, the American Academy of Pediatrics, Information Week, Medscape, Healthcare Finance News, IMedicalapps.com and Medpage Today. The Healthcare Information and Management Systems Society (HIMSS) has listed Mr. Cerrato as one of the most influential columnists in healthcare IT.
John D. Halamka, M.D., leads innovation for Beth Israel Lahey Health. Previously, he served for over 20 years as the chief information officer (CIO) at the Beth Israel Deaconess Healthcare System. He is chairman of the New England Healthcare Exchange Network (NEHEN) and a practicing emergency physician. He is also the International Healthcare Innovation professor at Harvard Medical School. As a Harvard professor, he has served the George W. Bush administration, the Obama administration and national governments throughout the world, planning their healthcare IT strategies. In his role at BIDMC, Dr. Halamka was responsible for all clinical, financial, administrative and academic information technology, serving 3,000 doctors, 12,000 employees and 1 million patients.