When it comes to digital mental health, the old adage about a tree falling in the woods is as relevant as ever: if a digital mental health product launches but is rarely used, does it actually help anyone?

Research Insights


A piece of research published in the Journal of Medical Internet Research in 2019 cut to the core of this very issue (link). The researchers identified 59 free English-language apps with 10,000 or more installs on Google Play. Each provided unguided mental health self-management through mindfulness, tracking, breathing exercises, psychoeducation, or peer support – a roll call of the interventions common in many digital mental health solutions. Using non-personal data from the app analytics tool SimilarWeb, they measured 30-day retention rates for each app.


The chart visualizing daily open rates is forever burnt into my memory. It traces an exponential decay curve: 69% of users opened an app on day 1, followed by a sharp cliff to 10% by day 6, then a gradual slide to 3% by day 30. There is some minor fluctuation depending on each app's focus area, but the trend is blindingly clear.


You could of course argue that most users got what they came for in the first day or two. But if we're honest with ourselves, that's a rose-tinted assessment. Three percent retention is not a figure any of us would like to see on our dashboards. Compare it, for example, to medical and fitness apps, which 2020 industry data found to have 90-day retention rates of 34% and 31% respectively (link).

Retention Rates, Accessibility, and Acceptability


Retention rates are a proxy for programme completion, and the research there is no brighter. Completion rates for digital mental health interventions range from as little as 0.5% to at most 28.6% (link). We all know that long-term improvement in a person’s mental health takes time, and no product owner – mental health or otherwise – would be excited by such low retention or completion.


In the first wave of digital mental health, accessibility was our rallying cry. It was, and in many respects remains, the central argument for investment. The closer within reach we can put mental health care, the more people will get meaningful help, right? The mobile revolution brought information, entertainment and shopping into the palms of people’s hands. Why can’t it do the same for mental health support?


However, solving solely for accessibility masks an equally important problem: acceptability, the real driver of low engagement. The question should not only be ‘Is it within reach?’ but also ‘Do people want it?’.


Think about it this way: Imagine a supermarket half a mile from your home, but its produce isn’t fresh and the store is a maze to navigate (more so than normal). This supermarket is accessible but not acceptable, and you’re much less likely to go there for your weekly shop. In digital mental health, we’ve made great strides toward accessibility, with an ever-increasing array of apps, websites and tools. In many respects, however, the acceptability of those solutions is still nascent.


Take products built in an academic context, for example. They may prove effective in a controlled environment, but in the wild west of the real world, without incentivization or human contact, engagement can drop by a factor of four (link). Controlled trials can tell us a lot about effectiveness in the lab, but they can’t tell us how something will truly play out in the wild.


Outside of academia, government or philanthropic funding structures can force new product solutions into waterfall project management. This linear, sequential approach to building is more fit for road works than digital products and is the opposite of the Agile approach, which has been central to the consumer digital product revolution. Without constant iteration, aspirational ideas can hit a ceiling of mediocrity, never quite breaking through to transformative and truly scalable impact.


The commercial world has its own problems. Seed funding, and revenue to re-invest in constant iteration, get commercial products closer to closing the engagement gap. Yet research has shown that just over 55% of mental health apps claim any sort of evidence base (link), and of those, only 6% have any published evidence for their efficacy.

In all of the above scenarios, various cognitive biases are at play. Confirmation bias, where you default to existing beliefs, can cause academics and clinicians to undervalue the importance of consumer-grade digital user experiences. Anchoring bias, where you lean too heavily on initial information, can cause funders without product experience to assume the first solution will be the best. Finally, the Dunning-Kruger effect, or the overestimation of competence, can cause product managers to undervalue evidence-based psychology. The common thread is: you don’t know what you don’t know.

User Experience


The sweet spot for both accessible and acceptable innovation lies at the confluence of both clinical and product knowledge, and an attendant obsession with user experience. Teams funding or building digital mental health products are wise to ensure that both product managers and clinicians are involved from the very beginning, value each other’s expertise, and work iteratively to help unlock unforeseen value.


A model example comes not from digital mental health, but from the now canonical story of a consumer marketplace we’re all familiar with: Airbnb. In 2009, Airbnb was a team of three and the business was close to bankruptcy. The aha moment came when they realised that hosts didn’t know how to present their listings, which also meant guests didn’t know what to expect.


“The photos were not great photos,” said co-founder Joe Gebbia, years later in an interview. “People were using their camera phones or using their images from classified sites. It actually wasn’t a surprise that people weren’t booking rooms because you couldn’t even really see what it is that you were paying for.”


The founders experimented with taking professional photos of the New York listings. Within a week, sales had doubled. The photos created trust through transparency. They learnt that the problem wasn’t with accessibility but acceptability.

ThroughLine and The Path Forward


As an industry example, at the organization I lead, ThroughLine, we started with an accessibility problem but have invested more and more in solving for acceptability. Initially, our product Find A Helpline was built in response to the plethora of online helpline lists with incorrect phone numbers and broken links. Resources were out of reach, making support at people’s most difficult moments fundamentally inaccessible.


To address this, we built data verification into the heart of our product. Our information is sourced directly from helplines, with over 1,200 services across 100+ countries in our network, and over 600 reverifying their details in recent months. This approach has also unlocked partnerships. Today, we’re fortunate to work with groups ranging from Google to Grammarly to the International Association for Suicide Prevention to help connect people to crisis resources online.


We knew from the beginning that a clean, simple user experience would make our product more acceptable to users. When a person is distressed, it can be difficult to think straight or make decisions, and reducing their cognitive load helps them get what they need. In time, we saw that how we order services is itself an integral part of acceptability, with a big impact on whether a person takes action to contact help. When a person searches for a mental health service online, the results should feel instinctively relevant. Imagine if Google presented search results alphabetically rather than by relevance and usefulness – that’s essentially most mental health resource lists on the internet today.


We wanted to do better, so we built a suite of features including automatic country detection, AI-powered popularity-based ranking, online chat prioritisation, and helpline descriptions. Before these changes, 1 in 12.2 users would take action to contact a helpline. Today, it’s 1 in every 8.5 – an improvement of 44%. At our current scale, that’s about 300 more people each day, many of them experiencing suicidal thoughts, taking action to get help. In making our product experience more acceptable to users, we also made help more accessible.
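For the curious, the 44% figure above follows directly from the two conversion rates quoted. A minimal sketch of the arithmetic (the `relative_lift` helper is illustrative, not ThroughLine code):

```python
def relative_lift(before_ratio: float, after_ratio: float) -> float:
    """Relative improvement when conversion goes from 1-in-`before_ratio`
    users to 1-in-`after_ratio` users."""
    before = 1 / before_ratio  # e.g. 1 in 12.2 users -> ~8.2% conversion
    after = 1 / after_ratio    # e.g. 1 in 8.5 users  -> ~11.8% conversion
    return (after - before) / before

lift = relative_lift(12.2, 8.5)
print(f"{lift:.0%}")  # → 44%
```

Note that the lift is measured relative to the starting rate, which is why moving from roughly 8.2% to 11.8% conversion reads as a 44% improvement rather than a 3.6-point one.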


Like ours, many digital mental health products start with accessibility. Acceptability, however, is the natural arc of development as teams work to squeeze out more impact. Where accessibility can feel solved simply by launching something new, acceptability is often the hard-won prize of years of iteration and improvement (Airbnb unlocked the insight with its photos experiment, but it took many years and thousands of photographers to unlock that value at scale).


If the research is anything to go by, the earlier we think about these issues, the better. In digital products, the simple closing of a tab or an app means a user might never come back. In digital mental health, the consequence of that closed tab is far more dire: we may be failing in our goal to support someone through one of the most difficult times of their lives.

The first wave of digital mental health aimed to make support more accessible. The next wave must tackle an even more challenging task: building acceptable experiences that people want to use. Only then will we keep people engaged and see the widespread positive impact in people’s lives that we all so deeply desire.