With a standard scientific information database like PubMed, even an unrefined set of search terms like “AI” or “artificial intelligence” or “machine learning” in combination with “mental health” or “psychiatry” will yield an increasing number of “hits” for publications involving humans. The volume of that activity has accelerated dramatically since around 2013.
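
This growth is easy to check directly. NCBI's public E-utilities API exposes an esearch endpoint that returns the number of PubMed records matching a query and can be filtered by publication year. The short Python sketch below is illustrative only: the query string and year range are examples, not the precise search behind the statement above.

```python
# Count PubMed records per year for an unrefined "AI + mental health" query,
# via NCBI's public E-utilities esearch endpoint. Illustrative query only.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
QUERY = ('("artificial intelligence" OR "machine learning" OR AI) '
         'AND ("mental health" OR psychiatry) AND humans[MeSH Terms]')

for year in range(2010, 2023):
    params = {
        "db": "pubmed",
        "term": QUERY,
        "datetype": "pdat",   # filter on publication date
        "mindate": str(year),
        "maxdate": str(year),
        "retmode": "json",
        "retmax": 0,          # we only need the total count, not the record IDs
    }
    result = requests.get(EUTILS, params=params, timeout=30).json()
    print(year, result["esearchresult"]["count"])
```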

The smartphone is about to enter its fourth decade. The pivotal smartphone event was arguably the market penetration achieved by the iPhone, released in June 2007. The introduction of apps (Apple's iOS App Store opened in 2008, followed rapidly by Google's app marketplace for Android devices, which also launched that year) transformed the digital ecosystem and, arguably, altered the nature of human experience for many.

These ground-breaking personal devices and emerging platforms quickly triggered a human digital revolution, accompanied by the birth of personalized internet business ventures and the ubiquitous rise of the complex analytical methods we know as machine learning, often labeled artificial intelligence or AI.

The availability of a digital space in which personalized options abound raised many frustrations for those interested in the highly conservative area of health applications. The relative lack of adoption in health was highlighted by a massive leap into virtual interaction, spurred by the social distancing requirements of public health regulations aimed at containing COVID-19 infection, including the development of digital mental health clinics (Rauseo-Ricupero et al, 2021).

For mental health broadly, relatively few carer-client interactions occurred online pre-COVID. The physical distancing requirements of COVID-19 public health policies resulted in a real reduction in access to mental health care for many, sometimes because health system regulations blocked carers' authority to interact virtually. The effects of those limitations were compounded in an environment in which many individuals under mental health care were reluctant to visit a physical facility, especially a hospital, for fear of contracting COVID-19. In the years since 2019 we have seen a massive increase in virtual therapy sessions for mental health consultations across a wide range of health systems, a shift that has also occurred in many medical specialties beyond psychiatry and family medicine.

Without dwelling on the details of what it took for health administrators, health practitioners and health care clients to make this change, it is clear that the transition to digital access is growing and likely here to stay. There is, of course, no upside to COVID, but one unexpected consequence has been to drag the unwilling and the uninformed over the digital divide into a new era of opportunity, and to enable those with skills in this domain to move rapidly to embrace the challenge.

In terms of potential benefits, digital medicine and the power of machine learning / AI promise brave new opportunities. But there are areas of concern, and we need to move forward appropriately and mindfully in relation to them.

One approach to conceptualizing benefits and concerns is to consider dimensions of quality in a health ecosystem. The Health Quality Council of Alberta, in Canada, has produced a well-resourced matrix that is a useful framework for the AI conversation.

The Dimensions of Quality arrayed along the upper edge of the matrix apply to all areas of health need, and they extend quite intuitively to all stakeholder groups in the digital space of health ecosystems.

These Dimensions of Quality are:

ACCEPTABILITY, ACCESSIBILITY, APPROPRIATENESS, EFFECTIVENESS, EFFICIENCY and SAFETY.

Digital accessibility is a great advantage in the domain of mental health in cases where direct interaction, and possibly supervision, are not possible or not necessary. Access to internet- or smartphone-based services reduces, and sometimes removes, significant barriers to initially accessing care and to maintaining access to care, offering potentially large gains in the efficiency of service provision. Buy-in (acceptability) from all stakeholder levels is necessary, and that will depend on the appropriateness and effectiveness of digital service offerings.

A fascinating question that we will only mention here is: will we need human carers in the digital health care delivery space? With rapidly advancing robotic and chatbot development, it is certainly possible that such technological advances will pave the way for open accessibility to services, but acceptability remains a key question. Having seeded that thought, we move on to what are currently more pressing questions.

Safety is a complex issue in this digital space, encompassing many questions. These include ethical concerns around privacy and security for confidential health data, avoidance of the impact of unconscious developer or user bias in digital tool development and implementation (from the perspective of equity, diversity and inclusion, or EDI), and avoidance of adverse consequences of changing practice approaches.

One example of potential adverse consequences concerns safety protocols for virtual interactions when managing non-suicidal and suicidal self-harm risks for health system clients. Some of these safety issues have been at least partially addressed in the context of telepsychiatry and other digital interaction areas. Nevertheless, the breadth and depth of the growing and evolving digital ecosystem imposes a higher priority and a wider set of responsibilities on both stewards of and participants in this space.

It is convenient to consider two domains within the digital health ecosystem. The first comprises publicly and privately funded health care organizations that provide a significant suite of hospital and/or community clinic services. These are generally stable and publicly recognized organizations that typically have ongoing contractual relationships with clients and health care practitioners. The second domain is more akin to a largely wild, unregulated space that is far less stable or sustained than the first (Kahane et al, 2021). The main stakeholders in this second space are independent app developers who operate largely outside regulatory boundaries. In many jurisdictions, wellbeing- or health-oriented apps are not “medical devices” and are therefore not subject to health licensing and approval regulations.

Nothing is ever truly simple, and it must be acknowledged that many publicly and privately funded health care organizations are also interested in implementing app approaches to improving the health and wellness of their clients, and many engage in and around the app development space. The main point is that, in relation to some key aspects of the Dimensions of Health Quality, the second domain is fraught with flaws and potential pitfalls for the user. These concerns have been discussed in depth and raise questions of deep importance about guiding public users towards appropriate, safe and effective choices from the veritable ocean of possible app offerings (Washington Post, 2020; Lagan et al, 2021).

With an emphasis on the client and caregiver interface, and given the complexity of the multidimensional spaces of machine learning and AI, there are many knowledge gaps for the exponentially growing number of stakeholders who believe they need to implement AI or machine learning. In a very timely contribution, several AI experts published a set of three articles in the Canadian Medical Association Journal (CMAJ) in August and September 2021. The first (Verma et al, 2021) outlines broad key issues for implementing machine learning solutions in the health domain. Verma and colleagues describe the need for an initial exploration phase, present some points about designing machine learning solutions, and then touch broadly on how to implement and evaluate such solutions. Written from a broad health perspective, this paper, particularly its concluding paragraph, highlights the need for “a disciplined, inclusive, engaged and iterative approach to development and adoption of these technologies”. Within the very complex specific domain of mental health, this message is no less important.

The other two papers in the set offer much-needed general detail. Cohen et al (2021) examine problems in the deployment of machine learning solutions, highlighting the importance of pre-deployment evaluation. Antoniou and Mamdani (2021) discuss the evaluation of machine learning solutions, highlighting the need for a truly multidisciplinary team of clinician experts, data scientists and implementation scientists, and for both qualitative and quantitative evaluation components. The utility of machine learning and AI in identifying clinical entities and predicting clinical outcomes in health is very clear. The complexity of developing, implementing and continually evaluating such machine learning / AI applications is, however, a little daunting, with clear requirements for culture change and a willingness to change the content and consistency of the health data that we routinely capture.
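
To give the quantitative component of such an evaluation a concrete shape, a minimal pre-deployment check of a model's discrimination and calibration on held-out data might look like the sketch below. The labels and predicted probabilities are synthetic stand-ins for real held-out data, and scikit-learn is assumed; this is not the specific protocol any of the cited papers propose.

```python
# Minimal sketch of a quantitative pre-deployment check: discrimination
# (AUROC), overall probability accuracy (Brier score) and calibration
# (reliability curve) on a held-out test set. All data here are synthetic.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)                # held-out outcome labels
y_prob = np.clip(0.3 * y_true + 0.7 * rng.uniform(size=500), 0, 1)  # model outputs

print("AUROC:", roc_auc_score(y_true, y_prob))       # discrimination / ranking
print("Brier:", brier_score_loss(y_true, y_prob))    # lower is better
prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=10)
for pred, obs in zip(prob_pred, prob_true):
    print(f"predicted {pred:.2f} -> observed {obs:.2f}")  # ideally equal per bin
```

The qualitative component (workflow fit, clinician trust, client experience) has no such one-liner, which is precisely why a multidisciplinary team is needed.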

These CMAJ papers complement two other recent publications that give non-computing specialists access to this field, specifically in relation to mental health. “Evaluating the Machine Learning Literature: A Primer and User’s Guide for Psychiatrists” (Grzenda et al, 2021) and “Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom” (Lee et al, 2021) provide excellent discussion and explanation of some basic principles that illuminate this space in a much-needed way.

A final aspect of AI and machine learning application focuses on predictive analytics using a multi-modal “biomarker” approach. This is an area of prolific scientific inquiry. Such studies hold great promise to elucidate the etiology and underlying mechanisms of disease and to offer personalized medicine solutions for positive treatment outcomes, for example with pharmacotherapy. For such studies to have clinical translational value in the short term, the key measurement systems must be freely accessible to patients. This is not generally the case with modalities such as neuroimaging and complex molecular marker assays. It is important to remember that, in most health systems, even basic pharmacogenomic data are not routinely measured or retained in electronic health records.
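
To make the multimodal flavor of such models concrete, the sketch below combines invented imaging, pharmacogenomic, molecular and clinical features into a single scikit-learn pipeline that predicts treatment response. All feature names and data are hypothetical; only the structure of a multi-modal predictor is the point.

```python
# Hypothetical multimodal "biomarker" predictor: per-modality scaling feeds a
# regularized logistic regression predicting treatment response. Synthetic data.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "hippocampal_volume": rng.normal(size=n),  # neuroimaging modality
    "cortical_thickness": rng.normal(size=n),
    "cyp2d6_activity": rng.normal(size=n),     # pharmacogenomic modality
    "crp_level": rng.normal(size=n),           # molecular modality
    "phq9_baseline": rng.normal(size=n),       # clinical modality
})
y = rng.integers(0, 2, size=n)                 # 1 = responded to pharmacotherapy

model = Pipeline([
    ("scale", ColumnTransformer([
        ("imaging", StandardScaler(), ["hippocampal_volume", "cortical_thickness"]),
        ("molecular", StandardScaler(), ["cyp2d6_activity", "crp_level"]),
        ("clinical", StandardScaler(), ["phq9_baseline"]),
    ])),
    ("clf", LogisticRegression(penalty="l2", max_iter=1000)),
])
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```

Note how the paragraph's translational caveat shows up in the code: every column assumes a measurement system (scanner, assay, genotyping) that most clients cannot routinely access.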

Many questions remain open with respect to the quality indicators that we must consider for machine learning / AI applications in mental health. The future is full of promise, but we have to move forward thoughtfully and transparently to engage in a balanced culture of selective technology adoption. For a recent scoping review of the importance of patient and public involvement, see Zidaru et al (2021).

References

  1. Rauseo-Ricupero N, Henson P, Agate-Mays M, Torous J. Case studies from the digital clinic: integrating digital phenotyping and clinical practice into today’s world. Int Rev Psychiatry. 2021 Jun;33(4):394-403. doi: 10.1080/09540261.2020.1859465. Epub 2021 Apr 1. PMID: 33792463.
  2. Kahane K, François J, Torous J. PERSPECTIVE: The Digital Health App Policy Landscape: Regulatory Gaps and Choices Through the Lens of Mental Health. J Ment Health Policy Econ. 2021 Sep 1;24(3):101-108. PMID: 34554108.
  3. Washington Post 2020. Teletherapy is helping Americans get through the pandemic. What happens afterward? https://www.washingtonpost.com/lifestyle/wellness/app-anxiety-mental-health-covid/2020/12/24/6ab9eb14-40b8-11eb-8bc0-ae155bee4aff_story.html
  4. Lagan S, Emerson MR, King D, Matwin S, Chan SR, Proctor S, Tartaglia J, Fortuna KL, Aquino P, Walker R, Dirst M, Benson N, Myrick KJ, Tatro N, Gratzer D, Torous J. Mental Health App Evaluation: Updating the American Psychiatric Association’s Framework Through a Stakeholder-Engaged Workshop. Psychiatr Serv. 2021 Sep 1;72(9):1095-1098. doi: 10.1176/appi.ps.202000663. Epub 2021 Apr 22. PMID: 33882716.
  5. Verma AA, Murray J, Greiner R, Cohen JP, Shojania KG, Ghassemi M, Straus SE, Pou-Prom C, Mamdani M. Implementing machine learning in medicine. CMAJ. 2021 Aug 30;193(34):E1351-E1357. doi: 10.1503/cmaj.202434.
  6. Cohen JP, Cao T, Viviano JD, Huang CW, Fralick M, Ghassemi M, Mamdani M, Greiner R, Bengio Y. Problems in the deployment of machine-learned models in health care. CMAJ. 2021 Sep 7;193(35):E1391-E1394. doi: 10.1503/cmaj.202066.
  7. Antoniou T, Mamdani M. Evaluation of machine learning solutions in medicine. CMAJ. 2021 Sep 13;193(36):E1425-E1429. doi: 10.1503/cmaj.210036.
  8. Grzenda A, Kraguljac NV, McDonald WM, Nemeroff C, Torous J, Alpert JE, Rodriguez CI, Widge AS. Evaluating the Machine Learning Literature: A Primer and User’s Guide for Psychiatrists. Am J Psychiatry. 2021 Aug 1;178(8):715-729. doi: 10.1176/appi.ajp.2020.20030250. Epub 2021 Jun 3. PMID: 34080891.
  9. Lee EE, Torous J, De Choudhury M, Depp CA, Graham SA, Kim HC, Paulus MP, Krystal JH, Jeste DV. Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom. Biol Psychiatry Cogn Neurosci Neuroimaging. 2021 Sep;6(9):856-864. doi: 10.1016/j.bpsc.2021.02.001. Epub 2021 Feb 8. PMID: 33571718; PMCID: PMC8349367.
  10. Zidaru T, Morrow EM, Stockley R. Ensuring patient and public involvement in the transition to AI-assisted mental health care: A systematic scoping review and agenda for design justice. Health Expect. 2021 Aug;24(4):1072-1124. doi: 10.1111/hex.13299. Epub 2021 Jun 12. PMID: 34118185; PMCID: PMC8369091.