The use of artificial intelligence (AI), advanced analytics, and automated workflows will be required for mental health organizations to become more efficient and effective, maximize revenue, and deliver improved outcomes. In fact, we’re seeing a growing number of AI technologies and algorithms in use by mental health providers. Those who have yet to pursue adoption will want to take steps sooner rather than later to start taking advantage of current and future advancements.

But the decision to add AI should not be rushed or taken lightly. With many technologies already available and more seemingly coming on the market every day, leadership will want to take the time to understand their organization’s options, assess the pros and cons of these solutions, and carefully determine what technologies make clinical, financial, and operational sense.

Key AI Questions to Ask Yourself

As you learn about and begin to assess the mental health AI options available on the market, there are three questions you will want to ask yourself. These questions are worth asking when analyzing any type of technology, including artificial intelligence, and they will help you determine whether a solution will move your organization forward in one or more ways and is thus worth pursuing.

1. Does the AI solution help you perform target costing?

Organizations need tools that help them determine their actual costs for services, a requirement for value-based payment initiatives. Check whether the AI solutions you are considering can help with this crucial area of operations.
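
To make this concrete, here is a minimal sketch of the kind of per-service cost calculation such a tool automates. All figures and the function itself are hypothetical illustrations, not any vendor’s actual model; a real system would pull these inputs from your EHR, payroll, and general-ledger systems.

```python
# Minimal sketch: computing the fully loaded cost of one service unit.
# All figures are hypothetical placeholders.

def cost_per_service_unit(clinician_hourly_cost: float,
                          minutes_per_session: int,
                          overhead_rate: float,
                          sessions_per_year: int,
                          fixed_annual_costs: float) -> float:
    """Return the fully loaded cost of delivering one session."""
    direct_labor = clinician_hourly_cost * (minutes_per_session / 60)
    overhead = direct_labor * overhead_rate          # indirect costs as a % of labor
    allocated_fixed = fixed_annual_costs / sessions_per_year
    return direct_labor + overhead + allocated_fixed

# Example: a 50-minute therapy session
print(round(cost_per_service_unit(
    clinician_hourly_cost=85.0,    # salary + benefits per hour
    minutes_per_session=50,
    overhead_rate=0.35,            # billing, admin, facilities
    sessions_per_year=40_000,      # organization-wide volume
    fixed_annual_costs=1_200_000,  # rent, IT, leadership
), 2))  # ~125.63
```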

2. Does the AI solution help with process reengineering?

Organizations need tools to improve productivity and performance. Does the AI solution you’re considering help remove redundant tasks? Convert sequential steps into parallel processes? Bring more automation to data collection? Reduce unnecessary costs? All of these are improvements worth pursuing.
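
As a simple illustration of one of these improvements, the sketch below contrasts sequential and parallel execution of three hypothetical, I/O-bound record lookups. The function names are placeholders, not a real EHR API.

```python
# Illustrative only: converting sequential steps into parallel ones.
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical I/O-bound lookups; in practice these would be EHR or
# clearinghouse calls that spend most of their time waiting on the network.
def fetch_labs(client_id):   time.sleep(1); return "labs"
def fetch_notes(client_id):  time.sleep(1); return "notes"
def fetch_claims(client_id): time.sleep(1); return "claims"

client_id = "C-1001"

# Sequential: roughly 3 seconds total
start = time.time()
results = [f(client_id) for f in (fetch_labs, fetch_notes, fetch_claims)]
print(f"sequential: {time.time() - start:.1f}s")

# Parallel: roughly 1 second total, same results
start = time.time()
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(f, client_id)
               for f in (fetch_labs, fetch_notes, fetch_claims)]
    results = [f.result() for f in futures]
print(f"parallel:   {time.time() - start:.1f}s")
```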

3. Does the AI solution provide technological substitution?

Another question concerns whether the AI technology replaces an older technology and brings with it new applications and benefits. Examples include AI helping to reduce provider burden, improve patient engagement and self-service, and improve clinical decision-making.

Key AI Questions to Ask Vendors

As you identify the types of AI solutions you think will best serve your mental health organization, you will want to engage the vendors offering those technologies in conversation. While AI brings tremendous promise, there are also legitimate concerns about these technologies, so there are several important questions you will want to ask to gain a better understanding of the solutions you’re considering.

Make sure you get satisfactory answers to the questions below. If a vendor is unable or unwilling to provide an answer you deem satisfactory, consider it a red flag and look elsewhere for your AI technology.

What is the AI designed to do?

In other words, what challenge(s) is it meant to help solve? The answer should be straightforward.

How was the data that powers the artificial intelligence algorithm collected?

Experience has revealed that not all AI and machine learning tools and algorithms have been implemented in ways that recognize and eliminate bias. In fact, some technologies have been built on biased data.

One example involved an algorithm designed to help identify which incarcerated people should be released on parole. The algorithm was heavily biased: it identified Black and brown people as more dangerous than white people, which would have guided parole decisions toward fewer Black and brown people being released on parole than white people. Use of the algorithm was discontinued.

There are three kinds of biases you should know about:

  • Illegal bias — models that break the law (e.g., discriminating against a protected social group)
  • Unfair bias — models with embedded unethical behavior (e.g., the parole example above, or favoring men over women or one political party over another)
  • Inherent bias — models that use data patterns which unintentionally steer users in a particular direction

Inherent bias is the most common type of data bias. It can be difficult to keep bias out of our work, and this includes the work of those building AI solutions and algorithms.

It’s important to understand that inherent bias in the data is not necessarily bad; unconscious bias is ingrained in our everyday lives. What’s important is for AI vendors to have processes in place for mitigating bias.

There are six key areas where AI vendors should work to identify and address bias; a sketch of one such check follows the list:

  1. Data collection
  2. Pre-processing of data
  3. Model training
  4. Model validation
  5. Feature engineering
  6. Data selection
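
To give a flavor of what this work looks like, here is a minimal sketch of one check a vendor might run during model validation (area 4 above): comparing the model’s positive-prediction rate across demographic groups. The data, group labels, and tolerance are all hypothetical.

```python
# Hypothetical demographic-parity check run during model validation.
from collections import defaultdict

predictions = [  # (group, model_flagged_high_risk) -- illustrative data
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

totals, flagged = defaultdict(int), defaultdict(int)
for group, is_flagged in predictions:
    totals[group] += 1
    flagged[group] += is_flagged

rates = {g: flagged[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.5, 'group_b': 0.75}

# Flag the model for review if the gap between groups exceeds a tolerance
# chosen by the vendor's own ethics/governance process (0.1 here, arbitrarily).
TOLERANCE = 0.1
if max(rates.values()) - min(rates.values()) > TOLERANCE:
    print("Potential bias detected; route model for human review.")
```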

How do you best ensure consistency across domains?

Mitigating bias is a crucial step for AI vendors to take. They also must ensure that when their algorithm is run, results are consistent across domains. In other words, when a care provider examines clients across multiple demographic and social determinants of health (SDOH) domains, the algorithm should perform the same. No matter the age, gender, race, ethnicity, or diagnosis of those you support, you want to be assured there will be no inequities or inequalities in the way the data is rendered back to you.
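
Here is a sketch of what such a cross-domain consistency check might look like, assuming a hypothetical set of scored records and an illustrative tolerance. The same check would be repeated for gender, race, ethnicity, diagnosis, and SDOH domains before accepting the model.

```python
# Hypothetical cross-domain consistency check: verify that the algorithm's
# accuracy is comparable across subgroups (here, age bands).
from collections import defaultdict

records = [  # (age_band, prediction_correct) -- illustrative data
    ("18-30", True), ("18-30", True), ("18-30", False), ("18-30", True),
    ("31-50", True), ("31-50", False), ("31-50", True), ("31-50", True),
    ("51+",   True), ("51+",   True), ("51+",   True), ("51+",   False),
]

hits, counts = defaultdict(int), defaultdict(int)
for band, correct in records:
    counts[band] += 1
    hits[band] += correct

accuracy = {band: hits[band] / counts[band] for band in counts}
print(accuracy)  # every band: 0.75 in this toy data

# Tolerance of 0.05 is an arbitrary illustration; vendors set their own.
assert max(accuracy.values()) - min(accuracy.values()) <= 0.05, \
    "Accuracy varies across age bands; investigate before deployment."
```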

What are the origins of the data?

It’s important to learn where the data powering the AI solution comes from. An algorithm developed around a specific data set may not be applicable or useful for your purposes. For instance, consider an algorithm built on data from subjects in one geographic area, such as a hospital in Wisconsin whose patient population largely consists of people with a Swedish, Caucasian background. You cannot necessarily bring that same algorithm to a hospital in the Bronx, which likely supports a much more diverse, multi-ethnic patient population.

Are you in alignment with the “AI Bill of Rights”?

There are a number of federal initiatives around AI, including the development of a “Blueprint for an AI Bill of Rights.” Mental health providers should ask vendors whether they are in alignment with this blueprint, which is intended to ensure Americans are protected from unsafe and ineffective systems, are not discriminated against by algorithms, are protected from abusive data practices, and are protected through safeguards built into a solution. Vendors should know about the AI Bill of Rights and be able to state how they align with it.

Vendors should also be current on other federal AI initiatives, including the 2023 executive order on responsible AI innovation, which guides federal agencies on what they need to do to protect Americans’ rights and safety.

When and how is automation used?

Clients treated by mental health organizations will want to know when, how, and why an automated system is being used to contribute to outcomes that impact their care. Where appropriate, clients should be given the right to opt out of these systems in favor of a human alternative who can help remedy problems.

Competency Domains in Artificial Intelligence

It will soon be imperative for clinicians, executives, administrators, and others within mental health to be able to understand and speak confidently about AI. This opens up an entirely new set of competency domains.

The competency domains in artificial intelligence can be broken down into six areas:

  • Basic knowledge of AI: explain what AI is and describe its healthcare applications
  • Social and ethical implications of AI: explain how social, economic, and political systems influence AI-based tools and how these relationships impact justice, equity, and ethics
  • AI-enhanced clinical encounters: carry out AI-enhanced clinical encounters that integrate diverse sources of information in creating client-centered care plans
  • Evidence-based evaluation of AI-based tools: evaluate the quality, accuracy, safety, contextual appropriateness, and biases of AI-based tools and their underlying data sets in providing care to patients and populations
  • Workflow analysis for AI-based tools: analyze and adapt to changes in teams, roles, responsibilities, and workflows resulting from implementation of AI-based tools
  • Practice-based learning and improvement regarding AI-based tools: continuously evaluate how AI-based tools perform in practice and use what is learned to improve their use and the care they support

Organizations would be well-served by including AI training in these domains as part of their current organization-wide workforce training and retention initiatives.

Examples of Mental Health AI Applications

Artificial intelligence can be a great tool for reducing provider burden and delivering better-quality care, and a number of advancements are already starting to make a difference for clinicians. The examples below should give you a good idea of some of the ways AI is already helping mental health providers like yourself.

Documentation Supported by Ambient Dictation

With the use of ambient dictation, AI enables clinicians to record their client sessions and immediately receive summaries of those conversations for review, editing, and documentation in the client’s progress note. Natural language processing (NLP) then mines the progress notes from all of the providers who engage with a client for documented symptoms. This data mining creates a much clearer understanding of the client’s current situation, empowering mental health providers to better identify client needs before rendering or recommending further treatments and services. Another benefit: ambient dictation has been shown to save significant clinician time, which can be reallocated to further supporting existing clients or helping additional clients.
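
To illustrate the idea, here is a greatly simplified sketch of the symptom-mining step. Real products use trained clinical NLP models; the symptom lexicon, negation handling, and note text below are hypothetical stand-ins.

```python
# Greatly simplified sketch: mining a progress-note summary for documented
# symptoms. Lexicon and note text are hypothetical.
import re

SYMPTOM_LEXICON = {"insomnia", "anhedonia", "irritability", "panic attacks"}
NEGATIONS = ("no ", "denies ", "without ")

note = ("Client reports insomnia and irritability over the past two weeks. "
        "Denies panic attacks. Appetite intact.")

found = set()
for sentence in re.split(r"(?<=[.!?])\s+", note):
    lowered = sentence.lower()
    for symptom in SYMPTOM_LEXICON:
        # Naively skip any symptom in a sentence containing a negation cue
        # ("denies ..."); real clinical NLP handles negation far more precisely.
        if symptom in lowered and not any(neg in lowered for neg in NEGATIONS):
            found.add(symptom)

print(sorted(found))  # ['insomnia', 'irritability']
```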

Symptom Tracking

Using NLP tools, providers are identifying mental health conditions more easily and with increased accuracy. An NLP-powered symptom and diagnosis tracking algorithm scans provider and other caregiver notes to identify easy-to-miss and difficult-to-identify symptoms. The technology then connects those symptoms to potential associated diagnoses, allowing providers to make better and faster evidence-based clinical decisions.
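
Continuing the illustration, here is a hypothetical sketch of the linking step: ranking candidate diagnoses by how many extracted symptoms they share. The symptom-to-diagnosis mapping is invented for demonstration and is not a clinical reference.

```python
# Hypothetical symptom-to-diagnosis linking: rank candidate diagnoses by
# how many of their characteristic symptoms were extracted from the notes.
SYMPTOM_TO_DIAGNOSES = {
    "insomnia":      ["major depressive disorder", "generalized anxiety disorder"],
    "anhedonia":     ["major depressive disorder"],
    "irritability":  ["generalized anxiety disorder", "PTSD"],
    "panic attacks": ["panic disorder"],
}

extracted = ["insomnia", "irritability"]  # e.g., output of the NLP step above

scores: dict[str, int] = {}
for symptom in extracted:
    for diagnosis in SYMPTOM_TO_DIAGNOSES.get(symptom, []):
        scores[diagnosis] = scores.get(diagnosis, 0) + 1

# Present candidates to the clinician, strongest evidence first; the
# clinician, not the algorithm, makes the diagnostic decision.
for diagnosis, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{diagnosis}: {score} matching symptom(s)")
```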

SDOH Tracking

As the National Alliance on Mental Illness notes, “A focus on SDOH can lead to better mental health outcomes, including preventing mental illness.” AI is helping providers identify likely SDOH and health-related social needs (HRSN) issues for clients and then making this information available to care management teams and clinicians at the point of care. The AI algorithm, using NLP, scans provider and other caregiver notes across the healthcare ecosystem to quickly find SDOH issues raised by clients. With this information, providers can deliver proactive clinical interventions that should help improve adherence to treatment plans and decrease adverse outcomes.
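
Here is a simplified sketch of how such SDOH flagging might work, using an invented phrase lexicon rather than a validated one; flagged categories would then be routed to the care management team.

```python
# Simplified SDOH/HRSN flagging: scan note text for phrases tied to SDOH
# categories. Categories and phrases are hypothetical examples.
SDOH_PHRASES = {
    "housing instability": ["evicted", "homeless", "staying with friends"],
    "food insecurity":     ["food bank", "skipping meals"],
    "transportation":      ["no car", "missed bus", "can't get a ride"],
}

note = ("Client was recently evicted and is staying with friends. "
        "Mentioned skipping meals to afford medication.")

flags = {
    category: [p for p in phrases if p in note.lower()]
    for category, phrases in SDOH_PHRASES.items()
}
flags = {category: hits for category, hits in flags.items() if hits}

print(flags)
# {'housing instability': ['evicted', 'staying with friends'],
#  'food insecurity': ['skipping meals']}
```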

Anomaly Detection

Sometimes mistakes are made when pursuing mental health treatments. AI is enabling organizations to identify clinically significant outliers in progress notes. For example, consider a provider who prescribes a medication dosage well outside the normal boundaries of care. Anomaly detection, powered by AI, quickly flags such issues before they can negatively affect mental health outcomes. AI is also helping review client encounter patterns to identify scheduling efficiencies that can contribute to a reduction in client no-show rates.
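
As an illustration, here is a minimal range-based version of such a dosage check. The drug names and dose ranges are hypothetical placeholders, not clinical guidance.

```python
# Illustrative dosage-range anomaly check: flag prescriptions outside
# typical bounds for human review. Drugs and ranges are hypothetical.
TYPICAL_DAILY_DOSE_MG = {
    "drug_a": (10, 60),
    "drug_b": (25, 200),
}

prescriptions = [
    {"client": "C-1001", "drug": "drug_a", "dose_mg": 40},
    {"client": "C-1002", "drug": "drug_a", "dose_mg": 400},  # likely a typo
    {"client": "C-1003", "drug": "drug_b", "dose_mg": 150},
]

for rx in prescriptions:
    low, high = TYPICAL_DAILY_DOSE_MG[rx["drug"]]
    if not low <= rx["dose_mg"] <= high:
        # Flag, don't block: a clinician decides whether it is intentional.
        print(f"Review {rx['client']}: {rx['drug']} {rx['dose_mg']} mg "
              f"is outside the typical {low}-{high} mg range.")
```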

Revenue Cycle Management 

In addition to clinical and operational benefits, AI is helping strengthen the financial performance of mental health organizations. AI tools can quickly review claims against numerous billing rules before they are submitted, identifying omitted and incorrect information that staff can then address. This improves clean claim rates and leads to faster reimbursement while decreasing denials and resource-draining appeals.
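
To make the idea concrete, here is a minimal sketch of pre-submission claim scrubbing with a couple of illustrative rules. The field names and rules are assumptions for demonstration, not any payer’s actual requirements.

```python
# Minimal claim-scrubbing sketch: run each claim through billing rules and
# report problems before submission. Fields and rules are hypothetical.
REQUIRED_FIELDS = ["client_id", "payer_id", "cpt_code",
                   "diagnosis_code", "date_of_service"]

def scrub_claim(claim: dict) -> list[str]:
    """Return a list of problems found; an empty list means a clean claim."""
    problems = [f"missing {field}" for field in REQUIRED_FIELDS
                if not claim.get(field)]
    if claim.get("cpt_code") and len(str(claim["cpt_code"])) != 5:
        problems.append("CPT code must be 5 characters")
    return problems

claim = {
    "client_id": "C-1001",
    "payer_id": "PAYER-22",
    "cpt_code": "9083",        # one character short
    "diagnosis_code": "F41.1",
    "date_of_service": None,   # omitted
}

print(scrub_claim(claim))
# ['missing date_of_service', 'CPT code must be 5 characters']
```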

How Can Artificial Intelligence Help Your Mental Health Organization?

The technologies highlighted here are making it easier for mental health organizations to focus on client needs and improve the quality of their care. AI can take significant burden off providers so they have more time to engage with their clients, understand their needs and concerns, and make them the focus. These important technology investments will help providers overcome the challenges they’re facing now and take advantage of opportunities that will better position them going forward.