Artificial intelligence (AI) shows promise as a valuable support tool for delivery of mental health services, educational guidance and career counseling. But the American Counseling Association (ACA), the leading organization representing counseling professionals, warns that consumers should not use AI as a substitute for a human counselor.
ACA’s AI Working Group has issued a set of guidelines to help counselors and their clients understand the value and limitations of using chatbots, robotics and other nascent AI tools in mental health services. Clients should understand the technical shortcomings, unresolved biases and security risks of AI before using it as part of their counseling, says Russell Fulmer, PhD, LPC, chair of the working group and professor and director of graduate counseling programs at Husson University in Bangor, Maine.
“AI may offer promising benefits, but its claims can sometimes be overly ambitious and simplified, non-evidence based, or even incorrect and potentially harmful,” the panel states in its recommendations.
AI technologies are designed to mimic the reasoning, decision-making and language comprehension of the human mind. Counselors already use them to automate administrative tasks such as reports on a client’s progress, says Olivia Uwamahoro Williams, PhD, NCC, LPC, clinical assistant professor of counselor education at the College of William & Mary in Williamsburg, Virginia, and a member of the ACA working group. Some are inviting clients to use AI chatbots to help them understand and manage their thoughts and feelings between therapy sessions, Fulmer says.
But as the ACA panel notes, the algorithms carry the same fallibilities and biases as the humans who create them. AI tools may rely on data that overlook certain communities, particularly marginalized groups, creating the risk of culturally insensitive care. They can also produce false claims or inaccurate information. And although they show promise as a diagnostic aid, they cannot replicate the professional reasoning and expertise required to accurately assess an individual’s mental health needs.
“Unlike human counselors, AI lacks the ability to holistically consider a client’s complex personal history, cultural context, and varied symptoms and factors among others,” the guidelines state. “Therefore, while AI can be a supportive tool, it should not replace the professional judgment of professional counselors. It is recommended that AI be used as an adjunct to, rather than a replacement for, the expertise provided by professional counselors.”
The ACA panel recommends that clients consider the following:
- Make sure your provider informs you about what AI can and cannot offer so you can make informed decisions about using it as part of your counseling.
- To protect confidentiality, verify that the AI tools you use comply with federal and state privacy laws and regulations.
- Discuss with your counselor how to mitigate the risks of AI tools providing falsehoods or factual mistakes that could harm your well-being.
- Refrain from using AI for crisis response, and instead use crisis hotlines, emergency services and other forms of assistance from qualified professionals.
Providers should develop a detailed understanding of AI technologies, their applications in counseling services, and their effects on confidentiality and privacy, the working group says. The recommendations call for counselors to receive comprehensive and continuous training in evolving AI applications, says Fulmer, who studies AI in the behavioral sciences.
“We are ethically obliged before we use something to be quite competent in it,” he says. “So, one of our recommendations is to simply accumulate more knowledge about AI.”
The panel also calls on technology developers to involve clients and counselors in the design of relevant AI tools. Including these users would help ensure that AI tools are client-centered and address real-world needs.
ACA is taking a leadership role in ensuring the appropriate use of AI in mental health services, says Shawn Boynes, FASAE, CAE, the organization’s chief executive officer.
“The adoption of AI and its impact on mental health is growing exponentially in a variety of ways that we’re still trying to understand,” Boynes says. “As one of many mental health organizations focused on well-being, we want to lead by offering solutions to help mitigate future concerns.”