
Artificial Intelligence in Behavioral Health: Challenging Ethical Issues

Artificial intelligence (AI) is becoming increasingly prevalent in behavioral health. AI is being used to conduct risk assessments, assist people in crisis, strengthen prevention efforts, identify systemic biases in the delivery of behavioral health services, and predict practitioner burnout and service outcomes, among other uses. At the same time, AI comes with noteworthy ethical challenges, especially related to issues of informed consent and client autonomy; privacy and confidentiality; transparency; client misdiagnosis; client abandonment; client surveillance; plagiarism, dishonesty, fraud, and misrepresentation; and algorithmic bias and unfairness.

The Emergence of Artificial Intelligence

The term “artificial intelligence” was coined in 1955 by John McCarthy, then at Dartmouth College and later a longtime professor at Stanford University. AI combines computer science and large datasets to simulate human intelligence and enable problem-solving in diverse contexts. AI includes what is known as machine learning, which uses historical data to predict and shape new output. The term “generative AI” refers to the creation of images, videos, audio, text, and 3D models by learning patterns from existing data and using them to generate new outputs. AI can take the form of expert systems, natural language processing, speech recognition, and machine vision, and it depends on algorithms to support machine learning, reasoning, self-correction, and creativity.

In health care generally, AI has been used to diagnose disease, facilitate patient treatment, automate repetitive tasks, manage medical records, provide customer service through chatbots, reduce dosage errors, provide robot-assisted services, analyze patient scans, and detect fraud. More specifically related to behavioral health, the field of affective computing, also commonly referred to as emotion AI, is a subfield of computer science originating in the 1990s. This form of AI relies primarily on machine learning, computer vision, and natural language processing. Machine learning software is designed to enhance accuracy in diagnosing mental health conditions and predicting client outcomes. Computer vision analyzes images and nonverbal cues generated by clients, such as facial expressions, gestures, eye gaze, and posture, to interpret clients’ communications. Natural language processing entails speech recognition and text analysis to simulate human conversations via chatbot computer programs and to create and understand clinical documentation.
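To make the natural language processing component more concrete, the minimal Python sketch below trains a toy classifier that scores short client messages for distress. It is not any specific clinical product; the example messages, labels, and cutoff are hypothetical, and scikit-learn stands in here for the far larger systems real emotion-AI tools use.

```python
# A minimal sketch of NLP-based affect screening, assuming scikit-learn
# is installed. The training messages and labels are hypothetical
# stand-ins for the large clinical datasets real systems rely on.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = distressed, 0 = neutral.
messages = [
    "I feel hopeless and can't get out of bed",
    "Nothing matters anymore",
    "Had a good session today, feeling calm",
    "Looking forward to seeing my family this weekend",
]
labels = [1, 1, 0, 0]

# Bag-of-words features plus a linear classifier: the same basic
# pattern (text -> features -> prediction) underlies larger systems.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

new_message = ["I don't see the point in anything lately"]
prob_distress = model.predict_proba(new_message)[0][1]
print(f"Estimated probability of distress: {prob_distress:.2f}")
```

The pipeline is the point of the sketch: text is converted to numerical features and a model maps those features to a prediction. Production tools differ mainly in scale and clinical validation, not in this basic structure.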

Key Ethical Challenges 

Behavioral health practitioners who use or are contemplating using AI face several key ethical considerations related to informed consent and client autonomy; privacy and confidentiality; transparency; client misdiagnosis; client abandonment; client surveillance; and algorithmic bias and unfairness. These key ethics concepts should be reflected in ethics-informed protocols guiding practitioners’ use of AI.

Informed consent and client autonomy

Behavioral health practitioners have always understood their duty to explain the potential benefits and risks of services as part of the informed consent process. When using AI, practitioners should inform clients of relevant benefits and risks and respect clients’ judgment about whether to accept or decline the use of AI in treatment. 

Privacy and confidentiality

Data gathered from clients by practitioners using AI must be protected. Practitioners have a duty to ensure that the AI software they use is properly encrypted and protected from data breaches to the greatest extent possible. Practitioners must take steps to prevent inappropriate access to AI-generated data by third parties, for example, vendors that supply the AI software practitioners use.
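As one concrete, hypothetical illustration of protecting AI-generated client data at rest, the sketch below uses the widely used Python cryptography package to encrypt a transcript before storage. Actual deployments would also require secure key management, access controls, and HIPAA-compliant hosting.

```python
# A minimal sketch of encrypting AI-generated session data at rest,
# assuming the third-party "cryptography" package is installed
# (pip install cryptography). Real systems need proper key management,
# not a key generated and held in memory as shown here.
from cryptography.fernet import Fernet

# In practice the key would live in a secure key-management service.
key = Fernet.generate_key()
cipher = Fernet(key)

transcript = b"Client reported improved sleep; chatbot flagged low mood."
token = cipher.encrypt(transcript)   # ciphertext safe to store
restored = cipher.decrypt(token)     # recovery requires the same key

assert restored == transcript
print("Encrypted record:", token[:40], b"...")
```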

Transparency

Consistent with the concept of informed consent, practitioners who use AI should inform clients of any unauthorized disclosure of clients’ protected health information, for example, as a result of computer hacking or failed online or digital security.

Client misdiagnosis

Practitioners who rely on AI to assess clients’ behavioral health challenges must take steps to minimize the likelihood that their digital protocols will generate misdiagnoses. This may occur when practitioners do not supplement their AI-generated assessments with their own independent assessments and judgment. Misdiagnosis may lead to inappropriate or unwarranted interventions that, in turn, may cause significant harm to clients and expose practitioners to the risk of malpractice lawsuits and licensing board complaints.

Client abandonment

Practitioners who rely on AI to connect with clients must take steps to respond to clients’ messages and postings in a timely fashion, when warranted. To use the legal term, practitioners must take steps to avoid “abandoning” clients who use AI to communicate significant distress. In malpractice litigation, abandonment occurs when practitioners do not respond to clients in a timely fashion or when practitioners terminate services in a manner inconsistent with standards in the profession. For example, a client who communicates suicidal ideation via AI, does not receive a timely response from their clinician, and survives a suicide attempt may have grounds for a malpractice claim.

Client surveillance

One of the inherent risks of AI is the possibility that third parties will use available data inappropriately and without authorization for surveillance purposes. For example, practitioners who provide reproductive health services to clients in states where abortion is illegal must be cognizant of the possibility that prosecutors will subpoena electronically stored information (ESI) generated by AI to prosecute pregnant people who seek abortion services and the practitioners who assist them in their decision making. Although ESI in practitioners’ possession has always been discoverable during legal proceedings, there is a newer challenge when ESI includes information generated by AI (for example, information about reproductive health generated by chatbots used by clients and practitioners). 

Algorithmic bias and unfairness

AI depends on machine learning, which draws on large volumes of available data that may not be representative of practitioners’ clients. As a result, algorithms used to assess clients and to develop interventions and treatment plans risk incorporating significant bias related to race, ethnicity, gender, sexual orientation, gender expression, and other vulnerable or protected categories.
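One simple way to check for such bias is to compare a model’s positive-prediction rates across demographic groups, sometimes called a demographic-parity audit. The sketch below illustrates the arithmetic; the group labels and predictions are hypothetical.

```python
# A minimal sketch of a demographic-parity audit over model outputs.
# Group labels and predictions are hypothetical illustration data.
from collections import defaultdict

# (group, model_flagged_high_risk) pairs from a hypothetical tool.
results = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

counts = defaultdict(lambda: [0, 0])   # group -> [flagged, total]
for group, flagged in results:
    counts[group][0] += flagged
    counts[group][1] += 1

rates = {g: flagged / total for g, (flagged, total) in counts.items()}
disparity = max(rates.values()) - min(rates.values())

print("Flag rates by group:", rates)
print(f"Demographic-parity gap: {disparity:.2f}")  # large gaps warrant review
```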

Ethical Use of Artificial Intelligence

In recent years, AI experts have developed protocols for designing and implementing ethics-based uses of AI. These include concrete steps practitioners can take to increase the likelihood of compliance with prevailing ethical standards. First, AI initiatives should adhere to prominent ethics-informed principles (for example, related to informed consent, privacy, confidentiality, and transparency) to ensure these efforts are designed and implemented responsibly. Second, behavioral health practitioners should solicit peer review and engage in consultation to develop ethics-informed AI protocols. Third, model simulations can be useful for reducing risk and detecting possible algorithmic bias; feedback generated by simulations can identify potential ethics-related problems associated with AI. Finally, practitioners should be trained to treat AI tools as a supplement to, not a replacement for, their professional judgment.
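One way to operationalize that final point is to route low-confidence AI output to a clinician and to frame even high-confidence output as a suggestion requiring confirmation. The threshold and function names in the sketch below are hypothetical.

```python
# A minimal sketch of human-in-the-loop gating: AI output is only
# surfaced as a suggestion, and uncertain cases are routed to a
# clinician. The 0.85 threshold and helper names are hypothetical.
REVIEW_THRESHOLD = 0.85

def triage(client_id: str, ai_label: str, ai_confidence: float) -> str:
    """Return a disposition; the AI never finalizes a diagnosis alone."""
    if ai_confidence < REVIEW_THRESHOLD:
        return f"{client_id}: route to clinician (low AI confidence)"
    # Even high-confidence output is framed as a suggestion to review.
    return f"{client_id}: suggest '{ai_label}' for clinician confirmation"

print(triage("client-001", "moderate depression", 0.91))
print(triage("client-002", "generalized anxiety", 0.62))
```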

The behavioral health professions’ earliest practitioners could not have imagined that today’s professionals would use AI to serve clients. The proliferation of AI is yet another reminder that behavioral health ethics challenges and related standards continue to evolve. 

Frederic G. Reamer, PhD

Frederic Reamer is professor in the graduate program, School of Social Work, Rhode Island College, where he has been on the faculty since 1983. His teaching and research focus on professional ethics, criminal justice, mental health, health care, and public policy. Dr. Reamer received his Ph.D. from the University of Chicago and has served as a social worker in correctional and mental health settings. He chaired the national task force that wrote the Code of Ethics adopted by the National Association of Social Workers in 1996 and recently served on the code revision task force. Dr. Reamer also chaired the national task force sponsored by the National Association of Social Workers, Association of Social Work Boards, Council on Social Work Education, and Clinical Social Work Association that developed technology standards for the profession. Dr. Reamer has lectured nationally and internationally on social work and professional ethics.


