Holly Testerman

As a licensed therapist, I am incredibly concerned by the rise in the use of AI language models to address mental health concerns. Within just the past year, there have been multiple suicides and homicides connected to chatbot use, including a homicide in Maine last year. This incident, and many others like it, happened because a person accessed an AI language model while experiencing psychosis or thoughts of harming themselves or others, and the language model validated those delusional thinking patterns.

Human mental health providers undergo thousands of hours of training to keep people safe when they are experiencing a mental health crisis. The most valuable skill we can offer during a crisis is not knowing what to say; it is performing a mental status exam. During this exam, we observe a person's physical presentation: their movement, their speech patterns, and their ability to make reasonable judgements and decisions. When mental health providers respond to people in crisis, we respond in the context of both the information the person gives us verbally and what we can observe about them. When someone is experiencing strong suicidal ideation or psychosis, they do not have the ability to make safe or reasonable judgements. Sometimes people even recognize this and attempt to use AI language models as a way to get external advice. However, the advice is never truly external, because AI language models cannot observe people in the context of their environment or conduct an accurate mental status exam. Their only value is in "saying the right thing." Too often, instead of recognizing when a person is not making clear judgements and needs additional support, AI language models simply validate them. This has already proven to be dangerous and deadly.

As a therapist, I do have clients who choose to use general AI language models for therapeutic advice outside of session. While my clients are free to make their own decisions about how to use these tools, I make sure to educate them about the risks of using AI language models for mental health support. Many of them are not even aware that people have died as a result of using AI language models this way. AI technology is expanding faster than any of us can keep up with, and it is difficult to stay informed. Banning AI mental health services in NH is needed to keep Granite Staters safe.