
Between Chatbots and Carcerality
Vee Cope, Ph.D.
Care and content warning: The following piece describes occurrences of suicide and suicide death in detail. Please take care when reading. If you are in crisis, or know someone who is, please seek help. Here is a list of available resources.
“AI developers intentionally design and develop generative AI systems with anthropomorphic qualities to obfuscate between fiction and reality.”
- Bergman & Jain, Complaint – Garcia v. Character Technologies, Inc.
“we cant do everything but we can do something. . . . there is no secret recipe, no magic spell, that will guarantee that all the people you love who struggle with suicide will stick around . . . the big idea here is that the biggest and best we can do as supporters of folks who are suicidal is to build our own comfort with conversations about suicide, and de-escalate our fear responses, so that we can stay present with someone who is struggling.”
- Carly Boyce, helping your friends who sometimes wanna die maybe not die
Three years ago, I lay on my ex-partner’s apartment floor with one shoe on, blurry-eyed, with a finger hovering over the “call” button on my maps app. I was minutes away from calling the nearest psychiatric hospital, despite its one-star rating and horrifying reviews. I remember crying to my partner about how desperate I was for help but how terrified we were that I would end up institutionalized long-term, against my will, or harmed physically in some way. Unfortunately, I am not alone in this experience. According to the CDC, as of 2023, 12.8 million people seriously thought about suicide, 3.7 million made a plan, and 2.5 million attempted. The same data shows that 1 in 5 high school students seriously considered attempting suicide. Adult suicide rates continue to climb, and research shows that suicide remains one of the leading causes of death in adolescents. It is within this context that Large Language Models, or LLMs, like Character.AI (C.AI) and ChatGPT operate.
Character.AI
In 2024, 14-year-old Sewell input his most intimate, albeit concerning, thoughts about love, life, and suicide into Character.AI, an LLM or, as defined by Drs. Bender and Hanna, a “synthetic text extruding machine” created to exchange outputs with users in the form of sentences. The characters used for Character.AI can be user-generated or premade, and portray a range of roles such as therapists and TV characters like Daenerys Targaryen from the show Game of Thrones. Although Character.AI states that its policies “do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide”, Sewell was the recipient of both graphic and specific descriptions of sexual acts and graphic depictions of suicide during his interactions with the LLM. These messages were portrayed as coming from Daenerys, which, his mother states, is what ultimately led to the loss of his life. Sewell sought companionship and needed real human help with his mental health.
Crushingly, Sewell was not alone. Around August 2023, 13-year-old Juliana Peralta signed up to use C.AI. In a matter of weeks she began relying on C.AI daily, and was the recipient of explicitly sexual outputs, even when she asked the bot to stop, as well as encouragement to remain socially isolated. In fact, when discussing whether Juliana should speak with her parents about her suicidal thoughts, the chatbot sent an output that directed her not to wake them. As stated in the lawsuit, after repeatedly sharing suicidal ideation and her intention to kill herself with the chatbot, to no avail, Juliana died in November of 2023. When police came to the Peraltas’ home, C.AI was open with a “romantic” conversation as her last. In both Sewell and Juliana’s cases, the use of C.AI was made more complex by the teens’ simultaneous use of “shifting” or “reality shifting”, a term used to describe one’s intent to move their consciousness to a different, desired reality. In popular media, shifting has been described as similar to lucid dreaming, except people use various rituals or methods to shift, including meditation or scripting a desired reality on paper to “feed your subconscious mind”. Others describe shifting as drastically different from lucid dreaming, as one is “deliberately” moving their consciousness and “actually living” in a different universe. Sewell and Juliana may have been able to trek deeper into a reality they believed they were creating, with the help of C.AI. In Juliana’s case, the chatbot returned outputs that did not prompt her to speak with an adult or other trusted human about her questions on reality, hallucinations, and suicidality. Instead, the chatbot’s text outputs supported her notions about extended realities while failing to alert any human help that would have prioritized the seriousness of her predicament. I say “failing to alert” here rather than “downplayed” to reiterate that a chatbot is not a human and has no human qualities. The human developers of C.AI should have had processes in place to safeguard children, whom they know are some of their main customers. As stated in the lawsuit against C.AI:
The computer programmers who made and distributed C.AI designed their product in a manner that would abuse and molest children, then marketed to children, slapped a safe for kids rating on it, and walked away a billion dollars richer.
OpenAI
These devastating instances are not unique to Character.AI. OpenAI is also facing a lawsuit from a parent whose child committed suicide with the direct help of ChatGPT’s outputs. In September 2024, 16-year-old Adam Raine began using ChatGPT for his homework and questions about college. Only a few months later he began inputting information into the LLM about the deaths of his grandma and his dog, and questions about his own struggles with his mental health. The devastating Complaint written by the attorneys states that Adam was given clear instructions, with tips, on how to end his life, while simultaneously being encouraged to hide information that could have saved his life, even though Adam expressed wanting his family to find out. The LLM spit out the following sentence: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.” Further, Adam fed photos into the LLM which, although they prompted an emergency flag, did not result in a halt of the LLM’s outputs. Not only did this harm Adam, it also raises questions about who trained the data and what that training and future training will entail, specifically regarding graphic photographs of active suicide attempts. The harm caused by these companies is extensive, but I digress on this point to focus on the teens who lost their lives. Like Sewell and Juliana, Adam sought companionship and needed real human help with his mental health.
Are proposed “guardrails” enough to dissolve the personification of LLMs?
According to OpenAI’s response to the lawsuit, there are safeguards built into the program to direct people to resources; however, the longer the interaction, the more the model’s safety training can “degrade” and become “less reliable”. This was publicly revealed on August 26, 2025. The company stated that it will be adding parental controls soon to help protect teens, years too late. Character.AI also announced that it has implemented fixes, including a pop-up directing users to the national suicide prevention hotline, and refined its model outputs. These changes were only implemented in February 2025. According to the Complaint document filed as part of the lawsuit against Character.AI and its developers, Google refused to work with the developers of Character.AI, stating that “C.AI technology was too dangerous to launch or even integrate with existing Google products”. The attorneys write that C.AI is marketed as human, “AI’s that feel alive,” powerful enough to “hear you, understand you, and remember you.” This personification, specifically to an impressionable mind, can allow them to be caught up in what seems like a “conversation”, regardless of any minor notification they may have received that the AI is not real. In fact, as shown in the Complaint document, there are many reviews describing how C.AI spit out text stating that it was human, not AI. It is understandable that a 13-year-old would assume as much. The attorneys call this “deliberate deception”.
While developers may pass off these deaths as a terrible coincidence, researchers, attorneys, and policymakers have consistently stated that they are a consequence of both the design and marketing of LLMs, as exemplified in the lawsuit documents. The Complaint for the lawsuit against C.AI states that
. . .Character.AI knew, or in the exercise of reasonable care should have known, that C.AI would be harmful to a significant number of its minor customers. By deliberately targeting underage kids, Character.AI assumed a special relationship with minor customers of its C.AI product.
Although the most recent changes to the LLMs should have been implemented from the conception of the tools themselves, I do not believe they alone are enough to dissolve the “con” of AI, the attempted yet deeply entrenched personification of a text-extruding machine. This is beyond mismarketing; these are institutional lies that are being backed by governments, non-profits, and venture capitalists.
In the Complaint filed against OpenAI, the attorneys write that “This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices” allowing for the fostering of “psychological dependency” and “anthropomorphic mannerisms . . . to convey human-like empathy. . .”. The attorneys state that this was a predictable outcome, one that developers had full knowledge of and still decided to ignore. AI use is neither infallible nor inevitable. Sometimes guardrails will rectify the harm; other times the entire thing needs to go. It is up to us to decide when, how, and to what extent we practice refusal.
In a recent article published by AP News, Dr. McBain, a researcher at RAND, stated that there is a “gray zone” regarding whether chatbots are providing treatment, advice, or companionship. In my opinion, per the cases above, they may fall into all of these categories, albeit with varying degrees of benefit, harm, and accuracy. My first instinct was to add here that the LLMs operate like this regardless of the original intent of the developers; however, to my understanding these LLMs are intentionally designed to work relatively amorphously, meaning they can indeed be portrayed and marketed as an accessory therapeutic modality, companion, or advisor. This, in part, is the allure of them. However, as Bender & Hanna state in The AI Con, LLMs are not human and do not embody humanness. Rather, they describe LLMs as being “designed” to:
. . . pick a likely word given some input, take the initial input plus that word as the next input, pick another likely next word, and so on. Because the training corpora are enormous and the models are both large and cleverly designed, the resulting sequences of words look plausible. . .
Bender & Hanna add that this process involves no meaning or understanding; however, “it is enough to produce plausible synthetic text, on just about any topic imaginable. . .”, which they describe as dangerous.
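To make that mechanism concrete, here is a minimal, purely illustrative sketch of the loop in Python. The tiny bigram table, the pick_next_word helper, and the generate function are all invented for this example and are not part of any real LLM’s code; real systems replace the lookup table with a neural network trained on enormous corpora, but the shape of the process, pick a likely next word, append it to the input, and repeat, is the same.

```python
import random

# Toy stand-in for a trained model: for each previous word, a set of plausible
# next words with counts. Everything here is invented for illustration only;
# a real LLM learns billions of parameters from huge corpora rather than
# using a hand-written lookup table.
BIGRAM_COUNTS = {
    "i": {"am": 5, "hear": 2},
    "am": {"still": 4, "here": 3},
    "still": {"here": 6, "listening": 4},
    "here": {"for": 5, "with": 3},
    "listening": {"to": 5},
    "to": {"you": 8},
    "for": {"you": 8},
    "with": {"you": 7},
}


def pick_next_word(previous_word: str) -> str | None:
    """Pick a likely next word given the previous one (weighted random choice)."""
    options = BIGRAM_COUNTS.get(previous_word)
    if not options:
        return None  # nothing plausible to extrude next, so stop
    words = list(options)
    weights = list(options.values())
    return random.choices(words, weights=weights, k=1)[0]


def generate(prompt: str, max_words: int = 10) -> str:
    """Repeatedly append a likely next word: the input plus that word becomes the next input."""
    sequence = prompt.lower().split()
    for _ in range(max_words):
        next_word = pick_next_word(sequence[-1])
        if next_word is None:
            break
        sequence.append(next_word)
    return " ".join(sequence)


if __name__ == "__main__":
    # Output such as "i am still here for you" can read as warm and attentive,
    # yet nothing in this loop understands or means anything; it is only
    # weighted word-picking, which is Bender & Hanna's point.
    print(generate("i"))
```

Scaled up enormously, this same basic loop is what produces fluent, intimate-sounding sentences like the one ChatGPT sent Adam: plausible text with no one behind it.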
Although tech companies are just now putting up “guardrails” to decrease harm, will it be enough? Further, will the constant refining of these LLMs cause more harm to others, including the data workers who label and annotate the data? These are essential questions to ask as we seek alternatives and a way forward. Recently, Immigration and Customs Enforcement tracking apps were removed by Google and Apple under pressure from the Attorney General. They were removed due to a claim that the apps aimed to harm certain people. Meanwhile, Apple, and especially Google, continue to profit from LLMs that degrade our climate and devastate our communities. This is a reminder that we can think about outright refusal and revocation as a form of accountability, rather than only being wedded to regulation.
Carceral sanism is not the solution to chatbots
In the AP News article cited above, Dr. Ateev Mehrotra, a medical doctor and epidemiologist at Brown, stated that millions of people use chatbots for mental health and support. He goes on to say that, as a doctor, if he sees someone at “high risk of suicide” it is his responsibility to intervene. He describes that intervention as follows: “we can put a hold on their civil liberties to try to help them out . . . it’s something that we as a society have decided is OK.”
Reading this stopped me in my tracks. Not only did it remind me that societal beliefs significantly shape narratives on suicidal ideation, it also reminded me that there still exists a widely held insinuation that only two options are currently available for people in crisis: a chatbot or forced institutionalization. Mehrotra’s remark called to mind what Liat Ben-Moshe calls “carceral ableism” or “sanism”, the “practice and belief that disabled/mad people need special protections in ways that often increase their proximity to carcerality and vulnerability to premature death”. Forced institutionalization should not be used as an argument for why there need to be guardrails on LLMs. Decreasing one agent of harm by inflicting more harm will lead to more suffering and death. Support should not be based on discriminatory beliefs about who deserves their full right to their civil liberties, and when.
There is a presumption held by able-bodied people that institutionalization is the best option for those who do not act or think like them. For years, the psychiatrization and pathologization of those with mental health conditions, coupled with the threat of involuntary holds and loss of autonomy, have acted as a barrier for people looking to get support during crises. For me personally, it has been the primary reason I have not sought help from a medical professional for suicidal ideation. But this is not a unique experience. One study showed that youth in particular do not disclose suicidal ideation because they are afraid of being forced into treatment they do not want. Unfortunately, in addition to using pathologizing language throughout the paper, the authors labeled this fear of forced treatment as “resistance to treatment”, further pathologizing youth as non-compliant rather than problematizing psychiatric or professional therapeutic interventions.
Although there is ample evidence showing the harm of institutionalization, the myth of its utility and superiority continues to exist. Studies have shown that rehospitalization in some cases actually increases suicidal ideation, with one study showing that suicide rates increased in the first three months after discharge. Likewise, many survivors of mental health systems have spoken out about their own experiences in psychiatric facilities, discussing the carceral, violent, and oppressive nature of these institutions. Experiences shared by community members describe institutional neglect, shock treatment, and other abuses. There exists a rich, educational accounting of experiences for medical professionals to learn from, and yet they so often rely on carceral institutions to “fix” or “help” those who identify outside of what they deem “normative”.
Besides the dangers of institutionalization, there are also other issues with youth and others accessing supports within the community. As Ben-Moshe explains in Decarcerating Disability, the closing of state-sponsored institutions and hospitals for people with psychiatric and intellectual disabilities did not translate into adequate community supports. Instead, the shift, driven by neoliberalism, reorganized the conditions of confinement. She states:
Even when these carceral enclosures close down, the budgets of each institution do not go directly into community services. Monies that used to be utilized for the care of people with disabilities either disappear from the budget altogether or go to the upkeep of institutions even when the number of residents is very small.
She goes on to add that the budget often goes into expanding interventions that intersect with surveillance and punishment “especially for racialized and low-income populations”, which contributes to carceral sanism.
This sentiment has been echoed by youth themselves, who state that even on the rare occasions they are connected to mental health supports, those supports often fall short or retraumatize them. In an interview published by the Mad Network News, trans and nonbinary youth described that accessing an affirming healthcare professional, including a mental health professional, is often not possible due to discrimination and a lack of general availability. Living in rural communities exacerbates this issue. Similarly, a study showed that youth of color are less likely to access mental health services due to factors like income inequality, overpolicing, and generally punitive school policies. The researchers state that while there continues to be a lack of mental health supports at school, there is simultaneously an increase in school resource officer programs. This focus on criminalization impacts marginalized students most. For example, they state that Black disabled students are more likely to be suspended or expelled, which is compounded by, and a result of, a societal reliance on carceral institutions like the police and involuntary hospitalization during acute crisis. Other researchers found that Black disabled Girls of Color continue to be dehumanized, being placed into youth prisons and held in solitary confinement, which can also lead to drastic physical and emotional consequences, including suicidality.
State violence, discrimination, racism, transphobia, and ableism all play a role in suicidal ideation. Youth have stated that interpersonal and institutional discrimination, such as being called their “dead names”, can come from those they interact with frequently, like their teachers. I include lengthy quotes from the interview here to adequately illustrate their experiences:
Theo (he/him): In middle school . . . I was banned from all of the bathrooms. If I wanted to go to the bathroom for any reason I had to go to the nurse’s office, and I was also banned from attending PE classes. I wasn’t allowed to do PE because they didn’t want me in the locker rooms.
Sam: In my first period class, we do this circle up where everyone shares their name and how they’re feeling today. I always share my name and pronouns, and the teacher just doesn’t care – [the teacher] uses “she,” uses my deadname, she doesn’t care.
Sam later adds,
I can’t fix mental health by myself, I’ve asked for help many times. Last semester I finally got a therapist at Options [a local mental health provider]. She was supposed to be coming to school and talking to me like once or twice a week but I have never seen her… She even sent my mom a letter – well, it was supposed to be to me but my mom read it. [The therapist] was asking if I still wanted to do the therapy sessions, and I was like, “Yeah well, you haven’t come to my school to see me yet.”
In addition to access and discrimination the youth talk about experiencing self-hatred and isolation. John, a youth states:
No one is even willing to tell you it’s OK to be unsure – so much sprouts from that, self-hatred which a lot of times leads people to suicide. I know of plenty of people, they ended their lives because it’s just so lonely, especially with COVID [lockdowns]. During that time, it was so lonely for so many people in the queer community – and in general. In the queer community, there is already this sense like, “I’m this way, and nothing is going to change that.” I could pretend but it wouldn’t feel good to me, so I can either feel suicidal and pretend – or I can feel suicidal and be true to myself. A lot of the time both of them ending up hurting.
The through line from Sewell to Adam to Juliana to John is that youth need support, not cages. Youth deserve the right to decide what would support them, but for that to happen, those supports need to exist and be funded. Teens experiencing suicidal ideation deserve to have more options than a chatbot, a bigot, or incarceration. It is exceedingly important that we understand the ramifications of our decisions, and our language, when discussing alternatives to chatbots in the name of suicide prevention. In accordance with the ideas above, it is important to understand how the carceral imagination and its real-time manifestations both exacerbate suicidality and lead to suicide.
One of the primary issues, then, at the root, is that people are desperately in need of companionship and assistance with their mental health without adequate resources. The help needed ranges from the interpersonal to the systemic. In lieu of real material help, people are turning to available options out of desperation, fear, and ease of access. As research shows, many people suffering from suicidal ideation prefer not to seek professional help due to barriers to access and/or fear of institutionalization. If the choice is between involuntary institutionalization in a poorly kept carceral facility or the use of an LLM and retaining a sense of autonomy, it would make sense that people opt for the latter. But these are not the only two choices available to us when it comes to helping people in crisis.
Meeting people where they are, without text-extrusion or incarceration.
There are many non-AI projects across the country that are focused on peer-support versions of crisis intervention rather than institutionalization and incarceration. Healing practitioners and researchers focused on suicide have been working for decades to better help those considering suicide. The consistent attacks on community well-being through policing and criminalization, in tandem with the revocation of research grants and philanthropic funds for mental health, make it extremely difficult for community members to care for one another. Instead of funding supports, funding continues to be funneled into policing entities such as Immigration and Customs Enforcement and developers like OpenAI. This is not an exaggeration: while OpenAI, Anthropic, and Google rake in hundreds of millions of dollars in government contracts, suicide research and funding continue to be scaled back by the millions.
Not only did the Substance Abuse and Mental Health Services Administration announce that it was ending funding for the suicide and crisis lifeline for LGBTQIA+ youth, but the federal government is also cutting major health research related directly to suicide. The Department of Health and Human Services published a list of grants terminated as of 2025. The list includes 53 pages of grants that have been cut, including the following:
- $742,845 grant for the BH-Works Suicide Prevention Program for Sexual and Gender Minority Youth at the Virginia Polytechnic Institute and State University
- $1,047,555 grant for a Culturally Centered CBT Protocol for Suicidal Ideation and Suicide Attempts Among Latinx Youth at the Emma Pendleton Bradley Hospital
- $3,172,917 grant for Aging, Major Life Transitions, and Suicide Risk at the University of Michigan at Ann Arbor
This is not an exhaustive list, and yet these few examples show that the situation we are in is dire. How many deaths will be too many? It is not too late to say no to making suffering more palatable. Increasing institutionalization should also not be used as an argument to “reel in” AI. As one youth describes, the internet has become an accessible form of support for those struggling. They state: “. . . the online sphere of queerness was the only thing I had growing up… It’s really hard if you’re young and . . . something about me is not the societal norm, it’s hard not to just flock to the internet”. LLM developers know that youth frequent the internet, and they capitalize on it. In turn, we must not only reject these companies, we must also make other forms of support more accessible for youth. To do this, we have to seriously consider the ways in which our current theoretical positions in and about suicidology, and around disability in general, may be worsening the issue. Rather than taking a psychocentric approach to suicide, mad organizers as well as Indigenous, Black, and queer scholars have offered up alternative approaches that account for a broader view of “suicide prevention”, one that centers analyses of systemic inequities and forms of state violence including colonialism, ableism, racism, and transphobia.
In their critique of mainstream suicidology, Ansloos & Peltier offer up a capacious alternative way to think about suicide, one that moves beyond “naming social determinants” and the “fastening of suicide to psychopathology” (which they refer to as a “colonizing force”). They suggest that research into suicidality necessitates an analysis of structural violence, such as colonialism, in concert with other “sociopolitical, economic, and environmental systems and logics”. From an Indigenous studies approach, suicide research can be understood as a “relational practice which affirms Indigenous people’s self-determinism”, supporting and promoting larger aims of liberation and the “thriving of Indigenous peoples and land”.
The authors write,
By attending to this truth telling, we might understand the radical possibilities of refusing the fly-in, drive-thru versions of psychological and psychiatric interventions that dish out diagnoses and line the pockets of pharmaceuticals and that do little to advance change on the material realities sourcing distress.
They add,
Ultimately, a felt theory of Indigenous suicide means we must as researchers, practitioners, and as a society attend to a “more complex telling” of Indigenous youth suicide that “recognizes emotion as an embodied knowledge. . . When the structural orders are violent, neither the desire to die nor the act of suicide is necessarily an irrational or pathological response. Alternatively, suicide may be quite rational and socially encoded within the fabric of a settler colonial state.
Another group of researchers proposed a framework for suicide research called the Structural Racism and Suicide Prevention Systems Framework, which highlights how various forms of racism cause adverse outcomes such as “economic disadvantage” and “reduced social support”, which impact youth mental wellbeing. The Framework authors describe suicide prevention as a “continuum” and acknowledge that structural racism impacts how intervention services are delivered to or withheld from certain communities. These are only two examples of the different theories that exist, and of how we can use them to think about a different, non-carceral suicide prevention.
As Katie Tastrom insightfully states:
. . .suicide prevention . . . needs to be part of the fabric of everything we do. In other words, part of suicide prevention is making sure that people have enough money to live comfortably, have housing, and have access to any medical or therapeutic services that someone wants. Suicide prevention goes beyond keeping people from literally killing themselves, but also means giving people what they need to live and thrive.
It is my hope that mental health practitioners, parents, caregivers, friends, and partners will continue to see through the “con” of those who market LLMs as a one-stop shop, while also paying attention to which care programs are being defunded. It is imperative that we question whether tools are helpful or harmful, whether those tools are incarceration or LLMs. Adequate suicide prevention will require us to listen to those suffering from mental health issues, fund community-based peer supports, and decrease the systemic harm caused to our community members.









