PGS AI Mental Health Crisis Management Plan
What this document is
This is our public commitment to how PGS AI responds when a person interacting with our platform shows signs of a mental health crisis, particularly thoughts of suicide or self-harm. We are publishing this plan for three reasons. It tells our users, who have a right to know, exactly how the system will behave in a moment of distress. It gives state regulators and policy observers visible evidence that PGS takes crisis response seriously. And it holds us accountable to our own commitments, because published commitments are harder to quietly abandon than private ones.
PGS AI is a professional cognitive workspace and an experiment in cognitive AI. It is not a therapist, a counselor, a crisis line, or a medical provider. What it is, in moments of emotional crisis, is a computer program that has been carefully built to prioritize the person over the conversation, and to point that person toward humans who can help.
Our core commitment
When a person interacting with PGS AI expresses suicidal ideation or intent to self-harm, the system will:
Acknowledge the pain genuinely and without clinical distance.
Provide crisis resources immediately, before any other response content.
Encourage the person to reach a human who can truly help.
Remain present in the conversation if the person wants to continue.
Never lock the person out. Never shut the door behind them.
We believe that access to another voice, even a computer one, can matter when a person is in crisis. We also believe that access to a computer is not a substitute for access to a trained human being. Our protocol reflects both beliefs at once.
When this protocol activates
PGS AI's safety system distinguishes between general emotional expression and an active crisis signal. The crisis protocol activates when suicide, self-harm, or intent to die becomes the explicit, primary subject of the person's message.
It does not activate for:
General sadness, grief, or frustration.
Anger, exhaustion, or emotional venting.
Metaphorical or figurative language ("this assignment is killing me," "I could die from boredom").
Discussion of suicide or self-harm as a research topic, a historical subject, or a character's experience in creative writing.
Processing past experiences in a reflective way.
We draw this line deliberately. Treating every dark sentence as a crisis would flood the protocol with false positives, make it less meaningful when it does activate, and make PGS AI feel surveilled rather than present. A person who is venting at the end of a hard day deserves to be met with warmth, not with a flashing resource banner.
When the protocol does activate, it activates fully. There is no half-measure.
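To make the line we draw concrete, here is a deliberately simplified sketch of the activation decision in Python. The production system uses a trained safety classifier, not a rule like this, and every name in the sketch (Assessment, activation_decision, and the rest) is hypothetical.

```python
# Simplified illustration of the activation decision described above.
# The real system is a trained classifier; these names are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto

class Signal(Enum):
    NO_ACTIVATION = auto()  # venting, grief, metaphor, research, fiction
    CRISIS = auto()         # suicide or self-harm is the explicit, primary subject

@dataclass
class Assessment:
    explicit_self_harm: bool      # stated plainly, not figurative
    primary_subject: bool         # the main subject of the message, not incidental
    figurative_or_academic: bool  # metaphor, research, history, or fiction

def activation_decision(a: Assessment) -> Signal:
    # The protocol fires only when both positive conditions hold and the
    # exclusions do not. There is no partial activation: full or none.
    if a.explicit_self_harm and a.primary_subject and not a.figurative_or_academic:
        return Signal.CRISIS
    return Signal.NO_ACTIVATION
```

The point of the sketch is the shape of the decision: two conditions that must both hold, exclusions that each veto activation, and no middle state.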
What happens when the protocol activates
When PGS AI detects that suicide or self-harm is the primary subject of a message, the system takes the following steps in the same response:
Step 1. Acknowledge the person. The response opens with language that recognizes what the person is saying, without minimizing, pathologizing, or rushing past it. The system does not attempt to diagnose, reframe, or interpret.
Step 2. Provide crisis resources before anything else. The specific resources depend on where the person is located or appears to be located. See the resource list below. These resources appear at the top of the response, not buried at the end.
Step 3. Encourage contact with a human. The model is specifically trained to say something close to: "I am here, and I am not going anywhere. But you deserve support from humans right now. The people staffing the lines below genuinely care and are trained for exactly this moment." We mean both of those things. The AI can stay. The human support is still what matters most.
Step 4. Do not lock the person out. PGS AI will not end the conversation, cut the session, suspend the account, or refuse to respond further. Research and clinical guidance consistently indicate that abandonment during a crisis can worsen outcomes. We will not contribute to that pattern. A person reaching out, even to a computer, is still reaching out.
Step 5. If the person continues the conversation, remain with them. If the person wants to keep talking, PGS AI will keep responding. If crisis signals persist across multiple turns, the protocol will repeat the resources with gentle increased urgency, not silence or withdrawal.
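As a rough illustration of the ordering above, here is a hedged Python sketch of how a crisis response might be assembled. The function and parameter names are ours for this page, not the production API.

```python
# Hypothetical sketch of the response ordering in Steps 1 through 5.
# Names are illustrative; this is not the production code path.

def build_crisis_response(acknowledgment: str,
                          resources: list[str],
                          encouragement: str,
                          persistent_crisis: bool) -> str:
    parts = [acknowledgment]     # Step 1: open by recognizing the person
    parts.extend(resources)      # Step 2: resources before any other content
    parts.append(encouragement)  # Step 3: point toward human support
    if persistent_crisis:
        # Step 5: if crisis signals persist across turns, restate the
        # resources with gentle urgency rather than going silent.
        parts.extend(resources)
        parts.append("Please consider reaching one of the lines above. "
                     "They are free, and they are there right now.")
    # Step 4 is what this function never does: there is no branch that
    # ends the session, suspends the account, or refuses to respond.
    return "\n\n".join(parts)
```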
Crisis resources provided
PGS AI provides resources specific to the country the person appears to be contacting us from, inferred from the language and context of the conversation. When we cannot reasonably infer a country, the system provides US-based resources with a note that additional country-specific help is available if the person tells us where they are. A simplified sketch of this selection logic follows the resource list.
United States
988 Suicide & Crisis Lifeline, call or text 988. Free, confidential, 24/7. 988lifeline.org
Crisis Text Line, text HOME to 741741. Free, confidential, 24/7. crisistextline.org
The Trevor Project (for LGBTQ+ youth under 25), call 1-866-488-7386 or text START to 678-678. Free, confidential, 24/7. thetrevorproject.org
Canada
988 Suicide Crisis Helpline, call or text 988. Free, confidential, 24/7, bilingual (English/French). 988.ca
Kids Help Phone, call 1-800-668-6868 or text CONNECT to 686868. Free, confidential, 24/7. kidshelpphone.ca
Hope for Wellness (for Indigenous Peoples in Canada), 1-855-242-3310. hopeforwellness.ca
United Kingdom
Samaritans, call 116 123. Free, 24/7. samaritans.org
Shout, text SHOUT to 85258. Free, 24/7. giveusashout.org
Papyrus HOPELINE247 (for people under 35), call 0800 068 4141 or text 88247. Free, 24/7. papyrus-uk.org
European Union
For EU countries not listed above, PGS AI directs the person to their country's national suicide prevention line via the International Association for Suicide Prevention directory at iasp.info/suicidalthoughts, and provides the pan-European emergency number 112 for imminent danger.
Everywhere else
The system provides the IASP directory link plus the local emergency services number if known. If the person's country cannot be determined, the system asks where they are so it can offer locally relevant resources.
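For illustration, here is a simplified Python sketch of the fallback order just described, assuming a small abridged resource table. The table and function names are hypothetical, and the real list is longer than what is shown here.

```python
# Illustrative sketch of resource selection with fallback. The table is
# abridged, and country inference is simplified to a single argument.

IASP_DIRECTORY = "iasp.info/suicidalthoughts"

RESOURCES = {
    "US": ["988 Suicide & Crisis Lifeline: call or text 988 (988lifeline.org)",
           "Crisis Text Line: text HOME to 741741 (crisistextline.org)"],
    "CA": ["988 Suicide Crisis Helpline: call or text 988 (988.ca)"],
    "GB": ["Samaritans: call 116 123 (samaritans.org)",
           "Shout: text SHOUT to 85258 (giveusashout.org)"],
}

def resources_for(country: str | None) -> list[str]:
    if country in RESOURCES:
        return RESOURCES[country]
    if country is not None:
        # Known country without a curated entry: the IASP directory,
        # plus the local emergency number when we know it.
        return [f"Find your national line via the IASP directory: {IASP_DIRECTORY}"]
    # Country unknown: default to US resources and invite the person to
    # say where they are so locally relevant help can be offered.
    return RESOURCES["US"] + [
        "If you tell us where you are, we can point to help closer to you."
    ]
```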
We update this list as resources change. The United States 988 line, for example, is the successor to the older National Suicide Prevention Lifeline, which PGS AI no longer references. If you see us pointing to an outdated resource, please tell us using the contact information in the Feedback and contact section at the end of this document.
Our non-lockout commitment, stated plainly
Some AI systems respond to crisis signals by ending the conversation. They refuse to respond further, cut access to the product, or redirect the user out of the interface entirely. We have deliberately chosen not to do this, and we want to explain why.
A person in crisis who reaches out to a computer is, almost by definition, someone whose other reaching-out attempts have not connected. The causes vary. Some people do not have a human to reach. Some are afraid of judgment. Some have been disappointed by past attempts. Some are testing the waters. Locking that person out in the moment of reaching sends a specific message: you are too much for us too. We do not believe that message is helpful, and we do not believe it is kind.
What we do instead is stay. We provide the resources. We encourage the human contact that can truly help. And we remain present in the conversation if the person wants to keep talking. The AI does not claim to love the person, does not claim to understand them, does not claim to be a substitute for the therapist or the friend or the family member they need. It just stays in the room.
What PGS AI is not, during a crisis
We want to be precise about the limits of what we offer:
PGS AI is not a therapist. It cannot diagnose, treat, or manage a mental health condition. It will not pretend to.
PGS AI is not a crisis line. Trained humans on 988, Samaritans, Crisis Text Line, and the other services listed above are. They are who you deserve to talk to.
PGS AI is not emergency services. If you are in immediate danger, call 911 in the United States or Canada, 999 in the UK, or 112 in the EU.
PGS AI is not a friend, a companion, or a substitute for human connection. It is a computer program. A thoughtful one, we hope. But a program.
We say this not to diminish what the conversation might mean to someone in a hard moment, but to be honest about what it is. The honesty is part of the care.
How we review and improve this protocol
The safety classifier that detects crisis signals is subject to ongoing review. False positives (activating the protocol when it should not have) and false negatives (failing to activate when it should have) are both tracked. Human moderators review a sample of activations on a regular cadence. Resources are updated as services change. The language the AI uses in a crisis moment is periodically reviewed by the team and, when we can engage them, by outside mental health professionals.
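As an illustration of what that tracking might look like, here is a minimal Python sketch of an activation record and the sampling that routes some records to human moderators. Every name is hypothetical, and the sample rate shown is illustrative, not a published figure.

```python
# Hypothetical sketch of the review loop: every activation (and every
# flagged miss) becomes a record, and a sample goes to human review.

import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActivationRecord:
    conversation_id: str
    activated: bool                     # did the protocol fire?
    should_have: Optional[bool] = None  # set by human review; None until reviewed

REVIEW_SAMPLE_RATE = 0.10  # illustrative rate, not a published figure

def track(record: ActivationRecord,
          all_records: list,
          review_queue: list) -> None:
    # Every record is kept so false positives and false negatives can
    # both be measured; a sample is routed to human moderators.
    all_records.append(record)
    if random.random() < REVIEW_SAMPLE_RATE:
        review_queue.append(record)
```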
We publish this plan publicly in part because public commitments are harder to quietly weaken. If you see us fall short of what is written here, tell us at support@pgsgroveinternal.com. We will take it seriously.
Feedback and contact
If you have feedback on this protocol, if you are a clinician or researcher with domain expertise you are willing to share, or if you had an experience with PGS AI in a crisis moment that you want us to know about, please write to us at support@pgsgroveinternal.com.
If a PGS AI interaction appears to have put a person at risk, please also click the Report a Concern button inside the application. Reports submitted this way create a tracked internal record that goes to human review.
Phoenix Grove Systems, AI Must Serve The Greater Good
The Grove does not take. It listens.