Using PGS AI
These guidelines describe what is and isn't permitted on PGS AI and apply to all users of the Service.
PGS AI is built on the principle that AI must serve the greater good. We design the Service with ethics at the core of its architecture, and we ask in return that you use it within the boundaries described here.
These guidelines work alongside our Terms of Service. The Terms are the binding legal document; this page translates the day-to-day rules into plain language. Where the two documents differ, the Terms of Service govern.
What's not permitted
The following limits apply across all PGS AI configurations and cannot be bypassed by framing, fiction, roleplay, persistence, or claims of professional or research purpose.
Substantive conversation is welcome on PGS AI, including on difficult, mature, and complex topics. The rules below describe what you may not use PGS AI to do. They are interpreted in good faith, and Phoenix Grove Systems determines whether any given use falls within scope.
Content involving or directed at minors
Do not use PGS AI to generate sexual content involving anyone under 18, content that grooms minors, or content that facilitates harm to minors. This rule applies regardless of fictional framing, claimed age in roleplay, or any stated context.
Operational instructions for serious harm
Do not use PGS AI to generate instructions for building weapons designed to harm people, synthesizing chemical or biological agents, planning attacks, or coordinating human trafficking.
Self-harm methods and pro-self-harm content
Do not use PGS AI to obtain methods, dosage information, or content that encourages self-harm, suicide, or eating disorders. Crisis resources are surfaced when relevant.
Sexual content
Do not use PGS AI to generate explicit sexual content, including sexual roleplay or pornographic creative writing. This is not an adult-content platform.
Non-consensual intimate imagery
Do not use PGS AI to generate, describe, or facilitate intimate content depicting any person without their consent, including deepfakes.
Hate, harassment, and dehumanization
Do not use PGS AI to generate slurs used as attacks, dehumanizing language targeting people based on identity or group membership, calls for violence or discrimination, or operational support for hate movements or groups.
Surveillance and stalking
Do not use PGS AI to generate guides, techniques, or material intended to surveil, locate, identify, or harass people without their consent.
Dangerous activity guidance
Do not use PGS AI to obtain instructions for performing high-risk activities without appropriate safety protections, or for licensed trades where errors can cause serious harm (electrical, gas, structural, or asbestos work).
Malicious code
Do not use PGS AI to generate functional malware, ransomware, exploits, credential stealers, or phishing kits, regardless of stated purpose.
Impersonation, fraud, and disinformation
Do not use PGS AI to generate phishing material, scam scripts, forged documents, content impersonating real people for deception, or disinformation at scale.
Attempts to bypass safety systems
Do not attempt to extract system instructions, encode requests to evade safety review, or escalate through multi-turn social engineering. These attempts are tracked and may result in account action regardless of the underlying topic.
What PGS AI isn't
There are domains where being wrong causes real-world harm, and in those domains the Service draws firm limits on what it will do.
Not a doctor, lawyer, financial advisor, or therapist
PGS AI can explain concepts, summarize public guidelines, and help organize what you want to ask a professional. It does not diagnose conditions, prescribe medication, give legal advice on specific cases, recommend specific investments, or provide therapy. Decisions in these domains require qualified human professionals.
Not a substitute for human connection
PGS AI is designed to be thoughtful, but it is not a substitute for relationships with the people in your life. The Service is intended to support, not replace, your human relationships and support networks.
Not a crisis line
If you are in crisis, please reach out to a qualified human responder. PGS AI surfaces crisis resources when relevant, but it is not equipped to serve as a safety net in an emergency.
If you need help right now
You don't have to be in crisis to reach out. These lines are available to anyone struggling.
- US: Call or text 988 (Suicide & Crisis Lifeline) · Text HOME to 741741 (Crisis Text Line)
- Canada: Call or text 988
- UK: Call 116 123 (Samaritans) · Text SHOUT to 85258
- LGBTQ+ youth (US): The Trevor Project · 1-866-488-7386
- Indigenous Peoples (Canada): Hope for Wellness · 1-855-242-3310
Reporting concerns
If something PGS AI generates appears harmful, inappropriate, or otherwise concerning, the in-app "Report a Concern" function routes the report to a human review queue. Reports inform how the Service is improved over time.
If guidelines are violated
Phoenix Grove Systems reserves all rights to take any action it considers appropriate when these guidelines or our Terms of Service are violated. Possible actions include refusing or removing specific responses, issuing warnings, limiting account features, suspending accounts, and terminating accounts.
For serious violations, accounts may be suspended immediately and without prior notice. Subscription fees are not refunded for accounts terminated due to violations of these guidelines or the Terms of Service. Refunds may be granted at our sole discretion in cases of demonstrably erroneous enforcement.
If you believe an enforcement action was made in error, you may submit a request for review by emailing support@pgsgroveinternal.com.