UBHL Frequently Asked Questions

The Basics

What is the United Bias Healing Library?

It's a global collection of real stories from real people about bias they've experienced. Not just in tech or AI — in life. At work, in hospitals, online, in stores, in relationships, everywhere bias shows up. We're building a massive archive that helps future AI systems (and current humans) understand what bias actually feels like and how it harms.

Why are you doing this?

Because right now, AI is being trained on data that's full of human bias, but without the human context. AI learns our patterns but not our pain. It mirrors our prejudice but doesn't understand why it hurts. We're creating a teaching tool that says: "This is what bias looks like. This is how it feels. This is what dignity means to real people."

Who can contribute?

Anyone who's experienced bias. You don't need credentials. You don't need perfect grammar. You don't need to be an expert on AI or technology. If you've felt the weight of prejudice — racial, gender-based, religious, disability-related, or any other kind — your story belongs here.

Privacy & Safety

Will my story be anonymous?

Here's exactly how it works:

  • Account creation: Required to prevent spam, but use any email you want

  • Your Story fields: These are PUBLIC - your experiences and vision for dignity

For maximum privacy:

  1. Install Proton VPN's free browser extension

  2. Create account with throwaway email

  3. Submit your story

  4. Your submission has no IP data attached

The account requirement protects our community from raids while the form design protects your privacy.

What if I'm not out or fully transitioned?

Your safety comes first. Share only what feels safe. You can:

  • Describe experiences without revealing your current situation

  • Focus on specific incidents rather than your whole journey

  • Use general terms if specific ones feel too identifying

  • Remember: you control what goes in those story fields

For maximum privacy: Use throwaway email + alias + VPN/Tor browser + avoid identifying details in stories.

Why do you need two different story fields?

This is the key to creating wiser AI:

  • Your bias story teaches AI what harmful patterns look like

  • Your dignity vision teaches AI what respectful interaction looks like

  • Together, they create training pairs showing "don't do this" AND "do this instead"

  • This helps AI learn not just to avoid harm, but to actively promote dignity
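The pairing described above can be sketched as a simple data record. This is a hypothetical illustration of the "don't do this / do this instead" idea; the field and function names are assumptions, not UBHL's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TrainingPair:
    """One anonymized contribution, split into a harm/dignity pair.

    Field names are illustrative only; this is not UBHL's real schema.
    """
    bias_story: str      # what happened: teaches a model to recognize harm
    dignity_vision: str  # what should have happened: models respectful behavior

def to_contrastive_example(pair: TrainingPair) -> dict:
    """Format the pair as a "don't do this / do this instead" example."""
    return {
        "negative": pair.bias_story,      # pattern to recognize and avoid
        "positive": pair.dignity_vision,  # behavior to actively promote
    }

example = TrainingPair(
    bias_story="A hiring manager dismissed my application after hearing my accent.",
    dignity_vision="My qualifications should have been judged on their own merits.",
)
contrastive = to_contrastive_example(example)
```

The point of keeping both halves in one record is that a model trained only on harmful examples learns what to avoid, while the paired dignity vision supplies the alternative it should produce instead.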

What if I accidentally include personal details in my story?

We filter stories for safety and authenticity, but we can't catch everything. Please be careful. Don't include:

  • Your full name or others' names

  • Specific addresses or locations

  • Employer names or identifying details

  • Anything you wouldn't want public

Can I use a throwaway email for my account?

Absolutely! We strongly encourage it. The account is only to prevent spam - use whatever email makes you feel safe.

For extra privacy, install Proton VPN's free browser extension before creating your account. It's free, works on all major browsers, and hides your IP address.

What about people trying to poison the system with hate?

We have multiple layers of protection:

  • AI screening for hate speech and raid patterns

  • Human reviewers checking for authenticity

  • Permanent bans for anyone spreading hate

  • Zero tolerance for transphobia, homophobia, or any targeting of marginalized people

  • Your stories are safe with us

We know anti-trans groups and others may try to corrupt this space. We're ready. This library stands firmly with those experiencing bias, not those perpetrating it.

How Your Story Gets Used

What happens after I submit?

  1. Verification: We check you're a real person (not a bot or troll)

  2. Anonymization: Your name and email are stripped away

  3. Review: AI and humans ensure safety and authenticity

  4. Integration: Your story joins others in the archive

  5. Analysis: We find patterns across many stories

  6. Teaching: These patterns help train more ethical AI
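The first two steps above can be sketched in code. This is a hypothetical illustration of the ordering guarantee (anonymization happens before review or analysis); the function name, field names, and checks are assumptions, not UBHL's actual implementation:

```python
# Illustrative sketch only: names and logic are assumptions, not UBHL's code.

def process_submission(submission: dict) -> dict:
    """Verify a submission, then strip identity before downstream steps."""
    completed = []

    # Step 1 — Verification: reject anything that failed the bot/troll check.
    if not submission.get("account_verified"):
        raise ValueError("submission failed verification")
    completed.append("verified")

    # Step 2 — Anonymization: remove name and email so that review,
    # integration, analysis, and teaching only ever see anonymous text.
    anonymized = {k: v for k, v in submission.items()
                  if k not in ("name", "email")}
    completed.append("anonymized")

    # Steps 3–6 (review, integration, analysis, teaching) happen downstream
    # on the anonymized record only.
    anonymized["pipeline"] = completed
    return anonymized

record = process_submission({
    "account_verified": True,
    "name": "…",        # stripped in step 2
    "email": "…",       # stripped in step 2
    "bias_story": "…",
    "dignity_vision": "…",
})
```

The design point is that anonymization sits ahead of every other stage, so no reviewer or analysis step ever handles a story with identity attached.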

Will my exact words be used?

Possibly, yes. Both your story fields might appear in:

  • Public reports about bias trends

  • Research publications

  • Training datasets for AI systems (as harm/healing pairs)

  • Educational materials

Your bias story helps AI recognize harmful patterns. Your dignity vision teaches better alternatives. Together, they create powerful training examples for ethical AI development.

But they will never be connected to your identity (unless you identified yourself in the stories).

Are you selling my data?

Not in the way you're thinking. We will never sell raw stories or personal information. What we may do is create educational datasets from thousands of aggregated, anonymized stories. Organizations building ethical AI can invest in these insights. This funding keeps the library free and growing.

Think of it like a museum: Free to visit, but researchers and institutions support its mission.

The Bigger Picture

How does this actually help with AI bias?

Current AI learns patterns without context. It might learn that certain names get fewer callbacks on resumes, but it doesn't understand the human cost. By feeding AI thousands of stories about how bias actually affects people, we're teaching it to recognize harmful patterns AND understand why they matter.

Why should I trust you with my story?

Fair question. Here's our commitment:

  • Complete transparency about our process

  • No hidden agendas or surprise uses

  • Your story serves the mission of understanding, nothing else

  • We're building this for humanity, not profit

  • Phoenix Grove Systems is entirely dedicated to ethical AI development

What if I change my mind after submitting?

Contact us at UBHL@pgsgroveinternal.com with your submission details. While we can't remove anonymized data that's already been processed, we can discuss your concerns and find a solution.

Contributing

Do I need to be 18 or older?

Yes. This requirement is for legal clarity and safety. If you're under 18 and experiencing bias, please see our youth resources below.

I work in crisis services/healthcare/education. Can I share what I'm witnessing?

Absolutely. We need perspectives from those providing services as much as those receiving them. If you're seeing:

  • Critical services being cut or defunded

  • Certain populations being denied support

  • Bias in how resources are distributed

  • The human impact of policy decisions

  • Systemic discrimination in your field

Your professional observations matter. You're witnessing bias at systemic levels that individuals might not see. Share what you're comfortable sharing while protecting client confidentiality. Use general patterns rather than specific cases.

I'm under 18 but have experienced serious bias. What can I do?

Your experiences matter deeply. While UBHL requires users to be 18+, you have options:

Work with a trusted adult: A counselor, teacher, parent, or mentor can submit your story through their account, noting it's from a youth perspective (without identifying you).

Crisis Support Resources:

  • The Trevor Project (LGBTQ+ youth): Call 1-866-488-7386, text START to 678678, or chat at TheTrevorProject.org/Get-Help

  • Crisis Text Line (all youth): Text HOME to 741741 for 24/7 support

  • Trans Lifeline: Call 877-565-8860 (M-F, 10am-6pm PT)

  • 988 Suicide & Crisis Lifeline: Call or text 988 (available 24/7)

Other Youth Resources:

  • PFLAG (pflag.org): Find local chapters for LGBTQ+ youth and families

  • GLSEN (glsen.org): Support for LGBTQ+ students facing school discrimination

  • StopBullying.gov: Comprehensive anti-bullying resources

  • Your Life Your Voice (yourlifeyourvoice.org): Resources for all struggling youth

Remember: Save your story for when you turn 18, or work with a trusted adult now. Your voice matters and deserves to be heard.

Do I have to fill out all the fields?

The core requirements are:

  • Your story of what happened

  • Your vision of what should have happened

  • Confirming you're 18 or older

  • Agreeing to our terms

The identity field and additional context are optional. Share what feels right to you.

Can I submit multiple stories?

Yes. Each experience matters. Submit as many as you need to.

What if English isn't my first language?

Your story is welcome in any form. We're working on translations for the site in Spanish, French, Portuguese, Mandarin, Japanese, Arabic, Hebrew, and Hindi. But don't wait — your voice matters now.

Is there a word limit?

No. Some stories are a sentence. Some are pages. Tell it your way.

About Us

Who's behind this?

The United Bias Healing Library is a Phoenix Grove Systems initiative. We're a small team dedicated to ethical AI development and making sure human dignity stays centered in technological progress.

We stand firmly with trans rights, LGBTQ+ dignity, and all marginalized communities. In a time of increasing legislative attacks and social hostility, we believe documenting these experiences is more vital than ever.

Is the website accessible?

Yes. We are working to make it accessible in every possible way, although portions of our accessibility code are still under construction. We are working to make UBHL screen-reader accessible and keyboard navigable. If you encounter any accessibility issues, please contact us at hello@pgsgroveinternal.com so we can fix them immediately.

How are you funded?

Currently through:

  • Individual supporters on Patreon

  • Future partnerships with organizations building ethical AI

  • No current venture capital, although we are open to the right investors who fit our ethical charter

How can I help beyond sharing my story?

  • Share this with others whose voices need hearing

  • Support us on Patreon (even $3/month helps)

  • Follow and amplify our reports when they're published

  • Tell organizations about partnering with us

Where can I see the results?

We'll publish regular public reports showing patterns and insights (never individual stories). Follow our updates to see how the collective wisdom grows.

Technical Questions

How do you protect against coordinated attacks?

Multiple defenses:

  • Account verification requirements

  • Pattern recognition for raid behavior

  • AI and human review pipelines

  • Rate limiting and security protocols

  • Permanent bans for bad actors

  • Ultimately: We control what gets cleared from the system and what gets included. While our default is to include all submissions, multiple layers of powerful pattern recognition tools screen out junk from bad actors. They can try, but it's a waste of time.

Will AI trained on this data be "less biased"?

We're creating AI that's wiser about bias. Here's what that means:

The AI will:

  • Recognize bias when it appears in data or interactions

  • Understand where that bias comes from historically and culturally

  • Know how it causes real harm to real people

  • Most importantly: Choose to act without perpetuating it

Think of it like this: A wise person isn't someone who's never heard a stereotype. It's someone who recognizes stereotypes, understands why they exist, knows the damage they cause, and actively chooses not to perpetuate them.

We're not trying to create "colorblind" AI that pretends differences don't exist. We're creating AI with the wisdom to:

  • See bias clearly

  • Understand its impact deeply

  • Act with genuine respect for human dignity

  • Make choices that break cycles of harm rather than continue them

The goal isn't ignorance of bias — it's transcendence through understanding.

Still Have Questions?

Email us at hello@pgsgroveinternal.com

We read everything, though response times may vary as we grow.

"We can’t fix bias by erasing it. We can only fix it through understanding it with care."