X, YouTube and Meta Platforms Hit Historic Lows in GLAAD Safety Index

A new report from GLAAD has found that LGBTQ safety is continuing to decline across major social media platforms, with most companies receiving their lowest scores yet in the organisation’s annual assessment of policies affecting queer users.

The sixth annual Social Media Safety Index evaluates TikTok, YouTube, X, Facebook, Instagram, and Threads on policies relating to LGBTQ safety, privacy, and expression.

The 2026 findings show widespread declines across the platforms, with TikTok the only platform to maintain the same score as last year.

Researchers say the results reveal a growing gap between the community guidelines promoted by social media companies and the protections LGBTQ users experience in practice.

Scores reach historic lows

According to the report, X remains the lowest-ranked platform, scoring 29 out of 100. The score reflects ongoing concerns about hate speech, harassment, and weak protections for LGBTQ users.

YouTube followed with a score of 30, dropping 11 points from last year — the steepest decline of any platform included in the index.

Meta’s platforms also saw their scores fall. Instagram scored 41, Facebook 40, and Threads 39, each declining from their 2025 results.

TikTok received the highest score in the index at 56, though its score remained unchanged from last year.

GLAAD researchers say the falling scores are linked to policy rollbacks, reduced transparency, and weakening safeguards for LGBTQ users, particularly transgender and gender non-conforming people.

Policy changes raise alarm

The report highlights several recent changes at major technology companies that GLAAD says have contributed to declining safety scores.

Meta has faced criticism for changes to its hate speech policies, which critics argue allow more anti-LGBTQ rhetoric to remain on its platforms. The company has also scaled back diversity, equity, and inclusion initiatives, and made changes to content moderation, including ending its fact-checking programme in the United States.

YouTube also drew concern after removing gender identity from its list of protected characteristics in hate speech policies. GLAAD says the change places LGBTQ users at greater risk of harassment and abuse.

The report argues that both companies are moving away from previously established online safety best practices.

AI, privacy, and transparency concerns

Beyond individual platform policies, the report raises wider concerns about the role of artificial intelligence in content moderation.

GLAAD warns that automated systems may disproportionately suppress LGBTQ voices while failing to consistently remove harmful content.

The report also raises concerns about data privacy, noting that major platforms are increasingly using user-generated content to train AI systems, often without clear consent processes.

Researchers also point to a decline in transparency, including limited reporting on moderation practices and workforce diversity data.

GLAAD says these trends make it harder to assess whether platforms are properly protecting vulnerable communities.

Online harms reflect offline risks

The report connects online safety concerns with wider real-world trends affecting LGBTQ communities.

It cites more than 1,000 anti-LGBTQ incidents reported in 2025, as well as FBI data showing that anti-LGBTQ bias accounted for more than 20% of reported hate crimes in 2024, marking the third consecutive year at that level.

Researchers argue that online harassment and misinformation can contribute to offline harm, particularly when extremist content spreads across digital platforms.

Calls for greater accountability

GLAAD President and CEO Sarah Kate Ellis says major platforms are failing to meet basic standards for safety, transparency, and accountability.

She urged advertisers and users to reconsider their relationships with platforms that do not adequately protect LGBTQ communities.

“Social media companies do not meet basic best practices in content moderation, transparency, data privacy, and workforce diversity,” Ellis said in a statement included in the report. “They continue to prioritize profit over safety.”

Ellis added that LGBTQ creators and users are often left to manage harassment, threats, and misinformation without meaningful support from the platforms themselves.

What happens next

The Social Media Safety Index recommends stronger content moderation systems, greater transparency around enforcement, and renewed investment in diversity and inclusion programmes.

It also calls on platforms to better protect LGBTQ users from targeted harassment while ensuring queer content and expression are not unfairly suppressed.

As debates continue around online safety, regulation, and free expression, the report suggests LGBTQ users remain especially vulnerable to policy shifts at major technology companies.

For now, TikTok remains the only platform to hold steady in the index, while X, YouTube, Facebook, Instagram, and Threads all continue to decline — raising further questions about how social media companies balance growth, moderation, and user safety in an increasingly polarised digital world.
