Meta has announced significant changes to its content moderation policies, triggering controversy over relaxed restrictions on speech related to immigration, gender, and sexual orientation.
The revisions include updates to the “Hateful Conduct” policy, raising concerns that the platform now permits content previously considered discriminatory or harmful.
In a blog post, Joel Kaplan, Meta’s newly appointed Chief Global Affairs Officer, explained the rationale for the changes, stating that Meta aims to align its policies with what is permissible in public settings such as television or political debate. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” he wrote.
CEO Mark Zuckerberg echoed this sentiment in an accompanying video, criticising the existing rules as “out of touch with mainstream discourse.”
Among the most contentious revisions is one permitting users to make “allegations of mental illness or abnormality” based on gender identity or sexual orientation. Meta claims this allowance reflects “common non-serious usage” and aligns with ongoing political and religious debates about topics like transgender rights and homosexuality.
Critics argue that this change opens the door to harmful rhetoric, such as falsely labelling LGBTQ+ individuals as mentally ill—a narrative long debunked by medical experts and human rights advocates. Meta has not clarified whether it plans to address misuse or harmful impacts of this policy globally.
The overhaul includes several other updates:
- Immigration and COVID-19 Blame: Restrictions on blaming people for COVID-19 on the basis of protected characteristics, such as race or gender, have been removed. This raises concerns about potential scapegoating of groups, such as Chinese people, for the pandemic.
- Gender-Based Exclusions: Meta now permits content advocating for gender-based job restrictions in fields such as the military, law enforcement, and teaching, provided the arguments stem from religious beliefs.
- Exclusive Spaces: Content advocating for limiting access to gender-specific spaces, such as bathrooms or schools, is now explicitly allowed under the new guidelines.
Meta has also removed language from its “Hateful Conduct” policy warning that hate speech could “promote offline violence.” This clause, introduced in 2019, was previously a key acknowledgment of the real-world harm linked to online rhetoric. The company retains a prohibition on content that “incites imminent violence or intimidation,” though critics worry this change signals a downplaying of risks.
While relaxing many restrictions, Meta has preserved bans on Holocaust denial, blackface, and dehumanising comparisons (e.g., comparing individuals to animals or pathogens). The platform maintains a list of protected characteristics, such as race, ethnicity, gender identity, and sexual orientation, and still prohibits severe attacks, such as describing immigrants as criminals or immoral.
However, the changes have sparked widespread criticism, with advocates questioning whether Meta’s desire to foster “debate” will enable harmful speech under the guise of free expression. Experts warn that the updates could lead to increased discrimination and harassment, and that in positioning itself as a neutral platform, Meta may inadvertently legitimise harmful rhetoric.
With global implications for marginalised groups, including LGBTQ+ communities and immigrants, Meta’s decision underscores the ongoing debate over the balance between free speech and protecting users from harm.