
Meta, the tech giant behind Facebook, Instagram, WhatsApp, and Threads, has admitted to mistakenly removing LGBTQ+ posts from its platforms, attributing the issue to a “technical error.”

In November, several Facebook users, including LGBTQ+ community members and organisations, reported that their non-offensive posts had been removed without explanation. The removed content included posts from New Zealand publisher YOUR ex and Australian LGBTQ+ publishers QNews and OutInPerth, as well as promotional material for QLife Australia, an LGBTQ+ counselling service, shared by groups such as Black Pride Western Australia and Out South West.

Those affected received warnings claiming their posts violated Meta’s cybersecurity standards because they had “shared or hosted malicious software.” However, Meta has since clarified that the posts were removed in error.


A Meta spokesperson apologised for the inconvenience but did not elaborate on the cause of the issue or outline measures to prevent similar incidents.

Meta’s Transparency Centre notes that the company uses artificial intelligence and machine learning tools to detect and remove content deemed to violate its standards.

Graeme Watson, editor of OutInPerth, criticised Meta’s lack of communication. Speaking to ABC, he said, “Meta has given us no information beyond a little pop-up that says you’re a problem, we’ve taken away your posts, stop being a problem.” He added, “We’d definitely love to hear from Meta. There’s a massive number of community groups and organisations who are all wanting an answer.”

Meta’s handling of LGBTQ+ content has come under scrutiny before. In October, the company faced backlash from its Oversight Board for failing to promptly remove a graphic video showing two men in Nigeria being attacked for their sexuality. Despite violating four of Meta’s community standards, the video remained online for five months, amassing 3.6 million views.

The Oversight Board condemned the delay, citing the potential harm of exposing the men’s identities in an environment hostile to LGBTQ+ people. In response, Meta admitted the video had been “left up in error.”

While Meta continues to rely on AI tools for content moderation, these repeated incidents have left many questioning the efficacy and accountability of the platform’s systems.
