
The social media platform Bluesky, seen as a more democratic and decentralised alternative to X (formerly Twitter), needs stronger moderation policies to deal with Islamophobic, casteist, and racial hate speech, according to a report by Equality Labs. The report makes several recommendations to strengthen Bluesky’s content moderation policies, including adding caste as a protected category and developing lists of slurs specific to South Asian linguistic and cultural contexts.
Equality Labs, a South Asian feminist organisation that examined posts on Bluesky containing casteist, racial, and religious hate speech, recommends that the platform adopt more “culturally competent” moderation informed by local contexts, “especially for global communities that have long been underserved or misrepresented.”
Bluesky started in 2019 as an experimental project within Twitter before breaking away as an independent company in 2021. After Elon Musk acquired Twitter in 2022 and rebranded it as X in 2023, Bluesky launched on an invite-only basis and opened to the public in 2024.
With X seeing reduced content moderation under Elon Musk, especially with regard to hate speech and misinformation, Bluesky emerged as an alternative with more customisable and community-driven content moderation tools. While there have been reports of an ‘exodus’ from Twitter to Bluesky after Musk’s takeover, X’s user base is reportedly still 65% larger than that of its closest competitor, Meta’s Threads, and 10 times larger than that of Bluesky, which currently has over 37.7 million users.
The Equality Labs report flags several posts containing casteist and Islamophobic content that are still up on Bluesky. These include posts suggesting that Muslim men pose as Hindus on matrimonial sites to target Hindu women for ‘love jihad’, a baseless conspiracy theory propagated by right-wing groups.
Other posts portray Muslims as “terrorists” intending to kill entire religious groups or as a community that has incestuous relationships. The platform also has several sexually explicit posts targeting Muslim women and posts containing slurs against Dalits and Bangladeshi Muslims.
South Asian social media users, many of whom belong to caste and religious minorities, have “unique needs and concerns that current moderation frameworks often fail to address,” the Equality Labs report said.
Bluesky says that, unlike other social media platforms that have consolidated “power in the hands of a few corporations and their leaders”, it was created to put users and communities in control of their social spaces online. It calls its approach to content moderation “stackable” – apart from Bluesky’s own moderation team that upholds its community guidelines, it also asks users and communities to develop their own moderation systems.
While commending the benefits of allowing users to build their own customised, protective environments, Equality Labs argued that safety cannot be outsourced solely to users and called on Bluesky to “directly acknowledge the presence of Islamophobic, racist, and casteist content on its network.”
Citing examples of caste- and religion-based hate speech on the platform, the report urged Bluesky to strengthen its content moderation policies, “especially as many users have migrated to Bluesky from other platforms in search of a safer, less discriminatory, and less violent online environment.”
The report’s recommendations include adding caste as a protected category in Bluesky’s moderation policies, as Facebook, Twitter, and TikTok have already done.
The report also calls for clear guidelines on what constitutes harmful content, including casteist slurs, religious or racial epithets, and dog whistles, with consistent enforcement, including account suspensions and content removal.
“The development of comprehensive slur lists specific to South Asian linguistic and cultural contexts is critical,” the report said.
The report said content moderators should be trained in cultural sensitivity to address the local contexts of caste, religion, racial discrimination, and gender, and that the platform should hire more human moderators with contextual expertise. It noted that automated detection tools often fall short in identifying hate speech when users modify spellings or use coded language.