After facing flak for the spread of hate speech on its platform in countries experiencing conflict, Facebook has introduced new measures to remove content and accounts that violate its policies and limit the number of forwards on Facebook Messenger in some places.
In Myanmar, Facebook has started to reduce the distribution of all content shared by people who have demonstrated a pattern of posting content that violates its Community Standards.
If it proves successful in mitigating harm, Facebook may roll out this method in other countries as well, the social networking giant said in a statement on Thursday.
"In cases where individuals or organisations more directly promote or engage violence, we will ban them under our policy against dangerous individuals and organisations," said Facebook's Samidh Chakrabarti, Director of Product Management, Civic Integrity; and Rosa Birch, Director of Strategic Response.
"Reducing distribution of content is, however, another lever we can pull to combat the spread of hateful content and activity," the social networking giant said.
Facebook said it is also making fundamental changes to its products to address virality and reduce the spread of content that can amplify and exacerbate violence and conflict.
"In Sri Lanka, we have explored adding friction to message forwarding so that people can only share a message with a certain number of chat threads on Facebook Messenger," Chakrabarti and Birch wrote.
This is similar to a change Facebook made to WhatsApp earlier this year to reduce forwarded messages around the world.
Facebook said it has set up a dedicated team to proactively prevent the abuse of its platform and protect vulnerable groups in future instances of conflict around the world.
The team is focusing on three key areas: removing content and accounts that violate its Community Standards, reducing the spread of borderline content that has the potential to amplify and exacerbate tensions, and informing people about its products and the Internet at large.
"To address content that may lead to offline violence, our team is particularly focused on combating hate speech and misinformation," Chakrabarti and Birch wrote.
Facebook said it has also extended the use of Artificial Intelligence (AI) to recognise posts that may contain graphic violence and comments that are potentially violent or dehumanising, so that it can reduce their distribution while they undergo review by its Community Operations team.
"If this content violates our policies, we will remove it. By limiting visibility in this way, we hope to mitigate against the risk of offline harm and violence," it added.