Stressing that it is determined to do more to keep people safe across its services, Facebook said it has identified some key areas where it has to do more to keep its platforms sanitised.
"We still face legitimate scrutiny, but we're not the same company we were even a year ago," Facebook said in a blog post on Monday.
When it comes to political interference on its platform, the social media giant said it is committed to bringing greater transparency to the ads people see on Facebook.
"This is particularly true with ads related to politics. All political ads on Facebook and Instagram in the US must now be labelled - including a 'paid for by' disclosure from the advertiser.
"We also launched a searchable archive for political content that houses these ads for up to seven years. We've since expanded this feature to Brazil and the UK, and will soon do so in India," said the company.
Beyond political and issue ads, people can now see every ad a Page is running -- even if they weren't targeted by those ads. People can also filter ads by country and report an ad to Facebook.
"We have introduced new policies requiring advertisers to specify the origin of their audience's information when they bring a customer list to us," the company said.
"When something is rated 'false' by a fact-checker, we're able to reduce future impressions of that content by an average of 80 per cent."
The company said it is now detecting 99 per cent of terrorist-related content before it is reported, along with 97 per cent of violent and graphic content and 96 per cent of nudity.
On users' privacy, Facebook said: "We know we didn't do a good enough job securing our platform in the past.
"We now have over 30,000 people working on safety and security -- about half of whom are content reviewers working out of 20 offices around the world."
On regulation, the company said it agrees with the demand from various governments to regulate the Internet.
"We're working with governments to improve the safety of our platform, including a recent initiative with French regulators to reduce hate speech.
"We're establishing an independent body that people can use to appeal Facebook's decisions involving potentially offensive content," said Facebook.