Social media giant Facebook has emerged as one of the most influential websites on the internet, thanks to the huge volume of varied content posted daily by users, brands, political parties and public relations professionals, among others. However, the social network also has a dark underbelly: the publication of 'revenge porn', videos depicting extreme violence, and terrorist content. Investigating the matter, the British daily newspaper The Guardian accessed leaked internal guidelines for staff that suggest Facebook is failing to filter that sort of content effectively.

A monster that can't be controlled?

According to the report, Facebook, in spite of its best efforts, is unable to properly monitor what its millions of users publish on the website each day. The same applies to the internal processes meant to handle the problem, painting a grim picture of the company's approach to hate speech, threatening behaviour, bullying and other forms of online abuse. The root of the problem lies in the ceaseless flow of content that Facebook's moderators are finding almost impossible to keep up with. According to an insider, the people in charge of monitoring content get no more than a few seconds to review each item, which has allowed a great deal of material that is not only objectionable but criminal in nature to be published.

In fact, one of the sources told The Guardian that the social media website has grown into a behemoth too quickly, and the company is struggling to control it.

Fine balancing act proving tough

When it comes to policing its platform, an organisation like Facebook must walk a fine line between free speech and criminal behaviour, and the report states that the guidelines given to moderators are proving highly confusing as a result.

For instance, a death threat against US President Donald Trump would be taken down immediately because, as the sitting head of state, he falls into a protected category, whereas a similar threat against an ordinary user would not be removed, on the grounds that it is not credible enough. These inconsistencies have caused a great deal of confusion among moderators, who are often unsure how to handle such situations.

Last, but certainly not least, is the menace posed by fake Facebook accounts, which often wreak havoc without consequence: according to the report, the company generates millions of reports on issues related to such accounts. Facebook now has a responsibility to help keep the internet safe, and it should start by putting a more robust moderation structure in place.