For a long time, Facebook has been one of the most popular platforms for terrorists who want to spread their propaganda through extremist content. Security officials from several nations have asked the company to tackle this problem in the past, and yesterday it revealed a framework it has set in motion to take down such content before it spreads.
Facebook cracks down hard
Facebook had been identified as one of the platforms favoured by extremists who wish to spread their poisonous propaganda using memes, gruesome videos or hate speech.
As a result, it was only a matter of time before the company cracked down on such content.
Although the company already had content-moderation measures in place, they did not prevent such posts from going viral and reaching a large audience. After all, close to two billion people use the website. As the company stated in its post yesterday, "We want to find terrorist content immediately, before people in our community have seen it. Already, the majority of accounts we remove for terrorism we find ourselves."
To tackle this issue, the company has now deployed a dedicated 150-member team that is solely focused on identifying extremist content and taking it down before it spreads. Most of the offending accounts are suspended as well.
In this fight, the company is using artificial intelligence in a big way. For instance, if a meme or a video originally surfaced from a terror organisation, the image-matching technology will flag it immediately and prevent the user from publishing it.
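Facebook has not published the details of its image-matching system, but the general technique resembles perceptual hashing: reduce an image to a compact fingerprint and compare it against fingerprints of previously removed content. The sketch below is a minimal illustration using a simple "average hash", assuming a Pillow dependency; the known-hash database and the distance threshold are hypothetical placeholders, not Facebook's actual values.

```python
# Minimal sketch of hash-based image matching (assumes Pillow is installed).
# The hash set and threshold below are hypothetical, for illustration only.
from PIL import Image

HASH_SIZE = 8  # 8x8 grid of pixels -> 64-bit fingerprint

def average_hash(path: str) -> int:
    """Shrink the image, grayscale it, and set one bit per pixel
    depending on whether that pixel is brighter than the average."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical fingerprints of previously removed propaganda images.
KNOWN_BAD_HASHES = {0x8F3A2C1D5E6B7089}

def is_known_propaganda(path: str, threshold: int = 5) -> bool:
    """Flag an upload if its fingerprint is near any banned fingerprint,
    so lightly edited re-uploads still match."""
    h = average_hash(path)
    return any(hamming_distance(h, bad) <= threshold for bad in KNOWN_BAD_HASHES)
```

Because the comparison tolerates a few differing bits, a cropped or re-compressed copy of a banned image can still be caught, which a plain byte-for-byte hash would miss.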
A reassuring development
As of now, the counterterrorism team at Facebook is primarily focused on content shared by sympathisers of Al Qaeda, ISIS and affiliated terror groups.
It is well known that the Internet is a big part of the global recruitment and indoctrination efforts of these groups. Advanced artificial intelligence systems have also been a big part of the crackdown, but Facebook did say in its post that the human element is still crucial to its efforts.
The company is also trying to develop technology that can identify similar language patterns between extremist posts that have already been removed and the new posts published every second.
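Facebook has not disclosed which models it uses for this, but a common baseline for the idea is to vectorise text and score a new post by its similarity to previously removed posts. The sketch below assumes a scikit-learn dependency; the example corpus, placeholder strings and threshold are all hypothetical.

```python
# Minimal sketch of text-similarity screening (assumes scikit-learn).
# The corpus and threshold are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in strings representing text from previously removed posts.
removed_posts = [
    "text of a previously removed propaganda post",
    "another removed post praising a terror group",
]

# TF-IDF over unigrams and bigrams captures characteristic phrasing.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
removed_matrix = vectorizer.fit_transform(removed_posts)

def resembles_removed_content(new_post: str, threshold: float = 0.6) -> bool:
    """Flag a new post whose wording closely matches any removed post."""
    new_vec = vectorizer.transform([new_post])
    scores = cosine_similarity(new_vec, removed_matrix)
    return scores.max() >= threshold
```

A production system would of course use far richer models, but the core loop is the same: compare incoming text against a corpus of known violations and escalate anything that scores too close.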
Last but certainly not least, Facebook also checks whether groups of accounts are acting together to spread this sort of content, which often helps the company get rid of such troublesome users in bulk.
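The company has not described how it links accounts, but detecting accounts "acting together" is essentially a graph-clustering problem: treat accounts as nodes, connect any two that share flagged content, and pull out the connected components as candidate networks. The sketch below uses only the Python standard library, and all account names and content IDs are hypothetical.

```python
# Minimal sketch of coordinated-account detection via connected components.
# All data below is hypothetical, for illustration only.
from collections import defaultdict
from itertools import combinations

# account -> set of flagged content IDs that account posted
flagged_posts = {
    "acct_a": {"img1", "vid7"},
    "acct_b": {"img1"},
    "acct_c": {"vid7", "img9"},
    "acct_d": {"img42"},
}

# Build an undirected graph: edge between accounts sharing flagged content.
graph = defaultdict(set)
for a, b in combinations(flagged_posts, 2):
    if flagged_posts[a] & flagged_posts[b]:
        graph[a].add(b)
        graph[b].add(a)

def clusters():
    """Return connected components: groups of accounts acting together."""
    seen, groups = set(), []
    for start in flagged_posts:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:  # depth-first traversal of one component
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(graph[node] - comp)
        seen |= comp
        if len(comp) > 1:  # only multi-account clusters are of interest
            groups.append(comp)
    return groups

print(clusters())  # e.g. [{'acct_a', 'acct_b', 'acct_c'}]
```

Suspending a whole component at once is what makes bulk removal possible: once one account in the cluster is confirmed as a violator, the shared-content links expose its accomplices.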