Facebook officials have announced that the social network will use artificial intelligence software to help identify and remove extremist messages, according to The New York Times. The company made this decision after being accused of ignoring extremist content.

Facebook's fight against extremist content

Responding to complaints about Facebook's inability to control extremist messages on its platform, officials from the social network announced that they will develop programs based on artificial intelligence to detect extremist content.

Monika Bickert, the director of Facebook's Global Policy Management Department, said she hoped the program would expand significantly over time.

The artificial intelligence software will do much of the work, aided by human moderators who will review flagged content and make decisions on a case-by-case basis. The developers hope to improve the system over time.

One of the technology's features is identifying content that violates the social network's terms of use, such as messages posted by extremist groups, messages about crimes, or racist messages.

Facebook representatives have announced that their artificial intelligence systems will be able to recognize keywords and phrases used by terrorist organizations.

The same system will identify users who visit pages or groups that promote extremist messages, as well as those who create fake accounts to spread such messages.

Facebook decided to take this action after being criticized in the past for not doing enough to monitor extremist messages.

Facebook asked to monitor extremist groups

In May this year, British Prime Minister Theresa May urged the companies that own social networks, including Facebook, to contribute more actively to monitoring and stopping extremist groups.

After the terrorist attack in Manchester, which killed 22 people, the British Prime Minister said she does not want extremist ideology to have space to grow, and that the internet and social networks currently provide that space.

J.M. Berger of the International Centre for Counter-Terrorism in The Hague says the biggest challenge for companies like Facebook is defining what counts as a terrorist message.

Berger said the problem lies in deciding what is extremist and what is not. For example, the fact that users discuss Al Qaeda or the Islamic State does not automatically mean they are jihadists.