Facebook is increasingly using AI to combat dangerous and violent content on its platform, but so far this approach hasn't delivered the desired results. Despite the claims of Facebook's CEO Mark Zuckerberg, the AI system failed to act on violent content as forcefully and quickly as expected, both after the Christchurch attacks and in the case of Myanmar. The Christchurch live stream itself was first reported twelve minutes after it had ended.

Moreover, as reported by TechCrunch, Facebook was unable to block 20% of the videos that were later uploaded to the site.

Now, indirectly admitting this negligence, Facebook introduced a 'one strike' rule on May 15 to prevent such incidents.

What is Facebook's 'one strike' policy?

According to Facebook's Vice President of Integrity, Guy Rosen, "Anyone who violates Facebook's content policy will be restricted from using Facebook LIVE service for a period of 30 days on his/her first offense".

For instance, someone who shares a violent clip or a link to a statement published by a terrorist organization will be immediately banned from using Facebook LIVE for a set period of time.

The offender will also face other restrictions; for instance, he/she won't be able to take out ads on the social network.

However, Facebook hasn't specified the exact duration of these restrictions or what kind of offense would result in a permanent ban from Facebook.

How much is Facebook investing in image/video analysis technology?

Apart from the 'one strike' policy, Facebook has also announced that it will invest $7.5 million in new research conducted in collaboration with leading academics from three universities.

These are Cornell University, the University of Maryland, and the University of California, Berkeley. Facebook said it aims to broaden this initiative by adding academics from other universities in the future.

The research will focus on improving image/video analysis technology and on distinguishing people who deliberately manipulate media from those who do so unwittingly.

Is Facebook's current investment sufficient to tackle Deepfakes?

Facebook seems confident that this research will also help combat deepfakes in the future. However, considering the objectives of the initiative, the size of the initial investment seems insufficient. Rosen acknowledged as much in his blog post, saying, "This is an area we need to invest more".

Facebook's announcement came just before a meeting of world leaders including New Zealand's Prime Minister Jacinda Ardern, who has called on tech companies to pledge that they will do more to fight violent content.