Freedom of speech and censorship are always sensitive topics, but they shouldn't be. We should be able to differentiate between the two quite easily. However, that is not the case. Between the intense, aggressive PC culture and the push-back it received from the equally radical and irrational far-right, some form of moderation on the internet doesn't seem like too bad an idea.

On the other hand, who is to determine what is hate speech and what is not? Aren't we all too biased to be the judge of that? The world wide web is much different than it used to be.

Some things are better than before, but we have taken steps back in quite a few arenas, the most important being freedom of speech and privacy.

What is YouTube trying to do?

After it became evident that websites like Reddit censor and manipulate user-generated content under the pretense of eliminating hate speech, YouTube seems to be taking a step in the same direction. Gizmodo published an article about a change in YouTube's content policies, and YouTube itself has published a blog post on the matter.

Some things, however, remain unclear. For example, YouTube states that if a flagged video doesn't break policy but contains “controversial religious or supremacist content,” it will be put in a “limited state.” This means that the video will not be monetized or included in the suggested-videos list, and users won't be able to like or comment on it.
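
To make that concrete, here is a minimal sketch of what a “limited state” seems to amount to, based solely on YouTube's own description. Every name in it (Video, apply_limited_state) is a hypothetical stand-in; YouTube's actual implementation is not public.

```python
from dataclasses import dataclass


@dataclass
class Video:
    """A hypothetical video record with the features YouTube can switch off."""
    video_id: str
    monetized: bool = True
    in_suggestions: bool = True
    likes_enabled: bool = True
    comments_enabled: bool = True


def apply_limited_state(video: Video) -> Video:
    """Disable everything the blog post says a 'limited state' removes.

    The video itself stays up, but monetization, suggested-video
    placement, likes, and comments are all turned off.
    """
    video.monetized = False
    video.in_suggestions = False
    video.likes_enabled = False
    video.comments_enabled = False
    return video
```

Strip every feature away like this and what remains is a video nobody is steered toward and nobody can engage with, which is what makes the next question hard to avoid.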

One can't help but ask: What's the difference between doing this and flat-out removing the video from YouTube? There seems to be none.

What's controversial and what isn't?

An even more important question is: what's controversial, what isn't, and who can objectively determine that? At a glance, YouTube's new rules look inconsistent, which leaves a large gray area in which the website's employees and visitors decide what's appropriate and what isn't.

This can only lead to a single conclusion: avoiding unnecessary censorship will be almost impossible.

The cherry on top is this: if a video is flagged as inappropriate and a YouTube user tries to access it, they will be redirected to a different video. So, for example, someone who wants to watch a YouTube video about terrorism will be directed to counter-terrorism videos instead.
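
For illustration, the redirect behavior described above could be as simple as the following sketch. The flagged IDs and the counter-content target are made-up assumptions; only the overall shape (request one video, silently receive another) comes from the reports.

```python
# Hypothetical example data; YouTube's real flagging pipeline is not public.
FLAGGED_VIDEOS = {"extremist_recruiting_clip"}
COUNTER_CONTENT_ID = "counter_terrorism_video"


def resolve_video(requested_id: str) -> str:
    """Return the video the viewer will actually be shown.

    If the requested video has been flagged, the viewer is silently
    redirected to curated counter-content instead.
    """
    if requested_id in FLAGGED_VIDEOS:
        return COUNTER_CONTENT_ID
    return requested_id


print(resolve_video("extremist_recruiting_clip"))  # -> counter_terrorism_video
```

Note that the viewer in this sketch never learns a swap took place, which is exactly why the mechanism raises the concerns that follow.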

This sounds great on paper, but if we assume that mistakes will be made and that personal biases will prevent YouTube's employees from being 100% objective, doesn't that mean they will censor videos they simply disagree with? Won't YouTube users flag videos they personally disagree with?

Perhaps we need to find a more efficient way to combat hate speech online, but policing free thought and promoting censorship is never good, and that is exactly what could happen on YouTube.