With the capacity to break barriers of time, location, and language, the internet has become one of the most effective avenues for people to share information and connect with others across the globe. However, the freedom to express one’s thoughts online has also contributed to the rise of hate speech and hate crimes.
The news never falls short of stories about crimes fueled by hateful messages and comments posted online. Mass shootings, child endangerment, and violence against the Asian and Black communities are a few examples of horrendous acts that have been recorded and shared on various social media platforms. One of the most widely reported hate crimes took place in March 2019, when a single gunman attacked two mosques in Christchurch, leaving 51 Muslim worshippers dead. That incident alone led to an outpouring of support and sympathy for the Muslim community.
In today’s generation, where word-of-mouth plays a powerful role in how individuals influence one another, raising awareness about hate speech, hate crimes, and cyberbullying has become even more crucial. Protecting vulnerable individuals from harmful behavior is instrumental in reducing the occurrence of such crimes.
What does hate speech mean? Let’s define it.
By definition, hate speech is a form of expression, posted publicly, that promotes or incites hatred and discrimination against other people based on their sexual orientation, political standpoint, religious beliefs, skin color, disability, or race. Even under the banner of freedom of expression, hate speech can infringe on human rights and violate laws enacted to protect individuals and communities. Failing to take a stand against hate speech can lead to more violence and harm being inflicted on the usual targets of such baseless and inhumane treatment.
The term hate speech has no legal definition under United States law (the same goes for related terms such as rudeness, discrediting remarks, and other kinds of unsavory or condemnatory speech). In general, hate speech is understood as any expression intended to humiliate, defame, or justify hatred against a person or community because of who they are or what they believe.
Launched in May 2016, the EU Code of Conduct on countering illegal hate speech online, or simply “The Code,” was implemented by the European Union together with four internet giants (Microsoft, YouTube, Facebook, and Twitter) with the goal of monitoring and stopping racist and xenophobic hate speech on social media and other online platforms for the good of the entire internet community.
The goal of such an ambitious move is to ensure that all requests to remove particular online content are addressed as quickly as possible. Platform owners, prominent or not, must have a systematic process for assessing these user-generated content (UGC) concerns.
For instance, when a company receives a request to remove content on its platform that shows racism, it should take action immediately while still adhering to its community rules and guidelines. It may also rely on the national laws that implement EU legislation against racism and xenophobia. The content in question should then be taken down within 24 hours of receiving the request, while making sure the fundamental principle of freedom of speech is duly preserved in the process.
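The review process described above can be sketched in code. The following is a hypothetical illustration, not any platform’s actual system: the names (`RemovalRequest`, `process_request`), the 24-hour constant, and the rule set are all assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative 24-hour review window, mirroring the Code of Conduct target.
REVIEW_WINDOW = timedelta(hours=24)

@dataclass
class RemovalRequest:
    content_id: str
    reason: str          # e.g. "racism" or "xenophobia"
    received_at: datetime

def violates_guidelines(request: RemovalRequest, banned_reasons: set) -> bool:
    """Assess the request against the platform's (hypothetical) guidelines."""
    return request.reason in banned_reasons

def process_request(request: RemovalRequest, banned_reasons: set,
                    now: datetime) -> str:
    """Return the action taken: 'removed', 'kept', or 'overdue'."""
    if now - request.received_at > REVIEW_WINDOW:
        return "overdue"     # missed the 24-hour target
    if violates_guidelines(request, banned_reasons):
        return "removed"
    return "kept"            # content is lawful expression; leave it up
```

A real pipeline would add human review and appeal steps, but the core tension is visible even here: the same function that removes violating content must also explicitly keep content that does not violate the rules, preserving freedom of speech.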
The Code of Conduct was made possible by close collaboration between the European Commission, IT platforms, civil society organizations, and national authorities. Each sector works hand in hand within the framework of the High-Level Group to ensure they have the tools to carry the fight against racism and xenophobia forward.
Regulating hate speech online has been one of the most pressing issues in the US for quite some time now. Hate speech legislation constantly divides people on whether to tolerate hate speech or categorize it as a crime punishable by law. In the US, hate speech is protected by the First Amendment, and the courts have extended this protection by requiring the government to permit vigorous debate on public issues, even when some of that expression is offensive or unsavory.
As of now, under the First Amendment, hateful speech can be punished as a crime only when it directly incites physical violence (or another form of imminent harm) against a particular person or community.
As opposed to the United States, Australia takes a different approach to hate speech. Australia’s regulations cover some—but not all—forms of hate speech and distasteful remarks.
One of the most notable hate speech laws in Australia is the Racial Discrimination Act. Its section 18C makes it unlawful to offend, humiliate, or intimidate another person because of their race, ethnic origin, or skin color (note that religious standpoint, sexuality, and gender are not covered). Section 18D, on the other hand, allows exceptions when a statement is made reasonably and in good faith for academic, artistic, or other purposes in the public interest.
Under this civil law, an alleged vilifier may become the subject of a complaint to the Australian Human Rights Commission, and the parties must first attempt to settle the matter through conciliation. On the rare occasions when a case escalates, the opposing parties may take it to court.
Intriguing as that question may seem, the answer lies within a grey area.
Hate speech can tarnish a person’s composure and dignity, distort an individual’s self-image, and, in extreme circumstances, lead people to commit violence against others. At the same time, prohibiting dangerous and vilifying language can cross the boundaries of what is considered freedom of expression.
Various countries take different approaches to addressing hate speech. In Indonesia, blasphemy is a crime. In South Korea, criticizing political candidates in the months before an election is prohibited. Even the Philippines has the Anti-Bullying Act of 2013, which protects students against bullying, including cyberbullying.
Prohibiting hate speech is quite a controversial issue. People have different standpoints in life, and each individual perceives things differently. Banning offensive messages is a tug-of-war between people with different perspectives.
So, as a citizen of the internet, how can you regulate hate speech in your own way?
With social networking platforms transforming the very nature of communication, the emergence of hateful messages and online content is inevitable. Thankfully, some technological advancements are designed to protect people from hate speech attacks while preserving healthy and wholesome interactions among online users and communities.
Of these innovations, content moderation is one of the most remarkable countermeasures for safeguarding individuals against vilifying speech across online platforms. Moderation enables platform owners and users to work hand in hand to get rid of unwanted user-generated content that can provoke people to commit hate crimes. With an active content monitoring system in place, platform owners can foster an online ecosystem free of hate speech and derogatory remarks, and people can engage in healthy conversations about their preferences and standpoints. Most importantly, a positive interactive atmosphere is maintained, and both users and platform owners can thrive in this ever-growing space.
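As a minimal illustration of what an automated first pass in such a monitoring system might look like, here is a simple keyword-based pre-moderation filter. The function name and the blocklist entries are placeholders invented for this sketch; production systems combine word lists with machine-learning classifiers and, crucially, human review.

```python
import re

# Placeholder terms standing in for a real blocklist; not an actual list.
BLOCKLIST = {"slur1", "slur2"}

def flag_for_review(post: str, blocklist: set = BLOCKLIST) -> bool:
    """Return True if the post contains a blocklisted word and should
    be queued for human moderation rather than published directly."""
    words = re.findall(r"\w+", post.lower())
    return any(word in blocklist for word in words)
```

Note that a filter like this only flags content for review; the decision to remove a post stays with human moderators, which is one way platforms balance removing hate speech against preserving legitimate expression.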