YouTube, like any other social media platform, traffics in speech and, by default, promotes free expression, community sharing, and content creation. Internet companies therefore make daily decisions about what to publish, which content to flag, and how to respond adequately and efficiently to reports, comments, and complaints. This demands a high level of critical analysis and puts pressure on those who filter content to uphold the website’s values, laws, and ethos; mistakes can be made, and the online community must challenge and debate those who decide what is appropriate to stay online and what should be censored.
Additionally, the online world is a world of its own, where no borders exist and free movement is the norm, bringing cultures and communities together through dialogue and content. Social media discourses are tested, however, when the boundaries set by each culture, country, and community offline clash with the independence exercised online. To maintain an equilibrium, YouTube, Facebook, Google, and others have created policies built around universal community guidelines on restriction and censorship. YouTube describes these policies as “common-sense rules” and outlines them on its website; they include, but are not limited to, “nudity or sexual content,” “harmful or dangerous content,” “violent or graphic content,” “threats,” and “hateful content.” Any video or user that disrespects these guidelines, crosses the line, or is reported or flagged by other users as offensive or discriminatory is referred to YouTube staff, who determine whether the video or user violates the Community Guidelines; if so, videos may be removed and accounts penalised or even terminated.
“The majority of the world’s internet users encounter some form of censorship,” also referred to as ‘filtering’, “but what that actually looks like depends on a country’s policies and its technological infrastructure,” writes Eric Schmidt on web censorship in The Guardian.
In late March, YouTube altered its classification of various LGBT+ themed videos, ranging from vlogs on sexuality to music videos by openly gay artists such as Tegan and Sara. Several of these LGBT+ bloggers and musicians accused YouTube of hiding their content through its “Restricted Mode” feature. Restricted Mode is an optional feature that, according to Google (YouTube’s owner), uses “community flagging, age-restrictions, and other signals to identify and filter out potentially inappropriate content.” What has been dismissed as a mishap in technological infrastructure may in fact raise sensitive issues of online censorship, free speech, and the use of the internet and social media as platforms for expression.
Whether or not this was indeed a ‘tech error,’ the process is deeply troubling, because it implies a bias within Restricted Mode that treats LGBT+ content as not ‘family friendly’ or as ‘inappropriate.’ Measured against the aforementioned YouTube Community Guidelines, a large portion of the restricted videos do not, under any reading, fall under “harmful or dangerous,” “hateful,” or “violent.” Among the videos blocked were a lesbian couple reading their wedding vows to one another and YouTuber NeonFiona’s videos with “gay,” “lesbian,” or “bisexual” in the titles, whereas her video “An Honest Chat About Being Single” – which actually discusses sex – was not restricted. In response to the Restricted Mode incident, high-profile LGBT+ content creators took to Twitter to post their opinions on the matter under the hashtag #YouTubeIsOverParty, outraged by the occurrence and debating why YouTube would allow such restrictions to take place.
The rule restricting ‘hateful content’ is particularly relevant to the story of YouTube ‘accidentally’ censoring hundreds of thousands of LGBT+ related videos. That guideline states:
Our products are platforms for free expression. But we don’t support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics. This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line.
The irony is that YouTube did just that: it restricted a protected group of individuals who post on LGBT+ topics ranging from sexual orientation to gender identity. The majority of the restricted videos contained no nudity or sexually explicit images, nor did they promote violence, make threats, or discuss topics in a hateful or discriminatory manner. The restriction can be read as a sexualisation of LGBT+ content, pigeonholing it as not age-appropriate and as harmful and dangerous to the YouTube community and the wider public. When individuals report and flag such videos, the algorithm that processes those reports ends up encoding damaging biases.
When videos like Robin Thicke’s “Blurred Lines” and Nicki Minaj’s “Anaconda” – which glorify non-consensual sex, sexualise and fetishise women’s bodies, and normalise compliant, submissive womanhood – are neither blocked nor restricted but viewed by millions of YouTube users, this ‘technical error’ surrounding LGBT+ content crosses the line and needs to be flagged itself.
The internet was once heralded as the death of censorship; now it is being used as a tool for restriction and surveillance. Even on the off chance that this was a coincidental mistake – and yes, YouTube has fixed the ‘tech error’ – the matter should not be ignored. To what extent can censorship be blamed on a technical error, on a machine misreading tags and titles? What is so inappropriate about LGBT+ content?
In this digital era, where the majority of internet users receive and retain information online, such occurrences suggest that we might be heading into an anti-information age.
Dominic is a Greek/American writer and editor, and an English and Theatre Studies graduate.