YouTube will demonetise and age-restrict Elsagate videos

What governments can learn from Google's and YouTube's ethical fail

Peppa Pig weeps as a dentist shoves a needle into her mouth, then screams as he brutally extracts her teeth. 

If your child uses YouTube without supervision, there’s a good chance they have watched this animated video. Or the one where Peppa is attacked by zombies, in the dark. Or the one where Frozen’s Elsa is burned alive. 

These videos will have been interspersed with other disturbing videos featuring bizarre, repetitive footage strung together by algorithms rather than human content creators. These meaningless fever-dreams show eggs being unwrapped by a disembodied pair of hands, or costumed superhero characters with unsettling faces marching across the screen.

They are off-putting, and intuitively not suitable for children. 

Such videos, from the most violent to the lowest-quality, carefully game YouTube’s recommendation algorithm to target pre-schoolers, and have done so since 2014. They earn large sums of money for the perpetrators who upload them, generating millions of views from their target audience. YouTube’s parent company, Google, also benefits from the advertising revenue they generate.

The phenomenon has been called Elsagate, a neologism based on an early example involving (again) Frozen’s beloved character Elsa.

Recently, YouTube’s owner, Google, announced that it had removed almost 60 million videos that it considered “hateful”, spam or otherwise in violation of its terms of use. Of these, 279,600 were removed for “child safety” reasons. Google has also previously announced that it will demonetise and age-restrict Elsagate videos. There can be little doubt that Google does not want these types of videos on its platform. But as YouTube’s scale has ballooned, it has run into an ethical problem that goes to the heart of how it has designed its systems. And this is something governments need to care about.

Governments and the ethical challenges of analytics and AI 

Governments around the world have been developing more sophisticated operational analytics to deal with volumes that have gone beyond what a purely human workforce can manage. The policy motive for democracies like Australia is generally in keeping with government’s role of ensuring that the right people get the right services at the right time, that community safety is upheld, and that people and organisations stay compliant with regulation.

The tools government is using - sophisticated algorithmic logic and, more recently, specialised AIs and machine learning - have been part of the private sector’s toolkit for a while, helping to keep us buying more goods or (in YouTube’s case) keeping us glued to our iPad screens as the software works out what makes us tick and what will keep us engaged. As governments’ digital transformation agendas roll on, these same tools are now being put to use by the public sector for pro-social motives.

Controlling the bad stuff 

YouTube, as far as we know, relies on a handful of methods to identify and demonetise or remove “bad” videos (sketched in code after this list). These include:

  • Algorithms, which pick up and flag many but not all violations 

  • User reporting of bad content – and for Elsagate, this means that by the time an adult sees and chooses to report a video, it has probably been watched by dozens or even tens of thousands of young children 

  • A human review workforce addressing violations, reportedly numbering around 10,000 people 
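
To make the gap concrete, here is a minimal sketch of how such a hybrid pipeline might route a video. The thresholds, names and report counts are my own illustrative assumptions, not YouTube’s actual system:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- YouTube's real values are not public.
AUTO_REMOVE_SCORE = 0.95    # classifier is near-certain the video violates policy
HUMAN_REVIEW_SCORE = 0.60   # uncertain band: queue for a human reviewer
REPORTS_BEFORE_REVIEW = 3   # user reports needed to force a human look

@dataclass
class Video:
    video_id: str
    classifier_score: float = 0.0   # 0.0 = benign, 1.0 = certain violation
    user_reports: int = 0
    status: str = "live"

def triage(video: Video, review_queue: list) -> None:
    """Route a video through the three mechanisms listed above."""
    if video.classifier_score >= AUTO_REMOVE_SCORE:
        video.status = "removed"            # algorithmic takedown
    elif (video.classifier_score >= HUMAN_REVIEW_SCORE
          or video.user_reports >= REPORTS_BEFORE_REVIEW):
        video.status = "pending_review"     # a human moderator decides
        review_queue.append(video)
    # Otherwise the video stays live -- and keeps earning ad revenue --
    # until the classifier improves or enough viewers report it.

queue: list = []
borderline = Video("elsa_dentist_01", classifier_score=0.55, user_reports=1)
triage(borderline, queue)
print(borderline.status)   # "live": below both thresholds, so nobody looks
```

The borderline case is the point: a video that evades the classifier and hasn’t yet been reported sits in exactly the blind spot Elsagate content is engineered to occupy.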

So why was I able to open YouTube and, with two reasonably benign searches over 60 seconds, find a channel of horrific pre-school-targeted animations (“story for kids”), with thousands of views on some of the worst videos?

The answer is that these measures are a finger in the dyke of a problem woven into the foundations of YouTube’s business model. YouTube has created a platform that prioritises getting as much content up as quickly as possible, and it relies on automated moderation as the primary means of flagging bad videos. This recipe made Elsagate a catastrophic ethical failure that has no good solution without unpicking those foundations.

Imperfect solutions to a terrible problem

After the school shooting in Parkland, Florida, in early 2018, YouTube’s Trending tab featured a hateful video accusing one of the survivors of being a “crisis actor” and dismissing the shooting itself as “fake news”. The algorithm that should have flagged it as a hateful violation of the terms of use missed the video.

After it was taken down, YouTube argued that it was impossible to have humans policing its Trending pages in every country because of the sheer volume of videos that cycle through every hour. 

Peppa Pig and her friends
Peppa Pig is a popular children's television program based on the adventures of a loveable pig called Peppa. Its five-minute cartoon episodes are shown in over 170 countries.

In a subsequent interview with CNBC, Christo Wilson, an assistant professor from Northeastern University, called this an “absurd” excuse. “YouTube implemented the Trending algorithm,” said Wilson, “and if it is updating too fast to moderate, then the solution is to simply slow it down. This is a technical change that is well within YouTube's control.”

Wilson followed this with a point that I think cuts to the heart of the issue: “If Trending videos were currently being picked by a team of people, those people would be getting fired after today,” he said. “Why do we expect less from an algorithm?”

Why indeed? 
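
To make Wilson’s “slow it down” suggestion concrete, here is one way a human-gated Trending feed could look in principle. This is a sketch under my own assumptions; every name here is illustrative, not YouTube’s actual architecture:

```python
import time
from typing import Callable, List

def run_trending_feed(
    rank_candidates: Callable[[], List[str]],
    human_approves: Callable[[str], bool],
    refresh_seconds: int = 3600,   # refresh no faster than reviewers can keep up
    cycles: int = 24,
) -> None:
    """The algorithm proposes, a person disposes: nothing reaches the
    public Trending page without a human verdict, and the page updates
    on a deliberately reviewable cadence rather than continuously."""
    for _ in range(cycles):
        proposed = rank_candidates()                           # algorithmic ranking
        approved = [v for v in proposed if human_approves(v)]  # human gate
        print(f"Trending page updated with {len(approved)} vetted videos")
        time.sleep(refresh_seconds)                            # the deliberate slowdown
```

The design trade-off is explicit: a slower, human-paced refresh sacrifices some immediacy in exchange for never promoting an unreviewed video to millions of people.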

Before it roars: the social licence perspective  

As governments move into the digital age, algorithms are going to become a part of how they do business. What we consider “future tech” now will be commonplace in 5–10 years.

Now is the right time to engage, not with the technical challenge of sophisticated algorithmic/AI decision-making and predictive analytics, but with the ethical framework within which they will operate.

It’s time to begin asking how this technology aligns with public service values.

How much harm will society accept from AIs and algorithms in exchange for the public benefit of their use? What social licence can governments expect to retain if an ethical failure is unintentionally woven into the foundations of digital regulation or digital service provision? 

It’s hard to say, but more than a few countries are already going through tumultuous experiences that are shaking social licence and trust, despite genuinely good intentions from the public sector. This is a problem that is slowly, ponderously, waking up. The time to deal with it is before it roars.

Author: Darren Menachemson
