The horrific murder of 50 people last week in New Zealand was live-streamed on Facebook. Over 1.5 million videos of the event were subsequently removed.
Facebook’s own artificial intelligence failed to detect the stream, and none of the roughly 200 people who watched the video live reported it to moderators. The first report came 12 minutes after the livestream ended.
The process for reporting Facebook videos is unclear. Users must know to click on the ellipsis and then complete a series of steps before a report is submitted to human reviewers. Like Facebook, YouTube also struggles to moderate content, relying on a combination of human moderators and software. Facebook has 2.3 billion users; YouTube has 1.5 billion. “Moderating the internet” at that scale is a complicated task.
'We have a lot of work ahead of us'
Neal Mohan, YouTube’s Chief Product Officer, shared in an interview that a video of the massacre was being uploaded to the platform every second, and the company struggled to take them down.
“We’ve made progress, but that doesn’t mean we don’t have a lot of work ahead of us, and this incident has shown that — especially in the case of more viral videos like this one — there’s more work to be done.”
Looking back to when this all began: in 2007, social platforms began giving people and businesses a direct voice for the first time. Content creation and distribution were no longer exclusive to professional journalists, publishers or bloggers - content could be authored and distributed by anyone. This was the birth of the Social Web.
Fast forward ten years and the Social Web has become the Unsafe Web. Unsafe for our data, our attention and our society.
The Unsafe Web
An investigative report by The Times of London two years ago revealed that some of the world’s biggest brands, including BMW, Jaguar and Vodafone, were unknowingly funding extremist and supremacist groups by placing their advertisements on YouTube. The video platform’s business model gives content producers a share of the advertising revenue, and terrorist groups are among the producers of content on YouTube.
In the aftermath of last week’s attack in New Zealand, two of the country’s largest advertiser groups released a joint statement urging businesses to rethink spending their ad dollars with Facebook.
According to the New Zealand Herald, Burger King, ASB Bank and telco Spark, are all considering pulling their advertising from Facebook and YouTube. New Zealand's state-owned Lotto told Reuters it has already pulled advertising from social media.
Earlier this year, AT&T, Disney and Epic Games pulled advertising from YouTube over concerns that their ads were running alongside videos of children on which pedophiles were leaving objectifying comments.
I predict that consumers will soon begin to hold advertisers accountable for where their ad dollars go. This is a new reality for advertisers: trusted publishers, where historically all advertising sat, were not so easily manipulated.
The Trusted Web
Today, publishers and content sites are the heart of the Trusted Web. The 2019 Edelman Trust Barometer revealed that trust in news media has risen sharply over the last 12 months, and I expect it will continue to grow.
With trust dramatically shifting away from social platforms and back towards trusted publisher sites, advertisers now have the opportunity to align themselves with the right side of history. Do they want to fund the platforms that are being used to spread hate in our world, or the publishers we trust to inform us with the truth?
* Kunal is the CEO of Polar, a technology partner to over 100 of the world’s largest media companies, with offices in New York, Toronto, London and Sydney. Publishers use Polar’s technology to offer innovative digital advertising solutions to brands and agencies. He is respected as a global thought leader in the digital advertising and publishing industry. To learn more, visit polar.me.