Regulation

Calls for ‘super regulator’ of social media

Can we still trust social networks to regulate themselves?

Social networks use algorithms to promote content and encourage people to engage and stay on the network for longer. But as the aftermath of the New Zealand terror attack highlighted, this becomes an issue when the content is something that should never be shared.

Facebook and other social networks depend on their users to report content that may be abusive, harmful or criminal. Their networks are too vast to pre-moderate in real-time, even with the assistance of automation. But what happens when people don’t flag inappropriate content?

A clash of human and automated behaviours

While social media is frequently a force for good, it’s also used to abuse, spread propaganda and groom. These problems are rooted in society, but their consequences are facilitated and amplified by social networks – networks that may lack the guidelines to crack down on every form of toxic content on their platforms.

After the New Zealand attacks, deputy leader of the Labour Party in the UK, Tom Watson, came out strongly against the social media platforms that not only facilitated the livestream of the shooting but took too long to remove the video from circulation.

According to Bloomberg, Watson said:

“The big social media platforms lost control. They failed the victims of that terrorist atrocity. They failed to show any decency and responsibility. Today must be the day when good people commit to take back control from the wicked, ignorant oligarchs of Silicon Valley.”

The UK’s Home Secretary, Sajid Javid, also called on the owners of social media platforms to do more to prevent and remove terrorist content from their sites.

Whether it’s out of shock, curiosity, or the need to be the first to know and share news, there are many people out there who will watch, share, like and comment on things like terrorist attack footage.

The first person to ‘flag’ the livestream of the New Zealand attack did so 29 minutes after the stream had started – 12 minutes after the live broadcast had ended. Around 200 people watched the attack as it happened and did nothing to alert Facebook. So there’s an obvious problem with Facebook’s reliance on user reporting.

The platforms can’t completely rely on technology either. The algorithms that social networks use to promote the content getting the most engagement worked against them in this instance. As more people viewed, liked, shared or commented on the various videos of the attack, the automated systems recommended the videos to even more people – causing them to spread further rather than being hidden or taken down.
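
To make that dynamic concrete, here is a minimal, hypothetical sketch of an engagement-weighted ranking. It is not any platform’s real recommendation system – the weights, field names and scoring are assumptions invented for illustration – but it shows how a system that ranks purely on engagement will keep surfacing a viral post until someone actually flags it.

```python
# Illustrative sketch only: a toy engagement-weighted feed ranker, not any
# platform's actual algorithm. Weights and field names are assumptions.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    views: int
    likes: int
    shares: int
    comments: int
    flagged: bool = False  # set once a user reports the post

def engagement_score(post: Post) -> float:
    """Score a post purely on engagement signals (assumed weights)."""
    return (
        0.1 * post.views
        + 1.0 * post.likes
        + 3.0 * post.shares
        + 2.0 * post.comments
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    # Ranking only on engagement means harmful-but-viral content rises;
    # nothing here demotes a post until a user report sets `flagged`.
    candidates = [p for p in posts if not p.flagged]
    return sorted(candidates, key=engagement_score, reverse=True)
```

The point of the sketch is the gap it exposes: until the `flagged` signal arrives from a human, the scoring function treats shock-driven engagement exactly like any other engagement.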

It’s not like the networks weren’t trying. In the first 24 hours after the attack, Facebook removed 1.5 million videos. However, people started uploading edited versions of the video – and even minor alterations meant that copies could get past the network’s detection technology.
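
As a hypothetical illustration of why small edits matter, the sketch below assumes a naive exact-match fingerprint list. Real platforms use more sophisticated (for example, perceptual or audio-visual) matching, but the weakness is the same in spirit: re-encoding, cropping or watermarking changes the raw bytes, so an exact fingerprint no longer matches.

```python
# Illustrative sketch, assuming a naive exact-hash blocklist (not how any
# platform actually matches video). The names and data are invented.

import hashlib

def fingerprint(data: bytes) -> str:
    """Exact fingerprint of the raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Fingerprints of known copies of the banned video.
blocklist = {fingerprint(b"original video bytes ...")}

# A trivially edited copy (re-encoded, cropped, watermarked, etc.).
edited_copy = b"original video bytes ... plus a tiny change"

print(fingerprint(edited_copy) in blocklist)  # False: the edit slips past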

A call for change

The issues experienced with removing the video, as well as other recent problems with abusive and illegal content on social media, have led to calls for the regulation of social networks.

As they stand, social networks are self-regulating. They are using guidelines, moderation and automated technology to make their platforms as safe as they can. However, many people have complained that these guidelines don’t do enough to make social media a healthy or safe place to be.

In the UK, the House of Lords has called for a new Digital Authority to oversee regulators of the internet such as the ICO, Ofcom and the ASA, reporting to a new joint parliamentary committee. It would operate according to ten principles, including the right for people to be as protected online as they are offline, the need for big businesses in the digital space to be transparent, and the protection of children online.

The UK government is releasing a white paper on online harms (expected by the end of March) which could include plans to regulate social media.

Regulation may be part of the solution, but we need to do more

Give people a method to communicate and human behaviour will guarantee that some of those people will use it to commit crimes, bully, harass and spread hate. What we’re dealing with isn’t just a technological problem. It’s a social one.

Yes, regulators probably should have the ability to guide social networks, but social networks also need to do more themselves. They need to refine their algorithms and create better guidelines. They need to be more responsive when people report issues. And they need to use human intelligence to recognise when they need to act – and to act quickly.

It’s also up to all of us – ordinary people, parents, schools, industry and regular internet users – to modify our online behaviour. Whether it’s searching for a reliable source before sharing shocking news, or remembering that the username you’re about to hurl abuse at is (usually) a real person who will be affected by your actions, we all need to think more about what we do, say, share and like online.
