Social networks have long taken the stance that their platforms are just methods of communication – like the telephone – and that (apart from basic community standards) they shouldn’t be held responsible for what people post there.
Mark Zuckerberg took a baseball bat to this stance in 2018 when he agreed with the US Congress that Facebook was, in fact, responsible for the content posted on it. But once you accept responsibility, it becomes your job to fix things.
And that’s what Facebook is trying to do now.
Facebook is taking a definitive stand against racism and extremism
A report by the Global Research Network on Terrorism and Technology found that after Facebook removed Britain First in 2018, most of the page’s 1.8 million followers didn’t follow it over to an alternative network, Gab, where it has only 11,000 followers.
The report says that by banning hate groups and their leaders, Facebook cuts them off from a large pool of potential members and ensures that they can’t direct people from Facebook to platforms that they own.
Since then, Facebook has banned several other far-right and extremist groups and individuals from the network, with Stephen Yaxley-Lennon (aka Tommy Robinson) among the most recent to receive a permanent ban.
Facebook is fighting against the manipulation of its users through ads
Facebook has been embroiled in controversy over the power of its platform, and the ads on it, to sway elections and fuel social instability around the world.
There were accusations of a lack of transparency around the Leave campaign’s ads in the run-up to the Brexit referendum, and concern over ads purchased by Russia to influence the 2016 US presidential election.
In June, Facebook announced that it was rolling out ad transparency tools globally.
Advertisers that want to run ads about politics, elections or social issues will have to go through an authorisation process. The ads will also be kept in an ad archive for seven years – allowing people to see how much was spent on each ad and who viewed it. However, The Guardian has reported that there are “major bugs” in the tool.
Facebook has started to protect its users by demoting sensationalised health claims
Like other social networks, Facebook has had issues with posts that promise miracle solutions to people’s health problems.
In a bid to ensure that users can trust the content they see on the platform, Facebook announced that it would start to demote these posts.
Facebook is trying to stop the spread of the hate speech that’s fuelling conflicts
In June, it announced plans to tackle the spread of hate speech in Sri Lanka and Myanmar. Facebook has been used to spread rumours and hate speech in both countries as ethnic and religious groups clash – now it’s working out ways to stem the flow of vitriol.
For example, Messenger users in Sri Lanka will only be able to forward a message a fixed number of times, and it’s starting to reduce the distribution of content from frequent violators of Facebook community standards in Myanmar. It’s also using AI to detect graphic images of violence.
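Conceptually, a forwarding cap like the one described above is just a per-message counter checked before each forward. The toy Python sketch below illustrates the idea only – the limit of 5, the class and function names, and the behaviour are all assumptions for illustration, not Facebook’s actual implementation:

```python
# Hypothetical sketch of a per-message forwarding cap.
# The limit of 5 is an assumed value, not Facebook's real one.
FORWARD_LIMIT = 5

class Message:
    def __init__(self, text):
        self.text = text
        self.forward_count = 0  # how many times this message has been forwarded

def try_forward(message):
    """Allow the forward and bump the counter, unless the cap is reached."""
    if message.forward_count >= FORWARD_LIMIT:
        return False  # cap reached: forwarding is blocked
    message.forward_count += 1
    return True

msg = Message("hello")
results = [try_forward(msg) for _ in range(7)]
# the first 5 forwards succeed; later attempts are blocked
```

The point of such a cap is simply to add friction: it doesn’t judge the content of a message, it just slows down how fast any one message can spread.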
Is there more that Facebook could be doing? Of course! And as artificial intelligence develops, we’ll see it introduce new ways to keep abusive, manipulative and fake content off the platform.