{"id":4281,"date":"2018-05-03T18:27:23","date_gmt":"2018-05-03T18:27:23","guid":{"rendered":"https:\/\/thesocialelement.agency\/us\/?p=4281"},"modified":"2018-05-03T18:27:23","modified_gmt":"2018-05-03T18:27:23","slug":"facebook-community-guidelines","status":"publish","type":"post","link":"https:\/\/thesocialelement.agency\/us\/facebook-community-guidelines","title":{"rendered":"What's Next with Facebook's New Community Guidelines?"},"content":{"rendered":"

Facebook has finally published its full community guidelines for the first time, as it attempts to rebuild confidence with its 2.2 billion users. The policy covers what should and shouldn't be published on the platform, and tries to distance the company from fake news, hate speech and opioid sales.
\n\"Facebook
\n <\/p>\n

Moderation by AI

It is understood that while Facebook has always had community guidelines, its internal moderation team – believed to number 7,500 people – was confused about what should be allowed. During his recent appearance before Congress, Facebook CEO Mark Zuckerberg said AI, rather than human moderators, was the best way to deal with inappropriate content. The internal team is thought to focus on content that users have flagged.
While there is some room for machine-assisted moderation, Facebook cannot rely purely on AI. AI's big failure is its inability to understand the context of a conversation or to spot when people are being sarcastic. Natural Language Processing has, to date, struggled with language, context and reasoning. When users invent their own code words to talk around inappropriate topics, the technology takes time to catch up. A human moderator is likely to spot this quickly, because they understand the context.
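To see why, consider a deliberately naive sketch of keyword-based flagging. This is a hypothetical illustration in Python, not Facebook's actual system; the blocklist and function names are invented:

```python
# Hypothetical sketch of naive keyword-based moderation; not Facebook's
# actual system. BLOCKLIST and naive_flag are invented for illustration.

BLOCKLIST = {"fentanyl", "oxycodone"}  # illustrative banned terms

def naive_flag(post: str) -> bool:
    """Flag a post if any word matches the blocklist."""
    words = {word.strip(".,!?'\"").lower() for word in post.split()}
    return bool(words & BLOCKLIST)

print(naive_flag("Selling fentanyl, DM me"))            # True: literal mention caught
print(naive_flag("Selling 'the usual', DM me"))         # False: invented code word slips through
print(naive_flag("Fentanyl almost killed my brother"))  # True: benign, context-dependent post wrongly flagged
```

Production classifiers are far more sophisticated than a blocklist, but the failure mode is the same: without an understanding of context, invented code words slip through while benign mentions get flagged.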

The Nitty Gritty: Facebook's Community Standards

Facebook's community standards, released on 24 April, are based on the following principles: