Testing Facebook Pages: The New Profanity Filter – How It Works

When it comes to Facebook, there’s only one way to check out their changes: test, test and test. For this you will need access to at least three profiles:
1. Page admin
2. User posting comments
3. Other user.
My tips? Use a different browser for each login, and keep notes as you go. Oh, and screenshots. Then you can fill in any other members of your team who need to manage Facebook pages (and check they get the same results when following the steps you took). Banal doesn’t even begin to describe it. It’s even slightly dull. But that’s how it has to be.
Following my recent post on Facebook’s new built-in profanity filter, our community management team at eModeration had some questions. Do you have to put any words in the customisable list? How strong is the filter on the two available settings? Does it work on character strings? How quickly does it work? Does a user know their comment has been deleted? Does it apply to posts as well as comments? And so on.
I’ve just done a little testing, and here is how it went:
The test
Posts

  1. I set the Page to ‘strong’, with no words in the custom list.
  2. I posted the F word. It appeared.
  3. After a little while (two minutes or less?) it vanished from the wall – but not when I was viewing the page as the profile that had posted it: to that profile it still looked as though it was on the wall. However, neither the admin profile nor the ‘other user’ profile could see it on the wall at all. The admin could only see it in the ‘hidden posts’ tab.
  4.  ‘Damn’ in a post got picked up, same as with the F word.
  5.  I set the filter to ‘medium’ and posted “Damn and blast”. It stayed on the wall (one way to picture this is sketched after this list).
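Based only on the behaviour above, one way to picture the ‘strong’ and ‘medium’ settings is as two preset word lists, with the ‘medium’ list a smaller subset of the ‘strong’ one. The little Python sketch below is hypothetical and written just to reproduce what I observed; Facebook doesn’t publish the real lists or how the settings actually differ.

    # Hypothetical preset lists -- they only mirror the post tests above.
    STRONG_LIST = {"f-word", "damn"}   # 'strong' caught both the F word and 'damn'
    MEDIUM_LIST = {"f-word"}           # 'medium' let 'Damn and blast' stay on the wall

    def hidden_under(preset_list, post_text):
        # A post is hidden if any listed term appears in it.
        lowered = post_text.lower()
        return any(term in lowered for term in preset_list)

    print(hidden_under(STRONG_LIST, "Damn and blast"))  # True  -- hidden on 'strong'
    print(hidden_under(MEDIUM_LIST, "Damn and blast"))  # False -- stays on the wall on 'medium'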

Comments

  1.  Posted a comment with the F word in it. It was not treated in the same way as a post – it stayed in the admin wall view, and in the view of the account which had posted it – but disappeared from any other user’s view. It did NOT appear in the hidden posts tab.

My conclusions?

  1. Words entered into the blacklist are in addition to the preset filters that Facebook has set up
  2. The profanity filter is more stringent when set to ‘strong’ than ‘medium’
  3. The filter hides both posts and comments from other users’ view, but only flagged posts go to the ‘hidden’ tab in the admin view; flagged comments stay (greyed) on the wall.
  4. Hidden posts can be reinstated (via a dropdown feature – see image below)
  5. Unlike with deleted content, users whose posts or comments are hidden by the profanity filter can still see them on the wall, while other users can’t. This is a good anti-spam measure.
  6. The filter is set to recognise strings of characters within words, such as ‘Scunthorpe’. It is therefore likely to hide a number of false positives, which, in the case of comments, cannot be reinstated (see the sketch below).
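To make conclusions 1 and 6 concrete, here is a minimal Python sketch of a filter that combines a preset list with an admin-supplied custom list and matches character strings inside words. The word lists and function are purely hypothetical illustrations, not Facebook’s actual implementation; the point is simply that substring matching is what produces ‘Scunthorpe’-style false positives.

    # Hypothetical word lists for illustration only -- not Facebook's real lists.
    PRESET_FILTER = {"damn", "ass"}   # stands in for the built-in list
    CUSTOM_BLOCKLIST = {"widget"}     # stands in for words an admin adds to the custom box

    def is_flagged(text):
        # Conclusion 1: the custom list is combined with, not a replacement for, the preset list.
        terms = PRESET_FILTER | CUSTOM_BLOCKLIST
        lowered = text.lower()
        # Conclusion 6: matching character strings inside words catches run-together
        # spellings, but also innocent words that happen to contain a blocked term.
        return any(term in lowered for term in terms)

    print(is_flagged("Damn and blast"))            # True  -- preset term
    print(is_flagged("Nice widget"))               # True  -- custom term
    print(is_flagged("My art class starts soon"))  # True  -- false positive: 'ass' inside 'class'
    print(is_flagged("All good here"))             # False

A whole-word match (for example, a regular expression with \b around each term) would avoid the ‘class’ false positive, but would also miss words run together, which is presumably why the filter errs on the side of hiding.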