Facebook’s advertising practices are under increased scrutiny.
Criticism of Facebook’s advertising policies shows no sign of abating, particularly since recent revelations that fake news operations based in Russia had bought ads on its platform.
Investigative news organisation ProPublica published a report yesterday (14 September) showing that advertisers were able to direct their ads to the news feeds of close to 2,300 people who had expressed interest in topics such as ‘How to burn jews’, ‘Jew-hater’ and ‘History of why Jews ruin the world’.
Swift approval of ads from Facebook
ProPublica then performed a test, paying $30 to target the users who had expressed interest in these anti-Semitic topics with promoted posts. Within just 15 minutes, all three adverts had been approved by Facebook. When ProPublica asked Facebook for comment, the offensive categories were swiftly taken down.
Rob Leathern, product management director at Facebook, discussed the removal of the offending targeting fields: “We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”
Although the anti-Semitic categories presented to the ProPublica journalists represented too few users to enable an ad campaign on their own, Facebook did recommend additional categories such as ‘Second Amendment’, suggesting a link between those who hold anti-Semitic views and those who have an interest in firearms.
Facebook changing ad policy
According to Bloomberg, Facebook will no longer allow advertisers to target users by how they describe themselves on their profiles. These bigoted categories were, in fact, gleaned from the information users put up on their profiles about their field of study, job title and education.
The company said: “The self-service system had automatically been populating interest categories based on what community members post about themselves.
“Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes.
“However, there are times where content is surfaced on our platform that violates our standards.”
Many are criticising Facebook’s policy of automatically creating categories for advertisers, as it means that offensive categories can go relatively undetected unless brought to the attention of the company or the wider media, sometimes only after months of regular use.
Incidents such as this cement the need for Facebook’s general policy to become proactive rather than reactive, in order to extinguish potentially harmful movements and ideas. In the wake of the events at Charlottesville last month, after which Mark Zuckerberg made an anti-hate statement, the online juggernaut must hold itself to account.