October 2, 2017
By Joel Kaplan, VP Global Policy
Last month we announced plans to share with Congress the ads that appear to have come from a Russian entity known as the Internet Research Agency. We found more than 3,000 of these ads, which ran between 2015 and 2017. Many appear to amplify racial and social divisions. Today we are delivering those ads to congressional investigators and explaining more about the steps we’re taking to strengthen our ads policies and enforcement.
All of these ads violated our policies because they came from inauthentic accounts. We are sharing these ads with Congress because we want to do our part to help investigators gain a deeper understanding of Russian interference in the US political system and explain those activities to the public. These actions run counter to Facebook’s mission of building community and everything we stand for.
Last week, CEO Mark Zuckerberg talked about the steps we’re taking to prevent this abuse of our platform, while still promoting legitimate discussion of social issues and honest civic debate. Today we’re sharing further details about the updates we’re putting in place, plus new steps to improve review and enforcement of ads and ad accounts (steps 2 and 3 below):
- Making advertising more transparent. We believe that when you see an ad, you should know who ran it and what other ads they’re running – which is why we show you the Page name for any ads that run in your feed. To provide even greater transparency for people and accountability for advertisers, we’re now building new tools that will allow you to see the other ads a Page is running as well – including ads that aren’t targeted to you directly. We hope that this will establish a new standard for our industry in ad transparency. We try to catch content that shouldn’t be on Facebook before it’s even posted – but because this is not always possible, we also take action when people report ads that violate our policies. We’re grateful to our community for this support, and hope that more transparency will mean more people can report inappropriate ads.
- Strengthening enforcement against improper ads. We use both automated and manual review, and we’re taking aggressive steps to strengthen both. Reviewing ads means assessing not just the content of an ad, but the context in which it was bought and the intended audience – so we’re changing our ads review system to pay more attention to these signals. We’re also adding more than 1,000 people to our global ads review teams over the next year and investing more in machine learning to better understand when to flag and take down ads. Enforcement is never perfect, but we will get better at finding and removing improper ads.
- Tightening restrictions on advertiser content. We hold people on Facebook to our Community Standards, and we hold advertisers to even stricter guidelines. Our ads policies already prohibit shocking content, direct threats and the promotion of the sale or use of weapons. Going forward, we are expanding these policies to prevent ads that use even more subtle expressions of violence.
- Increasing requirements for authenticity. We’re updating our policies to require more thorough documentation from advertisers who want to run US federal election-related ads. Potential advertisers will have to confirm the business or organization they represent before they can buy ads. As Mark said, we won’t catch everyone immediately, but we can make it harder to try to interfere.
- Establishing industry standards and best practices. In order to fight threats like these, we’re all going to need to work together. We are reaching out to leaders in our industry and governments around the world to share information on bad actors and make sure they stay off all platforms.
We care deeply about the integrity of elections around the world. We take responsibility for what happens on our platform, and we will do everything we can to keep our community safe from interference.