As advertisers shuffle more money into digital ads, they must take certain precautions. With the shift from traditional to richer ad formats comes increased risk. A key differentiator between the digital and traditional models is predictability. With traditional TV and older forms of media, advertisers know exactly where ad budgets are going: they pay for a specific placement on a specific medium against specific content, in return for viewability by a large but mostly anonymous audience. Digital advertising offers higher reward, accompanied by some level of risk and uncertainty. Digital platforms give brands the opportunity to design campaigns that reach target audiences with forecasted outcomes. Whether a home goods brand is focused on selling a new product to Baby Boomers or a clothing brand wants to cultivate awareness among millennials, digital channels can help a brand reach key audience segments effectively and efficiently, with a level of certainty that isn’t possible with older, traditional models.
Brands strive to maximize their ad dollars, but they also need to preserve and protect their reputation. Ads don’t live in a vacuum on social platforms: they can appear next to user-generated content and can prompt conversation and interaction, which requires brands and platforms to take extra safety measures. Even traditional media formats, however, are not entirely brand-safe. Advertisers have pulled ad spend from a number of TV networks after commentary on major programming clashed with brands’ images and values.
Advertisers can find themselves in the game of prediction — predicting where ads will be placed and ensuring that the content the ads appear alongside is brand-safe. However, platforms have been investing significant amounts of money and human capital to quell uncertainty and risk for brands activating digitally.
Digital Platforms Are Buckling Down on Safety
At face value, brand safety on digital can look complex, but the leading platforms have been taking serious, consistent measures to ensure ads reach their full potential audience while remaining brand-safe.
Facebook has been investing heavily in human capital to manually review user-generated content (UGC) that generates ad revenue. It has also implemented changes to Audience Network, which lets advertisers extend ad campaigns beyond Facebook’s walls to reach larger audiences on mobile apps, websites and videos using the same Facebook targeting and measurement tools. On Audience Network, advertisers can use category blocking and block lists to prevent ads from appearing within entire categories of sites (such as dating or gambling sites), as well as on specific URLs and app domains.
Instagram, like Facebook, supports category blocking, block lists, and placement opt-outs through its API. In addition, Instagram is focusing on eliminating fraudulent activity and is taking steps to remove fake accounts and bots.
Twitter has been cracking down on bots and fraudulent accounts to minimize ad fraud. It has also been expanding video partnerships to attract ad dollars to scheduled live programming from publishers and media giants, ensuring brands’ advertisements appear alongside brand-safe premium content. Brands that use Twitter Amplify can decide which videos they opt in to for pre-roll and mid-roll ads, as well as on Twitter’s livestreaming platform, Periscope, through its API. Brands can flag particular users who reply to ads with offensive tweets so the platform can restrict or suspend them. They can also upload lists of Twitter users who frequently interact with their ads offensively to exclude them from their target audiences, and use keyword exclusions to remove users who tweet words the brand identifies as offensive from its target audience segments.
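The keyword-exclusion mechanism described above boils down to simple set filtering: drop any user whose posts contain a brand-defined offensive term. As a rough illustration only (this is not Twitter’s actual API; the function, data, and keyword below are all hypothetical), the logic looks something like this:

```python
# Conceptual sketch of keyword exclusion. Given a map of user IDs to their
# recent posts, return the users who mentioned any excluded keyword so a
# brand could drop them from its target audience. In practice this is
# configured inside the platform's ads tools, not computed client-side.

def build_exclusion_list(user_posts, excluded_keywords):
    """Return the set of user IDs whose posts contain an excluded keyword."""
    excluded_users = set()
    for user_id, posts in user_posts.items():
        for post in posts:
            # Normalize tokens: strip common punctuation, lowercase.
            words = {w.strip(".,!?#@").lower() for w in post.split()}
            if words & excluded_keywords:
                excluded_users.add(user_id)
                break  # One match is enough to exclude this user.
    return excluded_users

# Hypothetical example: two users, one of whom posts an excluded word.
posts = {
    "user_a": ["love this product"],
    "user_b": ["this is offensiveword garbage"],
}
print(build_exclusion_list(posts, {"offensiveword"}))  # {'user_b'}
```

Real platforms layer far more on top of this (phrase matching, context, appeals), but the core idea is the same set-difference applied to an audience segment.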
YouTube has been implementing new measures to tackle brand safety. Over 400 hours of UGC are uploaded to YouTube every minute, and over one billion hours of video are watched on the platform each day. YouTube has been focusing heavily on monitoring where ads can be served, reforming its policies and improving its content classifications to better identify offensive content. It has added three new Sensitive Subject exclusions in AdWords and allows account-level placement exclusions in AdWords to make it easier to exclude content across campaigns. YouTube restricts channels with fewer than 10,000 lifetime views from monetizing their videos. The platform has also been investing in human capital to monitor UGC and ensure its policies and controls are enforced. YouTube and Google recently took four new steps to combat terrorism online: increasing video technology analysis to identify extremist content; greatly increasing the number of independent experts in YouTube’s Trusted Flagger program; taking a stronger stance on videos that don’t clearly violate policies but are still considered offensive; and expanding counter-radicalization efforts by promoting videos against hate and radicalization toward potential ISIS recruits.
Aside from the community policies and guidelines Snapchat has in place, the platform is open to anyone who agrees to Snap’s terms, and users self-select whom to follow. Snapchat also has dedicated trust-and-safety and spam-and-abuse teams that are actively investing in tools, like vision learning and in-app reporting of abuse, to keep the platform safe. Snapchat has built a unique platform with unique users, giving media outlets such as the New York Times and ABC a new medium for mobile-centric, vertical content, and giving advertisers the comfort of advertising against the traditional media content they’ve relied on in the past. In addition, Snap Ads API partners like SocialCode have the option to run ads only on Publisher Stories and Our Stories.
Pinterest’s policy on acceptable content serves as a safety mechanism. The platform actively disapproves branded and organic content that does not follow its strict community guidelines. These guidelines prohibit an array of offensive content, ranging from violent images to hate speech and discrimination, minimizing the opportunity for ads to appear alongside offensive content.
LinkedIn has it easier when it comes to content control. Because the platform primarily caters to professional networks, users tend to self-police the content in their feeds. In fact, the Business Insider Intelligence Digital Trust Ranking places LinkedIn at the top of all digital platforms. Users on the site draw lines themselves because they know the audience they are catering to includes coworkers and professional colleagues. In addition, LinkedIn’s policies explicitly restrict users from posting offensive content, and the platform has a dedicated Trust and Safety team that patrols the site looking for content that violates the User Agreement or Advertising Guidelines.
Brands are pressuring platforms to protect their image as a condition of continued ad spend, and the platforms have been responding quickly and efficiently. It is virtually impossible to guarantee an ad will never appear alongside unsafe user-generated content, but platforms are pouring serious dollars into scaling their brand safety measures.
Advertisers must develop a risk-based approach when preparing their media plans. Activating on digital provides sophisticated tools and the ability to optimize advertising budgets in real time. A smart digital strategy weighs the risks and benefits of digital ads, takes advantage of the safety mechanisms digital platforms offer, develops proactive safety measures, and capitalizes on the value digital activation offers over traditional models of advertising.
Click here to download the full brand safety guide which will teach you how to adapt your advertising strategy for the digital world, focusing on ad quality, brand safety and consumer expectations.