Major brands suspend Instagram advertising as child predator tests reveal problematic ad placement

Two major dating app companies have suspended Instagram advertising after tests mimicking the behavior of child predators led to ads being served alongside sexually explicit material.

Other brands affected include Disney, Pizza Hut, and Walmart …

Major brands use controls on Meta and other social media networks that are intended to ensure their ads don’t appear adjacent to inappropriate content, a category that typically includes hate speech and sexually explicit material.

In independent tests, the Wall Street Journal and the Canadian Centre for Child Protection set out to replicate the behavior a child predator might engage in on Instagram: searching for images of child gymnasts, cheerleaders, and similar content, while also seeking out adult sexual material. Both found that major-brand ads could be served alongside sexually explicit images.

Both organizations recorded recommendations made, and ads served, to those accounts.

Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands […]

In a stream of videos recommended by Instagram, an ad for the dating app Bumble appeared between a video of someone stroking the face of a life-size latex doll and a video of a young girl with a digitally obscured face lifting up her shirt to expose her midriff. In another, a Pizza Hut commercial followed a video of a man lying on a bed with his arm around what the caption said was a 10-year-old girl.

While the tests were admittedly modelled on the behavior of a tiny minority of Instagram users, the WSJ reports that it found tens of thousands of accounts matching this profile, and that it saw similar content when it followed those accounts.

Two dating app brands have suspended their ads across all Meta platforms.

Following what it described as Meta’s unsatisfactory response to its complaints, Match began canceling Meta advertising for some of its apps, such as Tinder, in October. It has since halted all Reels advertising and stopped promoting its major brands on any of Meta’s platforms. “We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” said Match spokeswoman Justine Sacco.

Robbie McKay, a spokesman for Bumble, said it “would never intentionally advertise adjacent to inappropriate content,” and that the company is suspending its ads across Meta’s platforms.

Other brands say that Instagram parent company Meta is paying for independent audits to determine whether inappropriate ad placement is putting their brands at risk.

Meta calls it ‘a manufactured experience’

In a statement to 9to5Mac, Meta said the experience was “manufactured.”

We don’t want this kind of content on our platforms and brands don’t want their ads to appear next to it. We continue to invest aggressively to stop it – and report every quarter on the prevalence of such content, which remains very low. Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions.

These results are based on a manufactured experience that does not represent what billions of people around the world see every single day when they use our products and services. We tested Reels for nearly a year before releasing it widely – with a robust set of safety controls and measures. In 2023, we actioned over 4 million Reels per month across Facebook and Instagram globally for violating our policies.

The company told us that the prevalence of adult nudity and sexual activity on Instagram was 3-4 views of violating content per 10,000 views.

9to5Mac’s Take

As with a similar experiment on X, in which accounts were created to follow hate speech and Apple was among the companies whose ads were served adjacent to that content, there is no disputing that these tests were deliberately designed to probe what might be termed edge cases.

However, the fact remains that real examples of these accounts do exist, and advertisers are promised that their ads will not be served alongside problematic content. The onus is on social media companies to keep these promises, even in the case of accounts held by the most unpleasant of individuals.

Author

Ben Lovejoy

Ben Lovejoy is a British technology writer and EU Editor for 9to5Mac. He’s known for his op-eds and diary pieces, exploring his experience of Apple products over time, for a more rounded review. He also writes fiction, with two technothriller novels, a couple of SF shorts and a rom-com!
