Sept 28 (Reuters) – Some major advertisers, including Dyson, Mazda, Forbes and PBS Kids, have suspended their marketing campaigns or removed their ads from parts of Twitter because their promotions appeared alongside tweets soliciting child pornography, the companies told Reuters.
DIRECTV and Thoughtworks also told Reuters on Wednesday night that they have stopped their advertising on Twitter.
Brands ranging from Walt Disney Co. (DIS.N), NBCUniversal (CMCSA.O) and Coca-Cola Co. (KO.N) to a children’s hospital were among more than 30 advertisers that appeared on the profile pages of Twitter accounts selling links to exploitative material, according to a Reuters review of accounts identified in a new investigation into online child sexual abuse by the cybersecurity group Ghost Data.
Some of the tweets included keywords related to “rape” and “teenagers,” and appeared alongside tweets promoted by corporate advertisers, the Reuters review found. In one example, a promoted tweet for shoe and accessory brand Cole Haan appeared alongside a tweet in which a user said they were “trading teen/kids content.”
“We are appalled,” David Maddocks, Cole Haan’s brand president, told Reuters after being notified that the company’s ads were appearing alongside those tweets. “Twitter is going to fix this, or we will fix it by any means we can, including not buying Twitter ads.”
In another example, a user tweeted looking for “young girls ONLY, NO boys” content, which was immediately followed by a promoted tweet for the Texas-based Scottish Rite Children’s Hospital. Scottish Rite did not respond to multiple requests for comment.
In a statement, Twitter spokeswoman Celeste Carswell said the company “has zero tolerance for child sexual exploitation” and is investing more resources in child safety, including hiring for new positions to write policies and implement solutions.
She added that Twitter is working closely with its clients and advertising partners to investigate and take action to prevent the situation from happening again.
Twitter’s challenges in identifying child abuse content were first reported in late August in an investigation by tech news site The Verge. Reuters reports here for the first time on the emerging pushback from advertisers critical to Twitter’s revenue stream.
Like all social media platforms, Twitter prohibits depictions of child sexual exploitation, which are illegal in most countries. But it allows adult content in general and hosts a thriving exchange of pornographic images, comprising about 13% of all content on Twitter, according to an internal company document seen by Reuters.
Twitter declined to comment on the volume of adult content on the platform.
Ghost Data identified more than 500 accounts that openly shared or solicited child sexual abuse material during a 20-day period this month. Twitter failed to remove more than 70% of the accounts during the study period, according to the group, which shared the findings exclusively with Reuters.
Reuters was unable to independently confirm the accuracy of Ghost Data’s findings in their entirety, but it did review dozens of accounts that remained online and solicited “over 13” and “young-looking nude” material.
After Reuters shared a sample of 20 accounts with Twitter last Thursday, the company removed an additional 300 accounts from the network, but more than 100 remained on the site the next day, according to Ghost Data and a Reuters review.
Reuters then shared the full list of more than 500 accounts provided by Ghost Data on Monday; Twitter reviewed the accounts and permanently suspended them for violating its rules, Carswell said on Tuesday.
In an email to advertisers Wednesday morning, ahead of this story’s publication, Twitter said it “discovered that ads were being served within profiles that were involved in the public sale or solicitation of child sexual abuse material.”
Andrea Stroppa, founder of Ghost Data, said the study was an attempt to assess Twitter’s ability to remove the material. He said he personally funded the investigation after receiving a tip on the matter.
Twitter’s transparency reports on its website show that it suspended more than 1 million accounts last year for child sexual exploitation.
The company made about 87,000 reports to the National Center for Missing & Exploited Children, a government-funded nonprofit that facilitates information sharing with law enforcement, according to that organization’s annual report.
“Twitter needs to fix this issue ASAP, and until they do, we will stop any further paid activity on Twitter,” a Forbes spokesperson said.
“There is no place for this type of content online,” a spokesman for automaker Mazda USA said in a statement to Reuters, adding that in response the company now prohibits its ads from appearing on Twitter profile pages.
A Disney spokesperson called the content “reprehensible” and said they are “redoubling our efforts to ensure that the digital platforms we advertise on and the media buyers we use strengthen their efforts to prevent such mistakes from happening again.”
A spokesman for Coca-Cola, which had a promoted tweet appear on an account tracked by the researchers, said the company did not approve of the material being associated with its brand, saying “any breach of these standards is unacceptable and taken very seriously.”
NBCUniversal said it has asked Twitter to remove ads associated with the inappropriate content.
CODE WORDS
Twitter is not alone in facing moderation failures related to the safety of children online. Child welfare advocates say the number of known images of child sexual abuse has skyrocketed from thousands to tens of millions in recent years, as predators have used social media platforms, such as Meta’s Facebook and Instagram, to groom victims and exchange explicit images.
Among the accounts identified by Ghost Data, nearly all of the child sexual abuse material sellers marketed the materials on Twitter, then instructed buyers to contact them on messaging services like Discord and Telegram to complete payment and receive the files, which were stored in cloud storage services such as New Zealand-based Mega and US-based Dropbox, according to the group’s report.
A Discord spokesperson said the company had banned a server and a user for violating its rules against sharing links or content that sexualizes children.
Mega said a link referenced in the Ghost Data report was created in early August and removed shortly after by the user, whom it declined to identify. Mega said it permanently closed the user’s account two days later.
Dropbox and Telegram said they use a variety of tools to moderate content, but did not provide additional details on how they would respond to the report.
Still, the backlash from advertisers poses a risk to the business of Twitter, which makes more than 90% of its revenue by selling digital ad placements to brands looking to market products to the service’s 237 million daily active users.
Twitter is also fighting in court against Tesla CEO and billionaire Elon Musk, who is trying to back out of a $44 billion deal to buy the social media company over complaints about the prevalence of spam accounts and their impact on the business.
A team of Twitter employees concluded in a report dated February 2021 that the company needed more investment to identify and remove child exploitation material at scale, noting that the company had a backlog of cases to review for possible reporting to law enforcement.
“While the amount of (child sexual exploitation content) has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not,” according to the report, which was prepared by an internal team to provide an overview of the state of child exploitation material on Twitter and to obtain legal advice on proposed strategies.
“Recent reporting on Twitter provides a momentarily outdated look at just one aspect of our work in this space, and is not an accurate reflection of where we are today,” Carswell said.
Traffickers often use keywords like “cp” for child pornography and are “intentionally as vague as possible” to avoid detection, according to internal documents. The more Twitter cracks down on certain keywords, the more users are pushed to use obfuscated text, which “tends to be more difficult for (Twitter) to automate,” the documents say.
Ghost Data’s Stroppa said such tricks would complicate efforts to search for the materials, but noted that his small team of five researchers, with no access to internal Twitter resources, was able to find hundreds of accounts within 20 days.
Twitter did not respond to a request for further comment.
Reporting by Sheila Dang in New York and Katie Paul in Palo Alto; Additional reporting by Dawn Chmielewski in Los Angeles; Edited by Kenneth Li and Edward Tobin
Our standards: The Thomson Reuters Trust Principles.