A simple chatbot on Pornhub has intercepted millions of searches for child abuse videos.
Over the past two years, Pornhub’s UK website has presented a pop-up chatbot to people searching for child abuse videos on the adult content site. The warning message has been triggered 4.4 million times by words or phrases linked to abuse. The pop-up blocks the content, warns the user that it is illegal and, in half of the cases, directs them to where they can find help.
The two-year trial was conducted by Pornhub in partnership with two UK-based child protection organizations. A new report from the Internet Watch Foundation (IWF) states that the pop-ups led to a decrease in the number of searches for child sexual abuse material and prompted more people to seek help for their behavior.
“The actual raw numbers of searches, it’s quite scary high,” Joel Scanlan, a senior lecturer at the University of Tasmania who led the evaluation of the reThink Chatbot, told WIRED. “There’s a significant reduction over the length of the intervention in number of searches. The deterrence messages do work.”
Throughout the trial, there were 4,400,960 warnings triggered by searches on Pornhub’s UK website. In total, 99% of searches during the trial did not trigger a warning.
The chatbot itself asked those who triggered the pop-up a series of questions, allowing them to click buttons to answer or type out a response. It went on to explain that the content the user had just searched for may be illegal and directed them toward charitable help services.
Searches for child sexual abuse material on Pornhub
Child sexual abuse material (CSAM) is illegal, and Pornhub takes steps to remove millions of images and videos every year. The adult content site uses a list of 34,000 banned terms to track and block CSAM, a company spokesperson told WIRED.
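Neither the report nor Pornhub describes how the term matching works, but the general technique of checking a search query against a blocklist and returning a deterrence message instead of results can be sketched roughly as follows. This is a minimal, hypothetical illustration: the term list, message text, and function name are placeholders, not Pornhub’s actual implementation.

```python
from typing import Optional

# Hypothetical sketch of blocklist-based search interception; the terms
# and message text below are placeholders, not Pornhub's real data.
BANNED_TERMS = {"example banned phrase", "another banned term"}

DETERRENCE_MESSAGE = (
    "Searching for this material is illegal. Confidential, anonymous help "
    "is available from Stop It Now: https://www.stopitnow.org.uk/"
)

def intercept_search(query: str) -> Optional[str]:
    """Return a deterrence message if the query matches a banned term, else None."""
    normalized = query.casefold()
    if any(term in normalized for term in BANNED_TERMS):
        return DETERRENCE_MESSAGE  # block results and show the warning instead
    return None  # no match: hand the query to normal search handling

if __name__ == "__main__":
    print(intercept_search("Example Banned Phrase video"))  # prints the warning
    print(intercept_search("ordinary query"))               # prints None
```

In the live deployment, a matched search also launched the reThink chatbot alongside the warning; the sketch above covers only the keyword trigger.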
These measures followed a 2020 New York Times report that exposed a damning number of child exploitation and nonconsensual videos hosted on the site. Since then, Pornhub has publicly stepped up its efforts to combat illegal material, including the chatbot, which was designed and built by the IWF, a nonprofit that removes CSAM from the web, and the Lucy Faithfull Foundation, a charity that works to prevent child sexual abuse.
The EU also brought the site under stricter regulation in 2023, while in the US, access to the site has been blocked in North Carolina and Montana after age verification laws came into force earlier this year.
The report states that 1,656 requests for more information were made through the chatbot and that 490 people clicked through to the charity’s Stop It Now website. Around 68 people called or chatted with Lucy Faithfull’s confidential helpline, the report says.