Over 680,000 women have no idea their photos were uploaded to a bot on the messaging app Telegram to produce photo-realistic simulated nude images without their knowledge or consent, according to tech researchers.
The tool allows people to create a deepfake, a computer-generated image, of a victim from a single photo.
Sensity, a visual threat intelligence company headquartered in Amsterdam, discovered the Telegram network of 101,080 members, 70% of whom appeared to reside in Russia or Eastern Europe.
“This one’s unique because it’s not just people talking or people sharing content, it’s actually embedded in Telegram, and we have not found something similar,” said Giorgio Patrini, CEO and chief scientist at Sensity.
At least 104,852 images of women have been posted publicly to the app, with 70% of the photos coming from social media or private sources. A small number of these victims appeared to be underage. The rest of the images were likely shared privately, researchers say.
Unlike the algorithms that make deepfake videos — including nonconsensual sexual videos — the Telegram bot doesn’t need thousands of images to work. It only needs one, “which is really a reason why so many private individuals are attacked, because only one profile picture from Facebook is enough to do this,” said Patrini.
This harassment appears to be happening without the knowledge or consent of the photographed women, the vast majority of whom are private citizens rather than celebrities or influencers.
“As soon as you share images or videos of yourself and maybe you’re not so conscious about the privacy of this content, who can see it, who can steal it, who can download it without you knowing, that actually opens the possibility of you being attacked,” Patrini said.
Nina Jankowicz, author of "How to Lose the Information War," said this bot shows that deepfakes go beyond politics. For Jankowicz, worries about the national security implications of a convincing fake video sidestep the reality that the technology is largely deployed to abuse women.
“It’s really disturbing how accessible it is,” Jankowicz said. “Frankly, the thing that we’ve seen shared as evidence of the growing deepfake phenomenon, the little silly videos of Joe Biden that President Trump has shared, for instance, that stuff is far less sophisticated than what you’re talking about.”
Jankowicz said the bot has huge implications for women everywhere, especially in more socially conservative countries like Russia. Victims could be at risk of losing their jobs and livelihoods if a convincing but fake nude photo were made public. Some could face partner violence.
“Essentially, these deepfakes are either being used in order to fulfill some sick fantasy of a shunted lover, or a boyfriend, or just a total creepster,” Jankowicz said. “Or they’re used as potential blackmail material.”
The bot and its easy accessibility speak to larger themes of the harassment and abuse women face online — something Jankowicz has experienced firsthand.
“This is all part and parcel of the broader abuse and harassment that women have to deal with in the online environment, whether that’s just trolling or whether it’s the gendered and sexualized abuse coming from all sides of the political spectrum,” Jankowicz said. “It’s used as a weapon of trying to push women out of the public sphere. This is just an extension of that.”
While Patrini and his team didn’t find proof of these images being used for extortion of women, they fear that possibility is fast approaching.
Deepfake nudes often target celebrities, but this network appears to be more focused on people who are not famous. In a poll Sensity conducted of the bot’s users, 63% said they were interested in women they knew.
“In the industry, at least, it is a known problem to some extent,” said Patrini, “but I really struggle to believe that at the level of private citizens it’s known by anyone.”
According to Sensity, “All sensitive data discovered during the investigation detailed in this report has been disclosed with Telegram, [Russian social media site] VK, and relevant law enforcement authorities. We have received no response from Telegram or VK at the time of this report’s publication.”
Patrini pointed out that so-called “porn bots” go against Telegram’s terms of service.
The bot has been advertised on VK, with Sensity finding activity on 380 pages on the site.
This tool, which BuzzFeed News is declining to name, allows people to produce deepfakes on cellphones, remotely generating the images before sending them back to the user. The bot, which only works on images of women, provides watermarked images at no cost, and images without watermarks for a fee of about $1.50. Customers can also earn money by referring others to the service.
“That’s the phenomenon of this technology becoming a commodity,” Patrini said. “No technical skill required, no specialized hardware, no special infrastructure or accessibility to specific services that are hard to reach.”
According to Sensity, seven Telegram channels using the bot had attracted a combined 103,585 members by the end of July, a year after the tool launched, with the most populous channel counting 45,615 members.
Patrini reiterated that while many people fear how deepfake technology could be used in politics, its most widespread actual use is the exploitation of women online.
“This is not a problem of a high system of democracy, at least primarily. This is not a problem only for public figures and celebrities, but it’s going to be a problem for everybody, unfortunately quite soon,” Patrini said. “It’s already today a problem for hundreds of thousands of people.”