AI-generated child sexual abuse content is increasingly appearing on the open internet and has reached a “tipping point,” according to the Internet Watch Foundation. The online safety organization said the amount of illegal AI-generated content it found over the last six months exceeds what it found throughout the entire previous year. Much of this content sits on publicly accessible parts of the internet, not on the dark web.
According to Interim IWF CEO Derek Ray-Hill, these AI images are so sophisticated that the tools used to create them have likely been trained on actual pictures of abuse victims. The problem, he said, is not going away; it is getting worse. An IWF analyst reports that the situation has reached a point where law enforcement and safety organizations can no longer tell whether an image shows a real victim who needs help.
Between April and September, the IWF processed 74 reports of AI-generated child sexual abuse material (CSAM), more than the 70 reports it received in the entire previous year, and a single report can cover a webpage containing multiple images. Flagged content included images depicting known abuse victims as minors, “deepfake” videos in which adult pornography was manipulated to resemble CSAM, images of celebrities edited to look like minors, and photographs of real children manipulated into sexualized content.
More than half of the flagged AI-generated material is hosted on servers in Russia and the United States, with Japan and the Netherlands also hosting significant amounts. The IWF maintains a list of URLs pointing to illegal content, which it shares with tech companies so they can block access to those sites. Notably, 80% of the reports about illegal AI-generated images came from members of the public who encountered them on open websites such as forums or AI galleries.
Instagram, meanwhile, has introduced new measures to combat “sextortion,” a form of blackmail in which scammers trick account holders into sharing nude photos and then threaten to expose them. A new feature in the DM section automatically blurs nude photos and prompts users to think twice before viewing or forwarding such material. Recipients can choose to view the blurred image, block the sender, or report the message.
The feature is currently enabled by default for teen accounts worldwide, and it works in encrypted messages, although flagged pictures are not automatically reported to Instagram or the authorities. Adults can opt in to turn it on. Instagram has also been working to shield follower lists from known scammers, to protect users from threats and exploitation.