In a precedent-setting case, a British court banned a sex offender from using AI tools. At the same time, the government announced a new law that will criminalize the creation of sexual "deepfakes" without consent.
A British court has barred a sex offender named Dover, who had generated more than 1,000 pornographic images of children, from using AI tools for five years.
The ban is part of a court order aimed at preventing further sexual offenses. It prohibits him from using text-to-image generators, which produce realistic images from written prompts, as well as "nudifier" websites used to create explicit deepfakes.
According to court documents, Dover is specifically not allowed to use Stable Diffusion software, which reportedly has been abused by pedophiles to create hyper-realistic child abuse material.
This is the first known case of its kind in the UK and could set a precedent for how individuals convicted of child pornography offenses are monitored in the future, The Guardian reports.
UK government cracks down on abusive deepfakes
Last week, the British government announced the creation of a new criminal offense that will make it illegal to create sexually explicit deepfakes of adults without their consent.
Offenders will face prosecution and an unlimited fine. If the image is also distributed, they could face a prison sentence.
The law aims to ensure that producing a sexually explicit deepfake is a criminal offense even if the creator never intends to distribute it but makes it to cause fear, humiliation, or distress to the victim.
Existing offenses will also be tightened: if someone creates such an image and then distributes it, prosecutors could charge two separate offenses, potentially increasing the penalty.
Justice Minister Laura Farris called the creation of deepfake sexual images "despicable and completely unacceptable," whether or not the images are shared. The practice, she said, is another example of the degradation and dehumanization of certain people, particularly women, and the government will not tolerate it.
The number of deepfake images has risen in recent years, and such content is viewed millions of times a month worldwide, according to reports by the Internet Watch Foundation.