
From AirTags to AI Nudification: The Growing Toolkit of Technology-Facilitated Abuse

By Jason R.C. Nurse and Lisa Sugiura | May 14, 2026

The use of xAI's Grok assistant to create sexualized images brought the issue of so-called technology-facilitated abuse to the fore. But it's a problem that predates AI – with smart glasses, Bluetooth trackers and "nudification" apps all used by abusers to control, harass or stalk their victims.

This abuse has grown as tech has become embedded in people's lives, and as AI advances rapidly. But governments have been slow to make tech companies design systems that minimise misuse, and to hold them accountable when things go wrong.

Our research has confirmed that technology misuse has increased and that its harms are significant. But governments and the tech sector are doing little to combat it – despite numerous examples of how tech can enable abuse.

Case 1: Smart Glasses

The growing popularity of smart glasses – which look like normal eyewear but can do many things a smartphone does – has led to reports of secret filming. In some cases, videos have been shared online, often attracting degrading and sexually explicit comments.

Meta has said its smart glasses have a light to show when they are recording and anti-tamper tech to make sure the light cannot be covered. But there appear to be workarounds.

In England and Wales, voyeurism legislation focuses on private spaces, and harassment laws do not specifically apply to targeted recording and online distribution. However, the UK Information Commissioner's Office is investigating after subcontractors were allegedly able to access intimate footage from customers' glasses. This is in addition to a lawsuit, which alleges Meta violated privacy laws and engaged in false advertising. Meta has said that it takes the issue very seriously and that faces are usually blurred out. It also discloses in its policies the potential for content to be reviewed either by a human or by automation.

Case 2: Bluetooth Trackers

Apple's AirTags, and other devices built for tracking personal items, have been used to stalk and harass people. Apple released updates so that potential victims would be alerted if an unknown device was traveling with them. But for many, this feature should have existed from the outset.
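The alerting feature described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not Apple's actual implementation (the class, threshold and identifiers below are all assumptions for illustration): the core idea is that a tag not paired to the user, seen repeatedly in different places, is probably travelling with them.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of "unknown tracker" alerting: if a Bluetooth tag
# that isn't paired to the user keeps appearing near them in different
# locations, flag it as potentially following them.

ALERT_THRESHOLD = 3  # distinct sightings before alerting (assumed value)

@dataclass
class TrackerMonitor:
    own_tags: set                                   # identifiers of the user's own devices
    sightings: dict = field(default_factory=dict)   # tag id -> set of locations seen

    def observe(self, tag_id: str, location: str) -> bool:
        """Record a nearby tag; return True if it appears to be following the user."""
        if tag_id in self.own_tags:
            return False  # ignore the user's own devices
        self.sightings.setdefault(tag_id, set()).add(location)
        # An unknown tag seen at several distinct places is likely travelling with us.
        return len(self.sightings[tag_id]) >= ALERT_THRESHOLD

monitor = TrackerMonitor(own_tags={"my-airtag"})
monitor.observe("stranger-tag", "home")
monitor.observe("stranger-tag", "bus")
alert = monitor.observe("stranger-tag", "office")  # third distinct location
print(alert)  # True
```

Real systems must also handle identifier rotation and avoid false alarms from devices belonging to people nearby, which is why such alerts took time to design well.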

The law in England and Wales is clear that attaching tracker devices to someone without their knowledge is a criminal offence. But in practice, the ease of covertly monitoring people using these devices means people continue to be at risk.

Case 3: AI Deepfake and ‘Nudification’ Apps

Apps can now "nudify" images, while AI is increasingly used to make deepfake pornography. In January, several instances of xAI's Grok assistant being used to create sexualized photos of women and minors came to light. All it took to create the images were some simple text prompts.

Following public backlash, xAI decided to limit this feature. But the safeguards appear to apply only in certain circumstances.

In February, the UK government announced legal changes, similar to the Take It Down Act in the US, which will require tech platforms in the UK to remove non-consensual intimate images within 48 hours. Failure to do so will result in fines and services being blocked, and the law is likely to be implemented from summer.

Using automated technology known as "hash matching," victims will only need to report an image once to have it removed from multiple platforms simultaneously. The same images would then be automatically deleted every time anyone attempted to reupload them. Nudification apps are also set to be banned, and using AI chatbots to create deepfake pornography will become illegal in the UK.
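The hash-matching approach above can be sketched in a few lines. This is a deliberately simplified illustration, not any platform's actual system: production schemes such as StopNCII use perceptual hashes that survive resizing and re-encoding, whereas the plain SHA-256 used here only matches byte-identical files. The function names and sample bytes are invented for the example.

```python
import hashlib

# Shared blocklist of image fingerprints. Crucially, only hashes are
# stored and exchanged between platforms - never the images themselves.
blocklist: set[str] = set()

def report_image(image_bytes: bytes) -> str:
    """Victim reports an image once; its hash joins the shared blocklist."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    blocklist.add(digest)
    return digest

def allow_upload(image_bytes: bytes) -> bool:
    """Any participating platform can check uploads against the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() not in blocklist

reported = b"...intimate-image-bytes..."
report_image(reported)
print(allow_upload(reported))        # False: re-uploads are blocked automatically
print(allow_upload(b"other image"))  # True: unrelated content passes
```

Sharing only hashes is the key design choice: it lets victims stop re-uploads across many platforms from a single report, without the sensitive image ever being transmitted or stored centrally.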

But there is more to be done. Mitigating risks must be embedded at the design stage to prevent these images being created in the first place. The rise of romantic and sexual chatbots means this has become more urgent.

And beyond deepfakes and nudification, AI can also enable harassment. This includes directly targeting someone with abusive content, or using fake images or profiles to deceive and exploit victims.

Challenges Ahead

These issues must be prevented through safeguards built into these technologies. This is what prioritising user safety should look like, after all. But often, these safeguards are missing. Safety tools are usually only added after harm has occurred, not built into platforms from the start.

Governments have allowed regulation to fall behind fast-paced developments. Tech companies have grown quickly, but laws and enforcement have not kept up. At the same time, police and legal systems are often under-trained or unclear on how to handle digital harm.

Even where there is regulation, such as the UK's Online Safety Act, penalties for platforms that allow abuse are often limited. The regulator Ofcom has issued only guidance to tech companies on how to better protect women and girls on their platforms. Campaigners have called for this to be made mandatory, with clear penalties for companies that do not comply, placing it on a level legal footing with child sexual abuse and terrorism content.

As AI advances, tech companies must prioritise system design that puts user safety first. But until governments enforce real consequences, the tech sector will be able to profit from harm while those using the platforms bear the cost.

This article is republished from The Conversation under a Creative Commons license. The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.
