{"id":426,"date":"2026-05-13T10:27:13","date_gmt":"2026-05-13T10:27:13","guid":{"rendered":"https:\/\/redzine.co.uk\/index.php\/2026\/05\/13\/from-airtags-to-ai-nudification-the-growing-toolkit-of-technology-facilitated-abuse\/"},"modified":"2026-05-13T10:27:13","modified_gmt":"2026-05-13T10:27:13","slug":"from-airtags-to-ai-nudification-the-growing-toolkit-of-technology-facilitated-abuse","status":"publish","type":"post","link":"https:\/\/redzine.co.uk\/index.php\/2026\/05\/13\/from-airtags-to-ai-nudification-the-growing-toolkit-of-technology-facilitated-abuse\/","title":{"rendered":"From AirTags to AI nudification: the growing toolkit of technology-facilitated abuse"},"content":{"rendered":"<figure><img decoding=\"async\" src=\"https:\/\/images.theconversation.com\/files\/732779\/original\/file-20260428-69-ov0dqe.jpg?ixlib=rb-4.1.0&amp;rect=408%2C0%2C3428%2C2284&amp;q=45&amp;auto=format&amp;w=1050&amp;h=700&amp;fit=crop\" \/><figcaption><span class=\"caption\"><\/span> <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/panoramic-shot-sad-crying-girl-holding-1502227280\">LightField Studios\/Shutterstock<\/a><\/span><\/figcaption><\/figure>\n<p>It\u2019s hard to overstate the impact that artificial intelligence has had since the release of generative AI platforms such as ChatGPT just three years ago. While they have led to countless advances in how we live and work, they have also been at the centre of controversies around domestic and sexual abuse.<\/p>\n<p>The use of the <a href=\"https:\/\/theconversation.com\/topics\/artificial-intelligence-ai-90\">AI<\/a> tool Grok <a href=\"https:\/\/theconversation.com\/how-ai-generated-sexual-images-cause-real-harm-even-though-we-know-they-are-fake-273427\">to remove<\/a> women\u2019s clothing in images brought the issue of so-called technology-facilitated abuse to the fore. 
But it\u2019s a problem that predates AI \u2013 with <a href=\"https:\/\/www.theguardian.com\/technology\/2022\/jan\/20\/apple-airtags-stalking-complaints-technology\">Bluetooth trackers<\/a>, <a href=\"https:\/\/refuge.org.uk\/news\/refuge-exposes-alarming-new-patterns-of-abuse-involving-wearable-technology\/\">wearable devices<\/a>, <a href=\"https:\/\/news.sky.com\/story\/chilling-surge-in-use-of-smart-speakers-and-baby-monitors-to-carry-out-domestic-abuse-mps-say-12933833\">smart speakers<\/a>, <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/cx23ke7rm7go\">smart glasses<\/a> and <a href=\"https:\/\/news.sky.com\/video\/how-women-are-being-stalked-using-secret-stalkerware-phone-apps-13272695\">apps<\/a> all used by abusers to control, harass or stalk their victims.<\/p>\n<p>This abuse <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/15248380221090218\">has worsened<\/a> as tech has become <a href=\"https:\/\/ec.europa.eu\/eurostat\/statistics-explained\/index.php?title=Digital_economy_and_society_statistics_-_households_and_individuals\">more embedded<\/a> in people\u2019s lives, and as AI advances rapidly. But governments <a href=\"https:\/\/www.bbc.co.uk\/news\/technology-66854618\">have struggled<\/a> to make tech companies design systems that minimise misuse, and to hold them accountable when things go wrong.<\/p>\n<p>Our <a href=\"https:\/\/journals.sagepub.com\/doi\/pdf\/10.1177\/17488958241266760\">own research<\/a> has confirmed that technology misuse has increased and that its harms are significant. But governments and the tech sector are doing little to combat it \u2013 despite numerous examples of how tech can enable abuse.<\/p>\n<p><strong>Case 1: Smart glasses<\/strong><\/p>\n<p>The <a href=\"https:\/\/cacm.acm.org\/news\/the-rise-of-smart-glasses\/\">growing availability<\/a> of smart glasses \u2013 which look like normal eyewear but can do many things a smartphone does \u2013 has led to reports of secret filming. 
In some cases, videos were <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/cx23ke7rm7go\">posted online<\/a>, often attracting degrading and sexually explicit comments.<\/p>\n<p>Meta <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/cx23ke7rm7go\">has said<\/a> its smart glasses have a light to show when they are recording and anti-tamper tech to make sure the light cannot be covered. But there appear <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/cx23ke7rm7go\">to be workarounds<\/a>.<\/p>\n<p>In England and Wales, voyeurism legislation focuses on private spaces, and harassment laws do not specifically apply to targeted recording and online distribution. However, the UK Information Commissioner\u2019s Office <a href=\"https:\/\/techcrunch.com\/2026\/03\/05\/meta-sued-over-ai-smartglasses-privacy-concerns-after-workers-reviewed-nudity-sex-and-other-footage\/\">is investigating Meta<\/a> after subcontractors were allegedly able to access intimate footage from customers\u2019 glasses. This is in addition to <a href=\"https:\/\/fortune.com\/2026\/03\/27\/meta-smart-glasses-filming-watching-workers-lawsuit-privacy\/\">a lawsuit in the US<\/a>, which alleges Meta violated privacy laws and engaged in false advertising. Meta has said that it takes the <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/c0q33nvj0qpo\">protection of data<\/a> very seriously and that faces are usually blurred out. It also discloses in its UK <a href=\"https:\/\/www.facebook.com\/legal\/uk-ai-terms\">terms of service<\/a> the potential for content to be reviewed either by a human or by automation.<\/p>\n<p><strong>Case 2: Bluetooth trackers<\/strong><\/p>\n<p>Apple\u2019s AirTags, and other devices built for tracking personal items, <a href=\"https:\/\/www.theguardian.com\/technology\/2022\/jan\/20\/apple-airtags-stalking-complaints-technology\">can be misused<\/a> to stalk and harass people, <a href=\"https:\/\/www.bbc.co.uk\/news\/newsbeat-65030359\">particularly women<\/a>. 
Apple released updates to <a href=\"https:\/\/support.apple.com\/en-gb\/119874\">AirTags and other trackable tech<\/a> so that potential victims would be alerted if an unknown device was travelling with them. But for many, this feature should have existed from the outset.<\/p>\n<p>The law in England and Wales is clear that attaching tracker devices to someone without their knowledge is a <a href=\"https:\/\/www.legislation.gov.uk\/ukpga\/1997\/40\/contents\">criminal offence<\/a>. But <a href=\"https:\/\/www.dorsetecho.co.uk\/news\/25079106.kevin-reid-sentenced-tracking-ex-wife-car-airtag\/\">despite convictions<\/a>, the ease of covertly monitoring people using these devices means potential victims remain at risk.<\/p>\n<figure class=\"align-center zoomable\">\n            <a href=\"https:\/\/images.theconversation.com\/files\/732786\/original\/file-20260428-85-5haeji.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=1000&amp;fit=clip\"><img decoding=\"async\" alt=\"woman checking in rear-view mirror of her car.\" src=\"https:\/\/images.theconversation.com\/files\/732786\/original\/file-20260428-85-5haeji.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip\"><\/a><figcaption>\n              <span class=\"caption\">A woman checks the rear-view mirror of her car.<\/span><br \/>\n              <span class=\"attribution\"><a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/woman-black-looks-rearview-mirror-car-2292013509?trackingId=c93c365f-950b-4da6-b656-e1ee59f9bb8e&amp;listId=searchResults\">Kannapon.SuperZebra\/Shutterstock<\/a><\/span><br \/>\n            <\/figcaption><\/figure>\n<p><strong>Case 3: AI deepfake and \u2018nudification\u2019 apps<\/strong><\/p>\n<p>Apps can now <a href=\"https:\/\/www.techtransparencyproject.org\/articles\/nudify-apps-widely-available-in-apple-and-google-app-stores\">\u201cnudify\u201d people<\/a>, while AI is increasingly used to make <a 
href=\"https:\/\/www.theguardian.com\/global-development\/2024\/mar\/01\/tech-bros-nonconsensual-sexual-deepfakes-videos-porn-law-taylor-swift\">non-consensual deepfake pornography<\/a>. In January, several instances of xAI\u2019s assistant <a href=\"https:\/\/theconversation.com\/grok-produces-sexualized-photos-of-women-and-minors-for-users-on-x-a-legal-scholar-explains-why-its-happening-and-what-can-be-done-272861\">Grok being used<\/a> to create sexualised photos of women and minors came to light. All it took to create the images were some <a href=\"https:\/\/www.wired.com\/story\/grok-is-pushing-ai-undressing-mainstream\/\">simple prompts<\/a>.<\/p>\n<p><a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/06\/grok-ai-fake-images-women-girls-undressed-uk-minister-liz-kendall\">After criticism<\/a>, xAI decided to limit this feature. But the safeguards appear to apply only to <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/ce8gz8g2qnlo\">certain jurisdictions<\/a> and <a href=\"https:\/\/www.lemonde.fr\/en\/pixels\/article\/2026\/01\/09\/grok-limits-ai-image-editing-to-paid-users-after-nudes-backlash_6749258_13.html\">certain users<\/a>.<\/p>\n<p>In February, <a href=\"https:\/\/www.gov.uk\/government\/news\/tech-firms-will-have-to-take-down-abusive-images-within-48-hours-under-new-law-to-protect-women-and-girls#:%7E:text=New%20law%20requires%20tech%20platforms,48%20hours%20or%20face%20fines.&amp;text=Tech%20companies%20will%20be%20ordered,girls%20from%20this%20distressing%20abuse\">the UK government announced<\/a> legal changes similar to the <a href=\"https:\/\/www.congress.gov\/crs-product\/LSB11314\">Take It Down Act<\/a> in the US, which will require tech platforms in the UK to remove non-consensual intimate images within 48 hours. Failure to do so will result in fines and services being blocked, and the law is likely to be implemented from summer. 
<\/p>\n<p>Using automated technology known as <a href=\"https:\/\/www.ofcom.org.uk\/online-safety\/safety-technology\/overview-of-perceptual-hashing-technology\">\u201chash matching\u201d<\/a>, victims will only need to report an image once to have it removed from multiple platforms simultaneously. The same images would then be automatically deleted every time anyone attempted to reupload them. Nudification apps and using AI chatbots to create deepfake pornography <a href=\"https:\/\/www.theguardian.com\/politics\/live\/2026\/jan\/12\/grok-x-nudification-technology-online-safety-labour-reform-tories-lib-dems-uk-politics-latest-news-updates\">will also become illegal<\/a> in the UK.<\/p>\n<p>But there is more to be done. Mitigating risks must be embedded at the design stage to prevent these images being created in the first place. The rise of romantic and sexual chatbots means this has become more urgent.<\/p>\n<p>And beyond deepfakes and nudification, AI can also enable <a href=\"https:\/\/theconversation.com\/ai-tools-are-being-used-to-subject-women-in-public-life-to-online-violence-271703\">harassment at scale<\/a>. This includes directly targeting someone with abusive content, or fake images or profiles that <a href=\"https:\/\/www.theguardian.com\/technology\/2024\/nov\/24\/ai-increasingly-used-for-sextortion-scams-and-child-abuse-says-senior-uk-police-chief\">impersonate victims<\/a> for so-called <a href=\"https:\/\/doi.org\/10.1093\/oxfordhb\/9780198812746.013.35\">\u201csextortion\u201d scams<\/a>.<\/p>\n<h2>Challenges ahead<\/h2>\n<p>These issues must be prevented <a href=\"https:\/\/www.theguardian.com\/technology\/article\/2024\/may\/20\/ai-chatbots-safeguards-can-be-easily-bypassed-say-uk-researchers\">with robust guardrails<\/a> built into these technologies. This is what prioritising user safety should look like, after all. 
But often, these guardrails <a href=\"https:\/\/www.wired.com\/story\/deepseeks-ai-jailbreak-prompt-injection-attacks\/\">have failed<\/a>. Safety tools are usually added only <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/20563051221144315\">after public pressure<\/a>, not built into platforms from the start.<\/p>\n<p>Governments have allowed regulation to fall behind fast-paced developments. Tech companies have grown quickly, but laws and enforcement have not kept up. At the same time, police and legal systems are often under-trained or unclear on how to handle digital harm.<\/p>\n<p>Even where there is regulation, such as the UK\u2019s <a href=\"https:\/\/www.legislation.gov.uk\/ukpga\/2023\/50\">Online Safety Act<\/a>, penalties for platforms that allow abuse are often <a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/cq68j5g2nr1o\">weak or unenforceable<\/a>. The regulator Ofcom has issued only <a href=\"https:\/\/www.ofcom.org.uk\/online-safety\/illegal-and-harmful-content\/a-safer-life-online-for-women-and-girls\">voluntary guidance<\/a> to tech companies on how to better protect women and girls on their platforms. Campaigners have called for this to be <a href=\"https:\/\/www.endviolenceagainstwomen.org.uk\/new-ofcom-vawg-guidance-is-welcome-but-more-is-needed-to-tackle-online-abuse\/\">made mandatory<\/a>, with clear penalties for companies that do not comply, placing it on a level legal footing with child sexual abuse and terrorism content.<\/p>\n<p>As AI advances, tech companies must prioritise system design that puts user safety first. But until governments enforce real consequences, the tech sector will be able to profit from harm while those using the platforms bear the cost.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/counter.theconversation.com\/content\/274468\/count.gif\" alt=\"The Conversation\" width=\"1\" height=\"1\" \/><\/p>\n<p class=\"fine-print\"><em><span>Jason R.C. 
Nurse receives\/received funding from The Engineering and Physical Sciences Research Council (EPSRC), The Research Institute for Sociotechnical Cyber Security, The National Cyber Security Centre (NCSC), and the UK Home Office. He is affiliated with Wolfson College, University of Oxford as a Research Member, CybSafe as the Director of Science and Research, and The Royal United Services Institute (RUSI) as an Associate Fellow.<\/span><\/em><\/p>\n<p class=\"fine-print\"><em><span>Lisa Sugiura receives funding from the Home Office Domestic Abuse Perpetrators Intervention Fund.<\/span><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>LightField Studios\/Shutterstock It\u2019s hard to overstate the impact that artificial intelligence has had since the release of generative AI platforms such as ChatGPT just three years ago. While they have led to countless advances in how we live and work, they have also been at the centre of controversies around domestic and sexual abuse. The [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-426","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/posts\/426","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/comments?post=426"}],"version-history":[{"count":0,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/posts\/426\/revisions"}],"wp:attachment":[{"href":"https:\/\/redzine.co.uk\/ind
ex.php\/wp-json\/wp\/v2\/media?parent=426"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/categories?post=426"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/redzine.co.uk\/index.php\/wp-json\/wp\/v2\/tags?post=426"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}