The Age Verification law: Training a Nation to Use VPNs / a Pornography Privacy Nightmare

By Rhian Farnworth, with additional contributions by Martha Dimitratou.

It's a great time to be a Virtual Private Network (VPN) provider: VPN downloads topped app-store charts in the UK last week. This followed the roll-out of strict Age Verification (AV) checks on pornography sites on July 25, part of a new iteration of the UK’s Online Safety Act.

AV is designed to stop minors accessing adult and other harmful content online, including pornography and content relating to eating disorders, suicide, and self-harm. To counter minors browsing sites featuring such material, the new law states that strict age verification measures must be introduced, utilising methods like:

  • Facial recognition and age-estimation tools

  • Uploading official IDs

  • Credit card or mobile phone verification

  • Taking selfies or short videos for age confirmation

Beyond adult content sites, platforms like Spotify have also started applying age checks to some music videos, and Reddit has added AV on certain subreddits, including r/uk_beer, r/stopsmoking, r/stopdrinking, r/sexual_assault, and r/periods. These intrusive measures raise serious concerns around privacy, anonymity, freedom of access to information, surveillance, and data security. The change is also an unsettling development for freedom of speech advocates, and for anyone who cares about their ID being linked to their browsing habits. Digital footprints are forever, and uploading government-issued or official IDs to a multitude of sites when you just wanted a quick wank is a data breach waiting to happen! But puns aside, further concerns include limitations on accessing lawful content, the stifling of freedom of expression, surveillance of online habits, increasing censorship and content restrictions, and the somewhat concerning role of the growing ‘identity verification industry’ (IDV industry), which we’ll unpack next.

Worth $12.9 billion USD in 2024 alone, the IDV industry often utilises tools like facial recognition and facial-scanning software to determine age, raising grave concerns about gendered, racial, and age-based bias being baked into these technologies. This paper on demographic bias in facial recognition systems, released earlier in 2025, shares that facial recognition technologies face ‘challenges for specific demographic groups, including females, Black individuals, and younger cohorts’, and ‘unique challenges in recognizing children’s faces due to structural changes with age’. Bias in facial recognition technologies is not a new phenomenon and is well-documented*; yet gatekeeping ‘harmful content’ at scale with technologies that are flawed in their design only further exacerbates existing societal systems of oppression and bias. While we recognise that not all IDV technologies rely on facial recognition, those that do open up further technological means of discrimination.

Private IDV companies additionally have an increasingly prominent voice in the big tech landscape and across society, helping to shape and lobby for laws like AV to be implemented. Given their clear ability to influence matters directly affecting a nation’s browsing habits at scale, the role of IDV companies is also of note. During the development of the UK’s draft online safety bill, IDV company Yoti answered calls for input to the bill. Clearly endorsing mandatory AV, Yoti also collaborated with the UK Age Verification Providers Association, subtly influencing the UK Draft Online Safety Bill (Definition of Content Harmful to Children). This is not the first time private companies have involved themselves in the development of AV law in the UK: AgeChecked Ltd, AVSecure, AVYourself, and VeriMe started legal action against the UK government in 2020, seeking a judicial review after earlier age verification laws due to be implemented were suspended. Such input blurs the line between technology provision and political lobbying, with IDV providers advocating for the very laws they stand to profit from.

Sensitive data privacy - like the privacy of your face, credit card, phone number, or selfies - is a further grave concern. While the UK benefits from strict data protection mechanisms like the General Data Protection Regulation (GDPR), over 90% of porn consumption happens on sites based in the United States. While these should in theory be GDPR-compliant for UK visitors, it's challenging to monitor how compliant sites based outside the UK or EU actually are. With sites requesting official IDs or other confidential personal information, there is great potential for spoof sites to illegally collect identity information, commit identity theft, or even blackmail people who are misled into supplying personal AV data to bad actors masquerading as legitimate sites.

What is the future of AV? Already, five EU countries - Denmark, Greece, Spain, France, and Italy - have confirmed they will trial technical AV solutions, indicating this could be part of a wider trend towards further controlling access to the internet. The UK has long championed freedom of speech, an open internet, and access to information, yet this latest move counters that tradition and brings into conversation many digital rights issues more familiar in authoritarian settings.

The AV law’s implementation reflects classic building blocks of authoritarian control of free speech and censorship. Controlling flows of information is a common first step in removing free access to it - AV has the potential to be the first of many digital crackdowns restricting access to digital content in the UK, and determining who has access to it.

Training a nation to use VPNs and the impact on independent adult sites

But will AV measures actually stop anyone from accessing pornography, or will they simply train a nation of young, tech-savvy digital natives to use VPNs to access adult content?
According to 2025 statistics from GitNux, there are over 4.2 million active adult content sites, over 1.5 billion daily online search queries related to porn globally, and pornographic content accounts for 30% of all online data transfers. While individual adult websites are required to impose AV checks, this does not stop people from accessing porn through other mediums: social media, file-sharing, instant/direct messaging, search engines (Google receives over two million adult content searches per day, over one million of them involving the word ‘porn’), mobile apps (through which over 50% of internet pornography is accessed, rather than browsers), proxies, millions of smaller and less-regulated or unregulated porn sites, or even the dark web. With this in mind, it’s going to be very difficult to stop people accessing porn!

There are many easily accessible workarounds, like VPNs. By masking a user's IP address, a VPN allows them to appear as if they were located in a different country - in this case, one without AV laws - enabling individuals to easily and freely circumvent AV measures by hiding their location. Remember the GDPR and data privacy issues mentioned earlier? If someone uses a VPN to appear located somewhere that is not GDPR-compliant, data privacy risks increase significantly, exposing individuals to their personal data being stored without consent, sold, or leaked from databases. This ‘VPN fallacy’ is a long-known workaround of AV checks (and one of the reasons earlier iterations of the AV law were suspended), quickly rendering AV essentially meaningless and circumventable.

The knock-on effect hits independent porn sites and adult content creators, with smaller sites losing over 50% of their ad revenue as VPNs automatically block ads. This can quickly mean a site shutdown, and the rapid decline of independent adult content platforms - sites that are already vulnerable to censorship, financial precarity, and algorithmic suppression. As ad revenue collapses under the weight of AV-driven VPN use and automated ad-blocking, many creators face shutdowns or are pushed to unsafe and unregulated corners of the internet. As noted in this great article in Pornbiz, porn sites are frequently cloned, their content stolen and republished on sites with very similar URLs, without any of the original credits. Such sites are published quickly, ripping off designs and content, and can disappear just as quickly, without ever implementing AV checks.

Gizmodo has already published a handy article instructing people how to bypass Reddit age verification, and there has been a recent surge in VPN recommendation guides. In response to the growing VPN literacy and use, UK Science Secretary Peter Kyle shared that while the UK won’t be banning VPN usage (for now), the government will be looking “very closely” at how VPNs are being used. Definitely something to keep on the radar. The point is, AV checks or not, porn will always be available - and cracking down on mainstream and select smaller sites will not stop anyone from accessing it.

Far from protecting the public, AV laws may instead entrench power among a few major platforms, erode digital autonomy, and accelerate the ongoing erasure of marginalized voices online. Adult content creators and sites have long been treated by wider society as less tolerable and second class, experiencing discrimination, suppression, and unequal treatment through social, legal, and digital infrastructures. This is not a new theme to ReproUncensored, sexuality educators, and those working on SRHR topics, as patterns of oppression in society are baked into technologies through automated decision making, blocking, suppression, and shadowbanning. The implementation of AV checks reinforces this narrative, suppressing adult content sites and creators even further.

Data Security and Data leaks - should we rely on individual sites to manage our sensitive data and information?

Recent unsettling database leaks, like the Tea app’s unsecured cloud storage being hacked and published on 4chan, do not instill confidence in relying on individual sites to manage data security. Tea’s hack resulted in over 72,000 private messages, government-issued IDs, and 13,000 site-verification selfies being published, demonstrating how easily very sensitive private and personal information can accidentally fall into the wrong hands. Tea Dating Safety is an anonymous dating safety app for women to share dating information about men and potential red-flag behaviour. Similar to some AV checks, Tea’s verification methods used selfies to prove users were female, with their privacy policy promising immediate deletion of the selfies. Tea advised users to use anonymous usernames when registering for the service, yet as 404 Media reported, it “was trivial… to find the real world identities of some users given the nature of their messages, which Tea has led them to believe were private.” While Tea’s database leak appears to have come down to atrocious database security, it demonstrates how easily very sensitive information, like AV check data, can be leaked in an instant.

With many angles to consider, there appear to be more issues than solutions with the AV law. While it claims to protect children, in practice AV does little more than push users toward circumvention tools, punish small and independent creators, and open the door to mass surveillance and data exploitation. As VPN use spikes and data leaks like Tea's become cautionary tales, it's clear that AV laws create far more risk than safety. Worse, they legitimize a growing digital infrastructure of identity verification, algorithmic suppression, and content policing - tools that disproportionately harm marginalized voices.

What’s unfolding is not a public safety measure. It is the normalisation of authoritarian digital controls, under the pretense of child protection. If we are serious about digital rights, online safety, and freedom of expression, we must demand more than surveillance solutions, and tokenistic measures that in reality, don’t work. This is a test of how far governments can go in restricting internet access and how easily we accept it.

Take Action:

For UK residents, an online petition requesting that the government repeal the Online Safety Act has been launched; at the time of publishing this article, it has received over 495,000 signatures. Sign the petition here.

_____________________________________________________________________________________________________

* For more on this and a great overview, check Gender Shades by Dr. Joy Buolamwini, Founder of The Algorithmic Justice League.
