Does the Internet Feed Off of Women’s Bodies? How algorithms, AI, and platforms profit from the exploitation and silencing of women

By Lily Wass

With the United Nations set to convene the seventieth session of the Commission on the Status of Women, it is nearly impossible to discuss any of the Commission’s priorities—inclusive legal systems, the elimination of discrimination, addressing structural barriers—without talking about algorithmic justice.

Dr. Joy Buolamwini—a computer scientist, Black woman, and founder of the Algorithmic Justice League—popularized the concept in 2016 when she discovered that facial recognition software could not detect her face unless she wore a white mask. Her published findings corroborated this individual observation: models have significantly higher error rates when recognizing dark-skinned female faces than light-skinned male ones. Since then, the urgency of algorithmic justice has only multiplied amid the artificial intelligence (AI) boom of the 2020s.

One of the first algorithms many people knowingly interacted with was social media. In the early 2000s, the connectivity these networks created was hailed as a new form of global “community” by early pioneers like Mark Zuckerberg of Facebook. Some twenty years later, the algorithm is less likely to show you wholesome content from friends and family in chronological order, instead opting for material that makes you angry, matches your personal information and browsing habits, or belongs to the ubiquitous ‘AI slop.’ The resulting online ecosystem bears less and less resemblance to authentic circles of human interaction. It is more like a gamified universe of social distortion overlaid on a malleable human psychology, one that often forgoes quality and ethics for engagement.

On platforms designed to share images, what is amplified and scrutinized often ends up being the appearance of women. This is less a bug in the system than an intrinsic design feature that dates back to the origin of social networks. Before creating Facebook, Mark Zuckerberg first put together a “hot or not” rating site in his Harvard University dorm room. Scraping ID photos from the university’s directory, the game invited users to rank pairs of female students’ pictures against each other based on physical attractiveness. The crude platform went viral among students before being shut down for privacy violations—its popularity, achieved through the infringement of personal privacy, a sign of what was to come.

The globalization of Facebook took the engagement that interrogating women’s appearance reaps and turned it into a business model. In 2017, The Australian revealed that Facebook was pitching advertisers on the platform’s ability to identify the emotional vulnerability of its young audiences for marketing purposes. This included running ads promoting extreme weight loss to the accounts of young girls—ads that, as Facebook whistleblower Sarah Wynn-Williams has stated in her memoir and in testimony to the United States Senate, could be triggered when a user deleted a selfie from her page, an algorithmic indicator of insecurity. At present, ads glorifying eating disorders continue to be approved by the platform for 13-to-17-year-old audiences despite Facebook policy stating otherwise, and can now be created using highly realistic image generators.

Today, Meta—the parent company of Facebook, Instagram, and WhatsApp, among others—has placed further bets on another technology that exploits. In the past year, it has sold 7 million pairs of its AI smart glasses, advertised as hands-free technology that analyzes sound and visuals to interact with your surroundings. What the discreet recording feature has actually been used for is enabling men to secretly film their interactions with women. Although the glasses are supposed to display a visible recording signal, simple workarounds leave the subjects of these videos unaware they have been recorded and posted online—until they discover that footage of themselves discussing their personal lives, where they live, or their phone numbers has gained thousands of views. These videos are often met with further harassment in the comments below, otherwise known as ‘dogpiling,’ and have even led to in-person stalking of the victims. 404 Media has also documented numerous Instagram accounts where users of Meta’s AI glasses upload themselves entering massage parlors and attempting to solicit sex, again from unsuspecting women in their own workplaces.

Though the subjects of these videos never consented to being filmed, justice for victims of non-consensual recording is limited by the fact that it is generally legal to film others in public. The recency of the technology places it in a grey area that most national frameworks for online privacy do not yet cover. Meta recently indicated its intention to maintain momentum on the project while regulation lags behind. The safety of women—not to mention the privacy of the public at large—could face further infringement from its plans to integrate facial recognition into the smart glasses “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” as an internal document from Meta’s Reality Labs puts it.

But even someone who avoided all public spaces would not have their privacy protected. AI is already being used to generate harmful depictions of women and their bodies, falsely portraying them in sexually and violently explicit scenarios. The fraudulent nature of these images and videos does little to undermine the damage they are capable of inflicting.

Since X launched the chatbot Grok on its social media platform, lawsuits and investigations into the company have been opened in numerous countries, including a police raid on Elon Musk’s X offices in France. The chatbot gained notice when it began following user instructions to “undress” women in photos posted online, among other sexually explicit image-generation prompts. In less than two weeks, the tool was used to create three million sexualized images, including 23,000 depicting children.

The generation of explicit and sexually degrading content is an emerging tactic of intimidation used by far-right extremists and authoritarian governments to silence and discredit the voices of women journalists, activists, and citizens. Hours after her death, Renée Good, the Minneapolis resident shot by Immigration and Customs Enforcement officer Jonathan Ross, was violated online by Grok users who created images depicting her “slumped over in her car, in a bikini.” Of 100 journalists targeted by deepfakes, Reporters Without Borders reports that 74 percent were women; in one case, Argentine President Javier Milei, a virulent critic of media professionals, mocked and amplified deepfake pornography of journalist Julia Mengolini. The politically exiled Hong Konger and pro-democracy activist Carmen Lau—still subject to a HK$1 million bounty under the National Security Law for “incitement to secession” and collusion with foreign governments—was targeted by AI-generated images portraying her as a sex worker, which were sent to her neighbors in the United Kingdom.

The growing cases of abuse via generative models reflect an insidious flex of power and sexual violence traversing borders and internet networks; the reduction of a woman’s existence to a hypersexual depiction; and the ability to create lifelong trauma at little to no cost in a matter of seconds. 

The ramifications of a systemically unequal digital universe are vast and unknown—something we may only begin to understand when the real-world impacts are felt at a population level. To give an example, women are portrayed as significantly younger than men in millions of online images depicting social contexts and occupations. This digital misrepresentation of a gender division in the workforce may be training AI to render women as younger and less experienced than their male counterparts when asked to generate resumes from the same information. That same bias could be reinforced on the employer side by models that have been found to favor resumes from older men and to visualize high-paying occupations as overwhelmingly white and male. Already, 90% of companies worldwide use some form of automation to screen and select job applicants.

Likewise, as the work of Repro Uncensored overwhelmingly speaks to, algorithms have also become the ultimate gatekeepers of information, dictating what we know and whose voice is heard. This can include silencing the accounts of women, queer communities, and sexual and reproductive health providers. In 2025, Meta censored or fully removed the accounts of over 50 abortion- and LGBTQ+-related organizations worldwide. The technology company has also shadowbanned the accounts of certified abortion pill providers and currently prohibits the discussion of sex education, including menstrual health and consent, with underage users of the Meta AI chatbot. Even when the user is located in a region with legal, accessible abortion services, Meta AI has refused to answer related queries.

For global communities where access to informed decision-making about reproductive health is scarce or criminalized, reliance on the US-based large language models that dominate the market may limit results to those that align with the Trump administration’s agenda to “prevent woke AI” rather than with evidence. That agenda includes decrying the training of AI models on aspects of gender, race, and diversity, equity, and inclusion.

What this amounts to is a longstanding pattern: the hardware and apps we rely on for information and interaction embed the algorithmic exploitation of women into their functionality and business models. The evidence can be damning, but justice for those harmed—by exacerbated mental disorders, sexual harassment, job discrimination, or censorship—lags concerningly behind. The principles of responsible AI are largely being written by the tech companies that have the most to gain from deregulation, in collusion with authoritarian regimes seeking to exploit AI’s misuse to rewrite reality. Since the birth of online media, sacrificing the interests of women has been treated as necessary for technology to advance. The status of women today is one of survival within the algorithmic distortions of gender inequality, bearing the burden for those who profit from the currency of artificial bodies, minds, and voices that we are told represent us.
