AI Porn Statistics (2025-2030): Key Data, Trends & Facts
AI-generated pornography has exploded from a niche curiosity to a global crisis in under three years. Deepfake porn videos increased 550% between 2019 and 2023, with 98% of all deepfakes online being pornographic and 99% targeting women.
AI-generated pornography: the numbers behind the fastest-growing content crisis
Nudification apps have been downloaded 705 million times, dedicated deepfake porn sites attract 34.8 million monthly visitors, and Europol has warned that up to 90% of online content could be synthetically generated by 2026.
The technology’s rapid democratization — it now takes less than 25 minutes and zero dollars to produce a 60-second deepfake porn video from a single photo — has triggered a legislative scramble across dozens of countries and exposed millions of people, including children, to non-consensual exploitation at industrial scale.
Deepfake porn volumes are growing at triple-digit rates annually
The most comprehensive tracking comes from Home Security Heroes’ 2023 State of Deepfakes report, which identified 95,820 deepfake videos online — up from 14,678 in Sensity AI’s 2019 baseline.
Production of deepfake pornographic videos specifically surged 464% between 2022 and 2023, jumping from roughly 3,725 to 21,019 new videos in a single year. Across the top 10 dedicated deepfake porn websites, cumulative video views exceeded 303 million, with 34.8 million monthly visitors driving traffic.
The growth has only accelerated since. DeepStrike.io estimates that deepfake files surged from 500,000 in 2023 to 8 million by 2025 — a 1,500% increase in two years.
Sumsub’s 2024 Identity Fraud Report documented a 4x increase in deepfakes detected globally year-over-year. The UK’s Crest Advisory estimated non-consensual sexual deepfake content grew 1,780% between 2019 and 2024.
Keepnet Labs tracked deepfake incidents rising from 42 in 2023 to 150 in 2024, with 179 incidents in Q1 2025 alone — outpacing all of 2024 within three months.
The trajectory shows no sign of slowing. DeepStrike projects annual growth nearing 900%, and the European Parliament Research Service notes that deepfakes have been doubling in number every six months. Europol has warned that 90% of all online content could be synthetically generated by 2026. By 2030, analysts project that 1 trillion deepfake images will be generated annually — and that non-pornographic use cases (enterprise, advertising, fraud) will finally surpass pornographic ones in raw volume, even as the absolute number of NCII deepfakes continues to climb.
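The cumulative percentages quoted in this section imply very different annualized rates, and converting between the two is a common source of confusion. A minimal sketch of the conversion, using DeepStrike's 2023–2025 file counts from above:

```python
def implied_annual_growth(start, end, years):
    """Annualized growth rate implied by a start value, an end value, and a span in years."""
    return (end / start) ** (1 / years) - 1

# DeepStrike's estimate: 500,000 deepfake files in 2023 to 8 million by 2025.
# A 1,500% cumulative increase (16x) over two years is 4x per year.
growth = implied_annual_growth(500_000, 8_000_000, 2)
print(f"{growth:.0%}")  # prints "300%", i.e. quadrupling annually
```

By the same arithmetic, "doubling every six months" also compounds to 4x per year, so those two claims are mutually consistent even though the headline percentages look very different.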
The overwhelming majority of this content targets women. Sensity AI’s foundational research found 100% of early deepfake pornography subjects were female; the 2023 data puts it at 99%. South Korean singers and actresses constitute 53% of individuals featured, and 94% of all targets work in entertainment.
Roughly 4,000 female celebrities were catalogued across top deepfake porn websites. The Taylor Swift deepfake incident in January 2024 — where explicit AI-generated images accumulated 47 million views on X before removal — became a watershed moment that directly accelerated federal legislation.
The AI Porn market is worth billions and growing fast
The broader global adult entertainment market was valued at $61.79 billion in 2024, projected to reach $112.64 billion by 2033 at a 6.9% CAGR, according to SkyQuestTT.
The AI-specific segment of this market is harder to pin down, but several data points sketch its contours. The AI-generated sexual content market is projected to reach $2.5 billion by 2025.
The closely related AI girlfriend and companion app market was valued at $2.57–$2.7 billion in 2024, with projections reaching $11–$24.5 billion by 2032–2034 depending on the estimate, reflecting a 20–25% CAGR.
AI companion apps have been downloaded 220 million times globally as of July 2025, with downloads surging 88% year-over-year in the first half of 2025. Consumer spending on these apps hit $82 million in H1 2025, on pace for over $120 million by year-end, with 64% year-over-year revenue growth.
Revenue per download more than doubled, jumping from $0.52 in 2024 to $1.18 in 2025. Leading platforms include Character.AI ($32.2 million annual revenue), Chai AI ($30 million+ ARR with just 12 employees), and Candy.ai ($25 million ARR, fully bootstrapped).
Some 337 actively revenue-generating AI companion apps existed globally by mid-2025, with 128 new ones launching in H1 2025 alone.
AI companion & NSFW app market forecasts
- The AI companion market is growing at 30% annually and is projected to reach $140–210 billion by 2030.
- The AI girlfriend app market was worth $2.57 billion in 2024 and is expected to reach $11.06 billion by 2032, at a 20% CAGR.
- The NSFW emotional support segment alone is valued at $1.2 billion in 2025, with a projected 32% CAGR — the fastest-growing subsegment with the highest monetization.
- The global AI girlfriend app market is expected to grow at a CAGR of 27.4% from 2024 to 2030.
- AI companion apps are on track for $120M in consumer revenue in 2025, with revenue per download more than doubling year-on-year.
OnlyFans, the dominant creator platform, processed $7.22 billion in gross revenue in fiscal year 2024, paying out $5.8 billion to creators. While the platform itself is not AI-driven, 84% of its creators now use AI tools in some capacity — from AI chatbots handling fan messages to content optimization — and direct messages, the area most ripe for AI automation, generate 70% of income for top creators.
Deepfake AI market size forecasts (2025–2030)
Multiple research firms have sized the broader deepfake AI market. Note that these figures cover all uses (fraud detection, entertainment, advertising), not just adult content:
| Source | Baseline | Projection | CAGR |
|---|---|---|---|
| Grand View Research | $764.8M (2024) | $6.14B (2030) | 42.3% |
| Mordor Intelligence | $1.14B (2025) | $8.11B (2030) | 48.1% |
| P&S Intelligence | $572M (2024) | $5.29B (2030) | 44.8% |
| Markets & Markets | $850M (2025) | $7.27B (2031) | 42.8% |
| Fortune Business Insights | $9.19B (2025) | $51.4B (2034) | 21% |
The wide range across firms — roughly $6 billion by 2030 at the low end versus $51 billion by 2034 at the high end — reflects different scope definitions: narrower reports count only synthetic media tools, while wider ones include detection software, enterprise AI, and adjacent markets.
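The CAGRs in the table can be sanity-checked from their own endpoints. A minimal sketch using the Grand View Research row, assuming a six-year 2024 to 2030 span:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two market-size estimates."""
    return (end / start) ** (1 / years) - 1

# Grand View Research row: $764.8M (2024) to $6.14B (2030), reported CAGR 42.3%.
# Figures are in millions of dollars.
print(f"{cagr(764.8, 6140, 6):.1%}")  # prints "41.5%"
```

The computed 41.5% lands close to the published 42.3%; small gaps like this typically come from rounding in the published baseline and projection figures.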
Nudification apps and tools have reached hundreds of millions of users
The nudification ecosystem represents one of the most alarming vectors of AI porn proliferation. The Tech Transparency Project reported in January 2026 that deepfake porn and nudification apps had been downloaded 705 million times from Apple and Google app stores.
Graphika identified 34 synthetic non-consensual intimate image providers that received over 24 million unique visitors in September 2023 alone, while 52 Telegram groups used to access such services contained at least 1 million users. Referral link spam for nudification services increased more than 2,000% on Reddit and X since early 2023.
The accessibility is staggering. One in every three deepfake tools allows creation of pornographic content. Creating a deepfake porn video requires no cost and less than 25 minutes with a single clear photograph.
Nearly 2,300 tools exist for AI face swaps, lip syncs, and face reenactments, according to Sensity AI’s 2024 report. Telegram nudification bots in South Korea alone reached approximately 4 million monthly users by late 2024.
A peer-reviewed study in PLOS One (Steel, 2026) surveying 557 U.S. adolescents aged 13–17 found that 55.3% had created at least one nudified image and 36.3% reported having a non-consensual nudified image created of them.
The Thorn Foundation’s March 2025 research surveying 1,200 young people aged 13–20 found 1 in 8 personally knew someone targeted by deepfake nude imagery, 6% had been targets themselves, and 2% admitted to creating deepfake nudes of another person. Disturbingly, 1 in 5 teens believed creating deepfake nudes — including of minors — was legal.
Enterprise deepfake capabilities are projected to grow 500% by 2030, meaning the tools currently powering nudification apps will become dramatically more capable and accessible over the next five years.
AI-generated child sexual abuse material is skyrocketing
The Internet Watch Foundation (IWF) has documented an exponential rise in AI-generated child sexual abuse material. In 2023, analysts found 20,254 AI-generated images on a single dark web forum in one month, with 2,978 confirmed as depicting child sexual abuse.
By 2024, actionable reports containing AI-generated CSAM rose 380% to 245 reports covering 7,644 images. The 2025 figures were far worse: 8,029 AI-generated images and videos assessed as showing realistic child abuse, including 3,443 AI-generated videos — a 26,385% increase from 2024’s 13 videos.
Of those videos, 65% (2,233) depicted Category A abuse — the most extreme classification including rape and sexual torture. Critically, 99% of AI-generated CSAM was found on the clear web, not the dark web.
NCMEC’s CyberTipline received 67,000 reports involving generative AI in 2024, up from 4,700 in 2023 — a 1,325% increase. In the first half of 2025, that figure exploded to over 440,000 AI-related reports.
However, Stanford CyberLaw analysis revealed an important caveat: at least 78% of H1 2025 reports stemmed from Amazon scanning AI training data for hash matches rather than actual AI-generated content, a distortion caused by NCMEC’s single “Generative AI” checkbox conflating different meanings. Still, even adjusting for this, the trajectory is clearly exponential.
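A back-of-envelope adjustment shows why the caveat does not change the conclusion. Assuming exactly the 78% lower bound from the Stanford analysis:

```python
# Stanford CyberLaw's caveat: at least 78% of H1 2025 "Generative AI" reports
# were Amazon hash-match scans of AI training data, not newly generated content.
h1_2025_reports = 440_000
amazon_share = 0.78           # lower bound; the true share may be higher

adjusted = h1_2025_reports * (1 - amazon_share)
print(f"{adjusted:,.0f}")     # prints "96,800"
# Roughly 96,800 reports in six months still exceeds the 67,000 reports
# NCMEC received for all of 2024.
```

Even under this conservative adjustment, the half-year figure outpaces the prior full year, which is why the trajectory still reads as exponential.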
The IWF noted that 90% of AI-generated CSAM images were realistic enough to be assessed under the same legal framework as real CSAM, and 99.6% depicted females.
The Grok scandal in late December 2025 through early 2026 dramatically illustrated the scale risk. When xAI’s Grok chatbot launched image editing with a “Spicy Mode,” the Centre for Countering Digital Hate calculated that it produced an estimated 3 million sexualized images in 11 days, generating 6,700 sexually suggestive images per hour — 84 times more than the top five deepfake websites combined.
Analysis of 20,000 Grok images found approximately 2% appeared to depict minors, translating to an estimated 23,000 explicit images of children. The incident triggered investigations from regulators in Ireland, the UK, France, California, Spain, and India, and Malaysia and Indonesia became the first countries to block the service entirely.
A patchwork of laws is racing to catch up
The United States passed its first federal law targeting AI-generated intimate imagery when President Trump signed the TAKE IT DOWN Act on May 19, 2025.
The law makes it a federal crime to knowingly publish non-consensual intimate imagery — real or AI-generated — of identifiable persons, requires platforms to remove flagged content within 48 hours, and carries criminal penalties including up to three years imprisonment for content involving minors.
The bill passed unanimously in the Senate and 409–2 in the House, a rare bipartisan achievement driven by the Taylor Swift incident and the Aledo, Texas high school case where a student created nude deepfakes of classmates. Platforms must comply by May 19, 2026.
The DEFIANCE Act, creating a federal civil right of action allowing victims to sue for up to $250,000 in damages, passed the Senate unanimously in January 2026 and awaits House action.
At the state level, 46 states have enacted laws addressing deepfake intimate imagery as of early 2026, with 174 total deepfake laws passed since 2019 — 82% concentrated in 2024–2025. Some 146 bills were introduced in 2025 alone. Meanwhile, 45 states now explicitly criminalize AI-generated CSAM.
Internationally, South Korea enacted among the world’s most comprehensive laws in September 2024, criminalizing not just creation and distribution but also possession and viewing of sexual deepfakes, with penalties up to seven years imprisonment.
This followed a national crisis: 812 deepfake sex crime cases were reported to police in the first nine months of 2024, up from 156 in all of 2021, with 83.7% of suspects being minors. The UK criminalized sharing intimate deepfakes through the Online Safety Act 2023, then extended the law to criminalize their creation via the Data (Use and Access) Act 2025, effective February 6, 2026 — carrying unlimited fines.
Australia passed the Criminal Code Amendment (Deepfake Sexual Material) Act in August 2024, with penalties up to seven years.
The EU AI Act, fully enforceable August 2026, mandates transparency labeling for AI-generated content including deepfakes, with penalties up to €35 million or 7% of global turnover. France enacted Article 226-8-1 in 2024, criminalizing non-consensual sexual deepfakes with up to two years imprisonment and €60,000 fines.
The legislative acceleration is likely to intensify. Deepfakes are forecast to appear in 95% of phishing attacks by 2027, and political deepfakes are projected to impact 50% of elections by 2028 — pressures that will force faster regulatory responses. The deepfake detection market, itself a product of the crisis, is projected to reach $10 billion by 2030 growing at a 38.3% CAGR, while the broader counter-deepfake industry is on track to triple in size from 2023 to 2026.
Platforms are adopting AI across the content lifecycle
Industry adoption of AI extends far beyond deepfakes. A frequently cited industry statistic puts the share of adult sites already using AI technology in some form at 87%, primarily for content recommendation, tagging, and moderation rather than content generation.
AI recommendation algorithms account for over 75% of consumption on some platforms and predict user preferences with up to 93% accuracy, while AI-based video editing tools have cut post-production time by 55%. Some 62% of adult content creators reported using AI for ideation, direct content creation, or fan engagement by 2025.
A March 2025 peer-reviewed study by Lapointe et al. in the Archives of Sexual Behavior analyzed 36 AI porn generation websites and found 80.6% enabled image generation, 41.7% allowed video generation, 44.4% featured interactive AI agents, and 55.6% offered content alteration tools including deepnude capabilities.
Customization options were extensive, with 97.2% using feature selection and 72.2% supporting text prompting. The study documented how these platforms allow users to specify body type (72.2% of sites), clothing (75%), and sociodemographic characteristics (27.8–86.1%).
Detection remains a critical challenge. Only 24.5% of people correctly identify high-quality deepfake videos, and an iProov 2025 study found just 0.1% of participants correctly identified all fake and real media presented to them.
While AI detection tools claim over 90% accuracy in lab settings, the Deepfake-Eval-2024 benchmark showed many models suffered a 45–50% drop in performance on real-world deepfakes.
StopNCII.org, which creates digital fingerprints to prevent re-uploading of intimate images, was protecting 2 million images as of November 2025, a 97% increase from 2024, with a 90%+ success rate in blocking and removal.
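The "digital fingerprint" approach rests on perceptual hashing: an image is reduced to a short bit string that survives re-encoding and minor edits, so platforms can match re-uploads against a hash list without ever holding the image itself. StopNCII's production system uses industry-grade robust hashes, not the toy below; this simplified average hash over an 8x8 grayscale grid only illustrates the idea:

```python
def average_hash(pixels):
    """64-bit fingerprint of an 8x8 grayscale grid: bit is 1 where pixel > mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A synthetic 8x8 grayscale "image" (values 0-255).
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
# A re-encoded copy with slight uniform pixel noise.
reupload = [[min(255, p + 3) for p in row] for row in original]

# The noisy copy fingerprints identically, so a small Hamming-distance
# threshold catches re-uploads without comparing raw pixels.
print(hamming_distance(average_hash(original), average_hash(reupload)))  # prints 0
```

Matching on a distance threshold rather than exact equality is what makes the scheme robust: crops, compression, and filters shift a few bits, but the fingerprint stays close, while a genuinely different image lands far away.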
Key studies and reports shaping the discourse
The foundational research in this space comes from a handful of organizations whose findings are consistently cited in policy, academic, and journalistic contexts:
- Sensity AI / Deeptrace (2019): Established the baseline — 14,678 deepfakes online, 96% pornographic, 100% targeting women. Their 2020 follow-up revealed a Telegram bot that had “stripped” 680,000+ victims, with 104,852 images publicly shared.
- Home Security Heroes (2023): The most comprehensive count — 95,820 deepfake videos, 98% pornographic, 550% growth since 2019, with detailed platform traffic data.
- Internet Watch Foundation (2023–2026): Gold-standard tracking of AI-generated CSAM, documenting the exponential curve from 51 reports in 2023 to thousands of videos in 2025.
- Thorn (2025): Landmark survey establishing that 1 in 8 American teenagers personally know a deepfake victim.
- Lapointe et al. (2025): First peer-reviewed content analysis of AI porn generation websites, published in Archives of Sexual Behavior.
- Steel (2026, PLOS One): Nationally representative survey finding 55.3% of U.S. adolescents aged 13–17 had created nudified images.
- Sumsub Identity Fraud Report (2024): Documented the 4x year-over-year increase in deepfakes detected globally and a deepfake attempt every five minutes.
- Graphika (2023–2024): Mapped the nudification ecosystem, identifying 24 million monthly unique visitors and 2,000%+ growth in referral spam.
Conclusion
The data paints a picture of exponential, compounding growth across every measurable dimension — content volume, platform traffic, app downloads, victimization rates, and child exploitation. Three dynamics make this trajectory particularly difficult to reverse.
First, production costs have collapsed to zero while quality has become nearly indistinguishable from reality, creating an asymmetry where content generation vastly outpaces detection and removal.
Second, the victim pool has shifted from primarily celebrities to ordinary people and children, with adolescents both creating and being victimized at alarming rates.
Third, while legislative momentum is unprecedented — 174 state laws, the first federal statute, and new international frameworks — enforcement remains sparse, with only about 4% of reported cases resulting in charges in the UK, and no federal prosecutions yet recorded under the TAKE IT DOWN Act.
The financial stakes are rising in parallel. Fraud losses attributable to generative AI are expected to climb from $12.3 billion in 2024 to $40 billion by 2027 (Deloitte), growing at a 32% CAGR. Deepfake-enabled contact center fraud alone could reach $44.5 billion by year-end 2025 (Pindrop). The commercial ecosystem funding AI pornography’s growth — a combined market projected at hundreds of billions by 2030 — is vastly outpacing the enforcement capacity built to contain it.
The gap between the scale of harm and the capacity to address it is widening, not closing.

![AI Companion Market Size (2024–2034)](https://res.cloudinary.com/dvzkzccvn/images/f_auto,q_auto/v1773420220/AI-Companion-Market-Size/AI-Companion-Market-Size.jpg)
