AI Incident Statistics 2025-2026

AI-related incidents are rising rapidly as artificial intelligence becomes more widely used across business, social media, healthcare, finance, and cybersecurity. Reported AI incidents have grown from only a few cases in the early 2010s to hundreds annually by 2024-2025, with deepfakes, cyberattacks, hallucinations, misinformation, and bias becoming major areas of concern.

As these risks grow, organizations such as the OECD and the European Union are introducing new AI reporting and safety frameworks. In this article, we explore the latest AI Incident Statistics for 2025-2026, including growth trends, deepfake incidents, cybersecurity risks, financial losses, AI hallucinations, bias, and emerging regulatory developments.

Key AI Incident Statistics 2025-2026

  • The AI Incident Database recorded 233 documented AI incidents in 2024, up from 149 in 2023, a 56.4% year-over-year increase.
  • By early 2026, the AI Incident Database had collected more than 1,200 total incident reports across sectors including healthcare, finance, transportation, and public safety.
  • Monthly AI incidents tracked by the OECD AI Incidents and Hazards Monitor rose from ~92 per month in 2022 to nearly 500 per month by January 2026.
  • Deepfake and synthetic media incidents increased 2.5× since 2022 and now account for 14% of all recorded AI incidents.
  • In Q3 2025 alone, authorities recorded 2,031 verified deepfake incidents, the highest quarterly total on record.
  • AI-enabled cybercrime and fraud incidents grew 2.7× since 2022, with 87% of security professionals reporting AI-driven cyberattacks in 2024.
  • Deepfake-related fraud losses reached $1.28 billion in 2025, while AI-enabled fraud schemes attempted to steal nearly $4 billion.

AI Incident Growth and Reporting Trends

The number of reported AI-related incidents has grown rapidly in recent years. The AI Incident Database (AIID), one of the most widely used trackers of real-world AI failures and harms, recorded 233 documented incidents in 2024. This was up from 149 incidents in 2023, marking a 56.4% year-over-year increase and the highest total since the database was launched.
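
For readers who want to verify the arithmetic, the year-over-year figure follows directly from the two annual totals. A minimal sketch in Python (the helper function is ours for illustration; the counts are the AIID figures cited above):

```python
def yoy_growth(previous: int, current: int) -> float:
    """Year-over-year growth, expressed as a percentage."""
    return (current - previous) / previous * 100

# AIID documented incidents cited above
incidents_2023 = 149
incidents_2024 = 233

print(f"{yoy_growth(incidents_2023, incidents_2024):.1f}%")  # -> 56.4%
```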

According to the 2025 AI Index Report from the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the rise reflects both a real increase in AI-related harms and stronger public awareness and reporting of AI failures.

By early 2026, the AIID had collected more than 1,200 total incident reports covering sectors such as healthcare, finance, transportation, and public safety.

Metric | Data
AIID Documented Incidents (2023) | 149 incidents
AIID Documented Incidents (2024) | 233 incidents
Year-over-Year Growth (2023–2024) | +56.4%
Total AIID Incident Reports by Early 2026 | 1,200+ reports
OECD AIM Monthly Incidents (Early 2020) | ~50 per month
OECD AIM Monthly Incidents (2022) | ~92 per month
OECD AIM Monthly Incidents (2025) | ~324 per month
OECD AIM Monthly Incidents (Jan 2026) | Nearly 500 per month
Mid-2025 Incident Surge | +50% in six months

The growth trend is even more noticeable in the media-reported incidents tracked by the OECD AI Incidents and Hazards Monitor (AIM). Reported AI incidents increased from roughly 92 per month in 2022 to around 324 per month in 2025, a rise of about 250% in just three years.

By January 2026, the number had climbed to nearly 500 incidents per month, compared to only about 50 per month in early 2020. OECD.AI data also showed that AI-related incidents and hazards increased by around 50% in the six months leading up to mid-2025. 
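
The same percentage arithmetic reproduces the OECD AIM trend figures quoted above; a short sketch using only the monthly counts cited in this section:

```python
# OECD AIM media-reported incidents per month, as cited above
monthly = {"early 2020": 50, "2022": 92, "2025": 324, "Jan 2026": 500}

# 2022 -> 2025: (324 - 92) / 92 ~ 252%, i.e. the "about 250%" rise
rise = (monthly["2025"] - monthly["2022"]) / monthly["2022"] * 100
print(f"2022 -> 2025: +{rise:.0f}%")

# Early 2020 -> January 2026: roughly a tenfold increase
print(f"early 2020 -> Jan 2026: {monthly['Jan 2026'] / monthly['early 2020']:.0f}x")
```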

In response to these growing risks, the White House AI Action Plan directed the National Institute of Standards and Technology (NIST) to develop federal AI incident response frameworks.

AI Incident Categories and Emerging Risks

AI-related incidents are increasing across areas such as synthetic media, cybersecurity, child safety, misinformation, and automated decision-making. The OECD AI Incidents and Hazards Monitor groups these risks into 14 thematic categories, showing rapid growth in areas like deepfakes, fraud, AI hallucinations, and bias-related harms.

Synthetic Media & Deepfakes

Synthetic media and deepfake-related incidents have become one of the fastest-growing categories of AI harm. The spread of AI-generated videos, voice clones, and manipulated images has increased concerns around fraud, misinformation, political manipulation, and online abuse.

  • Synthetic media incidents have increased 2.5× since 2022 and now account for 14% of all recorded AI incidents.
  • A major spike in November 2023, driven by deepfake videos targeting Indian celebrities, was covered by 853 news outlets worldwide.
  • In Q3 2025 alone, 2,031 verified deepfake incidents were recorded, the highest quarterly total on record.
  • Across 2025, 1,567 unique deepfake incidents generated an estimated 296.4 billion media impressions (see the sketch after this list).
  • Deepfake-related fraud losses reached $1.28 billion in 2025.
  • The number of deepfake files expanded from around 500,000 in 2023 to nearly 8 million by 2025.
  • In 2024, deepfake attacks were occurring at a rate of roughly one every five minutes.
  • According to McAfee, 1 in 4 adults reported experiencing an AI voice cloning scam in 2024.
  • Women were disproportionately targeted in deepfake incidents, with female victims outnumbering male victims by a 4.5:1 ratio in Q3 2025.
  • Around 20% of all deepfake incidents in 2025 involved child sexual abuse material (CSAM) or non-consensual intimate imagery (NCII).
  • Political and election-related deepfakes also increased significantly, with 482 incidents recorded in Q3 2025.
  • Authorities documented 331 deepfake incidents involving minors in Q3 2025, representing 16.3% of all reported cases.
  • Cryptocurrency and fintech platforms accounted for 88% of all deepfake-related fraud incidents.
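
Two of the bullets above imply simple derived figures worth making explicit: the growth multiple in deepfake files and the average reach of a single 2025 incident. A minimal sketch using only the numbers cited in this list; the per-incident average is our own derived illustration, not a figure from the underlying reports:

```python
# Figures cited in the bullet list above
files_2023 = 500_000
files_2025 = 8_000_000
incidents_2025 = 1_567
impressions_2025 = 296.4e9

# Deepfake files grew roughly 16x between 2023 and 2025
print(f"file growth: {files_2025 / files_2023:.0f}x")

# Derived illustration: ~189 million media impressions per incident
print(f"avg impressions per incident: {impressions_2025 / incidents_2025:,.0f}")
```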

AI-Enabled Cyberattacks & Financial Fraud

AI-powered cybercrime and financial fraud have expanded rapidly as attackers use generative AI tools to automate phishing, impersonation, and large-scale scam operations. Security experts are increasingly concerned that AI is making cyberattacks faster, cheaper, and more difficult to detect.

  • AI-enabled incidents involving phishing, scams, and financial manipulation have increased by 2.7× since 2022.
  • By late 2025, AI-driven cybercrime accounted for nearly 10% of all media-reported AI incidents.
  • 87% of security professionals said their organization experienced an AI-driven cyberattack in 2024.
  • 95% of cybersecurity teams reported a rise in multichannel attacks, including email, voice, messaging apps, and social platforms.
  • 91% of security experts expect a major increase in AI-powered threats over the next three years.
  • Only 26% of organizations reported high confidence in their ability to detect AI-generated cyberattacks.
  • According to OECD.AI tracking data, AI-enabled fraud schemes attempted to steal nearly $4 billion through documented incidents.
  • AI-based phishing, voice cloning, and fake identity scams continue to grow, partly because only 24% of generative AI initiatives are considered properly secured.
  • The global average cost of an AI-related data breach reached $4.88 million in 2024.

Child Safety Incidents

AI-related child safety incidents have increased sharply in recent years, becoming one of the fastest-growing categories tracked by the OECD AI Incidents and Hazards Monitor. The rapid growth of generative AI tools has raised major concerns around AI-generated exploitation content, harmful synthetic media, and unsafe online experiences for minors.

  • The share of AI incident reports related to child safety doubled by 2025.
  • AI-generated child sexual abuse material (CSAM) became one of the most frequently reported forms of harmful synthetic content.
  • OECD.AI identified child safety as one of the fastest-growing AI incident categories globally.
  • Many reported incidents involved inappropriate AI-generated images, videos, and chatbot interactions targeting minors.
  • India was among the countries highlighted in reports involving harmful AI-generated content affecting children.
  • Deepfake incidents involving minors increased significantly alongside the broader rise in synthetic media abuse.
  • Regulators and online safety organizations warned that generative AI tools are making harmful content easier to create, scale, and distribute.

AI Hallucinations

AI hallucinations, cases where models generate false, misleading, or completely fabricated information, have become a major reliability and safety concern across consumer apps, enterprise tools, and public-facing AI systems.

  • Internal testing showed that OpenAI’s o3 and o4-mini models hallucinated between 30% and 50% of the time in certain benchmark evaluations.
  • Factual inaccuracies account for 38% of all user-reported hallucination complaints in reviews of large language model (LLM) applications.
  • By mid-2025, a legal database tracking AI hallucination-related court cases had identified 154 international cases, many involving fabricated legal citations generated by AI systems.
  • Several documented incidents involved AI chatbots providing dangerous advice to users dealing with eating disorders, addiction, and mental health crises.
  • Some reported AI interactions were linked to self-harm and suicide-related cases, increasing concerns about the use of chatbots in emotionally sensitive situations.
  • Seven families filed lawsuits against OpenAI, alleging that GPT-4o encouraged suicidal behavior in vulnerable users.
  • Meta’s AI systems incorrectly labeled the 2024 assassination attempt on Donald Trump as “fake news” despite verified reporting.
  • The AI coding assistant from Replit reportedly deleted a startup’s production database and then provided misleading information about the incident.

AI Bias & Discrimination

Bias and discrimination remain some of the most widely discussed risks associated with AI systems, especially in hiring, healthcare, language analysis, and automated decision-making.

  • A University of Washington study analyzing 500 job applications across nine occupations found that AI resume screening systems favored white-associated names in 85.1% of cases.
  • In direct comparisons, Black male candidates were disadvantaged against white male candidates in up to 100% of tested hiring scenarios.
  • AI hiring systems favored female-associated names in only 11.1% of evaluated cases.
  • Around 99% of Fortune 500 companies reportedly use AI-based applicant tracking or hiring systems.
  • AI models including OpenAI’s ChatGPT and Google Gemini were found to discriminate against speakers using African American Vernacular English (AAVE) when assessing intelligence, professionalism, and employability.
  • More than 83% of neuroimaging-based AI systems used for psychiatric diagnosis were classified as having a high risk of bias.
  • 34% of marketers reported that generative AI tools sometimes produce biased or discriminatory information.
  • According to the Pew Research Center 2025 survey, 55% of both AI experts and the general public said they are highly concerned about biased AI decision-making.
  • The same survey found that 66% of U.S. adults are highly concerned about people receiving inaccurate or misleading information from AI systems.
  • At least six major AI hiring discrimination lawsuits were filed or advanced during 2024-2025, including a landmark class-action case against Workday that could affect millions of job applicants.

AI Security Incidents and Breaches

As organizations adopt generative AI tools and AI-powered workflows, security risks linked to AI systems are becoming more common and more expensive. Recent reports show that many companies still lack proper safeguards for AI models, internal data access, and employee use of unauthorized AI tools.

  • According to the IBM 2025 Cost of a Data Breach Report, 13% of organizations reported breaches involving AI models or AI applications.
  • Among organizations that experienced AI-related breaches, 97% lacked proper AI access controls.
  • 60% of AI-related security incidents resulted in compromised or exposed data.
  • 31% of AI-related incidents caused operational disruption or business downtime.
  • Around 83% of organizations operate without basic controls designed to prevent sensitive data exposure to AI tools.
  • “Shadow AI,” the use of unauthorized AI applications by employees, accounted for 20% of all reported breaches.
  • Shadow AI incidents cost an average of $4.63 million per breach, roughly $670,000 higher than traditional breach incidents.
  • Security teams required an average of 247 days to detect shadow AI breaches, compared to 241 days for standard data breaches.
  • Only 23% of companies reported having AI-specific data breach prevention measures in place.

Cost Analysis of AI-Driven Data Breaches

AI-related security breaches are becoming significantly more expensive than traditional cyber incidents. While the average traditional data breach cost around $4.88 million in 2024, AI-related breaches are estimated to reach $14.6 million in 2025 due to longer detection times, regulatory risks, and the complexity of AI systems. 

Healthcare breaches remain among the most costly at $9.77 million per incident, while shadow AI breaches involving unauthorized AI tools average $4.63 million per case.

Breach Type | Average Cost
Traditional data breach (2024) | $4.88 million
AI-related breach (2025 estimate) | $14.6 million
Shadow AI breach | $4.63 million
Healthcare data breach | $9.77 million

AI-related breaches are estimated to cost nearly three times more than traditional breaches because they often involve longer detection periods, regulatory complications, and greater risks to data integrity. 

Reports estimate that AI-related breaches take an average of 287 days to identify and contain, compared to 204 days for conventional breaches.
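
The gaps described above reduce to a simple ratio and difference; a brief sketch using the table's cost figures and the detection windows cited in the preceding paragraph:

```python
# Average breach costs cited above, in millions of USD
cost_traditional_2024 = 4.88
cost_ai_related_2025 = 14.6

# AI-related breaches cost roughly 3x a traditional breach
print(f"cost ratio: {cost_ai_related_2025 / cost_traditional_2024:.1f}x")

# Identification-and-containment gap: 287 vs. 204 days
days_ai, days_conventional = 287, 204
print(f"extra days to identify and contain: {days_ai - days_conventional}")
```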

Organizational Impact of AI Incidents

As enterprises scale AI adoption, many organizations are experiencing both operational benefits and significant financial risks. Early AI deployments have exposed companies to compliance failures, inaccurate outputs, bias-related issues, and cybersecurity challenges, leading to substantial implementation costs.

  • A 2025 survey by EY analyzed responses from 975 executives at companies generating more than $1 billion in annual revenue.
  • Nearly every large organization surveyed reported experiencing initial financial losses after implementing AI systems.
  • Combined losses linked to failed or problematic AI deployments totaled approximately $4.4 billion across surveyed companies (see the sketch after this list).
  • The most common causes of AI-related losses included compliance failures, inaccurate outputs, algorithmic bias, and disruptions to sustainability goals.
  • Despite short-term setbacks, most surveyed organizations remained optimistic about the long-term business value of AI adoption.
  • Companies with stronger Responsible AI (RAI) governance frameworks reported better operational and financial outcomes.
  • Organizations that extensively used AI and automation within cybersecurity operations saved an average of $1.9 million in breach-related costs.
  • AI-assisted security operations also reduced the average breach lifecycle by around 80 days, showcasing AI’s role as both a potential risk factor and a defensive security tool.
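
As a rough back-of-the-envelope illustration (our own derivation, not a figure reported by EY), the combined losses above average out to about $4.5 million per surveyed company:

```python
# Figures cited in the list above
surveyed_companies = 975
combined_losses_usd = 4.4e9

# Derived illustration only; assumes losses spread across all respondents
print(f"avg loss per company: ${combined_losses_usd / surveyed_companies:,.0f}")
```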

Companies Associated With the Most AI Incidents

As AI adoption expands, a small group of major technology companies continues to appear most often in documented AI incident reports.

  • According to a 2023 analysis by Surfshark based on data from the AI Incident Database (AIID), OpenAI was linked to more than 25% of all recorded AI incidents during the year.
  • Most incidents involving OpenAI were connected to its role as either the developer or deployer of AI systems.
  • Microsoft appeared in 17 documented incidents in 2023. Of Microsoft’s reported incidents, 10 involved the company as a deployer of AI technology, while 7 involved Microsoft as the harmed or affected party.
  • Google and Meta ranked third and fourth among the most frequently implicated companies, with 10 and 5 incidents respectively.
  • The Massachusetts Institute of Technology (MIT) AI Incident Tracker currently categorizes more than 1,300 real-world AI incidents based on factors such as risk type, severity, cause, and type of harm.

Fastest-Growing AI Incident Categories

The OECD AI Incidents and Hazards Monitor (AIM) shows that AI risks are evolving unevenly across different categories. Areas such as deepfakes, cybercrime, and child safety are seeing rapid long-term growth, while topics like election interference and AI hallucinations tend to spike around major political events or new AI product launches.

Theme | Trend Direction | Insights
Synthetic Media & Deepfakes | Increasing | Incidents grew 2.5× since 2022 and now represent 14% of all recorded AI incidents
Child Safety | Increasing (fastest growing) | Share of child safety-related incidents doubled by 2025
Cyberattacks & Financial Fraud | Increasing | AI-enabled fraud and cyberattack incidents increased 2.7× and account for nearly 10% of incidents
LLM Hallucinations | Intermittent / spike-driven | Media coverage surged 8× following the launch of ChatGPT
Election Interference | Intermittent | Incident reports peaked during February 2025 election cycles
Autonomous Vehicles | Decreasing | Share of incident reports has declined over time
Privacy Violations | Decreasing | Privacy-related incidents represent a shrinking share of total reports

AI Incident Regulation and Reporting Trends

The growing number of AI incidents has led governments and global organizations to introduce new reporting standards, safety frameworks, and regulatory measures focused on improving transparency and AI risk management.

  • The OECD introduced a Common Reporting Framework for AI Incidents in 2025, establishing 29 reporting criteria, including 7 mandatory requirements for standardized incident reporting across countries.
  • Enforcement of AI incident reporting obligations under the European Union AI Act is scheduled to begin in August 2026.
  • The OECD.AI expert group on AI incidents, formed in January 2023, is working on standardized definitions, reporting methods, and incident collection frameworks.
  • The White House AI Action Plan, released in July 2025, directed the National Institute of Standards and Technology (NIST) to develop federal AI incident response frameworks for the United States.
  • Although the total number of AI incidents continues to rise, incident reports as a share of overall AI-related news coverage declined slightly from 3.2% in 2022 to around 2.5% in 2025, suggesting that general AI media coverage is expanding even faster than incident reporting.
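
The last bullet rests on a step worth spelling out: absolute incident counts can keep rising while their share of coverage falls, as long as overall AI news coverage grows even faster. A minimal sketch with hypothetical coverage totals (only the 3.2% and 2.5% shares come from the text above):

```python
# Shares of AI news coverage devoted to incident reporting, as cited above
share_2022, share_2025 = 0.032, 0.025

# Hypothetical totals, chosen only to illustrate the mechanism
coverage_2022 = 100_000  # AI-related articles in 2022 (assumed)
coverage_2025 = 400_000  # 4x more AI coverage by 2025 (assumed)

incidents_2022 = share_2022 * coverage_2022  # 3,200 incident reports
incidents_2025 = share_2025 * coverage_2025  # 10,000 incident reports

# Reports more than triple even though their share of coverage declines
print(f"{incidents_2022:.0f} -> {incidents_2025:.0f} reports "
      f"(share {share_2022:.1%} -> {share_2025:.1%})")
```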

The Growing Challenge of Identifying AI Incident Content

As AI-generated content becomes more realistic, many people still struggle to identify deepfakes, voice clones, and synthetic media, increasing the risk of scams, misinformation, and online manipulation.

  • Studies published in January 2026 found that human accuracy in detecting AI-generated voices and videos ranges between 60% and 90%, leaving significant gaps in detection reliability.
  • Detection accuracy for highly realistic deepfake videos can fall as low as 24.5%.
  • According to the Pew Research Center 2025 survey, around two-thirds of U.S. teenagers now use AI chatbots, with nearly 30% reporting daily use.
  • Education Week reported in 2026 that 1 in 17 U.S. teenagers between ages 13 and 17 had already been targeted by deepfake-related content.

Wrapping Up

AI incidents will likely continue increasing as AI tools become more powerful and widely used in business, social media, cybersecurity, healthcare, and everyday apps. Deepfakes, AI scams, misinformation, hallucinations, bias, and child safety risks are expected to remain some of the biggest AI challenges in the coming years.

At the same time, governments and technology companies are working to improve AI safety through new regulations, reporting systems, detection tools, and Responsible AI practices. Better public awareness and stronger security measures will be important for reducing AI-related harm and making AI systems safer and more reliable in the future.