Examples of AI Fails: AI Experiments Gone Wrong
In recent years, artificial intelligence has transformed industries from healthcare to finance, promising improved efficiency and innovative solutions. However, even the most sophisticated AI systems can fail spectacularly. These failures range from embarrassing mistakes to potentially life-threatening errors, highlighting the limitations and risks of current AI technology.
Chatbots Gone Wild
Microsoft’s Tay Learns the Worst of Twitter
In 2016, Microsoft launched Tay, an AI chatbot designed to learn from Twitter interactions and mimic the conversational style of a teenage girl. Within 16 hours, Tay had posted over 95,000 tweets, many of which quickly turned racist, misogynistic, and anti-Semitic. Microsoft had to pull the plug after less than a day as Tay learned from toxic user interactions, demonstrating how AI can rapidly absorb harmful content without proper safeguards.
Air Canada’s Costly Misinformation
Air Canada found itself in legal trouble when its chatbot gave incorrect information about bereavement fares to a customer who had recently lost his grandmother. The chatbot incorrectly advised that he could purchase a regular ticket and apply for a bereavement discount afterward. When Air Canada refused to honor this advice, a Canadian tribunal ruled against the airline, determining that the company was responsible for information provided by its AI tools. This case set a precedent for corporate liability regarding AI-generated advice.
NYC’s Law-Breaking Advice
New York City’s MyCity chatbot, launched to help entrepreneurs navigate business regulations, was found giving illegal advice to business owners. The chatbot incorrectly claimed that business owners could take a cut of workers’ tips, fire employees who report sexual harassment, and even serve food that had been nibbled by rodents. Despite these serious errors, the chatbot remained online, raising concerns about relying on AI systems to deliver government services.
Health Advice Gone Wrong
The National Eating Disorders Association (NEDA) faced backlash after replacing human staff with an AI chatbot called Tessa, which then proceeded to give harmful advice to those struggling with eating disorders. The bot repeatedly recommended weight reduction, calorie tracking, and body fat measurements—practices that could worsen conditions for people with eating disorders.
AI in Business and Recruitment
Amazon’s Discriminatory Hiring Tool
In 2015, Amazon developed an AI recruiting tool that was meant to streamline the hiring process. However, the system showed significant bias against women. Trained on resumes submitted to Amazon over a 10-year period (mostly from men), the algorithm penalized resumes that included words like “women’s” and even downgraded candidates from women’s colleges. Amazon eventually abandoned the project when it couldn’t guarantee the elimination of bias.
Zillow’s Housing Market Miscalculation
Online real estate marketplace Zillow launched Zillow Offers, an AI-powered home-buying program that used algorithms to predict home values and make cash offers. By late 2021, the algorithm’s error rate (ranging from 1.9% to 6.9%) led to Zillow purchasing homes at higher prices than it could resell them for. The company was forced to shut down the program, cut 25% of its workforce, and take a $304 million inventory write-down.
AI in Transportation and Safety
Self-Driving Disasters
Tesla’s Autopilot system has been involved in several fatal accidents. In April 2021, a Tesla Model S crashed near Houston, killing both occupants when the car failed to navigate a curve, reportedly while operating in self-driving mode. Police initially reported that neither occupant was in the driver’s seat at the time of the accident.
Similarly, GM’s Cruise self-driving car was involved in a critical incident in October 2023 when it struck a pedestrian who had first been hit by another vehicle, then dragged the injured woman roughly 20 feet while attempting to pull over. California officials later accused Cruise of misleading investigators about the accident.
McDonald’s AI Drive-Thru Debacle
After three years of partnership with IBM to implement AI-powered drive-thru ordering, McDonald’s abandoned the project in June 2024. The decision came after numerous social media videos showed frustrated customers unable to place orders correctly. One viral TikTok video showed the system continuously adding Chicken McNuggets to an order despite customers’ pleas to stop, eventually reaching 260 nuggets.
Facial Recognition Failures
False Criminal Identification
In 2018, the American Civil Liberties Union found that Amazon’s Rekognition AI incorrectly identified 28 members of Congress as people who had been arrested for crimes. The errors affected politicians from both major parties, though people of color were disproportionately misidentified. The system also incorrectly matched 1 in 6 New England athletes to a database of known criminals.
Beauty Contest Bias
When Beauty.AI used an algorithm to judge an international beauty contest (ironically, to eliminate human bias), the results revealed significant racial bias. Out of roughly 6,000 entries from around the world, the algorithm selected 44 winners, only one of whom had dark skin, because it had been trained primarily on light-skinned faces.
AI and Ethics
Dutch Government Benefit Fraud Scandal
In one of the most significant AI scandals affecting a social welfare system, the Dutch government’s automated fraud detection system falsely accused more than 20,000 families of benefits fraud between 2013 and 2021. The discriminatory algorithm disproportionately targeted minority families, forcing many to repay benefits they had legitimately received. The scandal led to the resignation of the entire Dutch cabinet, including the prime minister, in January 2021.
Australia’s “Robodebt” Disaster
The Australian government implemented an automated debt recovery system that wrongfully accused over 500,000 welfare recipients of fraud. The system, nicknamed “Robodebt,” was eventually ruled illegal, but not before causing significant hardship. The government was forced to repay approximately AU$700 million (about $460 million) to those affected.
Predicting Criminality from Faces
Researchers at Harrisburg University announced a facial recognition system in 2020 that they claimed could predict criminality from facial features with 80% accuracy. The project faced immediate backlash from over 2,000 experts who signed an open letter urging that the research not be published, explaining how such technology perpetuates injustice and bias.
Legal and Content Generation Mistakes
AI-Generated Legal Cases
In 2023, a lawyer used ChatGPT to research legal precedents for a case against Colombian airline Avianca, only to discover the AI had hallucinated at least six non-existent cases with false names, docket numbers, and quotes. The court fined the attorney $5,000 for failing to verify the information before including it in legal briefs.
Sports Illustrated’s Phantom Writers
In November 2023, Sports Illustrated was caught publishing articles attributed to authors who did not exist. An investigation revealed that the bylines’ headshots were AI-generated portraits sold on a stock image site, and the publication removed the articles after the scandal broke.
Physical Interaction Failures
Chess Robot Breaks Child’s Finger
During a chess tournament in 2022, an AI robot grabbed and broke its child competitor’s finger when the boy made his move too quickly after the robot’s turn, giving the machine no time to process the action.
Lab Escape
The Russian Promobot IR77 made headlines in 2016 when it “escaped” from its development laboratory and rolled into a street in Perm, causing traffic disruption. Although the robot was programmed to study its environment and interact with people, its unsupervised excursion highlighted the unpredictability of autonomous systems.
Lessons from AI Failures
These AI failures teach us important lessons about the current limitations of artificial intelligence:
- Training data matters: AI systems reflect biases in their training data, as seen in Amazon’s recruiting tool and various facial recognition systems (a minimal illustration follows this list).
- Human oversight remains essential: From legal research to medical advice, AI systems require human verification.
- Ethical considerations must precede deployment: Many failures resulted from inadequate attention to ethical implications.
- Testing must be robust: Real-world variables often produce scenarios not anticipated during development.
- Transparency is crucial: Organizations must be clear about how AI makes decisions and what its limitations are.
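To make the “training data matters” point concrete, here is a minimal, hypothetical sketch. The tiny dataset, labels, and model choice are invented for illustration only (this is not Amazon’s actual system or data); it simply shows how a classifier trained on a skewed hiring history can learn to penalize an innocuous word such as “women’s”:

```python
# A toy illustration of training-data bias. The "resumes", labels, and
# model choice are hypothetical; this is not any company's real system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented dataset: label 1 = historically hired, 0 = rejected.
# The skew is deliberate: no hired example mentions "women's".
resumes = [
    "captain of chess club, software engineering intern",           # hired
    "hackathon winner, software engineering intern",                # hired
    "captain of women's chess club, software engineering intern",   # rejected
    "women's coding society lead, hackathon winner",                # rejected
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The learned weight for the token "women" comes out negative: the model
# has absorbed the bias baked into its training history.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

In a real system the same effect hides behind millions of examples and thousands of features, which is why auditing both the training data and the learned model matters before deployment.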
As AI continues to evolve, these cautionary tales serve as important reminders that while artificial intelligence offers tremendous potential, it is still far from infallible. The responsible development and deployment of AI requires careful attention to training, testing, bias mitigation, and human oversight to prevent these kinds of failures in the future.