Technology Policy Brief #159 | Naja Barnes | November 20th, 2025
The future of Artificial Intelligence (AI) is not fully determined, but it will continue to shape our society and the way we live. AI currently benefits society through improvements in efficiency, productivity, and accessibility: self-driving cars rely on AI to navigate, AI-powered robots provide aid and assistance in the healthcare system, and AI security systems automate threat detection, among other examples. Yet alongside these advantages, AI also produces negative effects in areas such as the environment and employment. These harms are often attributed to the products that incorporate AI, but what potential harm is created by inaccuracies within the AI systems themselves?
Analysis
Inaccuracies in the data quality of AI systems can create threatening situations that lead to harmful outcomes. At Kenwood High School in Baltimore County, Maryland, Taki Allen, a high school student, was handcuffed and had a firearm pointed at him after the school's AI-powered security system mistook his Doritos bag for a possible firearm. The district's security department canceled the gun detection alert, but the principal was unaware of the cancellation when she reported the incident. This mistake could have physically harmed the teen, and it clearly highlights the limits of AI's capabilities: the system flagged a threat quickly, but the threat it identified did not exist. It was human action, not the AI-powered security system, that ultimately corrected the error.
Self-driving cars, powered by AI systems, have also caused harm and even casualties. In 2018, Elaine Herzberg was struck and killed by a self-driving car in Arizona as she walked her bicycle across the street. A backup driver, Rafaela Vasquez, was in the car but was visibly distracted and did not have her hands on the wheel at the time of the accident; backup drivers are typically instructed to keep their hands on the wheel so they can take control quickly in an emergency. The self-driving car failed to detect the woman in the street, demonstrating AI's limitations in reacting to unpredictable situations. Once again, an AI-powered system created a harmful situation that an attentive human could have prevented.
Conclusion
AI systems enable our society to operate more efficiently, but there is no denying the potential for harm they pose due to issues with data quality and limited capabilities. Human oversight and intervention remain necessary wherever AI-powered products are deployed.
Engagement Resources
- What is the history of artificial intelligence (AI)? https://www.tableau.com/data-insights/ai/history
- Student handcuffed after Doritos bag mistaken for a gun by school’s AI security system https://www.cnn.com/2025/10/25/us/baltimore-student-chips-ai-gun-detection-hnk
- When AI Gets It Wrong: Addressing AI Hallucinations and Bias https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/
- How a Self-Driving Uber Killed a Pedestrian in Arizona https://www.nytimes.com/interactive/2018/03/20/us/self-driving-uber-pedestrian-killed.html
Keywords: AI, Harm, Impact, Threat, System
