
The Promise and Peril of AI in Crime Prevention
Artificial Intelligence (AI) is transforming crime prevention, with facial recognition technology (FRT) at the forefront of law enforcement adoption. Apps like GoVia: Highlight A Hero promise to enhance community safety by identifying potential threats and recognizing local heroes who contribute to neighborhood well-being. However, a technology this powerful demands rigorous scrutiny: its potential for bias, systemic discrimination, and digital rights violations must be examined before it is deployed.
The Problem: Facial Recognition and Racial Bias
A growing body of evidence suggests that facial recognition technology disproportionately misidentifies Black individuals and other people of color, leading to wrongful arrests and exacerbating systemic discrimination. A striking case occurred in Detroit, where a Black man was arrested after a false facial recognition match. After spending days in jail, he sued the Detroit Police Department and won a settlement in 2024 (US News, 2024). His case is not isolated; other Black plaintiffs have filed similar lawsuits, demonstrating a troubling pattern of algorithmic injustice.
Digital Redlining and Algorithmic Discrimination
Digital redlining, the practice of using technology to reinforce systemic discrimination, also shapes how AI is deployed in policing. Facial recognition systems are often trained on datasets that skew toward lighter skin tones, making them far less accurate for people with darker skin. The National Institute of Standards and Technology (NIST) found that many facial recognition algorithms produce false positives for Black and Asian faces at rates 10 to 100 times higher than for white faces, depending on the algorithm (NIST, 2019).
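To make that disparity concrete, here is a minimal sketch of how a per-group false match rate (FMR), the metric behind NIST's comparison, can be computed from one-to-one verification trials. The trial records below are invented for illustration; they are not NIST data.

```python
# Minimal sketch: computing a per-group false match rate (FMR) from 1:1
# verification trials. The records below are invented illustrative data,
# not NIST results.
from collections import defaultdict

# Each trial: (demographic_group, system_said_match, pair_is_true_match)
trials = [
    ("Group A", True, False), ("Group A", False, False), ("Group A", True, True),
    ("Group B", True, False), ("Group B", True, False), ("Group B", False, False),
]

false_matches = defaultdict(int)     # system said "match" on a non-match pair
non_match_trials = defaultdict(int)  # total non-match pairs seen per group

for group, predicted, actual in trials:
    if not actual:                   # only non-match pairs can yield a false match
        non_match_trials[group] += 1
        if predicted:
            false_matches[group] += 1

for group in sorted(non_match_trials):
    fmr = false_matches[group] / non_match_trials[group]
    print(f"{group}: FMR = {fmr:.2f}")
```

A gap in this number across groups is exactly what NIST reported, and it is what turns into wrongful arrests when a false match is treated as probable cause.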
This bias is not an accident; it is a direct consequence of flawed training data that reflects historical inequities. Algorithmic discrimination in law enforcement reinforces existing racial biases rather than eliminating them. Digital rights organizations such as the Algorithmic Justice League (AJL), founded by Joy Buolamwini, actively challenge these injustices by advocating for ethical AI policies and pushing for transparency in how AI systems operate (AJL, 2023).
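Even a crude composition check on a training manifest can surface this kind of skew. The sketch below assumes a hypothetical manifest of (image, demographic label) pairs; real dataset audits are far more involved, but gross imbalance is visible even at this level.

```python
# Minimal sketch of a training-data composition check, using a hypothetical
# manifest format (image path + demographic label). Real audits are more
# involved; this only surfaces gross imbalance.
from collections import Counter

# Hypothetical manifest rows: (image_path, demographic_group)
manifest = [
    ("img_0001.jpg", "lighter-skinned"),
    ("img_0002.jpg", "lighter-skinned"),
    ("img_0003.jpg", "lighter-skinned"),
    ("img_0004.jpg", "darker-skinned"),
]

counts = Counter(group for _, group in manifest)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.0%})")
# A 75/25 split like this one is the kind of skew that depresses
# accuracy for the underrepresented group.
```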
The Legal and Ethical Backlash
Recent legal battles highlight the dangers of unchecked AI in law enforcement.
- Facial Recognition Wrongful Arrests – Several Black plaintiffs have taken legal action against law enforcement agencies over wrongful arrests caused by misidentification. These cases underscore the urgent need for safeguards against algorithmic bias (US News, 2024).
- Rite Aid’s Reckless Use of AI – The Federal Trade Commission (FTC) banned Rite Aid from using facial recognition surveillance for five years, citing the technology’s discriminatory impact on customers (FTC, 2024). This ruling sets a precedent for holding corporations accountable for AI misuse.
- Push for Digital Rights – Advocacy groups such as the Electronic Frontier Foundation (EFF) and the AJL have pressed policymakers to regulate AI-driven surveillance tools so that they do not disproportionately target marginalized communities (EFF, 2024).
Algorithmic Justice: A Path Forward
While AI holds promise for crime prevention, it must be implemented with fairness, transparency, and accountability. Here are some solutions to mitigate bias in facial recognition technology:
- Regulating AI in Law Enforcement – Strict oversight and legal frameworks must be established to prevent wrongful arrests and racial profiling.
- Auditing AI Algorithms – Independent reviews of facial recognition models should verify fairness and accuracy across all racial and ethnic groups (see the audit sketch after this list).
- Banning High-Risk AI Applications – As seen with the FTC’s Rite Aid ban, certain uses of AI should be restricted when they pose a high risk of harm.
- Community-Driven AI Policies – Policymakers should work closely with advocacy groups like AJL and affected communities to create equitable AI solutions.
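What might such an independent audit look like in practice? Below is a minimal sketch that compares per-group false positive rates against a tolerance ratio. The group names, rates, and threshold are illustrative assumptions, not regulatory standards.

```python
# Minimal audit sketch, assuming per-group evaluation results are available.
# The tolerance and rates below are illustrative, not regulatory standards.

MAX_FPR_RATIO = 1.2  # hypothetical tolerance: the worst group's false positive
                     # rate may exceed the best group's by at most 20%

# Hypothetical per-group false positive rates from an independent test set.
group_fpr = {"Group A": 0.010, "Group B": 0.018, "Group C": 0.011}

best = min(group_fpr.values())
worst = max(group_fpr.values())
ratio = worst / best

print(f"FPR ratio (worst/best): {ratio:.2f}")
if ratio > MAX_FPR_RATIO:
    print("Audit FAILED: error rates are not comparable across groups.")
else:
    print("Audit passed under this (illustrative) criterion.")
```

The point of a ratio test rather than a fixed ceiling is that it flags disparity even when every group's absolute error rate looks small on paper.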
GoVia’s Take
GoVia: Highlight A Hero has the potential to foster safer communities, but it must be designed with built-in safeguards against algorithmic bias and discrimination. AI should be a tool for justice, not an instrument of oppression. If facial recognition and predictive policing continue unchecked, they risk reinforcing systemic inequities rather than resolving them. By prioritizing algorithmic justice, digital rights, and transparency, and by confronting these issues head-on, we can build a future where AI enhances safety without compromising justice and serves all communities fairly.
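One concrete safeguard is to ensure a recognition result can never trigger action on its own. The routing sketch below is hypothetical, not GoVia's actual design: low-confidence matches are discarded outright, and even high-confidence matches are only queued for human review.

```python
# Hypothetical safeguard gate for a recognition result: a sketch of the kind
# of guardrail discussed above, not GoVia's actual design. A match is never
# acted on automatically.
from dataclasses import dataclass

@dataclass
class MatchResult:
    subject_id: str
    confidence: float  # model's match confidence in [0, 1]

MIN_CONFIDENCE = 0.99  # illustrative threshold, deliberately conservative

def route_match(result: MatchResult) -> str:
    """Decide what happens to a facial recognition match. Never auto-act."""
    if result.confidence < MIN_CONFIDENCE:
        return "discard"       # too uncertain to surface at all
    return "human_review"      # even strong matches require a person to confirm

# Example: a 0.97-confidence match is discarded, not escalated.
print(route_match(MatchResult("subj-42", 0.97)))   # -> discard
print(route_match(MatchResult("subj-42", 0.995)))  # -> human_review
```

A conservative threshold and a human in the loop do not remove the underlying bias, but they keep a statistical error from becoming a wrongful arrest.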
References
- US News. (2024). “Facial recognition technology jailed a man for days. His lawsuit joins others from Black plaintiffs.”
- FTC. (2024). “Rite Aid’s ‘reckless’ use of AI facial recognition tech earns 5-year ban.”
- NIST. (2019). “Face Recognition Vendor Test (FRVT) – Demographic Effects.”
- Algorithmic Justice League. (2023). “Fighting AI Bias for Equitable Technology.”
- Electronic Frontier Foundation. (2024). “Digital Rights Advocacy and AI Regulation.”
