“A Deputy Killed, A City Watching: Los Angeles’s Ambush Murder Case in the Age of AI and Community Evidence”

AI‑driven predictive policing can help solve or deter attacks on officers. But because it is built on historically biased data and opaque algorithms, it risks automating the same racialized over‑policing and constitutional harms that fueled the George Floyd era of protest and mistrust. A platform like GoVia Highlight A Hero matters precisely because it points toward a different model of “smart” public safety: one that makes communities co‑owners of data, transparency, and recognition rather than mere subjects of surveillance.

How predictive policing can “techwash” old injustices

Predictive policing tools ingest historical crime and enforcement data to forecast where crime is likely to occur or who is “high risk,” and then direct officers and resources accordingly. But multiple investigations have shown that these datasets are not neutral records of crime; they are records of where police have historically chosen to patrol, stop, and arrest—often concentrating disproportionate enforcement in Black, Latino, and low‑income neighborhoods.

Civil‑rights groups warn this creates a feedback loop: biased data in, biased predictions out, and then more patrols and stops in the same communities, even when underlying crime patterns are not higher. After George Floyd’s murder, critics argued that layering an algorithm on top of an already discriminatory system can give “a misleading and undeserved imprimatur of impartiality” to policing that still treats communities of color as permanent suspects. Scholars and advocates now describe some of these systems as “digital redlining” and a potential “digital Jim Crow,” because they systematically route suspicion and surveillance along old racial lines while claiming to be data‑driven and objective.
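
To make the feedback loop concrete, consider a deliberately simplified Python simulation. This is a sketch of the critics’ argument, not any vendor’s actual algorithm: two neighborhoods have the same true crime rate, but one enters the dataset with more recorded incidents because it was patrolled more heavily in the past, and patrols are then allocated in proportion to those records.

```python
import random

random.seed(42)  # reproducible toy run

TRUE_CRIME_RATE = 0.10   # identical underlying rate in both neighborhoods
PATROLS_PER_DAY = 100    # fixed enforcement budget to allocate
DAYS = 200

# Historical record: neighborhood A was over-patrolled in the past, so it
# starts with more *recorded* incidents -- not more crime.
recorded = {"A": 60, "B": 40}

for _ in range(DAYS):
    total = sum(recorded.values())
    # "Predictive" allocation: patrols proportional to past recorded incidents.
    allocation = {hood: PATROLS_PER_DAY * n / total
                  for hood, n in recorded.items()}
    for hood, patrols in allocation.items():
        # Crime is only recorded where someone is looking; the true rate is
        # identical, so detections scale with patrol presence alone.
        detections = sum(
            random.random() < TRUE_CRIME_RATE for _ in range(round(patrols))
        )
        recorded[hood] += detections

share_a = recorded["A"] / sum(recorded.values())
print(f"Recorded incidents after {DAYS} days: {recorded}")
print(f"Neighborhood A's share of the 'data': {share_a:.0%}, "
      f"even though true crime rates were identical.")
```

Because crime is only recorded where patrols are looking, the initial disparity never washes out: the system keeps “confirming” the bias it started with, which is exactly the loop civil‑rights groups describe.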

Why Clinkunbroomer’s case shows tech’s upside—but not its equity

In Deputy Ryan Clinkunbroomer’s killing, cameras, community information, and rapid digital coordination clearly helped. Surveillance footage captured a dark sedan pulling up alongside his marked car before the ambush, and a flyer with vehicle images and a call for video and tips went out quickly, helping focus the investigation. A suspect ultimately barricaded himself in a Palmdale home and surrendered after a tactical operation built on negotiation and de‑escalation; investigators recovered multiple firearms and the vehicle of interest.

That is the “win”: properly used technology—cameras, community outreach, coordinated digital alerts—can rapidly protect officers, generate leads, and bring an armed suspect into custody without further loss of life. But those are largely reactive investigative tools, not predictive systems that silently recode where suspicion is aimed tomorrow. They do not by themselves answer core George Floyd–era questions about who gets stopped, surveilled, or subjected to force in everyday policing.

Why deputy‑safety gains don’t solve George Floyd–era failures

Predictive policing, as currently deployed, is mostly aimed at intensifying enforcement in places and among people already flagged by past data—not at reducing excessive force, racial profiling, or unconstitutional stops. Legal scholars warn that when AI systems push officers into certain neighborhoods more often, or label people “high risk” based on prior contact with police, they raise Fourteenth Amendment equal‑protection concerns—especially when race and poverty correlate with risk scores.

At the street level, this can look like officers arriving in a Black neighborhood with a computer‑generated “hunch,” not individualized suspicion, which undermines traditional Fourth Amendment protections against unreasonable searches and stops. In the post‑Floyd climate, communities already skeptical of police see predictive tools marketed as objective upgrades while body‑cam videos, protest crackdowns, and complaint data still show disparate treatment and force. Without structural change and community control, AI risks simply making unjust patterns faster, more efficient, and harder to challenge.

Why platforms like GoVia Highlight A Hero are different

Where predictive policing concentrates power inside opaque systems, a platform like GoVia Highlight A Hero can redistribute power—if it is built the right way.

Community‑engagement and citizen‑reporting apps already show that digital tools can deepen two‑way communication, help residents feel heard, and make them more willing to report crime and cooperate in investigations when there is visible responsiveness and accountability. These platforms increase transparency by providing real‑time updates, feedback channels, and clear pathways for non‑emergency reporting that align with community‑policing principles.

GoVia Highlight A Hero can go further by:

  • Centering resident agency over data
    Instead of scraping location histories or passively harvesting surveillance, GoVia can make every contribution—video, tips, context—opt‑in, with clear controls over how long data is retained and what types of cases it can be used for. That flips the script from “always watched” to “I choose when and how to help,” which is crucial in communities scarred by over‑policing. (A minimal sketch of what such an opt‑in record could look like follows this list.)
  • Making AI and analytics visible, not invisible
    Where predictive tools are often secretive, GoVia can surface plain‑language disclosures in‑app: what analytics are in use, what they predict (places, not people; serious violence, not low‑level offenses), and what is explicitly off‑limits (protests, immigration, school discipline, etc.). That lets residents debate and shape AI use instead of discovering it only after harm.
  • Turning data into accountability, not just enforcement
    The same infrastructure that tracks resident reports and evidence can also track officer behavior trends, response times, use‑of‑force complaints, and follow‑through on community concerns, echoing how some departments now use tech to measure trust and satisfaction over time. Publishing those patterns—by neighborhood, not by “risk score”—helps ensure technology is watching the system, not only the streets. (A toy version of that neighborhood‑level aggregation also follows this list.)
  • Elevating everyday heroism on both sides
    Highlight A Hero’s storytelling layer can spotlight officers who de‑escalate, intervene against misconduct, or build youth programs—and residents who step in as Good Samaritans, share critical footage, or prevent violence. In a climate defined by viral trauma clips, balancing that narrative matters for legitimacy, recruitment, and community self‑respect.
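
What could “resident agency over data” look like in practice? The sketch below is a hypothetical data model: every name in it (Contribution, usable_for, CaseType, the 90‑day default) is invented for illustration and says nothing about GoVia’s actual implementation. It encodes the first two bullets above: contributions are opt‑in with resident‑chosen retention windows and explicitly consented uses, and off‑limits categories simply have no representation in the schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum


class CaseType(Enum):
    VIOLENT_CRIME = "violent_crime"
    OFFICER_SAFETY = "officer_safety"
    PROPERTY_CRIME = "property_crime"
    # Deliberately absent: protests, immigration, school discipline.


@dataclass
class Contribution:
    """One opt-in tip, clip, or statement volunteered by a resident."""
    contributor_id: str
    payload_uri: str                           # pointer to the video or tip
    allowed_case_types: set[CaseType]          # uses the resident consented to
    retention: timedelta = timedelta(days=90)  # resident-chosen retention window
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def usable_for(self, case_type: CaseType, now: datetime) -> bool:
        """Usable only inside the retention window, for consented case types."""
        within_window = now < self.submitted_at + self.retention
        return within_window and case_type in self.allowed_case_types


# A resident shares dashcam footage for violent-crime investigations only.
clip = Contribution(
    contributor_id="resident-123",
    payload_uri="s3://evidence/clip.mp4",
    allowed_case_types={CaseType.VIOLENT_CRIME},
)
now = datetime.now(timezone.utc)
print(clip.usable_for(CaseType.VIOLENT_CRIME, now))   # True: consented use
print(clip.usable_for(CaseType.PROPERTY_CRIME, now))  # False: never consented
```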

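For the accountability bullet, a toy aggregation shows the inversion: metrics are computed per neighborhood, never per person, and the same figures are published for every area. The schema and numbers below are invented for illustration, not a real GoVia or Sheriff’s Department dataset.

```python
from collections import defaultdict
from statistics import mean

# Toy records: (neighborhood, response_minutes, force_complaint_filed).
incident_log = [
    ("Palmdale East", 12.0, False),
    ("Palmdale East", 9.5, True),
    ("Palmdale West", 21.0, False),
    ("Palmdale West", 18.5, True),
    ("Palmdale West", 25.0, True),
]

by_neighborhood: dict[str, list[tuple[float, bool]]] = defaultdict(list)
for hood, minutes, complaint in incident_log:
    by_neighborhood[hood].append((minutes, complaint))

# Publish the same metrics for every neighborhood: system performance that
# residents can compare and contest, with no risk scores attached to people.
for hood, rows in sorted(by_neighborhood.items()):
    avg_response = mean(m for m, _ in rows)
    complaint_rate = sum(c for _, c in rows) / len(rows)
    print(f"{hood}: avg response {avg_response:.1f} min, "
          f"force-complaint rate {complaint_rate:.0%}")
```
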
Why this is “important to the cause”

The “cause” after George Floyd is not just safer officers or smarter data; it is a justice system that does not treat entire communities as permanent risk zones while claiming neutrality. Predictive policing, left unchecked, drags old inequities into a new technical regime and risks deepening the very mistrust that exploded onto the streets in 2020.

GoVia Highlight A Hero is important because it offers a different blueprint: one where technology is explicitly designed to share power, build trust, and make both heroism and harm visible from the community’s perspective—not just through a patrol‑car lens. Grounded in transparent communication, resident control over data, and measurable feedback loops, it can help ensure that the same digital revolution that solved a deputy’s murder does not quietly reproduce the injustices that put millions in the streets after George Floyd.
