JUSTICE OBSCURED: AI ALGORITHMS, MASS DETENTION, AND THE SYSTEMATIC EROSION OF FAIRNESS IN AMERICA’S DUAL CRIMINAL JUSTICE SYSTEMS

Executive Summary

The United States operates two parallel detention systems with starkly different constitutional protections, accountability mechanisms, and algorithmic governance frameworks—one nominally bound by criminal justice due process protections, the other increasingly insulated from them. An investigative analysis of immigration detention, criminal justice algorithmic risk assessment, and state-level enforcement patterns reveals a system fracturing along multiple fault lines: immigrants (both documented and undocumented) commit crimes at substantially lower rates than native-born Americans, yet face detention at unprecedented scale and duration under non-transparent algorithmic systems that lack adversarial challenge mechanisms. Simultaneously, artificial intelligence tools embedded in criminal courts—particularly risk assessment algorithms—perpetuate and amplify racial biases while operating largely outside public scrutiny. In Ohio specifically, the surge in ICE enforcement has shifted from targeting individuals with criminal histories to mass detention of noncriminal populations, while the state’s ORAS algorithmic risk assessment system raises due process concerns despite judicial comfort with its use.

This investigation examines three interconnected crises: (1) the exponential expansion of immigration detention with deteriorating conditions and record mortality; (2) the documented underperformance and racial bias of algorithmic risk assessment tools in bail, sentencing, and parole decisions; and (3) state-level compliance systems (exemplified by Ohio) that prioritize efficiency over fairness, creating a two-tiered justice landscape where algorithmic opaqueness and prosecutorial discretion enable systemic discrimination against immigrants and racial minorities.


PART I: THE DETENTION DIVIDE — IMMIGRANTS VS. CITIZENS IN THE CRIMINAL JUSTICE SYSTEM

The Statistical Paradox

The foundational injustice underlying this investigation is a stark empirical reality that contradicts the public narrative: immigrants of all legal statuses commit crimes at substantially lower rates than native-born Americans. This fact renders the scale of contemporary immigration detention not only disproportionate but fundamentally disconnected from public safety rationales.

The data are unambiguous. In 2023, the incarceration rate for native-born Americans was 1,221 per 100,000 population—the highest among all demographic groups examined. Undocumented immigrants, by contrast, were incarcerated at 613 per 100,000 (50% lower), while legal immigrants experienced the lowest rate at just 319 per 100,000 (74% lower than native-born). This disparity has persisted for 150 years without exception: historians have found no period in American history when immigrants’ incarceration rates exceeded those of the native-born.
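The percentage gaps follow directly from the per-100,000 rates cited above; a minimal Python check reproduces them (the figures are the ones in this section, nothing more):

```python
# Incarceration rates per 100,000 population, 2023 (as cited above)
rates = {
    "native-born Americans": 1221,
    "undocumented immigrants": 613,
    "legal immigrants": 319,
}

baseline = rates["native-born Americans"]
for group, rate in rates.items():
    if group == "native-born Americans":
        continue
    gap = (1 - rate / baseline) * 100  # percent below the native-born rate
    print(f"{group}: {rate}/100k, {gap:.0f}% lower than native-born")
# undocumented immigrants: 613/100k, 50% lower than native-born
# legal immigrants: 319/100k, 74% lower than native-born
```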

[Figure: Incarceration Rates by Immigration Status, 2023]

Breaking this down by crime type reveals the gap’s consistency across offense categories. In Texas—the only state maintaining detailed immigration status records for criminal defendants—undocumented immigrants were 37.1% less likely than native-born citizens to be convicted of any crime in 2019. For violent crimes specifically, US-born citizens were more than twice as likely to be arrested; for drug crimes, 2.5 times more likely; for property crimes, over 4 times more likely. A 150-year analysis found that by 2023, native-born Americans were 267% more likely to be incarcerated than immigrants from the same birth cohort.

The Detention Inversion: Mass Incarceration Without Criminal Justice Protections

Despite this lower criminality profile, immigrants—particularly those in ICE custody—face detention under a parallel system with fundamentally weaker legal protections. Unlike criminal defendants, immigration detainees do not have:

  • The right to a prompt trial
  • Access to public defenders (though some nonprofits provide representation)
  • Independent judicial review of detention decisions
  • Bond hearings, denied in many cases under recent Trump administration policy

The scale of this detention has reached historically unprecedented levels. As of December 26, 2025, ICE detained 70,805 people—a 73.5% increase from the prior year and the highest number in the agency’s history. The detained population has increased roughly fivefold since a low of about 14,000 in February 2021.

[Figure: ICE Detention Population Surge, February 2021 – December 2025]

This expansion accelerated dramatically following Trump’s January 20, 2025 inauguration. In Texas, ICE arrests surged from approximately 58 per 100,000 state residents in early 2025 to 110 per 100,000 by October 2025—a near doubling of the arrest rate. ICE activated 108 additional facilities in 2025 alone, bringing the network to 212 distinct facilities by December 2025—roughly doubling the infrastructure dedicated to immigration detention.

The Trump administration raised ICE’s daily arrest quota from 1,000 to 3,000 in mid-June 2025, triggering an immediate change in enforcement demographics: the agency increasingly targeted individuals with only immigration violations or no criminal records, a shift documented in detail in the Ohio data below.

Ohio: A Case Study in the Shift from Criminal Enforcement to Mass Detention

Ohio illustrates this policy shift in granular detail. Through July 28, 2025, ICE apprehended 1,546 individuals in Ohio—an 87% increase from the 828 arrested during all of 2024. Critically, the composition of arrests reversed: In the first half of 2024 under Biden, 60% of Ohio ICE arrests involved people with criminal convictions; in the first half of 2025, only 40% involved criminal convictions. By June 2025, approximately 50% of daily ICE arrests in Ohio involved individuals without criminal charges or convictions—up from 29% in May, immediately following the quota increase.

[Figure: Ohio ICE Arrests and Composition, 2024 vs. 2025]

The mechanism of this shift reveals operational decoupling from law enforcement legitimacy. Rather than conducting visible street enforcement operations, ICE increasingly relies on local jail detainers—legal requests to hold individuals facing immigration removal beyond their scheduled release dates. Franklin County Jail (Columbus area) issued 265 ICE detainers in the first half of 2025, while Hamilton County Jail (Cincinnati) issued 142 and Butler County Jail (southwestern Ohio) issued 135. Montgomery County Jail’s detainers doubled from 48 in the first half of 2024 to 100 in the first half of 2025, despite no policy change justifying the increase.

This reliance on jail detainers obscures enforcement operations from public view. A sheriff interviewed for this investigation stated: “In the first six months of 2024, under Biden, 60% of ICE apprehensions in Ohio involved someone with a criminal conviction, compared to less than 40% in the first six months of 2025.” The executive director of the Ohio Immigration Alliance characterized the shift as deceptive: “Trump lied on the promise to deport ‘the worst of the worst’ before others in the country illegally.”

Ohio ranks only 22nd nationally in ICE apprehensions despite being the 7th most populous state—a disparity explained by the agency’s focus on states with explicit policy cooperation (Texas, Florida, California dominate the national rankings). However, when federal resources concentrate in a jurisdiction, the local impacts are severe and racially stratified, given immigration enforcement’s disproportionate impact on Latinx and Asian communities.

Detention Conditions and Mortality

The physical conditions in expanded ICE detention facilities have deteriorated in proportion to overcrowding and speed of facility activation. Legal filings from California City ICE Detention Center—the nation’s largest immigration detention facility—document conditions that detainees describe as worse than those they experienced during decades of criminal incarceration:

  • Small concrete cells the size of parking spaces
  • Sewage bubbling from shower drains
  • Insects crawling up cell walls
  • Freezing temperatures; detainees wearing socks as arm sleeves
  • Medical neglect, including denied access to biopsies for potentially life-threatening conditions
  • Excessive solitary confinement used punitively against those who speak out
  • Officers threatening detainees with violence

One detainee, who had previously served more than three decades in prison, stated: “I was in prison for over 30 years. The conditions at California City are worse.” Detained individuals have engaged in numerous sit-ins and hunger strikes, including a September 2025 collective action by over 100 people demanding an end to abuses.

These conditions correlate with catastrophic mortality. At least 30 people died in ICE custody during the 2025 calendar year—the highest toll in more than 20 years. The Guardian independently verified 32 deaths through multiple sources. Causes included seizures, surgical complications, heart failure, and suicide; notably, two individuals were killed in a sniper attack at an immigration facility in Dallas in September 2025. Four additional deaths occurred in the first nine days of January 2026.

These deaths are occurring in a detained population that is 73.6% noncriminal—individuals with no convictions at all. As one analysis noted, “Many of those convicted have committed only minor offenses, including traffic violations.”


PART II: THE ALGORITHMIC JUSTICE CRISIS — RISK ASSESSMENT, BIAS, AND DUE PROCESS EROSION

How Algorithms Became the Arbiters of Freedom

Beginning in the 1990s, the criminal justice system began outsourcing high-stakes liberty determinations to algorithmic risk assessment instruments. These tools promise objectivity and efficiency—standardized inputs, reproducible outputs, less room for judicial whim. In practice, they have institutionalized and amplified the historical biases that algorithmic proponents claimed they would eliminate.

Risk assessment algorithms operate in four primary domains: (1) bail decisions determining who is released pretrial and on what terms; (2) sentencing recommendations guiding judicial discretion; (3) parole/release decisions in prison systems; and (4) predictive policing directing law enforcement surveillance and resource allocation.

The canonical failure is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), developed by Northpointe (now Equivant) and used across multiple state court systems. ProPublica’s 2016 investigation found that COMPAS exhibited significant racial bias: Black defendants were falsely flagged as high-risk for recidivism at nearly twice the rate of white defendants (44.9% vs. 23.5%), while white defendants were falsely labeled as low-risk at far higher rates (47.7% vs. 28.0%).

The tool’s overall accuracy was mediocre: only 20% of people predicted to commit violent crimes actually did so, meaning an 80% failure rate for violent crime prediction. For all crimes, the algorithm performed only slightly better than a coin flip (61% accuracy). Despite these failures, judges cited COMPAS in sentencing decisions, with Wisconsin’s Supreme Court ruling in 2016 that use of the algorithm did not violate due process rights so long as limitations were disclosed—setting a low bar for algorithmic transparency.
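ProPublica’s headline figures are group-conditional error rates: the false positive rate (flagged high-risk among people who did not reoffend) and the false negative rate (labeled low-risk among people who did). A minimal sketch of how such rates are computed from audit records—the six-row dataset here is an illustrative placeholder, not ProPublica’s data:

```python
from collections import Counter

# Each record: (group, predicted_high_risk, actually_reoffended).
# Illustrative placeholders only; a real audit uses thousands of cases.
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("white", True, False), ("white", False, True), ("white", False, False),
]

def error_rates(records, group):
    counts = Counter()
    for g, predicted, actual in records:
        if g == group:
            counts[(predicted, actual)] += 1
    # FPR: flagged high-risk among those who did NOT reoffend
    fpr = counts[(True, False)] / max(1, counts[(True, False)] + counts[(False, False)])
    # FNR: labeled low-risk among those who DID reoffend
    fnr = counts[(False, True)] / max(1, counts[(True, True)] + counts[(False, True)])
    return fpr, fnr

for group in ("black", "white"):
    fpr, fnr = error_rates(records, group)
    print(f"{group}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
```

ProPublica’s finding was that these two rates diverged sharply by race even though overall accuracy was similar across groups—which is why a single accuracy number can hide discriminatory error patterns.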

The Feedback Loop: Historical Bias Becomes Algorithmic Bias

The root of algorithmic bias in criminal justice is structural, not technical. All recidivism prediction algorithms are trained on historical criminal justice data that reflects decades of discriminatory policing, prosecution, and incarceration practices. Black Americans are overrepresented in arrest and conviction data not because they commit crimes at higher rates, but because they are subjected to disproportionate law enforcement contact.

When algorithms trained on this biased data are then used to make new decisions about bail, sentencing, and policing, they perpetuate and amplify this bias through a feedback loop. A researcher examining COMPAS bias concluded: “Since arrests can be based on mere suspicion, conviction (which is based on proof of guilt) provides a more accurate measure of reoffending. However, an evaluation of a widely used ARAI in the UK, OASys—which measures recidivism by convictions—shows that this measure can still have racially biased predictive accuracy.”

The system becomes self-reinforcing: biased arrests create biased training data; biased algorithms make biased predictions; biased predictions justify more law enforcement in communities of color; more enforcement creates more arrests and data, strengthening the algorithm’s apparent accuracy within the biased system. As one analysis framed it: “Risk reconstitutes race, and facially neutral risk factors become pernicious proxies for race and entrench algorithmic bias, threatening the defendant’s rights to equality and fair trial.”
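The loop can be made concrete with a toy simulation: two districts with identical true offense rates, where recorded arrests scale with patrol presence and a predictive allocator then concentrates patrols where past arrests were highest. Every parameter below is hypothetical; the point is the dynamic, not the numbers:

```python
# Toy model of the enforcement feedback loop. Both districts have IDENTICAL
# true offense rates; the only asymmetry is a small initial patrol skew.
TRUE_OFFENSE_RATE = 0.05   # identical in both districts (hypothetical)
PRIORITY_EXPONENT = 1.5    # >1 models an allocator that over-concentrates
                           # resources on apparently "hot" districts

shares = {"district_A": 0.55, "district_B": 0.45}  # initial patrol allocation

for year in range(1, 7):
    # Recorded arrests are proportional to patrols present, not to crime
    arrests = {d: s * TRUE_OFFENSE_RATE for d, s in shares.items()}
    # Next year's patrols follow this year's arrest "data"
    weights = {d: a ** PRIORITY_EXPONENT for d, a in arrests.items()}
    total = sum(weights.values())
    shares = {d: w / total for d, w in weights.items()}
    print(f"year {year}: " + ", ".join(f"{d} {s:.0%}" for d, s in shares.items()))
```

In this toy run, district A’s patrol share climbs from 55% toward over 90% within six years despite zero difference in underlying offending; even with an exponent of exactly 1, the initial skew never self-corrects.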

Facial Recognition Technology: Explicit Discrimination Through Algorithmic Identification

Facial recognition technology (FRT) represents algorithmic bias at its most direct and measurable. Police agencies use FRT to compare suspect photos against mugshot databases in order to identify individuals for investigation and arrest. A critical 2022 study analyzing 1,136 U.S. cities found that police use of facial recognition technology directly increases racial disparities in arrests.

The mechanism involves two compounding failures: (1) algorithmic accuracy disparities—FRT systems are trained predominantly on white faces and thus perform with lower accuracy on dark-skinned individuals, particularly women—and (2) human misapplication—officers develop an “almost blind faith” in algorithmic results, minimizing their own discretionary skepticism.

The results are stark: FRT deployment correlates with increased arrest rates for Black individuals while arrest rates for white individuals decline in the same jurisdictions. Over one-quarter of local and state police forces, and nearly half of federal law enforcement agencies, now regularly access facial recognition systems despite their documented disparities. When officers receive an algorithmic “match,” they often pursue arrests with insufficient independent corroboration, leading to false arrests of innocent individuals.
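The arithmetic of unequal false match rates is worth making explicit: even small per-comparison disparities multiply across every entry in a searched gallery. A minimal sketch—both false match rates and the gallery size are hypothetical, not measurements from any deployed system:

```python
# Expected false matches per search, given unequal per-comparison false
# match rates and a fixed gallery size. All numbers are hypothetical.
GALLERY_SIZE = 100_000  # mugshot entries compared against each probe photo

false_match_rate = {
    "well-represented group": 1e-6,   # better covered by training data
    "under-represented group": 5e-6,  # worse covered by training data
}

for group, fmr in false_match_rate.items():
    expected = fmr * GALLERY_SIZE  # expected false candidates per search
    print(f"{group}: {expected:.2f} expected false matches per search")
```

At these (hypothetical) rates, the under-represented group generates five times as many false leads per search—each one an innocent person presented to an officer primed to trust the machine.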

A Harvard research initiative summarized the mechanism: “The concentration of police resources in many Black neighborhoods already results in disproportionate contact between Black residents and officers. With this backdrop, communities served by FRT-assisted police are more vulnerable to enforcement disparities, as the trustworthiness of algorithm-aided decisions is jeopardized by the demands and time constraints of police work, combined with an almost blind faith in AI that minimizes user discretion in decision-making.”

Predictive Policing: Automating Over-Policing

Predictive policing algorithms (e.g., PredPol) analyze historical crime data to forecast which areas will experience crime and direct patrols accordingly. Because historical data reflects where police have already concentrated resources, predictive algorithms systematically reinforce enforcement in already-over-policed communities, particularly Black neighborhoods.

The NAACP issued a formal policy brief warning that predictive policing algorithms inherit biases from historical crime data “that shows that the Black community is disproportionately negatively impacted in the criminal justice system due to targeted over-policing and discriminatory criminal laws.” The predictable outcome is “disproportionate surveillance and policing of Black communities,” “lack of transparency” in algorithm operations, and erosion of public trust—the inverse of the promised reform.

Bail Decisions: Judges Misapply Algorithmic Risk Assessment

Risk assessment tools wield their greatest power in pretrial bail decisions, which directly determine whether poor defendants languish in jail pending trial. A Harvard study measuring racial disparities in New York City bail decisions found that two-thirds of the release rate disparity between white and Black defendants results from disparate impact of judicial decisions—not from differences in actual risk. Critically, the study found “evidence of significant racial bias,” ruling out “statistical discrimination as the sole explanation for racial disparities in bail.”

A Tulane University study examining how judges respond to algorithmic sentencing recommendations found a paradoxical pattern: judges misapply algorithmic guidance in racially discriminatory ways. When algorithms recommended probation for low-risk offenders, judges disproportionately declined to follow that recommendation for Black defendants while accepting it for white defendants. The result: Black defendants received average jail sentences one month longer than white defendants with identical risk scores, and were 6% less likely to receive probation alternatives.

The study’s lead researcher explained: “When it came to race, judges appeared to misapply the AI guidance… judges generally sentenced Black and White defendants equally harshly based on their risk scores alone. But when the AI recommended probation for low-risk offenders, judges disproportionately declined to offer alternatives to incarceration for Black defendants.” This suggests that judges use algorithmic tools not to constrain their bias but as cover for discriminatory decisions—applying the algorithm’s recommendation when it aligns with their inclinations and deviating when it does not.
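Stripped of detail, the Tulane finding is a conditional probability: how often judges follow a probation recommendation, conditioned on the defendant’s race at the same risk level. A minimal sketch of that measurement over case logs—the field layout and the four sample records are hypothetical:

```python
# How often do judges follow an algorithmic probation recommendation,
# by defendant race, at the same risk level? Records are hypothetical.
cases = [
    # (race, risk_level, algo_recommended_probation, judge_granted_probation)
    ("black", "low", True, False),
    ("black", "low", True, True),
    ("white", "low", True, True),
    ("white", "low", True, True),
]

def adherence_rate(cases, race, risk="low"):
    relevant = [c for c in cases if c[0] == race and c[1] == risk and c[2]]
    if not relevant:
        return float("nan")
    return sum(1 for c in relevant if c[3]) / len(relevant)

for race in ("black", "white"):
    rate = adherence_rate(cases, race)
    print(f"{race}, low risk: probation recommendation followed {rate:.0%} of the time")
```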

Ohio’s ORAS System: Efficiency Over Fairness

Ohio has implemented the Ohio Risk Assessment System (ORAS), mandated statewide since 2011 for sentencing and increasingly used for pretrial bail decisions. An October 2024 study of Ohio Courts of Common Pleas judges surveyed 48 judges (20% response rate) regarding their implementation of ORAS tools.

Key findings reveal both integration and fragmentation:

  • 93% of judges receive ORAS reports for sentencing (mandated); 81% receive them in most or all cases
  • 57% of judges receive ORAS reports for bail decisions (optional); significant variation by county resources
  • Judges view ORAS as essential but not determinative: 75% consider it important, 56% trust it relative to human judgment
  • 66-78% believe ORAS reduces bias compared to human judgment
  • Most judges receive minimal training on ORAS (improvement area identified)
  • Judges report using ORAS as “one factor” in decision-making, not determinative

The Ohio system includes the Pretrial Assessment Tool (PAT: 7 items for bail decisions, completed in 10-15 minutes) and the Community Supervision Tool (CST: 35 items for sentencing, completed in 30-45 minutes). The PAT covers criminal history, employment, substance abuse, and residential stability; the CST adds education, family support, neighborhood conditions, peer associations, and criminal attitudes.
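Tools in this family are typically additive checklists: each item contributes points and the total maps to a category through fixed cut points. The sketch below shows the pattern only—items, weights, and cut points are invented for illustration, since the actual ORAS instruments are licensed materials not reproduced in this article:

```python
# Additive checklist scoring of the kind used by pretrial risk tools.
# Items, point values, and cut points are ILLUSTRATIVE, not the real ORAS-PAT.
def score_pretrial(answers: dict) -> tuple[int, str]:
    points = 0
    points += min(answers.get("prior_failures_to_appear", 0), 2)  # capped item
    points += 1 if answers.get("unemployed") else 0
    points += 1 if answers.get("unstable_residence") else 0
    points += 1 if answers.get("recent_drug_use") else 0
    # Hypothetical cut points mapping the raw score to a categorical label
    if points <= 2:
        return points, "low"
    if points <= 4:
        return points, "medium"
    return points, "high"

print(score_pretrial({"prior_failures_to_appear": 1, "unemployed": True}))
# (2, 'low')
```

The simplicity is the point: the instrument is transparent in form, yet the categorical output discards the underlying probability—exactly the communication gap discussed below.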

However, Ohio’s implementation reveals best-practice gaps:

  1. Communication: ORAS communicates risk through categorical labels (“low/medium/high”) rather than explicit probabilities or confidence intervals; judges on average correctly interpret high-risk categories less than 50% of the time. A sketch of probability-based reporting follows this list.
  2. Training: Assessors receive mandatory two-day ORAS certification with ongoing recertification, but judges receive only informal training with wide variation; most judges (60%) report wanting more.
  3. Due Process: Scores are theoretically challengeable, but challenges are rare in practice, and no statewide protocol ensures consistent disclosure of scores to defendants.
  4. Oversight: ODRC conducts biennial audits and tracks override rates (target below 10%), but resource constraints in small counties limit quality control. The Ohio study describes the state’s courts as “a patchwork of systems.”
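The communication gap flagged in item 1 has a concrete remedy: report the empirical reoffense rate observed for each score band in validation data, with a confidence interval, instead of a bare label. A minimal sketch using a Wilson score interval—the per-band counts are hypothetical:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical validation counts: (reoffenses observed, total) per score band
bands = {"low": (30, 400), "medium": (90, 300), "high": (70, 150)}

for label, (reoffended, total) in bands.items():
    lo, hi = wilson_interval(reoffended, total)
    print(f"'{label}' band: {reoffended/total:.0%} reoffended "
          f"(95% CI {lo:.0%}–{hi:.0%}, n={total})")
```

A judge told “47% of this band reoffended, plus or minus a few points” is working with materially different information than one told only “high risk.”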

The Moritz College of Law report—the most comprehensive empirical study of judicial risk assessment implementation—concludes: “While judges believe these tools are no more biased than humans, about 60% still consider their own judgment superior, even though they acknowledge the tools are generally less biased than human decision-makers.” This cognitive dissonance suggests judges use ORAS selectively, applying it when consistent with their independent judgment and ignoring it when not—reproducing rather than constraining bias.


PART III: THE STRUCTURAL INEQUALITY — AI WITHOUT ACCOUNTABILITY, DETENTION WITHOUT DUE PROCESS

The Two-Tiered System

The investigation reveals a fundamental asymmetry in how the U.S. criminal justice system applies technology and process:

For criminal defendants (mostly native-born citizens): Algorithmic risk assessment tools inform decisions made by accountable judges in open proceedings where defendants have counsel, can challenge evidence, and possess appeal rights. Despite significant bias, the system retains formal due process mechanisms.

For immigration detainees (mostly foreign-born): Detention decisions occur in an immigration court system lacking independent judicial authority, public defenders, prompt trial rights, or—under recent policy changes—bond hearings. Algorithmic tools may guide ICE detention decisions with minimal transparency.

This dual system is constitutionally and practically indefensible. Immigration detention serves administrative removal functions, not criminal punishment, yet the conditions and durations increasingly mirror those of criminal incarceration. The Supreme Court has never clearly articulated what due process protections apply to immigration detainees, leaving the door open for the current system’s practices.

The Mortality and Conditions Crisis

The unprecedented mortality in ICE detention in 2025—30+ deaths in a year, the highest in 20+ years—must be understood as a direct consequence of rapid expansion without proportional increases in oversight or resources. The system is experiencing what sociologists call “structural strain”—facilities designed for capacity X suddenly holding 150% of that capacity, with inadequate medical staffing, sanitation, and safety measures.

Each person in ICE detention costs taxpayers $152 per day, versus $4.20 per day for GPS monitoring. The Vera Institute of Justice finds that 92% of individuals ordered to appear for immigration hearings comply. Yet the Trump administration’s “no-bond policy” restricts judges from granting bail to most detainees, keeping individuals incarcerated indefinitely during processing that can take years.
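The cost differential compounds over the multi-year processing times just described; the arithmetic is simple (the per-day costs are the figures cited above; the durations are illustrative):

```python
DETENTION_PER_DAY = 152.00  # cited per-person daily cost of ICE detention
GPS_PER_DAY = 4.20          # cited per-person daily cost of GPS monitoring

for years in (1, 2, 3):  # illustrative processing durations
    days = years * 365
    detention, gps = DETENTION_PER_DAY * days, GPS_PER_DAY * days
    print(f"{years} yr: detention ${detention:,.0f} vs. GPS ${gps:,.0f} "
          f"({detention / gps:.0f}x the cost)")
# Detention runs roughly 36x the cost of monitoring at these rates
```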

The Public Perception Crisis

A 2025 study examining public perception of judicial use of AI tools found that “judges who rely solely on their expertise are perceived more favorably than those using AI, either entirely or in combination with expertise. This pattern persists across bail and sentencing decisions, with AI being viewed more negatively in sentencing contexts.” The public intuitively recognizes that algorithmic decision-making in high-stakes liberty matters conflicts with human dignity and accountability.

This intuition has historical grounding. The evidence documenting AI bias in criminal justice is overwhelming and uncontested by mainstream researchers. The DOJ’s 2025 report on AI in criminal justice explicitly identified bias risks for all four major applications: identification/surveillance, forensic analysis, predictive policing, and risk assessment. Yet criminal justice systems have largely failed to implement safeguards beyond disclosure requirements.


PART IV: INVESTIGATIVE FINDINGS AND ACCOUNTABILITY GAPS

What the Data Reveals

  1. The enforcement inversion: ICE arrests have shifted from targeting individuals with criminal records to mass detention of non-criminals. In Ohio, criminal conviction rates among ICE arrestees dropped from 60% to 40% following the quota increase, contradicting stated administration policy.
  2. Algorithmic bias is documented but not addressed: Facial recognition accuracy disparities, COMPAS’s racial false-positive bias, and judges’ discriminatory misapplication of bail recommendations have all been published in peer-reviewed literature. Yet system-wide reforms remain minimal.
  3. Due process protections are eroding: Immigration detainees lack basic protections afforded to criminal defendants. Recent policy restricts bond hearings that previously provided an escape valve, forcing individuals into indefinite detention during multi-year removal proceedings.
  4. Accountability mechanisms are inadequate: Ohio’s ORAS oversight relies on biennial audits and self-reported override rates. Facial recognition use by police lacks meaningful external audits. Risk assessment algorithms remain proprietary black boxes in many jurisdictions.
  5. The scale of harm is escalating: ICE detention population has increased roughly fivefold in four years. Deaths in ICE custody have reached the highest level in two decades. Immigration detention appropriations have increased more than fourfold, from $3 billion (FY2019) to $14 billion (FY2025), with an additional $11.25 billion per year committed through FY2029.

Inspection, Oversight, and the Transparency Gap

The Project on Government Oversight reported that “ICE inspections plummeted as detentions soared in 2025,” with a 36.25% decline in published inspection reports despite the detention population expanding by 65%. This inverse relationship—more detention, less inspection—ensures that documented abuses (medical neglect, overcrowding, violence) remain hidden from public view.
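Combined, those two figures understate the collapse in oversight: inspections per detained person fell much further than inspections alone. A quick normalization from the two cited percentages:

```python
# Oversight per detainee after fewer inspections meet a larger population.
inspections = 1 - 0.3625  # inspection reports fell 36.25% -> 0.6375x baseline
population = 1 + 0.65     # detained population grew 65% -> 1.65x baseline

per_detainee = inspections / population
print(f"Inspections per detainee: {per_detainee:.2f}x baseline "
      f"(a {1 - per_detainee:.0%} decline)")
# -> roughly a 61% drop in oversight per detained person
```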

When inspections do occur, enforcement is weak. The American Immigration Council documented a pattern where ICE acknowledged serious deficiencies but continued operating facilities and renewing contracts.


PART V: THE SYSTEMS PERSPECTIVE — HOW OHIO ILLUSTRATES NATIONAL PATHOLOGY

Ohio’s experience concentrates the investigation’s themes: a state deploying ORAS risk assessment throughout its court system while simultaneously serving as a conduit for ICE mass detention through local jail partnerships.

The specific mechanism unfolds in five steps:

  1. ICE identifies individuals in local jails via detainers.
  2. Local jails hold them beyond their scheduled release dates per ICE requests.
  3. Individuals are transferred to ICE detention facilities (or expedited for removal).
  4. The detention is classified as “immigration enforcement” rather than criminal detention, thus avoiding criminal procedure protections.
  5. Algorithmic recommendations about detention duration and conditions are made opaquely, without adversarial challenge.

Meanwhile, in Ohio courtrooms, ORAS determines who gets probation versus prison time, with judges on average correctly interpreting the risk categories less than half the time. This creates a perverse system where criminal defendants receive algorithmic review (however flawed) while immigration detainees—a majority of whom are noncriminal—receive no formal risk review before indefinite detention.


RECOMMENDATIONS FOR INVESTIGATION AND REFORM

Immediate Actions

  1. Establish independent audits of algorithmic tools: Create a federal AI Audit Board with authority to audit facial recognition, predictive policing, risk assessment, and any algorithm informing criminal justice decisions. Require public reporting of accuracy, disparate impact, and override rates (a minimal audit sketch follows this list).
  2. Restore due process in immigration detention: Reinstate bond hearing requirements, provide public defenders, establish case timelines preventing indefinite detention, and create independent oversight of detention conditions.
  3. Reduce detention capacity and expand alternatives: Given the 92% compliance rate with hearing orders, replace detention with GPS monitoring and community-based alternatives at a fraction of the cost.
  4. Mandate algorithmic transparency and challengeability: Require that defendants receive notice when algorithms influenced bail/sentencing decisions, disclose the algorithm’s limitations, provide data access for challenge, and allow adversarial review.
  5. Investigate state-ICE detention partnerships: Conduct federal audit of local jail detainer programs, with particular focus on whether financial incentives (per-diem payments to jails) influence detention practices.
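One concrete shape such an audit could take is the adverse-impact ratio long used in employment-discrimination analysis: compare favorable-outcome rates across groups and flag ratios below 0.8. A minimal sketch—the records are hypothetical, and the four-fifths threshold is borrowed from EEOC practice rather than from any existing criminal justice mandate:

```python
# Adverse-impact audit: ratio of favorable-outcome rates between groups.
# A ratio below 0.8 (the EEOC "four-fifths" convention) flags disparity.
decisions = [
    # (group, received_favorable_outcome) -- e.g., released pretrial. Hypothetical.
    ("black", False), ("black", True), ("black", False), ("black", True),
    ("white", True), ("white", True), ("white", False), ("white", True),
]

def favorable_rate(decisions, group):
    outcomes = [fav for g, fav in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rates = {g: favorable_rate(decisions, g) for g in ("black", "white")}
ratio = min(rates.values()) / max(rates.values())
flag = "  <- flags disparity" if ratio < 0.8 else ""
print(f"favorable rates: {rates} | adverse-impact ratio: {ratio:.2f}{flag}")
```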

Structural Reforms

  1. Separate immigration enforcement from criminal justice: Create an independent immigration court system with full due process protections, or eliminate administrative immigration detention altogether.
  2. Establish algorithmic impact assessments: Require pre-deployment testing of any algorithm for racial disparities and documented impact on historically marginalized populations.
  3. Strengthen judge training: Mandate comprehensive training for all judges using risk assessment tools, including bias mechanisms, proper interpretation, and limitations.
  4. Sunset clauses for algorithmic tools: Require legislative reauthorization of any algorithmic tool every five years with mandatory evaluation of racial disparities and accuracy.

GoVia’s Take

The United States operates contradictory justice systems: one for citizens (with flawed but existent due process) and one for immigrants (increasingly lacking fundamental protections). Both systems now delegate critical liberty decisions to algorithms that demonstrably discriminate and perform far worse than their proponents acknowledge. The scale and intensity of this dual failure is accelerating.

Immigration detention has reached unprecedented scale (70,805 people by December 2025) while mortality and abuse are at historic highs (30+ deaths in 2025 alone). The detained population is majority noncriminal (73.6%), yet faces indefinite detention without bond hearings or counsel. ICE’s shift from criminalizing enforcement to mass administrative detention, visible clearly in Ohio’s data, reveals the system’s decoupling from public safety rationales.

Simultaneously, criminal courts have embedded algorithmic tools that amplify racial bias while reducing judicial accountability. Judges in Ohio demonstrate that they understand these tools as imperfect yet continue using them, with minimal training and inconsistent oversight. The public intuitively distrusts this model, recognizing that liberty decisions require human judgment and accountability.

The investigation suggests a crisis not of individual algorithmic systems but of a structural mismatch: a legal system built on 18th-century due process principles attempting to manage 21st-century technological governance while retaining the biases of a system with 150+ years of discriminatory practice embedded in its data.

Reform requires not algorithmic optimization but structural change: restoring due process, eliminating algorithmic opacity, reducing detention scale, and holding systems accountable for measured outcomes rather than stated intentions.


Primary Sources and Data

Citations throughout this investigation draw on government data (ICE, TRAC, ORAS documentation), peer-reviewed and investigative studies (ProPublica, Harvard, Tulane, OSU Moritz College of Law), government reports (DOJ, Congress), and independent oversight organizations (Vera Institute, American Immigration Council, Project on Government Oversight, ACLU). Key data underpinning the figures:

  • Incarceration rates by immigration status showing the stark disparity (native-born citizens at 1,221 per 100,000 vs. immigrants at substantially lower rates)
  • ICE detention population surge illustrating the fivefold increase from February 2021 through December 2025
  • Ohio ICE arrest trends demonstrating the shift from criminal enforcement to mass detention of non-criminals
  • Critical mortality and detention statistics highlighting the human toll of the system’s expansion
