When AI-Altered Media Becomes a Due Process Crisis

As official government accounts release manipulated photos or videos depicting arrests and the use of force, this media begins to influence public understanding — before courts, lawyers, or juries ever evaluate the evidence.

By Nicholas E. Stewart

March 18, 2026

The federal government is increasingly shaping not only how enforcement happens, but how enforcement is framed. As official government accounts release AI-altered or digitally modified photos and videos depicting arrests and the use of force, that media begins to influence public understanding of these events before courts, lawyers, or juries ever evaluate the evidence.

The core risk is straightforward. When the government distributes altered or curated images tied to enforcement actions, it does more than manage public relations. It creates an unofficial evidentiary layer that can distort fact-finding, tilt early legal decision-making, and quietly undermine due process.

On January 22, the White House reposted a digitally altered image of civil rights attorney Nekima Levy Armstrong’s arrest at a protest against immigration enforcement in Minnesota. The altered image depicted Armstrong crying, while original footage shows her composed and calm. Civil liberties advocates condemned the repost as misleading. The administration defended it.

This incident did not occur in isolation. Two days later, federal immigration agents shot and killed 37-year-old Alex Pretti, an ICU nurse, during a protest tied to a federal enforcement operation in Minneapolis. His family says video shows that he was holding a phone and attempting to assist others when a Border Patrol agent shot him multiple times. Earlier that month in the same city, Renée Nicole Good, a 37-year-old mother of three, was fatally shot by an ICE agent during another federal operation as her vehicle reversed away. Family members and eyewitnesses say she posed no threat.

It is tempting to treat lethal force and AI-altered or digitally modified arrest media as separate problems. They are better understood as two expressions of the same structural failure: a state that can exercise coercive power while also curating the narrative about that power, using technologies that carry an aura of objectivity.

Images, video, and algorithmic outputs circulate today with an implicit promise of truth. When a machine or an official account presents a version of events, many people accept it as unmediated reality. That dynamic already undergirds law enforcement technologies such as automated risk scores, predictive systems, and AI-assisted report drafting. It is now extending into government communications themselves.

AI-altered or digitally modified arrest media shared by official government accounts carries a different legal and social weight than misinformation circulating on social platforms. These images and videos arrive with institutional authority. They can shape public understanding before discovery occurs, before defense counsel reviews evidence, and before any judge evaluates credibility. The risk is not primarily reputational. It is procedural.

Defense attorneys may argue that altered official media does not stay in the realm of public relations: once it touches on the facts of an arrest or use of force, it may trigger disclosure obligations, requiring the government to produce both the modified material and the original as part of the evidentiary record. Civil rights litigators may contend that distributing materially misleading visuals contributes to constitutional harm, especially if such media influences charging decisions, bail arguments, or public narratives about guilt. Even when corrections appear later, early impressions often persist.

This pattern mirrors long-standing concerns about algorithmic decision-making in policing and courts. Automated risk assessments, predictive policing systems, and AI-assisted reporting tools are frequently introduced as efficiency upgrades. Yet scholars have warned that these systems embed historical data, opaque assumptions, and institutional priorities that can reproduce existing disparities. Their outputs appear objective even when they encode contestable judgments.

As the executive director of the Justice Education Project, the first national Gen Z–led criminal justice reform organization, I have spent several years interviewing criminal defense clinicians and access-to-justice scholars about how emerging technologies reshape legal power. (This work fed into the book we recently published, Next Steps Into Criminal Justice Activism: Technology, Ethics, and the Future of Justice.) In these conversations, a consistent theme emerged: technology does not simply capture reality. It frames it.

Amber Baylor, a Columbia Law School professor who directs a criminal defense clinic focused on communities subjected to intensive policing, put it plainly. “Surveillance technologies and algorithms are often presented as neutral, but they reflect human biases and are used unevenly across communities,” she told me. “For some people, they appear harmless. For others, they increase surveillance, over-policing, and risk. Footage can sometimes corroborate a client’s account. That does not erase structural inequities.”

Colleen Shanahan, a professor at the University of Pennsylvania Carey Law School whose work focuses on lawyerless courts and access to justice, cautions against treating technology as a substitute for human infrastructure. Technology, she notes, is just a tool: “Without investment in relationships, conversation, and problem-solving, digital systems alone do not produce justice.”

Technology has always played a role in shaping legal processes, and official images have long carried a presumption of accuracy. What is new is the ease with which such images can be manipulated and distributed at scale, and the corresponding difficulty of detecting what has been altered. That gap between appearance and reality is harder to close than ever.

The problem compounds when official images merge with algorithmic outputs and system-generated narratives. When these forms of institutional authority reinforce each other, the presumption of accuracy deepens. Defendants and impacted communities must then prove that what appears authoritative in visual, computational, and institutional terms is misleading. That is a heavier burden than it might seem.

That redistribution of burden quietly reshapes power. 

If prosecutors encounter government-distributed media portraying a person as distressed, aggressive, or unstable, it may color charging assessments. If judges see an official visual narrative early, it may influence bail determinations or credibility assumptions. If jurors later encounter conflicting media (a corrected version, raw footage, or a defense-produced account), the first impression may already be fixed.

From a governance perspective, government agencies should treat the deliberate publication of AI-altered or digitally modified arrest media as a liability exposure, not a communications tactic. But that framing assumes agencies see manipulation as a risk to be managed. When an administration views narrative control as a strategic asset, and faces no explicit penalty for distorting the record, there is little internal incentive to stop. Without enforceable disincentives, voluntary restraint is not a realistic expectation. That is precisely why external guardrails are not optional.

Official media can now be altered, distributed at scale, and absorbed as fact before any legal proceeding begins. Meeting that reality requires guardrails that create accountability where institutional self-interest currently fills the void.

First, agencies should be prohibited from publishing materially altered images or videos of arrests or enforcement actions without clear, prominent disclosure.  

Second, agencies should be required to conduct internal legal review before any digital modification of arrest-related media and to preserve unedited versions as discoverable by default.

Third, agencies should be required to disclose when automated systems, algorithms, or AI tools contribute to incident summaries, reports, or public releases.

Finally, and as a foundation for all of this, independent investigations into federal law enforcement killings should be standard practice, not measures triggered only after public outcry.

At the same time, several foundational research questions remain unresolved:

  • How frequently do government agencies publish AI-altered or digitally modified arrest media, and under what internal approval processes?
  • When such media exists, are unedited originals reliably preserved and disclosed in discovery?
  • How are machine-generated or machine-assisted police narratives currently classified in practice: evidence, metadata, or internal notes?
  • What logging and provenance architectures would be required to make narrative and visual modifications meaningfully auditable?

This piece does not resolve those questions. It maps a problem space that legal institutions are not yet equipped to govern.

Trust in legal institutions depends on more than formal doctrine. It depends on a shared belief that evidence is not being quietly engineered, that narratives are not pre-packaged, and that accountability remains possible. When people confront lethal state action alongside a state-sanctioned version of what happened, legitimacy erodes. As we see today, that erosion is not theoretical.

Nicholas E. Stewart is executive director of the Justice Education Project, the first national Gen Z–led criminal justice reform organization. A criminology graduate of the University of South Florida, he is the lead author of Next Steps Into Criminal Justice Activism: Technology, Ethics, and the Future of Justice, a youth-authored book on AI and the criminal legal system featuring interviews with leading law faculty. His research has been published in peer-reviewed journals, including Psychological Services, his writing has appeared in outlets including The Hill and Teen Vogue, and his findings have been briefed to organizations such as Human Rights Watch and the Sentencing Project.