THE NEW AMERICAN AUTHORITARIANISM


Biometric surveillance, executive power, and deportations without due process.

Clearview, Palantir, ICE: dissent under algorithmic scoring. On March 8, 2025, federal agents arrest a student inside his university residence in Manhattan. They don’t have a warrant for a crime. They have an executive order.

His name is Mahmoud Khalil. A Palestinian Columbia student, green card in his pocket. Hours after the arrest, Donald Trump celebrates on Truth Social: “The first arrest of many to come.”

Khalil is not the protagonist of this story. He is its symbol. It could be anyone: one of the 2,000 international students who, between March 2025 and January 2026, saw their visas revoked; one of the nine public figures arrested or deported for expressing opinions against the government; or one of the millions of faces captured every day by a surveillance infrastructure that no longer distinguishes between suspect and citizen, between threat and dissent.

To understand what is happening, you have to see the full picture. The United States is undergoing a transformation across three interconnected dimensions: technological, with over $300 million invested in domestic surveillance tools; legal, with the reactivation of executive powers that enable deportations without trial; ideological, with the systematic use of the immigration apparatus to target political dissent.

None of this is without precedent. But the combination is.

Between 1956 and 1971, the FBI’s COINTELPRO program surveilled and sabotaged political movements from communists to the Black Panthers. It operated with human informants and phone wiretaps, required significant resources for each individual target, and when it was exposed Congress imposed limits on domestic intelligence. In 2013, Snowden’s revelations showed that the NSA’s PRISM program collected communications from Google and Facebook servers at industrial scale — but it was formally aimed at foreigners, and American citizens enjoyed theoretical protections.

The surveillance of 2025 combines PRISM’s scale with COINTELPRO’s political targeting, operates through immigration authority where constitutional protections are weaker, does not distinguish citizens from non-citizens in data collection, and above all is not secret. The contracts are public, the apps are documented, and the deportations are celebrated on presidential social media. COINTELPRO was dismantled when it became public. PRISM was partially reformed after Snowden. Biometric surveillance in 2025 operates in broad daylight, and that paradoxically makes it harder to challenge.


The new architecture of surveillance

In September 2025, ICE finalizes a $9.2 million contract with Clearview AI through a “sole-source” process, meaning a direct award without competitive bidding. Federal rules require competitive procurement for contracts above $250,000, and the exception requires that only one vendor has genuinely unique capabilities. In facial recognition — a market with dozens of competitors — that justification is, at best, debatable. But the contract goes through, and with it a new era begins.

To understand what makes Clearview different, you have to start with what existed before. The traditional federal database, called IDENT and recently renamed HART, contains about 270 million biometric records collected through official interactions with the state: visa applications, border crossings, arrests, asylum claims. Clearview holds over 50 billion images — nearly two hundred times more — obtained in a completely different way: scraping the public web. Instagram, LinkedIn, Facebook, news sites, online photo albums, wedding pictures, college reunions. Everything that has ever appeared online with a human face is potentially in the system, without consent — and often without the subjects even knowing they’ve been indexed.
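The scale gap between the two databases is easy to check with back-of-the-envelope arithmetic, using only the figures cited above:

```python
# Back-of-the-envelope check of the scale gap between the two systems,
# using the figures cited in this article.
ident_records = 270_000_000        # ~270 million records in IDENT/HART
clearview_images = 50_000_000_000  # over 50 billion scraped images

ratio = clearview_images / ident_records
print(f"Clearview holds roughly {ratio:.0f} times more images")
# → roughly 185 times
```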

The difference is not only quantitative. IDENT identifies people who have already interacted with the system, who are already “inside” in some way. Clearview can identify anyone who has ever appeared in an online photo, including individuals who have never broken a law, crossed a border, or drawn any authority’s attention.

As Albert Fox Cahn, director of the Surveillance Technology Oversight Project, told the Guardian: “It’s no longer about what you did. It’s about who you are, where you’ve been, who you know.”

But raw data alone has limited value. You need a system that aggregates it, correlates it, and turns it into operational intelligence. That system is ImmigrationOS, developed by Palantir under a $30 million contract — the real brain of the operation. ImmigrationOS is a “data fusion” platform that integrates heterogeneous streams into a single interface: biometric records from IDENT and Clearview, the full history of entries and exits, tax and social security data, social media activity including posts and contact networks, vehicle movements tracked by license-plate readers, and geolocation purchased from commercial brokers — the phone apps that sell user location to advertisers, and from there to anyone willing to pay.

An American city under biometric surveillance: checkpoints, facial scanners, and corporate towers symbolizing executive control and mass surveillance.

To see what this means in practice, imagine a stop in Los Angeles. An agent scans a person’s face. Within seconds, ImmigrationOS returns a profile: the subject crossed the border at Tijuana three times in the last year, their vehicle was detected in a neighborhood under observation, one of their Facebook contacts has a pending removal order, and their employer appears not to have paid social security contributions under their name. None of these signals, taken alone, is illegal. But together they form what the system calls a “risk profile.” The algorithm is not looking for crimes already committed — it’s looking for correlations, patterns, associations. The crime, if any, is built afterward.
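The correlation logic in that scenario can be sketched as a toy scoring function. Everything below is hypothetical — the signal names, weights, and threshold are invented for illustration, since the actual ImmigrationOS scoring is proprietary and not publicly documented:

```python
# A toy sketch of correlation-based risk scoring. All signal names,
# weights, and the threshold are invented for illustration; the real
# ImmigrationOS logic is proprietary and undocumented.
SIGNAL_WEIGHTS = {
    "frequent_border_crossings": 30,
    "vehicle_in_watched_area": 20,
    "contact_has_removal_order": 35,
    "employer_payroll_anomaly": 15,
}
HIGH_RISK_THRESHOLD = 50  # arbitrary cutoff for this sketch

def risk_score(signals: set[str]) -> int:
    """Sum the weights of whichever signals are present in a profile."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if name in signals)

# No single signal crosses the threshold on its own...
assert all(w < HIGH_RISK_THRESHOLD for w in SIGNAL_WEIGHTS.values())
# ...but the combination from the Los Angeles scenario does:
profile = {"frequent_border_crossings", "vehicle_in_watched_area",
           "contact_has_removal_order", "employer_payroll_anomaly"}
print(risk_score(profile))  # → 100, flagged "high risk"
```

The point the sketch makes is structural: each input is legal on its own, and the "high risk" label emerges only from the combination.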

The interface between this brain and the real world is Mobile Fortify, an application developed by Customs and Border Protection that turns any government smartphone into a portable biometric station. The agent frames the subject’s face; the system also captures fingerprints in “contactless” mode via the camera; and within seconds it queries the FBI, the State Department, visa databases, and state warrant systems simultaneously. The output includes full identity, immigration status, and any pending removal orders.

Two aspects of Mobile Fortify deserve special attention. The first is the authority the system has acquired. According to Congressman Bennie Thompson, ranking member of the House Homeland Security Committee, ICE agents treat app results as “definitive,” more reliable than paper documents like birth certificates. In a case documented by 404 Media, a woman identified as “MJMA” was scanned twice during the same stop, and the app returned two completely different identities. The agent proceeded anyway — with no tools to resolve the inconsistency, and no incentive to try. The second aspect is consent — or rather, its total absence. An internal DHS document is explicit: “ICE does not provide the opportunity for individuals to decline or consent to the collection and use of biometric data.” Scanning is mandatory for anyone stopped, regardless of citizenship or legal status.

Every captured image is transmitted to the Automated Targeting System, where it is retained for fifteen years. ATS is not a simple archive — it is a scoring platform created after 9/11 to identify potential terrorists, later expanded to include virtually anyone who crosses U.S. borders or is stopped by federal agents. It assigns risk scores based on travel patterns, associations, behaviors, and algorithmic correlations. A person can be labeled “high risk” without committing any crime, based on where they went, who they know, what they posted online. ATS is accessible not only to DHS, but also to the FBI, DEA, and — through sharing agreements — intelligence agencies. Fifteen-year retention means that the image of an American citizen stopped in a random check in 2025 remains available until 2040 for future investigations or correlations.

The scale of these systems creates a mathematical problem that matters. Consider a facial recognition algorithm with a 0.1% false positive rate — excellent by industry standards and consistent with Clearview’s performance in NIST testing. Applied to 50 billion images, that 0.1% yields 50 million potential misidentifications. This is not a flaw fixable with “better algorithms” — it is a mathematical property of matching at massive scale. And these errors are not evenly distributed: NIST studies show higher false positive rates for non-white faces, for women, and for older people. Those 50 million errors disproportionately hit those who are already vulnerable.
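The arithmetic above is simple enough to verify directly; this is only a worked check of the cited numbers, not a model of any real system:

```python
# Worked check of the false positive arithmetic cited above.
false_positive_rate = 0.001       # 0.1%, strong by industry standards
gallery_size = 50_000_000_000     # ~50 billion images in the gallery

# Expected spurious matches when probes are compared at this scale:
expected_false_matches = false_positive_rate * gallery_size
print(f"{expected_false_matches:,.0f} potential misidentifications")
# → 50,000,000 potential misidentifications
```

No improvement in the algorithm changes the structure of this result: as long as the false positive rate is nonzero, multiplying it by a gallery of tens of billions yields errors in the tens of millions.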

Illinois illustrates a different, subtler but equally significant problem. In 2008 the state passed the Biometric Information Privacy Act, America’s strictest biometric privacy law, requiring explicit consent before collecting identifiers like faceprints. In 2022 the ACLU sued Clearview for massive violations and won: a court injunction bans the company from providing services to Illinois state and local agencies. Chicago police can’t use Clearview. But federal agents operating in the same city are not bound by that restriction — state law does not apply to the federal government. The result is a patchwork geography of privacy where protections depend not on where you are, but on who stops you.

The 2025 settlement added yet another chapter. Clearview lacked sufficient cash to pay the $51.75 million damages ordered by the court. In normal class actions this leads to bankruptcy or installment payments, but the judge chose a different path: converting part of the award into equity, granting plaintiffs — people whose images were collected without consent — 23% of the company’s shares. The victims of surveillance are now Clearview shareholders, and their financial return depends on the company’s commercial success — the expansion of the very practices they challenged. Illinois shows a structural problem: local protections, even when they win in court, cannot contain federal power — and even legal victories can produce outcomes that strengthen rather than limit contested practices.


The new face of American authoritarianism: dissent made deportable

Secretary of State Marco Rubio did not accuse Mahmoud Khalil of a crime. He invoked a provision allowing visas and residency status to be revoked when an individual’s presence entails “adverse consequences for U.S. foreign policy.” Rarely used before 2025, this formula turns political opinion into a deportability factor. No illegal act needs to be proven — it is enough that expressed positions are deemed incompatible with U.S. interests, defined unilaterally by the executive with minimal judicial review.

The operational tool is SEVIS, the database managing records for international students and scholars in the United States. Terminating a SEVIS record automatically revokes legal status, without a prior hearing or formal notice. Students discover they are deportable when they try to re-enter after travel, or when they are stopped for a check. Between March and May 2025, over 2,000 international students were subjected to this, and patterns documented by journalists and civil rights organizations indicate systematic targeting of individuals involved in pro-Palestinian activity — even when involvement was limited to signing petitions or attending university-authorized demonstrations.

Two of the nine public cases illustrate the pattern clearly. Rümeysa Öztürk, a Turkish Fulbright scholar at Tufts, was detained after co-signing an op-ed critical of Israel’s response in Gaza. She did not attend protests, did not violate campus rules — she expressed an opinion in an academic publication. “I didn’t know an opinion could cost you everything,” she told the Boston Globe after being released on bail. Badar Khan Suri, an Indian researcher at Georgetown, was arrested over social media posts criticizing the Gaza war and U.S. support for Israel. “I wrote what I believed. In America. I thought it was allowed,” he told The Washington Post. The other cases — Yunseo Chung, Ranjani Srinivasan, Aditya Wahyu Harsono, Momodou Taal, Mohsen Mahdawi, Leqaa Kordia — follow the same script: opinions, associations, words. No violence alleged, no crime charged beyond expressed thought.

Judge William G. Young, in one ruling related to these cases, wrote that the system appears “intentionally designed to chill the First Amendment rights of noncitizens.” The phrasing matters: not “side effect,” not “unintended consequence,” but intentional design.

A detail uncovered by The Intercept’s reporting sheds light on how targets are identified. DHS officials admitted consulting Canary Mission during case assessments. Canary Mission is a website compiling dossiers on students and faculty it labels anti-Israel or antisemitic, including photos, university affiliations, social media excerpts, and event participation. It is run anonymously, funded by undisclosed donors, accountable to no public authority, and offers no evidentiary standards for inclusion or procedures to contest listings. A crowdsourced blacklist, built without accountability, has been integrated into the decision-making processes of a U.S. federal agency.



The surveillance industrial complex

The budget tells the story better than any statement. ICE allocated more than $300 million to surveillance technologies in FY2025–2026 — not for border patrols or detention centers, but for software, algorithms, and population monitoring capabilities.

Over $100 million is earmarked for social media monitoring: tools that analyze public content to identify sentiment, map relationship networks, and pinpoint central figures in protest movements. The stated goal is threat detection; the practical effect is systematic surveillance of political speech. The system isn’t searching for crimes — it’s searching for opinions, associations, patterns of thought.

Contracts with Cellebrite and Paragon cover complementary capabilities. Cellebrite, an Israeli company, specializes in extracting data from seized mobile devices: its products can bypass passwords and encryption to access messages, photos, browsing history, and historical GPS locations — but it requires physical access to the device. Paragon provides something different: spyware for remote surveillance. Its main product, Graphite, can be installed on a phone without any user interaction — knowing the number is enough. Once active, it transmits messages, calls, location, and screen activity in real time. The distinction is crucial: Cellebrite extracts data from devices already seized; Paragon enables invisible surveillance of anyone, anywhere, without the target knowing. ICE contracts with both suggest both forensic and active interception capabilities.

This apparatus is structured as a public-private hybrid. Coercive authority remains federal — only ICE can arrest, only DHS can deport. But the information infrastructure is largely outsourced. Clearview owns and manages the biometric database; Palantir develops and maintains the aggregation platform; Cellebrite and Paragon provide intrusion capabilities; dozens of smaller contractors run specific components.

This structure creates a systemic accountability problem that attorney Paromita Shah of Just Futures Law described to the Los Angeles Times as “non-accountability by design.” When something goes wrong — a misidentification, an unjust detention — who is responsible? The operator benefits from qualified immunity and is rarely personally liable unless they violate “clearly established” rights, and using an agency-authorized app typically doesn’t. The federal agency benefits from sovereign immunity and can be sued only in limited cases authorized by Congress, excluding “discretionary functions” like enforcement decisions. The contractor is shielded by indemnification clauses: if sued over harms stemming from government use, the agency covers legal costs and damages. The concrete result is that a person wrongly matched, detained for hours, who loses a flight or job because of an error, often has no clear legal target. They can spend years and tens of thousands on lawyers only to discover that none of the defendants is legally accountable for the harm.


American authoritarianism: when the government doesn’t answer

The institutional response to this transformation has been fragmented and largely ineffective.

In Congress, Democrats introduced bills to limit federal facial recognition use, but none made it out of committee. Republicans defended the programs as necessary for national security. In the courts, there have been partial and temporary wins: a judge ordered Khalil’s release in April 2025, but an appeals court reversed in January 2026, and the pattern repeats — every victory is appealed, every appeal resets the baseline or makes it worse. The universities involved — Columbia, Tufts, Georgetown — issued statements of “concern,” but none pursued legal action to defend their students. “Universities are afraid,” an anonymous faculty member told the Chronicle of Higher Education. “Afraid of losing federal funds, of being accused of supporting terrorism.”

The only actors offering sustained resistance are civil society organizations: the ACLU, Electronic Frontier Foundation, and Council on American-Islamic Relations have launched lawsuits, organized legal defense, and documented abuses — with resources infinitely smaller than the apparatus they challenge.

International reactions follow a predictable pattern. Turkey summoned the U.S. ambassador after Öztürk’s arrest. South Korea expressed “concern” over Chung’s case. India remained silent about its citizens. Countries with economic or strategic leverage protest; others don’t. But a less visible consequence is unfolding: U.S. soft power has long rested on the promise of being a refuge for dissidents, intellectuals, and students worldwide. When a Fulbright scholar is arrested for an op-ed, that promise hollows out — not through sudden collapse, but through slow erosion that redefines what America means to those watching from outside.


The precedent: American authoritarianism in the long run

Data in the Automated Targeting System is retained for fifteen years. Images in Clearview are effectively permanent. Multi-year Palantir contracts create operational dependencies that are hard to unwind. A future administration could, in theory, dismantle ICE, cancel contracts, and fire officials. But the data would remain — on contractor servers, in backups, in datasets traded across the commercial surveillance market. A captured face cannot be “uncaptured.” The infrastructure outlives its builders.

New DHS rules proposed in late 2025 accelerate this dynamic. The concept of “continuous vetting throughout the immigration lifecycle” extends biometric collection to ever broader categories: minors including newborns, U.S. citizens sponsoring family visas, anyone “associated” with immigration processes. DHS estimates 1.12 million new biometric collections per year under this regime. The perimeter of surveillance expands by regulatory definition, without new laws or public debate.

The First Amendment formally protects free expression. But if expressing an opinion can trigger deportation, status revocation, insertion into watchlists consulted for fifteen years, how many will choose silence? The system doesn’t need to deport every dissident to work — it only needs the possibility to be real and visible. It only needs Khalil to be arrested, Öztürk to be detained, and their stories to circulate through campuses and immigrant communities. The message reaches everyone else. And the effect extends beyond directly vulnerable non-citizens: it reaches those who work with them, study with them, organize with them — anyone who suspects that one day they could land on the wrong side of an algorithmic correlation.

Documenting is already a form of resistance. Every recorded arrest, every cataloged algorithmic error, every archived procedural abuse builds the record for future accountability — if not judicial in the short term, then historical in the long term. Regimes change. Archives remain. Understanding the technical architecture is necessary to contest it: the systems described here are not natural forces, not inevitable — they are human constructions funded by public budgets and operated by individuals who can refuse, expose, resist.

But none of this works without a preliminary act: naming what is happening for what it is. Not “democratic erosion,” not “security drift,” not “concern.” A deliberate construction of political control through technological and legal instruments. Systematic targeting of dissent through immigration authority. Mass biometric surveillance without consent, without effective limits, without accountability.

This is the new American authoritarianism. Naming it is the precondition for any response.
