Artificial intelligence, for all its promise, is also enabling a new era of extensive domestic surveillance in the United States. This investigation examines how undisclosed agreements and heavy lobbying spending by tech giants contribute to what can be described as a “profit-driven panopticon.” This rapid “domestic surveillance creep” threatens fundamental civil liberties and due process, reshaping the relationship between citizens and the state.
Hidden Contracts and Controversial Deployments
AI surveillance and policing systems are becoming “increasingly prevalent in cities across the United States.” While the Department of Homeland Security (DHS) has issued directives on AI use, ostensibly prohibiting “unlawful or improper systemic, indiscriminate or large-scale monitoring, surveillance or tracking of individuals,” these directives still allow AI outputs to serve as a basis for law enforcement actions, raising concerns about their true protective scope.
The DHS has confirmed its use of digital tools to analyze social media posts for individuals applying for visas or green cards, specifically searching for “extremist” rhetoric or “antisemitic activity,” which raises significant questions about definition and potential mislabeling.
A particularly concerning example involves Palantir, which holds a multimillion-dollar contract with U.S. Immigration and Customs Enforcement (ICE), part of a larger five-year, $96 million agreement with DHS signed in 2022. This contract enables “complete target analysis of known populations” through ICE’s Investigative Case Management (ICM) system. The ICM is a massive database containing real-time tracking tools and data from various agencies, allowing agents to categorize individuals using “hundreds of highly specific categories” and enabling mass profiling. Controversies include “disappearances” and deportations of green card and visa holders under the Trump administration for expressing views “at odds” with U.S. foreign policy, often with families and lawyers unaware of their whereabouts.
Beyond immigration, the FBI uses AI for advanced video analytics capable of identifying license plates, extracting text, detecting objects, and tracking faces for movement analysis, compressing reviews that once took a year into a matter of days. The Transportation Security Administration (TSA) also employs AI systems for identity verification at security checkpoints, prompting “ongoing questions about civil liberties and transparency.”
At the state level, the Texas Department of Public Safety (DPS) has built an “expansive surveillance apparatus” powered by AI under Governor Greg Abbott’s $11 billion “Operation Lone Star”. This includes controversial AI software tools such as:
- Clearview AI for facial recognition, under a contract extended to 2030 via a $1.2 million renewal in January 2025, despite concerns over data mining without consent and racial disparities in accuracy for darker skin tones.
- Automatic license plate readers (Motorola Solutions’ LEARN database).
- “Drawbridge cameras,” which have detected over 2.1 million people.
Facial recognition is also widely used by police departments in major U.S. cities, leading to documented wrongful arrests due to AI misidentification. Experts warn that “AI law enforcement tends to undermine democratic government, promote authoritarian drift, and entrench existing authoritarian regimes.”
Major defense and tech contractors like Booz Allen Hamilton, Accenture Federal Services, SAIC, and Palantir are actively scaling AI solutions for federal missions, including national security, defense, and law enforcement. This creates a “profit-driven panopticon” where corporate financial interests are linked to the expansion of state surveillance power. The continued contracts for controversial technologies, despite documented biases and privacy concerns, highlight that financial incentives often override ethical considerations and public outcry.
Visualizing the Domestic Digital Dragnet
Table: U.S. Government AI Surveillance: Key Contracts and Controversies
| U.S. Agency/Law Enforcement Body | Primary Contractor | Type of AI Surveillance System/Capability | Contract Value/Duration | Documented Controversy/Abuse |
| --- | --- | --- | --- | --- |
| DHS/ICE | Palantir | Investigative Case Management (ICM) database, target analysis | $96 million (5-year, from 2022) | Mass profiling, “disappearances” and deportations of migrants/visa holders based on views |
| Texas Department of Public Safety (DPS) | Clearview AI | Facial recognition software | $1.2 million (extended to 2030) | Data mining without consent, racial bias in identification, erosion of privacy |
| Texas Department of Public Safety (DPS) | Motorola Solutions | LEARN database (automatic license plate readers) | Unknown | “Dragnet” surveillance, warrantless tracking of public movements |
| Texas Department of Public Safety (DPS) | (Various) | Drawbridge cameras | Unknown | Detection of 2.1M people, apprehension of 1.1M, concerns about a police state |
| FBI | (Internal/Various) | Advanced video analytics | Unknown | Processing vast CCTV footage, object/face tracking, raises civil liberties questions |
| TSA | (Internal/Various) | AI systems for identity verification | Unknown | Ongoing questions about civil liberties and transparency at checkpoints |
| Various U.S. police departments | (Various) | Facial recognition technology | Unknown | Wrongful arrests due to AI misidentification, systemic inaccuracy for people of color |
Gaps and Loopholes in U.S. Laws and Oversight
Despite growing concerns, critical gaps and loopholes in U.S. laws and oversight allow AI surveillance to expand largely unchecked. While DHS policy nominally prohibits AI outputs from being the “sole basis” for law enforcement actions or denying government benefits, this language creates a significant loophole, allowing AI to be a contributing factor, thereby sidestepping stricter oversight and accountability.
A new Office of Management and Budget (OMB) memo sets “minimum risk mitigation practices” for federal agencies but critically allows agencies “far too much discretion to opt out of key safeguards”. Agencies can waive minimum practices if they “alone” deem that compliance would “increase risks to safety or rights overall,” or “create an unacceptable impediment to critical agency operations”. Such “vague criteria are prone to abuse,” and agencies can opt out if they decide the AI is not a “principal basis” of a decision, a loophole that has undermined the effectiveness of similar laws. This effectively allows the very entities deploying AI to define the limits of their own oversight.
The consequences of these gaps are profound:
- AI systems are known to amplify discrimination; for instance, facial recognition technology is less accurate for people of color, leading to disproportionate blocking of Black asylum seekers.
- A Department of Justice algorithm overpredicted re-offending rates for Black, Asian, and Hispanic individuals, making them less likely to qualify for early release.
- “Longstanding weaknesses in how agencies police themselves” and chronically understaffed privacy and civil rights watchdogs could undermine critical oversight.
Furthermore, the absence of a comprehensive national privacy bill in the U.S. means there are “few legal safeguards to limit workplace computer or network surveillance,” allowing broad monitoring without consent. While the responsibility for protections rests with Congress, legislative action has been slow and fragmented.
The Trump administration’s recent rescission of the Biden-era AI Diffusion Rule, which would have imposed export controls on AI models, was justified by claims that it “would have stifled American innovation and undermined US diplomatic relations.” This signals a strategic shift toward less regulation on AI exports, potentially allowing more AI technology to flow even to “trusted foreign countries” under the guise of maintaining global AI dominance.
Chart: Gaps in U.S. AI Oversight
Lobbying Billions: Corporate Influence on AI Policy
Powerful tech companies are engaged in extensive lobbying efforts to influence AI surveillance policy, profoundly impacting public accountability, privacy, and democratic governance. Lobbying activity on AI-related issues has surged dramatically, with over 3,400 lobbyists deployed in 2023, a staggering 120% increase from 2022. A disproportionate 85% of lobbyists hired in 2023 represented corporations or corporate-aligned trade groups, indicating strong corporate influence.
While the tech industry led with nearly 700 lobbyists, a broad range of other industries also actively engaged in AI lobbying. Top lobbying clients include the U.S. Chamber of Commerce, Intuit, Microsoft, the Business Roundtable, and Amazon. Major tech companies like Amazon, Meta, Google parent company Alphabet, and Microsoft each spent over $10 million on lobbying in 2023. These companies possess a “sophisticated lobbying apparatus” that has “outgunned the efforts of other organizations,” including civil society groups.
A significant portion of the industry actively lobbies “against regulating AI, arguing that regulation would impede technological progress,” a common refrain in these efforts. Lobbying is heavily concentrated on the White House, with over 1,100 lobbyists targeting it in 2023, nearly double the number directed at any other federal agency. The overwhelming influence of powerful corporate interests “likely gives them a disproportionate influence on the development of AI laws in the U.S.”
This unchecked influence contributes to the erosion of the “reasonable expectation of privacy” and fosters a “climate of fear” and “self-censorship” among the populace. When corporate profits and perceived technological advantage are prioritized through aggressive lobbying, the public’s right to privacy and due process is systematically diminished. This dynamic fundamentally distorts democratic governance, allowing powerful private interests to shape laws that govern highly invasive technologies with minimal public accountability.
Chart: AI Lobbying Spending Surge
Conclusion: The Algorithmic Eye is Watching
The investigation into domestic AI surveillance reveals a disturbing landscape where technological advancement is inextricably linked to the erosion of fundamental rights and the consolidation of unchecked power. The rapid expansion of AI surveillance capabilities by U.S. government agencies and law enforcement through hidden contracts and controversial deployments creates a “profit-driven panopticon” that threatens civil liberties and due process.
This expansion is facilitated by critical gaps and loopholes in U.S. laws and oversight, where self-regulation and vague criteria allow agencies to bypass safeguards. Furthermore, the disproportionate sway of powerful tech companies, through aggressive lobbying, directly undermines public accountability and democratic governance.
The consequence is a future where the promise of AI is overshadowed by its perilous reality as a tool for pervasive control, leaving citizens increasingly vulnerable to unseen algorithmic chains. The time for transparency, accountability, and stringent regulation is not merely urgent; it is long overdue.