The Pentagon Is Wrong About How AI Works, and It's Putting Us All In Danger
As the US Defense Department accuses Anthropic of retaining the ability to flip a kill switch on its AI during wartime, experts say the claim betrays a deep misunderstanding of how large language models actually work
The Department of Justice filed a response in federal court on March 17, 2026, that reads like the premise of a techno-thriller. Anthropic, the San Francisco-based AI company behind the Claude chatbot, might sabotage American military operations by covertly manipulating its own models. The department alleged the company’s engineers could, at the flip of a digital switch, disable or distort AI systems deployed on Pentagon infrastructure, potentially compromising classified operations and endangering warfighters in active combat zones.
This is not fiction. This is the United States government’s official legal position. It is also, as a technical matter, impossible.
In the filing, which responds to Anthropic’s lawsuit against the Department of Defense, Justice Department attorneys assert that Defense Secretary Pete Hegseth “reasonably” determined that “Anthropic staff might sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, or operation of a national security system.” The government’s narrative is clear. Anthropic’s Claude models are not merely software products but potential weapons of corporate sabotage, vessels through which a recalcitrant vendor could wreak havoc on American military operations should the company decide its ethical “red lines” have been crossed.
Anthropic’s response, delivered in court filings and public statements, is equally unequivocal. The company has no back door into Department of Defense systems, cannot log into government infrastructure, and lacks any mechanism to alter, disable, or influence models once they have been deployed onto secured military networks. As Anthropic chief executive Dario Amodei wrote in a company blog post published February 27, 2026, the company “cannot in good conscience accede to their request” to remove safeguards against mass surveillance and fully autonomous weapons. But the company also cannot remotely sabotage its own technology, because the very architecture of large language models makes such cinematic scenarios impossible.
Misunderstanding AI is the Real Risk to National Security
The collision between these two narratives reveals something far more consequential than a contract dispute between a tech company and its largest potential government customer. It exposes a fundamental fracture in how American institutions understand a technology that is rapidly becoming central to national security, economic competitiveness, and democratic governance. The Pentagon is treating Claude like traditional software: remotely patchable, centrally controlled, governed by explicit logic that human engineers can modify at will. The reality is radically different. Modern large language models are static mathematical formulas, billions of numbers frozen in time, whose behavior emerges from complex statistical patterns rather than editable code. You cannot flip a kill switch on a matrix. You cannot alter a weight file remotely any more than you can edit a photograph after it has been printed and mailed.
This misunderstanding is not a mere technical quibble or academic distinction. It will shape defense procurement decisions, regulatory frameworks, and the capacity of democratic governments to govern technologies they do not comprehend. If policymakers continue to treat AI systems as remotely controllable executables when they are, in fact, immutable mathematical filters, the resulting policies will target imaginary vulnerabilities while ignoring genuine risks. The stakes could not be higher. As AI systems proliferate through military operations, electoral campaigns, and critical infrastructure, the public’s ability to understand what these systems can and cannot do will determine whether democracy can survive the technological transition that is already underway.
What LLMs Actually Are
To understand why the Pentagon’s allegations miss the mark, one must first grasp what a large language model actually is. Equally important is understanding what it is not.
At their foundation, modern large language models are neural networks trained on vast corpora of text to predict the next token in a sequence. This deceptively simple objective (given the words “The cat sat on the,” predict what comes next) masks extraordinary complexity. Through exposure to hundreds of billions of words drawn from books, articles, code repositories, and web pages, these networks learn intricate statistical patterns. Not merely which words tend to follow which, but syntactic structures, semantic relationships, factual associations, and even reasoning patterns that emerge from the co-occurrence of concepts in training data.
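A toy illustration makes the objective concrete. The snippet below is emphatically not a language model; it simply counts, in a tiny made-up corpus, which tokens follow the context “on the.” Real LLMs learn vastly richer statistics through neural networks, but the underlying objective, estimating the probability of the next token, is the same.

```python
from collections import Counter

# A toy corpus (an assumption for illustration, not real training data).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat slept on the mat ."
).split()

context = ("on", "the")
# Count which token follows each occurrence of the two-word context.
followers = Counter(
    corpus[i + 2]
    for i in range(len(corpus) - 2)
    if (corpus[i], corpus[i + 1]) == context
)
total = sum(followers.values())
probs = {tok: n / total for tok, n in followers.items()}
print(probs)  # "mat" comes out twice as likely as "rug"
```

Scale this idea up from raw counts to billions of learned parameters and you have, in caricature, the training objective of a large language model.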
The crucial point, and the one that appears to have escaped the Defense Department’s analysts, is how this learning is implemented. Traditional software consists of explicit instructions: if-then statements, loops, functions that human programmers write and can modify. When Microsoft issues a security patch for Windows, engineers are editing source code, recompiling, and distributing new executable files. The program remains fundamentally a set of instructions that the computer follows.
Large language models are different. They are implemented not as code but as mathematical matrices: enormous grids of numerical values (“weights”) that transform input vectors into output probabilities through successive layers of mathematical operations. The “knowledge” of an LLM exists not as explicit rules but as patterns embedded in these weights, patterns so distributed and interconnected that no human can point to a specific parameter and say, “This controls the model’s opinion on tax policy,” or “This determines whether it will answer a harmful request.”
When training completes, these weights are frozen. They become totally static: a multi-gigabyte file containing billions of floating-point numbers. That’s it. Running the model is not a matter of executing instructions but of performing linear algebra. Input tokens are converted to vectors, multiplied by weight matrices, passed through activation functions, and transformed through successive layers until probabilities emerge. The model operates without cognition or decision-making in any meaningful sense. It filters inputs through a fixed mathematical structure and produces outputs that reflect patterns learned during training.
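The point can be made concrete in a few lines of code. The sketch below is a toy stand-in, with two small random matrices in place of billions of learned parameters, but the mechanics are faithful: inference is multiplication, activation, and normalization over frozen numbers, and the same input always produces the same output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": two frozen weight matrices, standing in for the billions of
# parameters in a real LLM. After "training," they never change.
W1 = rng.standard_normal((8, 16))   # layer 1 weights (frozen)
W2 = rng.standard_normal((16, 4))   # layer 2 weights, 4-token "vocabulary"

def forward(x):
    """Inference is just arithmetic: multiply, activate, normalize."""
    h = np.maximum(0, x @ W1)            # linear transform + ReLU activation
    logits = h @ W2                      # project to vocabulary size
    exp = np.exp(logits - logits.max())  # softmax -> next-token probabilities
    return exp / exp.sum()

x = rng.standard_normal(8)               # an input embedding
p1, p2 = forward(x), forward(x)
print(np.allclose(p1, p2))               # True: same weights, same input, same output
```

Nothing in this computation consults a server, checks a date, or waits for a command. There is nowhere for a kill switch to live.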
The model is a pasta strainer, not a pot of boiling water. It doesn’t “do” anything; it simply filters the input and produces output, usually as text.
This distinction matters profoundly for the Pentagon’s allegations. Anthropic’s engineers cannot “log into” a deployed model and alter its behavior any more than a photographer can log into a printed photograph and change the image. Once the weights are transferred to Department of Defense infrastructure, Anthropic has no technical access to them. The model file is simply a collection of numbers; running it requires only computational resources and the appropriate inference software, which is itself open-source and widely available. There is no phone-home mechanism, no remote administration console, no kill switch embedded in the weights themselves.
As researchers at Stanford’s Human-Centered Artificial Intelligence Institute noted in a 2024 article, the real strategic asset is not the thin serving code but the model weights and underlying training data, which are extremely costly to reproduce. The weights represent billions of dollars in computational resources and the accumulated statistical extraction of humanity’s written output. Once transferred, they are simply files, albeit files of extraordinary sophistication and value.
Inside the Black Box: A Brief Tour of LLM Architecture
The transformer architecture, introduced by Google researchers in 2017 and now ubiquitous in large language models, provides the structural foundation for understanding why remote manipulation is impossible.
A transformer model consists of stacked layers, each containing two primary components: a multi-head self-attention mechanism and a position-wise feed-forward network. The attention mechanism allows the model to weigh the relevance of different input positions when producing each output position; the feed-forward networks apply learned transformations to these attended representations. Between each layer sit normalization operations and residual connections that enable the training of deep networks.
Critically, none of these components contains executable logic in the traditional sense. There are no conditional branches, no flags that can be toggled, no wartime mode that Anthropic could activate. The attention mechanism is purely mathematical: queries, keys, and values are derived from input embeddings through learned weight matrices, and attention scores are computed via scaled dot-product operations followed by softmax normalization. The feed-forward networks are simple linear transformations with nonlinear activations.
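Stripped of batching, masking, and other engineering detail, scaled dot-product attention fits in a few lines. The sketch below uses small random matrices as stand-ins for learned weights; note that nothing in it branches on a flag, checks an authorization, or touches a network.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Pure arithmetic: no branches, no flags, no network calls."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # scaled dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted sum of values

rng = np.random.default_rng(1)
tokens, d = 5, 8                              # 5 input positions, dimension 8
X = rng.standard_normal((tokens, d))          # input embeddings
# Q, K, V come from the input via learned (and then frozen) weight matrices.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (5, 8)
```

A real transformer stacks dozens of such layers with more moving parts, but the character of the computation is identical: matrix multiplications and normalizations over fixed numbers.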
When researchers speak of “running” a model, they mean performing matrix calculations. The weights in the model are constants and never change between inference calls. A model’s response to “What is the capital of France?” is determined entirely by the fixed values in its weight matrices, values that were established during training, have not changed since, and will not change in the future. The model consults no database, checks no policy server, and evaluates no dynamic rules. It simply performs billions of arithmetic operations and produces a probability distribution over possible next tokens.
If the Defense Department fears that Anthropic might alter model behavior during wartime, it must believe one of two things: either that Anthropic can remotely modify weight files, a technical impossibility once those files are deployed on air-gapped military networks, or that the models themselves contain some form of remote code execution capability that would allow Anthropic to override their behavior. Neither scenario aligns with how transformer models actually function.
The government’s allegations seem to imagine AI systems as remotely administered services, something like a cloud-hosted database where administrators can modify queries or revoke access in real time. But deployed LLMs are not services; they are artifacts. Once Anthropic transfers the weight file to the Pentagon and the Pentagon loads it onto classified systems, Anthropic has no more ability to influence that model than a book publisher has to alter text in a volume already sitting on a reader’s shelf.
Can You Secretly “Edit” an LLM After Training?
The question of whether large language models can be modified after training is not merely theoretical. The emerging field of model editing has produced techniques like MEND, SERAC, ConCoRD, ROME, and MEMIT that aim to correct individual facts or adjust narrow behaviors without retraining models from scratch. These methods represent genuine advances, but their limitations illuminate why the government’s fears of remote sabotage are misplaced.
As a 2024 paper posted to arXiv explains, current editing methods are localized and constrained. They can sometimes update specific facts, changing which person holds a particular office, for instance, but they struggle with the downstream implications of such changes. An edit that updates who the UK prime minister is, yet fails to update related facts about that person’s family or cabinet, demonstrates the brittleness of these interventions.
More fundamentally, model editing requires direct access to the weight matrices themselves. Techniques like ROME (Rank-One Model Editing) and MEMIT (Mass Editing Memory in a Transformer) operate by computing targeted modifications to specific layers, modifications that must be applied directly to the stored parameters. These are not remote operations; they require possession of and computational access to the model weights. Once a model is deployed on Department of Defense infrastructure, Anthropic has no such access.
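To see why such edits demand possession of the weights, consider the deliberately simplified caricature below. It is not the actual ROME algorithm; it merely applies a generic rank-one update to a random matrix standing in for one layer of a deployed model. The essential point survives the simplification: editing means overwriting stored numbers you physically hold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for one layer of a deployed model. In practice this would be
# loaded from a multi-gigabyte weight file sitting on local disk.
W = rng.standard_normal((32, 32))
W_before = W.copy()

# A deliberately simplified caricature of a rank-one edit (not the actual
# ROME algorithm): add the outer product of two vectors to the matrix.
u = rng.standard_normal(32)
v = rng.standard_normal(32)
W += 0.01 * np.outer(u, v)

# The edit is just arithmetic on parameters you possess. Without read/write
# access to the stored weight file, there is nothing to modify.
print(np.abs(W - W_before).max() > 0)  # True: the stored numbers changed
```

There is no variant of this operation that works at a distance; the update has to land on the bytes of the weight file itself.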
The research also reveals a critical limitation that undermines the Pentagon’s narrative of covert manipulation: edited models often exhibit side effects. Changes made to one behavior can unpredictably affect others, and edited models may become unstable or degraded in ways that would be immediately apparent to users. A “sabotaged” model would be unlikely to misbehave subtly in wartime; it would more likely behave erratically or nonsensically, betraying the tampering through its degraded outputs.
Why the Pentagon Is (Once Again) Off-base
The Justice Department’s court filing articulates a theory of AI systems that bears little resemblance to how these technologies actually function. Disabling a deployed LLM would require either physical access to the servers hosting the model or some form of remote kill switch embedded in the weights themselves. No such kill switch exists; transformer architectures include no mechanisms for remote deactivation. The weights are simply numbers, inert until multiplied with input vectors. They contain no logic for checking authorization, no network code for receiving commands, no conditional branches that could be triggered by external signals.
The allegation that Anthropic might “preemptively alter the behavior” of models is equally disconnected from technical reality. Altering model behavior requires modifying weights, which requires computational access to the deployed model files. Once those files reside on classified Pentagon systems, Anthropic has no such access. The company cannot send updates, patch vulnerabilities, or introduce bugs without going through the same procurement and deployment processes that govern any software update.
The Pentagon’s stance appears to conflate two distinct scenarios: API access, where models run on vendor-controlled infrastructure and can be modified or revoked by the vendor, and on-premise deployment, where models run on customer-controlled systems and the vendor has no ongoing access. The Justice Department’s filing discusses Anthropic’s ability to “disable its technology” as if the company were operating a cloud service where flipping a switch could cut off access. But the disputed deployment involves models transferred to DoD infrastructure, the equivalent of shipping a product rather than providing a subscription service.
This confusion has real consequences. If the Defense Department genuinely believes that Anthropic retains the ability to sabotage deployed models, the department is operating under a threat model that does not correspond to actual technical capabilities. Resources devoted to monitoring for remote manipulation or kill switch activation are resources not devoted to genuine security concerns: poisoned training data, compromised fine-tuning pipelines, maliciously modified weights before delivery, or unsafe optimization choices that erode safety constraints.
Real Risks vs. Imaginary “Kill Switches”
The Pentagon’s focus on vendor sabotage, while technically unfounded, distracts from genuine risks that do threaten AI systems deployed in national security contexts. Understanding the difference between realistic vulnerabilities and cinematic scenarios is essential for developing effective security protocols.
Realistic concerns begin with the supply chain of AI systems themselves. As research on backdoor attacks in deep neural networks has demonstrated, malicious actors can implant triggers during training that cause models to behave normally under most conditions but produce targeted outputs when specific patterns appear. These backdoors are implanted during the training phase, not activated remotely after deployment; they exist as patterns in the weight matrices themselves, waiting for their trigger conditions.
The threat here is not that Anthropic might remotely sabotage its own models, but that models might contain vulnerabilities introduced during training, either accidentally or deliberately, that could be exploited by adversaries who discover the trigger patterns. A model trained on poisoned data might refuse legitimate military commands under specific conditions, hallucinate critical information, or produce subtly wrong outputs that could influence operational decisions. These risks are serious, but they are risks of training-time contamination, not runtime manipulation.
Similarly, fine-tuning and optimization choices can impact safety margins and model behavior, but these effects arise during training or retraining, not through invisible runtime levers. Research has shown that fine-tuning aligned language models can compromise safety even when users do not intend to do so; benign fine-tuning datasets can inadvertently degrade the safety guardrails established during initial alignment training. Again, these vulnerabilities emerge from the training process, not from remote manipulation of deployed systems.
The distinction matters for defense procurement. Rather than monitoring for vendor sabotage, a threat that does not exist in the form the Pentagon imagines, security protocols should focus on validating training data integrity, auditing model weights for anomalous patterns, and red-teaming deployed systems against adversarial inputs. Independent model audits, checksum verification of weight files, and continuous monitoring for unexpected behaviors are practical security measures that address real vulnerabilities.
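Checksum verification, at least, is straightforward to implement. The sketch below hashes a stand-in “weight file” with SHA-256 and shows that flipping a single byte changes the digest; the filename and byte contents are illustrative, not any real model artifact.

```python
import hashlib
import os
import tempfile

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a (potentially multi-gigabyte) weight file in streaming chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo on a stand-in weight file: record the digest at delivery, then
# re-verify before every load.
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 1024)            # pretend these bytes are model weights

delivered = sha256_of_file(path)       # hash recorded when the file arrives

with open(path, "r+b") as f:           # simulate tampering: flip one byte
    f.seek(512)
    f.write(b"\x01")

print(sha256_of_file(path) != delivered)  # True: tampering is detectable
```

A procurement office that records the vendor’s digest at delivery and re-verifies it before every deployment gets a guarantee no kill-switch monitoring can provide: proof that not one of the billions of stored numbers has changed.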
Focusing on an implausible sabotage vector also carries opportunity costs. The Defense Department’s dispute with Anthropic has already disrupted operations: the government is working to replace Claude with alternatives from Google, OpenAI, and xAI, a transition that the Justice Department’s filing acknowledges cannot happen immediately because “the Pentagon cannot simply flip a switch at a time when Anthropic currently is the only AI model cleared for use on the department’s classified systems.” This disruption was caused by a contract dispute over use restrictions, not by any demonstrated technical vulnerability, but the government’s response has treated the situation as a security threat requiring immediate mitigation.
The irony is that the Pentagon’s actions may push defense AI systems toward less secure arrangements. If vendors fear that contractual disputes will lead to supply-chain-risk designations and potential bans, they may be less willing to accept government contracts with stringent use restrictions. The result could be a shift toward either brittle in-house models developed without adequate resources or opaque arrangements with vendors who refuse to accept any use limitations, arrangements that are harder to audit and less likely to prioritize safety.
Why Governments Keep Getting AI Wrong
The Pentagon’s mischaracterization of Anthropic’s capabilities is not an isolated incident but part of a broader pattern in which high-level officials frame AI systems using analogies from traditional software, cyber backdoors, or even physical weapons, leading to misaligned regulation and procurement rules that address imaginary threats while neglecting genuine ones.
This pattern reflects the fundamental challenge of technological governance: policymakers must make decisions about systems they do not fully understand, using conceptual frameworks inherited from earlier technologies. Software was once new and poorly understood; now it is taken for granted that policymakers comprehend the difference between local executables and cloud services, between source code and compiled binaries, between vulnerabilities and backdoors. AI systems have not yet achieved that level of conceptual familiarity, and the result is policy frameworks that misfire.
The mismatch encourages demands for impossible guarantees. The Pentagon’s filing suggests that Anthropic should be able to prove it cannot sabotage its own technology. In the absence of perfect knowledge, the government is demanding assurances that vendors cannot provide, not because they are hiding capabilities but because the requested capabilities do not exist in the form imagined.
Meanwhile, feasible controls go neglected: rigorous testing of deployed models, transparent update protocols that document what changes in new weight files, and clear lines of liability for training-time and deployment-time failures are all achievable security measures that do not require violating the laws of mathematics. But they require accepting that AI systems are probabilistic, emergent, and imperfectly interpretable, acceptances that run counter to the traditional software paradigm of deterministic behavior and explicit logic.
The Anthropic case illustrates the risks of this conceptual confusion. By treating a contract dispute over ethical use restrictions as a supply-chain security threat, the Pentagon has escalated a disagreement about values into a legal confrontation with significant operational consequences. The company that developed one of the few AI systems cleared for classified Pentagon use is now being pushed out of defense procurement because the government mischaracterized how that company’s technology functions.
This approach risks chilling collaboration between AI vendors and government agencies. If ethical restrictions on military use can trigger supply-chain-risk designations, vendors may conclude that accepting government contracts requires abandoning all principled limitations. The result would be a race to the bottom in AI safety, with defense contracts going to whoever promises the most permissive use terms rather than whoever offers the most secure and reliable systems.
Citizens Need an LLM 101 NOW
The implications of the Pentagon-Anthropic dispute extend far beyond defense procurement. As AI systems proliferate through political campaigns, electoral infrastructure, media production, and public discourse, the public’s understanding of what these systems can and cannot do will determine whether democratic societies can navigate the coming turbulence.
The 2024 election cycle offered a preview of what is to come. As the Brennan Center for Justice documented in their analysis “Gauging the AI Threat to Free and Fair Elections,” AI-generated deepfakes targeting candidates proliferated across social media platforms. Russian operatives created synthetic videos of Vice President Kamala Harris; a former Palm Beach County deputy sheriff, operating from Russia, collaborated on fabricated videos falsely accusing vice-presidential nominee Tim Walz of assault; AI-generated robocalls featuring synthetic voices of President Biden urged New Hampshire primary voters not to cast ballots.
These incidents demonstrate not just the capabilities of generative AI but the vulnerabilities of a public that lacks basic literacy about these technologies. Voters who cannot distinguish between authentic and synthetic media are voters who can be manipulated by actors wielding cheap fabrication tools. Citizens who believe AI systems are “intelligent” in any meaningful sense, capable of judgment, intention, or moral reasoning, will misinterpret the outputs of probabilistic text engines as evidence, authority, or wisdom.
AI literacy must include understanding that large language models are not oracles, agents, or remote-controlled ideologues. They are statistical pattern-matching systems trained on human text, with fixed training cutoffs and no real-time access to information unless specifically engineered to retrieve it. They hallucinate; they confabulate; they reproduce biases present in their training data. They do not “know” things in any meaningful sense; they predict which sequences of tokens are statistically likely given their training.
This baseline understanding provides immunity to certain forms of manipulation. A citizen who knows that LLMs lack real-time knowledge cannot be fooled by synthetic news reports generated by systems whose training data ends months before the reported events. A citizen who understands that these systems are probabilistic rather than intentional will not attribute malice or conspiracy to model outputs that reflect training data biases. A citizen who recognizes AI-generated content as statistically probable rather than factually grounded will approach synthetic media with appropriate skepticism.
The political calendar makes this literacy urgent. Campaigns and governments are integrating AI into messaging, decision support, and cyber operations. Deepfakes will proliferate; synthetic text floods will drown authentic discourse; bad-faith political claims about “rogue AIs” or “sabotaged models” will exploit public ignorance to delegitimize opposition or justify repressive measures. Without baseline understanding, voters will be vulnerable to both AI-driven disinformation and to political manipulation that mischaracterizes the underlying technology.
What AI Literacy Looks Like in Practice
The gap between AI’s actual capabilities and public understanding is wide, but it is bridgeable. A modest investment in conceptual education can provide citizens with the mental models needed to navigate an AI-saturated political environment.
Core concepts that every citizen should grasp begin with the nature of model weights. A large language model is not a program in the traditional sense but a file containing billions of numbers, the “weights” that encode statistical patterns learned from training data. These weights are static; once created, they do not change unless deliberately retrained or edited through computationally intensive processes. Running the model means performing mathematical operations on these weights, not executing instructions written by programmers.
Understanding training data matters more than understanding clever code. An LLM’s outputs reflect the patterns in its training corpus; it knows what it has seen, biased by how often and in what contexts it has seen it. Training data quality and sourcing are thus more important than architectural details in determining what a model “knows” and how it behaves. Models trained on toxic or biased data will produce toxic or biased outputs regardless of safety filters added afterward.
Recognizing AI-generated content requires attention to telltale signs: perfect grammatical correctness combined with factual errors; confident assertions about events after training cutoffs; generic or hedged language on specific topics; characteristic phrasing patterns that differ from human idiosyncrasy. None of these markers is foolproof, but collectively they provide signals that content may be synthetic rather than authentic.
Civil society organizations and experts have begun calling for AI-focused media literacy programs, whistleblower protections, and transparent communication about model capabilities and limitations. These demands recognize that technological literacy is not merely an individual responsibility but a collective necessity for democratic functioning. Platforms that host AI-generated content should be required to label it; AI vendors should be required to document training data sources and model limitations; educational institutions should incorporate AI literacy into civic education curricula.
Concrete steps for individual citizens include following reputable AI reporting from outlets that prioritize accuracy over sensationalism; using simple tests to probe what chatbots know and don’t know, establishing their training cutoffs and limitations; and treating political claims about AI with the same skepticism applied to traditional campaign spin. When a politician claims a rival is using “rogue AI” or warns of “sabotaged models,” the appropriate response is not alarm but inquiry: what specifically is being alleged, and does it align with what is technically possible?
Don’t Let the Metaphor Win
The Pentagon’s clash with Anthropic reveals the power of metaphor in shaping policy. By conceiving of large language models as remotely controllable software rather than static mathematical artifacts, the Defense Department constructed a threat model that led to legal escalation, operational disruption, and the potential exclusion of one of the most safety-conscious AI developers from defense procurement.
But the metaphor is wrong, and the policies that flow from it will fail. Large language models are not magic; they are matrix filters shaped by their training history, performing linear algebra on input vectors to produce probability distributions over output tokens. They contain no hidden kill switches, no remote administration capabilities, no wartime modes that vendors can activate at will. Once deployed, they are simply files, extraordinarily sophisticated files, but files nonetheless.
If leaders cling to the wrong mental model, they will regulate ghosts and ignore real vulnerabilities. They will demand impossible guarantees of real-time control while neglecting achievable measures like training data audits and weight file verification. They will chase vendor sabotage scenarios that cannot happen while overlooking backdoor vulnerabilities that could. They will make policy based on science fiction rather than computer science.
The alternative is not resignation or technological determinism but informed governance. An educated public can pressure institutions to grapple with how these systems truly function, demanding policies that address genuine risks rather than imagined ones. This requires humility from policymakers, acceptance that AI systems are not merely complex software but genuinely different artifacts requiring new conceptual frameworks, and investment from citizens in understanding the technologies that will shape their lives.
As AI seeps into national security and electoral politics, the biggest risk may not be the models themselves but our refusal to understand how they work before we hand them the keys. The Pentagon has already demonstrated this danger: by mischaracterizing Anthropic’s technology, it has disrupted its own operations and potentially degraded its AI capabilities. Similar mistakes in electoral contexts could undermine democratic legitimacy; in security contexts, they could create vulnerabilities that adversaries exploit.
The path forward requires clear thinking about what large language models actually are: not agents with intentions, not software with remote administration, but static mathematical structures that transform inputs into outputs through patterns learned from training data. This understanding is not merely academic; it is the foundation upon which sound policy must be built. Until policymakers and citizens alike grasp that LLMs are matrices, not magic, the gap between technological reality and institutional response will continue to widen, with consequences that none of us can afford.
The Brewster Take
The Pentagon’s fight with Anthropic is not really about sabotage. It is about power and misunderstanding; the government’s inability to grasp that some technologies cannot be centrally controlled, combined with a tech company’s refusal to let its creations be used for mass surveillance and autonomous killing. Both sides are wrong in their own ways.
The Defense Department clings to outdated metaphors of command and control, treating neural networks like they are traditional software with backdoors and kill switches.
Anthropic, for all its technical sophistication, seems naive about how the world of power actually works, surprised that refusing to build weapons for the state might have consequences.
The rest of us are caught in the middle, watching two institutions stumble toward a future neither fully comprehends. What emerges from this collision will shape not just defense procurement but the boundaries of corporate ethics, government oversight, and whether democratic societies can govern technologies that outpace their understanding. The matrix does not care about our political dramas. It simply multiplies weights and produces probabilities.
The sooner we stop treating it like magic, the sooner we can start building policies that might actually work.
Sources
Dave, Paresh. “Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems.” WIRED, March 17, 2026. https://www.wired.com/story/department-of-defense-responds-to-anthropic-lawsuit/
Dave, Paresh. “Anthropic Denies It Could Sabotage AI Tools During War.” WIRED, March 20, 2026. https://www.wired.com/story/anthropic-denies-sabotage-ai-tools-war-claude/
Hays, Kali. “Anthropic boss rejects Pentagon demand to drop AI safeguards.” BBC News, February 27, 2026. https://www.bbc.com/news/articles/cvg3vlzzkqeo
“How Do We Fix and Update Large Language Models?” Stanford Human-Centered Artificial Intelligence Institute, September 30, 2024. https://hai.stanford.edu/news/how-do-we-fix-and-update-large-language-models
Wu, Haibin, et al. “Can LLM Safety Be Preserved During Fine-Tuning? A Framework for Evaluating Changes in Alignment and Performance.” arXiv:2403.14236v1, March 21, 2024. https://arxiv.org/abs/2403.14236
Qi, Xiangyu, et al. “Fine-Tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!” arXiv:2310.03693, October 5, 2023. https://arxiv.org/abs/2310.03693
Vaswani, Ashish, et al. “Attention Is All You Need.” arXiv:1706.03762, June 12, 2017. https://arxiv.org/abs/1706.03762
Alammar, Jay. “The Illustrated Transformer.” jalammar.github.io, 2018. https://jalammar.github.io/illustrated-transformer/
“Gauging the AI Threat to Free and Fair Elections.” Brennan Center for Justice, 2024. https://www.brennancenter.org/our-work/analysis-opinion/gauging-ai-threat-free-and-fair-elections
“Why citizens and campaigns need to improve AI literacy in this very political year.” SC World, 2024. https://www.scworld.com/perspective/why-citizens-and-campaigns-need-to-improve-ai-literacy-in-this-very-political-year