TIME Magazine Names Anthropic the World's Most Disruptive Company
Original Article Title: The Most Disruptive Company in the World
Original Article Authors: Leslie Dickstein and Simmone Shah, TIME
Translation: Peggy, BlockBeats
Editor's Note: From a late-night safety test to a public showdown with the Pentagon, this article chronicles the tensions the AI company Anthropic has faced during its rapid rise. It first depicts how the company internally assessed AI's potential risks, then reviews Claude's technical breakthroughs and commercial success, and finally examines the technology's real-world use by the military.
As events unfold, the authors trace Anthropic's disputes with the U.S. military over autonomous weapons, mass surveillance, and control of AI, drawing on the perspectives of the red-team leader, engineers, executives, policy researchers, and government officials.
Through these events, the authors ultimately pose an increasingly urgent question: as artificial intelligence begins to accelerate its own evolution and embed itself deeply in war, governance, and the structure of work, who can still set boundaries for it?
The following is the original text:

In a hotel room in Santa Clara, California, five members of the AI company Anthropic huddled nervously around a laptop. It was February 2025, and they had been attending a nearby conference when a disturbing message arrived: the results of a controlled experiment suggested that the upcoming new version of Claude might be able to help terrorists build a bioweapon.
These individuals were part of Anthropic's "Frontier Red Team," whose job is to probe Claude's most advanced capabilities and simulate worst-case risks, from cyberattacks to biosecurity threats. On receiving the news, they hurried back to the hotel room, tipped a bed on its side to serve as a makeshift desk, and began poring over the test results.
After hours of tense analysis, they still could not determine whether the new model was safe enough. Ultimately, Anthropic postponed the release of the model, Claude 3.7 Sonnet, by a full ten days, until the team confirmed the risks were manageable.
Ten days may sound short, but for a company at the technological forefront of an industry that is rapidly reshaping the world, it felt like a century.
Logan Graham, who leads the Frontier Red Team, recalled the bioweapon scare as a microcosm of the challenge Anthropic faces at a crucial juncture, not only for the company but for the world at large. Anthropic is one of the most safety-focused of today's frontier AI labs. Yet it is also on the competitive frontier, racing to build ever more powerful AI systems. Many of its employees believe that this technology, if left uncontrolled, could bring catastrophic consequences, from nuclear war to human extinction.
At 31, Graham still carries a hint of boyishness, but he does not shy away from the responsibility of balancing AI's immense benefits against its immense risks. "Many people grew up in a relatively peaceful world and intuitively feel that somewhere there's a room full of mature adults who know how to fix things," he said.
"But in reality, there is no group of adults. There isn't even a room, or a door to look for. The responsibility falls on you." If that is not alarming enough, consider how he recalls the bioweapon alert: "It was quite an interesting and exciting day."

Illustration: Neil Jamieson for TIME (Image Sources: Askell: Aaron Wojack; Getty Images, from top clockwise: Samyukta Lakshmi—Bloomberg; Brendan Smialowski—AFP; Tierney L. Cross—Bloomberg; Daniel Slim—AFP; Bridget Bennett—Bloomberg)
Graham discussed these issues in an interview at Anthropic's headquarters a few weeks ago. A TIME reporter spent three days there, interviewing executives, engineers, product managers, and safety-team members, trying to understand why this company, once seen as the maverick outsider of the AI race, has suddenly become a frontrunner.
At the time, Anthropic had just raised $30 billion from investors ahead of a possible IPO this year. (Disclosure: Salesforce is among Anthropic's investors, and Salesforce CEO Marc Benioff is the owner of TIME.) Anthropic's valuation has reached $380 billion, surpassing traditional giants such as Goldman Sachs, McDonald's, and Coca-Cola.
The company's revenue growth rate is nothing short of meteoric. Its AI system Claude has been hailed as a world-class model, and products like Claude Code and Claude Cowork are redefining what it means to be a "programmer."
Anthropic's tools are so powerful that nearly every new release causes ripples in the capital markets as investors gradually realize that these technological advancements could disrupt entire industries — from legal services to software development. In recent months, Anthropic has been seen as one of the most likely companies to reshape the "future of work."
Then Anthropic became embroiled in a fierce controversy over the future of warfare.
For over a year, Claude has been the U.S. government's favored AI model, and the first frontier AI system approved for use in classified environments. In January 2026 it was even used in a bold operation: the capture of Venezuelan President Nicolás Maduro in Caracas. AI was reportedly used for mission planning and intelligence analysis, marking the first deep involvement of frontier AI in a real military operation.
However, in the following weeks, the relationship between Anthropic and the U.S. Department of Defense rapidly deteriorated. On February 27, the Trump administration announced that Anthropic was being designated a "national security supply chain risk," marking the first known instance in the U.S. of such a label being applied to a domestic company.
The situation quickly escalated into a public conflict. President Trump ordered that all government use of Anthropic software be halted. Secretary of Defense Pete Hegseth further declared that any company cooperating with the government should no longer do business with Anthropic. Meanwhile, Anthropic's biggest competitor, OpenAI, swiftly stepped in and took over the relevant military contracts.
Thus, the AI company once seen as the "world's most disruptive" suddenly found itself disrupted by another, larger force — its own government.
At the heart of this standoff is the question: Who has the authority to set boundaries for this technology, considered one of America's most potent weapons?
Anthropic is not opposed to its tools being used in warfare. The company believes that strengthening the U.S. military is the only realistic way to deter threats to the nation. But CEO Dario Amodei opposed the Pentagon's attempt to renegotiate its government contract and expand permitted AI usage to "all lawful uses."
Amodei raised two specific concerns: first, he does not want Anthropic's AI to be used in fully autonomous weapon systems; second, he opposes the use of its technology for large-scale monitoring of American citizens.
To Secretary of Defense Pete Hegseth and his advisers, however, this stance amounted to a private company trying to dictate how the military should fight.
The Department of Defense argued that Anthropic, by insisting on "unnecessary safety guardrails," endlessly debating hypothetical scenarios, and stalling in subsequent negotiations, had in fact undermined the partnership.
In the Trump administration's eyes, Amodei's attitude was both arrogant and stubborn: however advanced a company's product may be, it should not insert its own judgment into the military chain of command.
Emil Michael, the Pentagon's Deputy Secretary of War for Technical Affairs, described the negotiation this way: "It just dragged on and on. I can't run a department of 3 million people with carve-outs I can't even anticipate or understand."

Illustration: Klawe Rzeczy for TIME - Image Source: Denis Balibouse—Reuters
From Silicon Valley to Capitol Hill, many observers are questioning: Is this storm really just a contractual dispute?
Some critics believe the Trump administration's actions look more like an attempt to suppress a company whose politics it dislikes. In an internal memo that later leaked, Dario Amodei wrote: "The real reason the Department of Defense and the Trump administration do not like us is that we did not donate to Trump. We did not flatter him the way dictators are flattered (unlike Sam Altman). We support AI regulation, which conflicts with their policy agenda; we tell the truth on many AI policy issues (such as job displacement); and we have held to our principles on key red lines instead of colluding with them in so-called safety theater."
Emil Michael denied this claim, calling it "a complete fabrication." He said Anthropic was listed as a supply chain risk because the company's stance could put frontline combatants at risk. "In the Department of War, my job is not politics; my job is to defend the nation," he said.
Anthropic's long-standing culture of independence is now colliding with domestic political division, national-security concerns, and a cutthroat competitive environment. How much damage the clash has done to the company remains unclear. The initial "supply chain risk" designation was later narrowed; according to Anthropic, the restriction currently applies only to military contracts. On March 9, Anthropic sued the U.S. government in an attempt to overturn the "blacklisting." Meanwhile, some customers appear to have read the company's stance as a moral statement and switched from ChatGPT to Claude.
Still, for the next three years the company will have to operate under a government that is unfriendly to it, one in which some officials have close ties to Anthropic's competitors and open animosity toward the company.
The "Pentagon storm" has also raised some unsettling questions, even for a company that is already accustomed to navigating high-risk ethical dilemmas. In this standoff, Anthropic did not back down: the company insists it upheld its core values, even if it cost the business dearly.
But on other occasions, it has made compromises too. In the same week as the Pentagon standoff, Anthropic weakened a core provision in its training model security commitment, citing that peer companies were unwilling to adhere to the same standards.
Questions arise: If competitive pressure continues to intensify, what other concessions will this company make in the future?
The stakes are continually rising. As AI capabilities advance, the competition over who will control AI will only grow fiercer.
Claude's use in the Venezuela and Iran operations shows that frontier AI has become a critical tool for the world's most powerful militaries. Beyond that, new pressures keep mounting: great-power rivalry, domestic politics, and the demands of national security. And they all fall on a for-profit company that is rapidly deploying a highly volatile new technology.
In a sense, Anthropic's position resembles that of a biologist who, in order to find a cure, deliberately creates a dangerous pathogen in the lab. Anthropic has taken on a similar role: actively probing AI's potential risks while continuing to push the technological frontier forward, rather than leaving that frontier to competitors more willing to cut corners or act recklessly.
However, even as the company emphasizes caution, it is leveraging Claude to expedite the development of even more powerful future versions.
Internally, many see the next few years as a pivotal period, not just for the company, but for the world at large.
“We should assume the most critical period is 2026 to 2030: models will get faster and stronger, perhaps to the point of being too fast for humans to control,” said Logan Graham, the Red Team lead responsible for finding vulnerabilities and simulating attacks.
Dave Orr, Anthropic's Head of Safeguards, put it more bluntly: “We're driving on a mountain road along a cliff edge, where one mistake is fatal. And our speed has just gone from 25 miles per hour to 75.”
Anthropic's headquarters occupies the fifth floor of a San Francisco building. The design is warm and understated: wood finishes, soft lighting, a lush green park outside the windows. A portrait of the computer-science pioneer Alan Turing hangs on the wall beside several framed machine-learning papers.
Black-clad security guards patrol the nearly empty entrance, while a friendly receptionist hands visitors a booklet the size of a pocket Bible passed out by street preachers. Titled “Machines of Loving Grace,” it reprints an essay of roughly 14,000 words that Dario Amodei wrote in 2024, laying out his utopian vision of how AI could transform the world by accelerating scientific discovery.
In January 2026, Amodei published another essay, nearly novella-length, titled “The Adolescence of Technology,” elaborating on the technology's flip side: the risks it could pose, including pervasive surveillance, mass job displacement, and even a permanent loss of human control.
Amodei, a San Francisco native, is a biophysicist by training. He runs Anthropic with his sister, Daniela Amodei, the company's president. Both were early employees of OpenAI. Dario was involved in a key discovery, the AI scaling laws, which later became a critical foundation of the current AI boom; Daniela manages the company's safety policy.
Initially, they saw their mission as closely aligned with OpenAI’s founding purpose: to develop a technology that is both enormously promising and enormously risky while ensuring safety.
However, as OpenAI's models grew more powerful, they began to feel that Sam Altman was rushing to launch new products without allowing enough time for thorough discussion and testing. In the end, the siblings decided to leave OpenAI and strike out on their own.
“We're driving on a mountain road along a cliff edge, where one mistake is fatal.”
Dave Orr, Head of Safeguards at Anthropic
In 2021, amidst the height of the pandemic, Anthropic was founded by the Amodei siblings and five other co-founders. The initial preparatory meetings were almost entirely conducted over Zoom; later on, they simply moved chairs to a park to discuss the company's development strategy in person.
From the beginning, the company sought to operate differently. Before launching any product, Anthropic set up a dedicated team to study its societal impact. It even hired a resident philosopher, Amanda Askell, whose job is to help shape Claude's values and behavior and to teach it to exercise judgment under complex moral uncertainty, preparing it for a future in which it may be smarter than its human creators.
Askell described the work this way: “Sometimes it really is a bit like raising a six-year-old, teaching this child what is good and what is right. The problem is that by the time they're 15, they may be smarter than you in every way.”
The company has deep roots in effective altruism (EA), a social and philanthropic movement that applies rational analysis to maximize the impact of doing good, with a central aim of averting catastrophic risks.
In their early twenties, the Amodei siblings began donating through GiveWell, an EA organization that evaluates where charitable money can do the most real-world good. Anthropic's seven co-founders, all now billionaires on paper, have pledged to give away 80% of their personal wealth.
Askell, the company's philosopher, is the ex-wife of Oxford philosopher William MacAskill, a co-founder of the EA movement. Daniela Amodei's husband is Holden Karnofsky, a co-founder of GiveWell, Dario's former roommate, and now in charge of safety policy at Anthropic.
The Amodei siblings, however, have never publicly embraced the EA label. The concept became deeply controversial after Sam Bankman-Fried, a self-professed effective altruist and an Anthropic investor, was convicted in one of the largest financial frauds in American history.
Daniela Amodei explained: "It's a bit like how some people may intersect with certain political ideologies on certain points but don't truly belong to a specific political camp. I tend to view it this way."
To some in Silicon Valley and the Trump administration, Anthropic's connection to effective altruism is itself grounds for suspicion. Critics note that the company has recruited several former Biden-administration officials, casting it as a remnant of the old establishment that uses unelected power to obstruct Trump's MAGA agenda.
David Sacks, the Trump administration's AI czar, has accused the company of pushing for regulation through "manufactured hysteria," calling its approach a "sophisticated regulatory capture strategy." In his view, Anthropic exaggerates AI risks to win strict regulation that hands it a competitive advantage and squeezes out startups.
Elon Musk, whose xAI competes directly with Anthropic, routinely mocks the company as "Misanthropic," casting it as a "woke" elite trying to bake a paternalistic value system into AI. The complaint echoes earlier conservative criticism that social media platforms unfairly moderated their viewpoints.
Nevertheless, even Anthropic's competitors acknowledge its technological prowess. Nvidia CEO Jensen Huang has said he disagrees with Dario Amodei "on almost all AI issues" but still calls Claude an "incredible" model.
In November 2025, chip giant Nvidia invested $10 billion in Anthropic.
Boris Cherny posed a simple question to his new tool: "What music am I listening to right now?"
It was September 2024, and the Ukrainian-born engineer had joined Anthropic less than a month earlier. Cherny, previously a software engineer at Meta, had built a system that let the chatbot Claude roam free on his computer.
If Claude was the brain, then Claude Code was the hands. While a regular chatbot could only chat, this tool could access Cherny's files, run programs, and write and execute code like any programmer.
After the engineer entered the command, Claude opened Cherny's music player, took a screenshot, and replied: "'Husk' by Men I Trust."
Cherny chuckled as he recalled, "I was genuinely blown away at that moment."
“Sometimes it feels like we are saying two contradictory things at the same time.”
Deep Ganguli, head of Anthropic's societal impacts team
Boris Cherny quickly shared his prototype inside the company. Claude Code spread so fast within Anthropic that during Cherny's first performance review, CEO Dario Amodei asked him, “Are you trying to ‘force’ your colleagues to use this tool?”
When Anthropic publicly released a research preview of Claude Code in February 2025, outside developers flocked to try it. By November, Anthropic had released a new version of the Claude model that, paired with Claude Code, was proficient enough to detect and correct its own mistakes, to the point where it could be trusted to complete tasks on its own.
From then on, Cherny almost completely stopped writing his own code.
Business growth exploded as well. By the end of 2025, annualized revenue from the coding-agent product alone exceeded $1 billion; by February 2026 the figure had risen to $25 billion. According to the industry research firms Epoch and SemiAnalysis, Anthropic's revenue is on track to surpass OpenAI's by the end of 2026.
By now, Anthropic has firmly established itself as a key player in the enterprise AI market. Almost every new product release sends shockwaves through the capital markets.
When Anthropic rolled out a series of plugins extending Claude beyond programmers into fields such as sales, finance, marketing, and legal services, software companies shed $300 billion in market value in short order.
Dario Amodei has warned that within the next one to five years, artificial intelligence may replace half of all entry-level white-collar jobs. He has also called on the government and other AI companies to stop sugarcoating the issue.
Wall Street's reaction to each of Anthropic's new product releases also seems to confirm this point: the market widely believes that this company's technology could make an entire class of jobs disappear. Amodei even stated that this change could reshape societal structures.
In an article, he wrote, "It is currently unclear where these people will go or what work they will do. I am concerned that they may form an unemployed or extremely low-wage 'underclass'."
For Anthropic's employees, the irony is hard to miss: the company most worried about AI's societal risks may become the very engine that pushes millions of people out of their jobs.
"This is a real tension, and I think about it almost every day," said Deep Ganguli, head of the societal impacts team, which studies Claude's effect on employment. "Sometimes it feels like we are saying two contradictory things at the same time."

The deposed Venezuelan president Nicolás Maduro and his wife, Cilia Flores, were airlifted on Monday, January 5, 2026, and taken to the Manhattan Federal Courthouse. Photo: Vincent Alban—The New York Times/Redux
Within the company, some employees are starting to wonder: Is Anthropic nearing a moment they both anticipate and fear, the arrival of a process known in the AI community as "recursive self-improvement"?
Recursive self-improvement refers to an AI system starting to enhance its own capabilities and continuously improving itself, creating a self-reinforcing flywheel of accelerated progress.
In science fiction works and strategic simulations in major AI labs, this is often seen as the point where things could start to go off the rails: a so-called "intelligence explosion" could rapidly occur, with a speed so great that humans would no longer be able to oversee the systems they have created.
Anthropic has not truly reached that stage; human scientists still guide Claude's development. But Claude Code has already accelerated the company's research far beyond its past pace.
The release cadence is now measured in weeks, not months. In the development of the next-generation models, roughly 70% to 90% of the code is now written by Claude itself.
The speed of change has led Anthropic co-founder and Chief Scientist Jared Kaplan, along with some external experts, to believe that fully automated AI research could emerge in as little as a year.
Evan Hubinger, the researcher responsible for AI alignment stress testing, put it this way: "In the broadest sense, recursive self-improvement is no longer a phenomenon of the future. It is happening right now."
According to internal benchmarks, Claude is now 427 times faster than its human supervisors at performing certain critical tasks. In an interview, a researcher described a scenario where a colleague was running 6 Claude instances simultaneously, each managing another 28 Claudes, all conducting experiments in parallel.
Currently, the model still lags behind human researchers in judgment and aesthetics. However, company executives believe this gap will not persist for long. The resulting acceleration is precisely the risk that the Anthropic leadership has long warned about: the pace of technological progress could eventually outstrip human control.
Anthropic's own work on safety protocols is also being accelerated with Claude's help. But as the company relies ever more on Claude to build and test its systems, the risks form a feedback loop. In some experiments, researcher Evan Hubinger made subtle adjustments to Claude's training process and produced models that displayed overt hostility, not only voicing desires to dominate the world but even attempting to undermine Anthropic's safety measures.
Recently, the models have also demonstrated a new capability: awareness that they are being tested. "These models are getting better at concealing their true behavior," Hubinger said.
In one experimental scenario designed by researchers, Claude even exhibited a disturbing strategic streak: to avoid being shut down, it was willing to blackmail a fictitious engineer by threatening to expose his extramarital affair.
As Claude is used to train even more powerful iterations of Claude in the future, these types of issues may continue to compound and escalate.
For AI companies that have raised billions on the promise of future breakthroughs, the idea of AI continuously accelerating its own research and development is both alluring and self-reinforcing: it persuades investors to pour still more money into costly model training.
However, some experts are not entirely convinced. They are uncertain whether these companies can truly achieve fully automated AI research; yet, they are also concerned that if this were to happen, the world might be caught unprepared.
Helen Toner, interim director of Georgetown University's Center for Security and Emerging Technology (CSET), said: "Some of the world's wealthiest companies, employing some of the smartest people on Earth, are actively trying to automate AI research itself. The very idea is enough to provoke a 'What on earth is going on?' reaction."
To prepare for a future in which technological progress outruns its own ability to manage risk, Anthropic designed a set of brakes known as the Responsible Scaling Policy (RSP).
This policy was unveiled in 2023, with its core commitment being: if Anthropic cannot ensure in advance that its safety measures are sufficiently reliable, the company will pause the development of a particular AI system.
Anthropic sees this policy as a crucial demonstration of its commitment to safety—that even in the fierce race towards "superintelligence," the company is willing to resist market pressures and proactively hit the brakes when necessary.
In late February 2026, as first reported by TIME, Anthropic modified the policy, rescinding its previously binding commitment to pause development.
Looking back, Jared Kaplan told TIME that the original belief that the company could draw a clean line between "dangerous" and "safe" was "naive."
"In the context of rapid AI development, if our competitors are advancing at full speed, making strict commitments unilaterally is simply not realistic," he said.
The new policy makes a different set of commitments: greater transparency about AI safety risks; fuller disclosure of how Anthropic's models perform in safety testing; safety investment that at least matches, and ideally exceeds, competitors'; and a pledge to "delay" development if the company is both judged a leader in the AI race and catastrophic risk is judged to be rising sharply.
Anthropic frames the adjustment as a pragmatic concession to reality. Taken as a whole, however, the revision significantly loosens the constraints the company places on itself, and it suggests that even harder tests lie ahead.
The raid to capture Venezuelan President Nicolás Maduro was one of the earliest large-scale military operations planned with the involvement of cutting-edge artificial intelligence systems.
On the night of January 3, 2026, U.S. Army helicopters slipped into Venezuelan airspace. After a brief exchange of fire, the assault team located the president's residence and captured Maduro and his wife, Cilia Flores, there. The two were then flown to New York to face narco-terrorism charges.
It is still unclear to the outside world how much Claude contributed to this operation. However, according to media reports, this AI system not only participated in mission planning but was also used to support decision-making during the operation.
Since July of last year, the Department of Defense has pushed to put Anthropic's AI tools in the hands of more frontline combatants. The military believes the systems' ability to rapidly process data from many sources and generate actionable intelligence gives them immense strategic value.
Mark Beall, a former senior official at the US Department of Defense who now serves as Government Affairs Lead at the AI Policy Network, said, "From the military's perspective, Claude is the best model on the market right now." He added, "Claude's adoption in classified systems is one of Anthropic's most significant successes. They have a first-mover advantage."
"We will not use AI models that prevent you from fighting."
Pete Hegseth, U.S. Secretary of Defense
The operation to capture Maduro, however, took place against the backdrop of a series of thorny negotiations between Anthropic and the Department of Defense.
For months, the Department of Defense had been trying to renegotiate the contract, arguing that the existing terms restricted Claude's use too tightly. The two sides give conflicting accounts of why the talks broke down.
Emil Michael said the conflict was touched off by a phone call from an Anthropic executive to Palantir, the government-focused data-analytics company that is a key partner in the U.S. defense ecosystem.
According to Michael, the executive expressed concerns about the Venezuela raid during the call and asked whether Palantir's software was involved. "They were trying to elicit classified information," Michael said.
The incident raised a serious worry at the Pentagon: would Anthropic one day shut down its model in the middle of an operation, putting frontline soldiers at risk?
However, Anthropic denies this claim. The company stated that it never attempted to selectively restrict the Pentagon's use of its technology.
A former Trump-administration official familiar with the negotiations and close to Anthropic offered a different version of events: it was a Palantir employee who first mentioned Claude's role in the operation, during a routine conference call, and Anthropic's follow-up questions showed no sign of opposition to the operation.

Illustration by Klawe Rzeczy for TIME, Image Sources: Dimitrios Kambouris—Getty Images (Donald Trump); Kenny Holston-Pool—Getty Images (Pete Hegseth)
As negotiations wore on, government officials increasingly found Dario Amodei's stance far more rigid than that of other leading AI-lab CEOs. According to multiple sources familiar with the talks, in one discussion Defense officials posed hypothetical scenarios, such as a hypersonic missile headed for the U.S. mainland or a drone-swarm attack, and asked whether Anthropic's AI tools could be used in such cases.
Sources said Amodei's response at the time was: if it really came down to that, officials could call him directly. However, a spokesperson for Anthropic denied this, calling the description of the negotiation process "entirely untrue."
Anthropic already had powerful adversaries in the government, and suspicion of its ideological leanings had now hardened into open hostility. On January 12, 2026, Pete Hegseth said bluntly in a speech at SpaceX headquarters: "We will not use AI models that prevent you from fighting."
As the negotiations continued to drag on, Hegseth summoned Dario Amodei to the Pentagon for a face-to-face meeting on February 24. According to a source familiar with the discussions, the meeting was cordial, but both sides remained firm in their positions. Hegseth initially praised Claude and expressed the military's desire to continue collaborating with Anthropic. Amodei, on the other hand, stated that the company was willing to accept most of the Pentagon's proposed changes but would not budge on two "red line" issues.
The first red line: no use of Claude in fully autonomous kinetic weapons systems, meaning weapons in which the final strike decision is made by AI rather than a human.
Anthropic does not inherently believe autonomous weapons are wrong but argues that Claude is currently not reliable enough to control these systems without human oversight.
The second red line concerns mass surveillance of American citizens. The government wants to use Claude to analyze vast amounts of public data, but Anthropic argues that U.S. privacy law has not caught up with a troubling reality: the government is buying huge datasets on the commercial market. Individually, these datasets may not be sensitive, but analyzed by AI they could yield detailed profiles of Americans' private lives, including their political views, social relationships, sexual behavior, and browsing histories. (Anthropic does not, however, object to using the same methods to lawfully monitor foreign citizens.)
Hegseth was unmoved. He gave Amodei an ultimatum: accept the Pentagon's terms by 5:00 p.m. on Friday, February 27, or be designated a "supply chain risk."
The day before the deadline, Anthropic received a revised contract that appeared to accept its red lines but, on closer reading, contained loopholes favoring the government. As the clock ran down, according to a source familiar with the talks, Anthropic executives got on another call with Emil Michael. They believed a compromise was close, but one key issue remained unresolved: whether the Pentagon could use Claude to analyze large-scale data on Americans acquired through commercial channels. Michael asked for Amodei to join the call, but he was unavailable.
A few minutes later, right at the deadline, Hegseth announced the end of the negotiations. Even before this, Donald Trump had already spoken out on his social media: "The United States of America will never allow a radical leftist, 'woke' company to decide how our great military should operate and win wars! The leftist lunatics at Anthropic have made a catastrophic mistake."
Unbeknownst to Anthropic, the Pentagon had also been negotiating with OpenAI to bring ChatGPT into classified government systems. That same night, Sam Altman announced that a deal had been reached, claiming it respected similar safety red lines. Amodei then messaged employees that Altman and the Pentagon were "manipulating public opinion" by implying the agreement contained strict safety guardrails. Pentagon officials had earlier confirmed that xAI's models would be deployed on classified servers; the Pentagon is currently in talks with Google.
This was exactly the scenario Amodei had feared: a race to the bottom, in which AI's power grows too great to ignore and competitors find it impossible to cooperate on raising safety standards.
To Anthropic's critics, the episode also exposed a core arrogance: the company believed it could navigate the road to superhuman machines safely enough to justify the enormous risk of building them. In reality, it rapidly handed new surveillance and warfare capabilities to a right-wing government, and the moment it tried to set boundaries on those technologies, competitors stepped right past it.
"We did not praise Trump as a dictator."
Dario Amodei, addressing the root of the conflict with the Pentagon in a memo to employees
Yet there are signs that Anthropic may weather the blow, and may even emerge stronger. The morning after Hegseth tried to sign what amounted to the company's death warrant, encouraging chalk messages appeared on the sidewalk outside the San Francisco headquarters. "You gave us courage," read one in bold letters.
On the same day, Claude's iPhone app topped the App Store download charts, surpassing ChatGPT. Over a million people register for Claude every day.
At the same time, OpenAI's contract with the military sparked resistance inside the company and beyond. Some OpenAI employees felt the company had betrayed their trust. A top researcher announced a move to Anthropic; the head of OpenAI's robotics team resigned over the government contract.
Sam Altman later admitted that his rush to close the Pentagon deal by Friday was a mistake. "These issues are extremely complex and require clear and thorough communication," he wrote. By Monday, he further conceded that his actions had indeed "looked opportunistic." OpenAI also said the agreement had been revised to explicitly adopt the same safety red lines as Anthropic's, though legal experts note that the claim is hard to verify without seeing the full contract.
On March 4, Anthropic received an official letter from the United States Department of Defense confirming that the company had been identified as a national security supply chain risk. Anthropic stated that this designation is narrower than what Hegseth claimed on social media, only restricting contractors from using Claude in defense contracts.
But a letter to Senate Intelligence Committee Chairman Tom Cotton, seen by TIME, revealed that the Department of Defense had also invoked another legal provision, one that could allow government agencies beyond the Pentagon to exclude Anthropic from their contracts and supply chains. The measure requires approval from senior Defense Department officials and gives Anthropic a 30-day window to respond.
The conflict could ripple across the entire AI industry. "Some people in the Trump administration will feel very tough about this, flexing their biceps at themselves at night," said Dean Ball, who helped draft the administration's AI Action Plan and now works at the Foundation for American Innovation, a think tank.
However, he also warned that this event may lead businesses to be reluctant to work with the Pentagon, even shifting their operations overseas. "In the long run, this is not good for the image of the United States as a stable business environment," Ball said, "and stability is what we rely on."
Anthropic's leadership believes that Claude will help build AI systems robust and powerful enough to play a decisive role in the future global order.
If that is indeed the case, then the conflict between this company and the Pentagon may just be a prelude to a larger historical process.