
T.K. Arun, ex-Economic Times editor, is a columnist known for incisive analysis of economic and policy matters.
March 6, 2026 at 12:07 PM IST
The US government has classified AI company Anthropic, the maker of the Claude chatbot, as a supply-chain risk. Anthropic is suing the government over the classification, which could potentially prevent any company that does business with the government from using Anthropic’s tools. This piece examines the implications for India and its use of AI for sovereign purposes such as national security.
What invited the Trump administration’s ire was Anthropic’s refusal to let the government use its platform for domestic mass surveillance and automated target selection for weapons without a human in the decision-making loop. Despite repeated requests from the government, particularly the Secretary of War, Pete Hegseth, Anthropic refused to remove its safeguards and give the government the right to make unrestricted use of its software platform and tools.
President Trump told reporters he had fired Anthropic like dogs. OpenAI, a rival AI firm, was quick to seize the opportunity and agree to make its AI tools available to the Department of War without conditions. That left Anthropic boss Dario Amodei free to describe OpenAI’s behaviour as ‘mendacious’.
There are several interesting sidelights to this first-ever instance of the US government designating an American company a supply-chain risk. That honour had been reserved for companies like China’s Huawei in the past.
One is that it reveals a relationship and a power dynamic between the government and corporations in the US, whose existence is rarely appreciated, even as Americans routinely take pleasure in damning all Chinese companies as habitual, if not legal, handmaidens to the Chinese Communist Party.
TikTok had been banned in the US on the presumption that its Chinese parent was subservient to the Chinese government and the CCP, whatever TikTok’s official status as a subsidiary of ByteDance, a Cayman Islands-registered variable interest entity with its operational arm registered in Singapore.
Instances like Apple’s refusal, late in 2015, to hand over to the investigating Federal Bureau of Investigation the encryption keys to an iPhone used by a terror suspect in an attack in San Bernardino, California, serve to reinforce the perception that, in the land of the free and home of the brave, governments have limited leeway over companies.
The eagerness with which America’s corporate elite lined up to pay homage to Trump before his inauguration was a good indication that such independence of business from the government was a myth. Quite apart from the lobbying that produces all kinds of fiscal incentives, permits, and state contracts, American businesses, particularly the bigger ones, do lean heavily on political support. When the Trump administration made it clear that it considered the agenda of Diversity, Equity and Inclusion to be ‘woke’ nonsense, only a few companies continued to uphold the agenda as a core embodiment of their values, while most either formally abandoned it or buried it through sustained neglect.
The short point is that, of business and government in the US, the proposition that never the twain shall meet has been more myth than reality. The good thing about the Trump presidency is that it tears off layers of diaphanous make-believe from the extent of political control over business. When it comes to advanced technology, there is very little make-believe to get rid of.
Anthropic is in negotiations with the Department of War over reinstating the company in the government’s good books, particularly following the reported deployment of Claude tools for target selection in the ongoing war with Iran. That does not alter the fact that AI companies, as well as their offerings, are national assets, not global public goods, even if these are made available to foreigners for free, as in India, or for a nominal fee.
AI can play an immense role in making national security choices, not just along the operational edge, such as in selecting targets for attack. One example of use of AI in passive defence would be scanning hundreds of thousands of satellite and drone images along the border, to spot and flag developments out of the ordinary. Analysing publicly available information to discern actionable patterns of behaviour of interest — what, in the jargon, is called OSInt or Open Source Intelligence — is another. Planning military stores and movements can be improved with AI. Military folk would be able to provide more comprehensive lists of AI use in national security.
What interests us is what kind of AI India can use for the purpose. ChatGPT, Claude, Copilot, Gemini-3 or Llama from American companies? France’s Mistral and its applications? Or open-source, open-weight models from China that can be freely downloaded and adapted? None of them.
The US government can override commercial contract obligations of AI firms to their clients, including national governments, if its assessment of national security needs warrants such an action. Can India consider itself immune to such strong-arm treatment at the hands of the US administration?
Iran’s naval frigate IRIS Dena was torpedoed by an American submarine off the Sri Lankan coast on March 4. It was at that spot only because it had accepted India’s invitation to take part in a joint naval exercise that India hosted for well over a dozen navies, and was on its way back. To attack that ship, the Americans had to disregard entirely India’s loss of face over such a development. But attack the ship they did. New Delhi does not even dare criticise the attack, merely calling it tragic.
The Trump administration will come down like a tonne of bricks on India if New Delhi takes any strong stand against US arbitrariness and highhandedness. That would include stopping access to any American AI by New Delhi for strategic purposes.
India will have to rely on indigenous AI models for its national security operations. Yes, Sarvam does have a foundation model with 2 billion parameters. It is a decent effort, no doubt, but it cannot match the American AI models that run on a trillion-plus parameters.
Can New Delhi leave the development of cutting-edge AI capability to private Indian companies? No private Indian company spends 15-20% of turnover on R&D, the way major American companies do. The government will probably need to set up a specialist state-owned enterprise to do the job, giving it operational autonomy and national prestige. Can Indian technology workers deliver globally competitive outcomes working in government agencies? C-DOT and ISRO are good examples. BARC and C-DAC offer less illustrious examples of success as well, probably because of more limited access to resources.
This is not a matter of choice but of compulsion. India needs capable AI for strategic, national security deployments, and that AI is unlikely to be forthcoming from private enterprise. India has the talent; what the talent needs is an organisational framework within which to operate.
Dump the fiction of genetic dysfunction in the public sector. Create a good company under capable leadership, and India will have the AI it needs.
We should thank President Trump for leading by negative example.