
Amit Singh, partner at KAnalysis law firm, is an expert in US patent laws and a registered agent with the Indian Patent Office.
February 27, 2026 at 5:06 AM IST
Artificial intelligence has moved faster than the rules meant to contain it. Patent law was built for a world of engineers, notebooks, and prototypes. It is now being asked to cope with systems that train themselves, generate designs, and arrive at solutions their creators barely understand.
Last year, more than 465,000 AI-related patent applications were filed worldwide. That is not a sign that the system is working smoothly. It is a sign that it is under pressure.
Much of what is now being patented would have looked implausible a decade ago. Software proposes new molecules. Models redesign factory layouts. Algorithms generate circuit architectures that few humans fully understand. In many cases, the “inventor” is no longer a person in the conventional sense, but a system guided, nudged, and audited by humans.
This has unsettled some of the oldest assumptions in patent law. Who deserves credit when a machine produces the breakthrough? How much human input is enough? At what point does computational output become engineering?
Regulators have not answered these questions in the same way. The result is a patchwork of standards in which legal strategy now matters almost as much as technical ingenuity.
Nowhere is this clearer than in the United States.
The USPTO has been quick to update its rules to fit AI breakthroughs while sticking to traditional patent laws. American examiners remain focused less on how an invention is created and more on what it actually does. Under the Alice/Mayo framework, AI claims must still avoid abstraction, but the decisive factor is practical effect. Does the system run faster? Consume less power? Interact more reliably with hardware? Solve a problem that previously resisted automation?
If the answer is yes, the claim has a fighting chance.
On inventorship, however, Washington has been blunt. Machines do not invent. People do. AI is treated as a sophisticated instrument, not a legal actor. The USPTO’s guidance in 2024 and 2025 makes this explicit: as long as a human meaningfully shaped the outcome, inventorship stands.
This stance has produced results. Approval rates hover around 55%, well above most European benchmarks. The implicit bargain is straightforward. Innovate aggressively, structure your claims carefully, and the system will meet you halfway.
Europe has gone in the opposite direction. The European Patent Office remains deeply sceptical of software-led inventions. AI and machine learning are still classified as mathematical methods unless anchored to a concrete technical purpose.
Under the guidelines refreshed for April 2025, improvement by itself is no longer enough. A claim involving computers or other devices qualifies as technical only if the AI contributes to solving a technical problem, for instance by being adapted to specific hardware or applied in a field such as image analysis. That technical effect must be demonstrated with solid evidence, such as detailed descriptions, mathematical support, or experimental data, not bare assertions.
Disclosure requirements are equally demanding. Applicants must explain how models are built, trained, and deployed in enough detail for others to reproduce them. Invoking “proprietary architecture” is no longer an acceptable substitute for transparency. The result is predictably lower grant rates, hovering near 38%.
Global Divergence
China is the clearest example. With more than 38,000 generative AI patent families filed between 2014 and 2023, it has turned intellectual property into an instrument of state policy. Its 2026 guidelines fold ethical screening into technical review. Systems built on dubious data or embedded bias can be stopped before they reach the register.
Applicants are required to open up the “black box” — to explain how their models function inside factories, grids, and logistics networks. Only natural persons qualify as inventors. Disclosure is exhaustive. Examination is fast. With AI-assisted review, timelines have fallen to around 15 months.
In Japan, very little has changed. Examiners still want to know what they have always wanted to know: is it genuinely new, does it solve a real problem, and can it actually be made to work? Anything that cannot clear those basic hurdles rarely gets far.
Inventors, in the legal sense, must still be people. Claims are expected to show concrete gains on factory floors and network systems, not just clever rearrangements of code. Japan prefers systems that behave predictably to ones that promise disruption and deliver uncertainty. It remains a major AI patent jurisdiction, with strong filing activity in areas such as AI-IoT integration.
Britain has followed Europe’s lead. UKIPO’s 2025 reforms made it clear that neural networks are not special simply because they are fashionable. If they amount to little more than software rearranging information, they will be treated as such. The move away from the Aerotel test reflects a wider loss of appetite for judicial experimentation and a return to safer, continental habits. Applicants are expected to show how AI changes the way machines behave in the real world. Inventorship requires human involvement.
India is harder to pigeonhole. More than 86,000 AI filings since 2010 reflect a mix of start-up ambition, public research, and outsourced global work. Purely machine-generated outputs are excluded. AI-assisted inventions are permitted, provided human agency is evident. Disclosure obligations are heavy, covering models, training data, and implementation. The Indian Patent Office applies a step-by-step eligibility test, asking whether a claim addresses a technical problem through concrete means, such as improving processing speed or integrating with hardware. Since 2025, approval rates have stabilised as examiners have settled on clearer internal benchmarks.
On one point, every major system is aligned: machines are not inventors. High-profile efforts such as the DABUS cases have tested that boundary and failed, making it clearer than ever. Legal systems remain unwilling to concede moral or legal personality to code.
Fault Lines
Where they differ is in how much abstraction they tolerate. The US privileges application. Europe insists on technical character. China embeds state priorities. Japan values continuity. India balances exclusion with developmental ambition.
All are tightening disclosure. All are grappling with uneven approval rates. And all participate in multilateral forums, including WIPO, in search of limited convergence.
Yet true harmonisation remains unlikely. Patent regimes reflect national anxieties as much as technological realities. America worries about losing its edge. China prioritises control and scale. Europe guards doctrinal integrity. India seeks to channel digital growth.
For inventors, there is no universal formula for AI patents. Claims must be rewritten for each jurisdiction. Disclosures must be recalibrated. Legal theories must shift with geography. A filing that succeeds in California may fail in Munich or Delhi.
Patent law, for all its strain, continues to insist on accountability. Someone must stand behind the machine. Someone must answer for its outputs.
For now, ownership of ideas remains a human privilege, even when the ideas themselves no longer are.