
Kembai Srinivasa Rao is a former banker who teaches and usually writes on Macroeconomy, Monetary policy developments, Risk Management, Corporate Governance, and the BFSI sector.
April 1, 2026 at 6:25 AM IST
Fraud in banking no longer stays contained. It leaks into markets, damages reputations and erodes investor confidence. The impact rarely ends with the reported loss.
What has changed is not just the frequency of fraud, but its ability to stay a step ahead of the systems meant to prevent it. Safeguards have improved, but so have the methods used to bypass them. Operational risk is no longer only about weak processes or faulty systems. It is about blind spots in behaviour, supervision and how seriously early warnings are taken.
Because there are always early warnings.
Frauds build gradually, through small overrides, ignored anomalies, and patterns that feel unusual but not urgent. By the time fraud becomes visible, it has usually been allowed to run for longer than it should have.
And yet, most responses remain reactive. Investigations begin, accountability is assigned, and controls are tightened. But deeper rethinks are less common. Fixes tend to address the last failure, not the next one. Meanwhile, those attempting fraud adapt just as quickly — finding new vulnerabilities or, in some cases, finding people willing to look the other way.
In 2023–24, over 36,000 frauds worth ₹112.6 billion were reported. A year later, the number of cases declined, but the amount involved rose sharply to ₹347.7 billion. In just the first half of 2025–26, ₹215 billion has already been reported. Small-value digital frauds dominate by count, accounting for 70–80% of reported cases. But it is the larger, more structural failures that leave the deeper imprint.
ABG Shipyard’s loan fraud (2022–24), the UCO Bank IMPS glitch (2023–24), discrepancies at IndusInd Bank (2025–26), fund diversion at IDFC First Bank (2026), and the recent Kotak Mahindra Bank incident (March 2026) — each is different in form, but similar in implication. Controls were in place and signals likely existed too. But somewhere along the chain, they did not translate into timely action.
That is what makes these episodes less like isolated incidents and more like recurring patterns. Operational risk failures do not sit within one function. They move across credit, technology and governance, often linking small gaps into larger breakdowns.
In a fast-moving information environment, news of fraud quickly spills into the market. Share prices react instantly, often before the full picture is clear. While there may be some correction over time, losses to investors are rarely reversed in full.
Operational risk, in that sense, becomes market risk.
Investors, who have no role in the underlying failure, bear part of the cost. Reputational damage adds another layer, sometimes with tangible consequences, as seen in actions such as IDFC First Bank’s removal from the Haryana government’s approved list. The spillover effects of fraud are therefore wider, and more persistent, than they first appear.
Common Signals
The most persistent weakness, however, is behavioural.
Fraud rarely operates in complete silence. It leaves traces — repeated visits, unusual familiarity between staff and external parties, transactions that do not quite fit established patterns. These are not always dramatic signals, but they are observable.
The persistence of large frauds raises an uncomfortable question: not whether systems existed, but whether the signals they generated were taken seriously. It is difficult for wrongdoing to continue unchecked when colleagues remain alert and willing to question what does not seem right. While cyber fraud may be entirely external, most large-value frauds require some degree of internal facilitation or oversight failure.
A vigilant workforce, therefore, remains the first line of defence.
Shared Defence
Traditional mechanisms such as whistle-blower policies need sharper execution, with defined turnaround timelines and visible follow-through. Employees must also be trained to recognise behavioural red flags, not just procedural deviations.
Risk-based internal audits can expand their scope to include behavioural review alongside documentation. Many frauds begin as intent. Paper trails may not reveal them early enough, but behaviour often does.
Equally important is how institutions respond to signs of indifference. When unusual conduct is noticed but not acted upon, it creates space for larger failures.
Technology will play an important role, but it cannot replace judgement. Tools like MuleHunter can flag unusual transaction patterns. Behavioural systems such as BioCatch and LexisNexis ThreatMetrix go further, picking up on how users type, pause or navigate a screen. They can detect when a customer is being coached, or when activity deviates from normal behaviour, allowing for early intervention.
But these tools do not operate in isolation.
If people hesitate to escalate, or choose not to, even the most advanced systems will miss what matters. Escalation frameworks therefore need to capture not just financial anomalies, but behavioural ones: what appears unusual, who is interacting with whom, what does not quite add up. This also requires creating an environment where speaking up is seen as part of the job, not a risk attached to it.
Circulars and compliance checklists, by themselves, offer limited protection. Their effectiveness depends on how well they are internalised and acted upon.
Fraud prevention is not a one-time fix. It is a continuous process of vigilance, adaptation and shared accountability.
The cost of failure no longer sits with the institution alone. It spreads across investors, markets and public trust. Managing operational risk, therefore, is not just about preventing loss. It is about holding the system together.