A mature culture of innovation is not built on apology, gloss, denial, or blame. It is built by asking why fakery seems more alluring than honesty.


Kirti Tarang Pande is a psychologist, researcher, and brand strategist specialising in the intersection of mental health, societal resilience, and organisational behaviour.
February 19, 2026 at 3:17 PM IST
Growing up, we are taught that lying is a moral failing. Good kids tell the truth and bad kids lie. That binary is misleading.
Lying is, in fact, a deeply human psychological response to fear, shame, and the need to protect identity. We lie because we are afraid—of punishment, rejection, loss of status, or exposure as less competent than we wish to appear. And sometimes, truth feels like violence, so we soften, delay, or edit it.
“It’s nothing,” we say, when it is very much something, to avoid confronting pain.
We lie to sidestep shame, which threatens our core sense of self: If I am not innovative, not cutting-edge, who am I? We cushion the ego against harsh reality through denial.
And we lie for impression management—by curating a version of ourselves or our institutions that aligns with desired perception. These mechanisms, rooted in basic human psychology, explain far more than isolated scandals.
Reputational Inflation
They explain why, on February 18, 2026, at the India AI Impact Summit in Delhi, Galgotias University presented a commercially available Chinese Unitree Go2 robot—priced at ₹200,000–300,000—as “Orion,” described as developed in-house by their Centre of Excellence.
Professor Neha Singh's on-camera claim went viral. Online users identified the mismatch, memes exploded ("Made in China, Marketed in India," "International Bezatti"), and the university was ordered to vacate its stall. Apologies followed, attributing the incident to "miscommunication" and an "ill-informed representative" eager for camera time.
The episode is not alien; it is merely amplified.
Lies are all around us. A child saying “It just fell” to avoid disappointment, a professional adding unproven skills to a résumé to stay competitive, a founder exaggerating product readiness to secure funding—all share the same roots.
But lying at an individual level is different from lying at an institutional level.
At the institutional level, the stakes scale when procurement blurs into “development,” demonstration into “innovation,” and ambiguity fills the gaps.
This is reputational inflation, thriving in competitive ecosystems where perception can outrun substance. No IP theft occurred here. It was a bid to signal prestige, funding potential, student enrolments, and alignment with national rhetoric on technological self-reliance.
The damage differs sharply by scale. Individual lies harm relationships; institutional lies are systemic. Universities shape minds, influence policy, and command public trust. Their claims affect thousands, framing national narratives.
Layered accountability shields them: Was it the professor? The PR team? Leadership? The “system”? Everyone can say “miscommunication”; no one admits deliberate misrepresentation.
In a nation pursuing AI sovereignty under “Make in India” and Atmanirbhar Bharat, such episodes feel like betrayal of collective aspiration. The outrage is instinctive (“How dare they?”), but outrage is not strategy.
A more effective response is to treat lies as data, not defiance. What is this lie protecting?
In Galgotias' case: status, competitive positioning, alignment with innovation narratives, anxiety about appearing behind.
Understanding motive does not excuse misconduct. It provides a diagnostic map. The episode reveals how easily procurement is rebranded as R&D, how incentive structures make hype feel safer than humility, and how loosely we distinguish "developed by" from "used for research."
What now?
We need structural fixes, not spectacle. Spectacular scandals don’t build a culture of innovation.
National platforms must operationalise terms with precision—“developed in-house,” “assembled,” “customised,” “procured and deployed”—as integrity markers, not semantic luxuries.
Consequences for misrepresentation, such as loss of platform access and heightened funding scrutiny, must make honesty the rational choice. If we reward transparent collaboration instead of penalising admission of reliance on global technology, pretence will lose its appeal.
Memes offer catharsis. But ridicule without pattern analysis wastes opportunity.
We need to ask:
How often is imported tech rebranded as indigenous?
What disclosure norms suit public showcases?
Where does aspiration end and deception begin?
The goal here is to distinguish collaboration from fakery.
At the individual level, we must update trust models. Treat claims as claims, not facts. Demand transparency through publications, patents, R&D disclosures, documented work, open collaborations, and precise corrections instead of vague apologies. We must look beyond “Centre of Excellence” banners.
And ask ourselves the uncomfortable question:
When have we lied to save face?
By recognising our small edits—résumé embellishments, half-truths—we can understand how human systems drift into collective self-deception.
But this understanding is not tolerance.
Treating lies as data means regulating emotional spikes, separating what was untrue from what it reveals about stakeholder incentives, and assessing whether it is a one-off or a pattern.
Then we can choose: demand reform, verify cautiously, or withdraw trust.
What we must not choose is fatalism—“Everyone lies; this is how things work.” It normalises exaggeration and sanctions hype over honesty.
Scandals fade. Systems where fakery feels safer than truth endure.
Lies are rarely about evil. They are about fear.
The real question is this: how do we build incentives where honesty feels safer, and braver, than deception?