Asking Weird Questions is the New Elite Skill

If AI mirrors your logic, you're already obsolete. Don't be a data point; be a dilemma. Reclaim your agency with Socratic defiance. Ask or perish.

By Kirti Tarang Pande

Kirti Tarang Pande is a psychologist, researcher, and brand strategist specialising in the intersection of mental health, societal resilience, and organisational behaviour.

February 7, 2026 at 8:52 AM IST

There was a time when saying, “I sit and think all day,” did not sound like unemployment. Socrates thought. Aristotle thought. Diogenes thought so aggressively that he embarrassed entire cities. Today, if you say, “I think,” you are met with: “Yes, but what do you really do?” Thinking has no LinkedIn category. Philosophy has become a closed loop, where most people study it to teach it—a Ponzi scheme of footnotes feeding footnotes.

“Knowing things” has lost its erotic charge. Even the least curious among us has access to near-infinite information just a tap away. The SpongeBobs and Patricks of the world can generate intricate code. Everyone is a “know-it-all”. Having answers is no longer a flex. Knowledge once separated elites from the rest. Now it is ambient, like spam.

So why, with all this information, do we not have solutions to climate change, inequality, institutional distrust, or loneliness disguised as productivity? Why do we seem no closer to resolving what matters most? That alone should make us suspicious.

For over a century, we were trained like biological calculators. Our value lay in how many facts we could store and how quickly we could process them. Education became an elaborate ritual of answer-production: memorise, recall, perform, be correct. In this culture of obedience, the original question disappeared. Critical thinking and independent reasoning were sidelined. Even in research, institutions rewarded statistical significance and short-term application. Theoretical insight became a ceremonial paragraph no one read. Original thought was treated as a personality quirk—a luxury belief rather than a civic skill.

Then came AI. It could optimise, summarise, apply, and scale. What it still cannot do is decide which futures are worth optimising for. That work remains human. Theoretical breakthroughs are driven by “why”. Unfortunately, we are out of practice.

Epistemic Agency
Humans have an intrinsic need to feel like authors of their own understanding. Psychologists call this epistemic agency.

But automated life, hyper-optimised schedules, and algorithmic governance have eroded it. We have eliminated daydreaming. We have removed friction. With predictability, we have weakened the conditions for doubt, inquiry, and intellectual rebellion. Socratic questioning does not thrive in calendar blocks.

For example, you snap a picture and AI can turn it into a convincing Van Gogh. But it can’t create a new style. That comes from living a life. Current AI lacks the subjective experience, or qualia, that drives innovation.

When Van Gogh arrived in Paris, he encountered colours unknown to his Dutch life. Too poor to waste paint, he bought wool threads and layered them to observe colour relationships. He mixed directly on canvas because palettes were expensive. Poverty produced texture. Constraint produced originality.

Our schedules leave no room for that. When productivity is the goal, daydreaming is labelled inefficient.

Not anymore.

The new epistemic power lies in problem-framing, Socratic defiance, Diogenes-like shamelessness, and Van Gogh–style friction—unfiltered by machines. It belongs to minds that can wonder, hesitate, and feel awe. Minds not optimised for output, but released.

When answers arrive faster than curiosity can form, something subtle erodes. We stop tolerating uncertainty. We reach for resolution as children reach for comfort. The relief is immediate. The cost appears later.

That is why AI feels threatening: we stopped being original long before it arrived.

We can still change this—if we stop treating AI as an existential enemy and start using it as a tool for human flourishing, for eudaimonia.

We can panic about jobs. Or we can see AI as a cognitive exoskeleton that allows us to return to our most human state: curiosity.

When humans do not frame questions, someone else does. Today, a small cluster of private labs in the Global North exercises this authority. They define what intelligence looks like and which values matter. Complexity is flattened. Disputed knowledge appears settled.

And that’s where this conversation stops being personal and starts being institutional. In a world without informational scarcity, the person who frames the question decides what is visible, solvable, and even thinkable. That power has shifted from retrieval to framing. It’s a great epistemic shift. And oddly, a eudaimonic gift.

We can offload retrieval and free mental space. What we do with that space matters. We can chase more productivity. Or we can move towards meaning. AI isn’t making us dumber; it has given us the opportunity to make philosophers sexy again.

We can remain Productive Minds, judged by output. Or become Inquiring Minds, judged by the quality of engagement with the world.

This requires redefining intelligence. Not as possession of answers, but as the capacity for “aha” moments—the ability to connect distant ideas into meaning.

We must ask:
Are our questions making us more resilient and curious?
Or are they turning us into passive consumers of AI-generated “truth”?

Did you know that children ask nearly 300 questions a day? Adults ask almost none. Schooling teaches us that not knowing is embarrassing. But framing a question requires admitting ignorance.

That is why teachers must shift from dispensing facts to becoming epistemic mentors. They must teach students to recognise bias when using AI tools. We need appreciative inquiry—questions that search for strengths and possibilities, not only defects.

Critical thinking remains a hidden curriculum, confined to elite humanities programmes. We must revive the Socratic method: exploratory, uncomfortable, unfinished.

Only then can we tackle wicked problems—by combining AI’s statistical power with human causal reasoning. For example: an AI prompted to “optimise yield in Punjab” will design one future. Asked to “ensure long-term soil health and farmer debt reduction”, it will design another. Same data. Different destiny.

Likewise, asking “Which districts are failing?” invites shame. Asking “Which districts show the most self-reliance?” invites pride. Data can motivate, not merely monitor.

The next generation does not need faster answers. It needs better questions—and the courage to live with them.

So that, when necessary, it can still say to the Alexanders of the world: “Step aside. You are blocking my sunlight.”