
AI start-up founders to Firstpost



As policymakers debated sovereign compute, data localisation and ethical AI at the India AI Impact Summit 2026, Indian startups pitched a parallel vision—private, air-gapped and modular AI systems designed to function without sending data to global clouds.

In an interaction with Firstpost’s Dheeraj Kumar, Raj K Gopalakrishnan, CEO & Co-Founder of KOGO AI, and Angad Ahluwalia, Chief Spokesperson at Arinox AI, spoke about sovereign AI infrastructure, job disruption, bias, and why they believe India’s AI future must keep “humans in the loop.”


Edited excerpts:

How do you see India’s startup ecosystem evolving in AI, especially compared with more developed markets like China or the US?

Raj K Gopalakrishnan: We need to compare apples with apples and oranges with oranges. India’s strength lies firmly in the application and orchestration layer of AI. If you look at foundational model development, countries such as the US and China are ahead—that’s a reality. But AI value is not created only at the foundation layer; it is monetised at the application layer.

India builds AI applications and orchestration systems for the world. That is where our depth lies.

If you look at our broader innovation story—whether in space technology or digital infrastructure—India has repeatedly demonstrated the ability to build world-class solutions at a fraction of global costs. The same principle applies to AI. We are developing small language models, nano models, and highly task-specific systems that can deliver outcomes at dramatically lower cost. That cost-efficiency is a structural advantage, and in that segment, we are ahead of many markets.

Angad Ahluwalia: If you examine where tangible value is created, it is in agentic systems and enterprise applications. Foundational models are important, but they are often two steps removed from revenue generation.

India is exceptionally strong at building deployable AI solutions—systems that enterprises can measure, scale and derive ROI from. Ultimately, the return on AI investment comes from the application layer, and that is precisely where India’s startup ecosystem has built meaningful capability.

Do you believe AI will significantly alter India’s job market? Which sectors could see the biggest shift?


Raj: If the question is whether AI will take away jobs—the honest answer is yes, it will. Certain roles, especially those that are repetitive, rule-based or process-driven, will inevitably be automated.

But that is only half the story. AI will also create entirely new job categories. Whether the number of jobs created exceeds those displaced is something only time will tell.

What I am certain about is this: AI will never function in India without a human in the loop. That principle is fundamental.

AI will increasingly handle the “what”—processing, analysis, execution of defined tasks. Humans will determine the “why”—context, judgment, ethics, intent and accountability. Decision-making and ethical frameworks cannot be outsourced to machines.

So while routine tasks may decline, demand will grow for roles centred on oversight, interpretation, strategy and governance.
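The human-in-the-loop principle described above can be sketched in a few lines. This is a hypothetical illustration, not KOGO AI's actual system: the function and task names are invented, and the point is only the control flow, where the AI proposes an action (the "what") and a person must approve it before anything executes (the "why").

```python
# Hypothetical human-in-the-loop gate: the model proposes, a person decides.

def model_propose(task: str) -> str:
    """Stand-in for an AI system handling the 'what': a drafted action."""
    return f"draft action for: {task}"

def human_review(proposal: str, approve: bool) -> dict:
    """A person supplies the 'why': judgment, intent and accountability.
    Nothing runs without explicit approval; rejections are escalated."""
    return {
        "proposal": proposal,
        "approved": approve,
        "status": "executed" if approve else "escalated",
    }

decision = human_review(model_propose("close a customer ticket"), approve=False)
print(decision["status"])  # escalated, not executed, because approval was withheld
```

The design choice the speakers emphasise is that the approval step sits between the model and the real-world action, rather than after it, so the machine never acts on its own authority.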

Angad: I’ll add one important point. An employee who understands how to use AI will be significantly more valuable than one who does not. That shift is already underway.


Will AI eliminate certain positions? In some cases, yes. But in the Indian context, human oversight will remain central.

At every organisational level, employees who fail to adopt AI tools—especially enterprise-grade, secure AI systems—will struggle over time. The divide will not just be between companies that use AI and those that don’t, but between employees who leverage AI and those who resist it.

Many companies are cutting headcount after deploying AI. Offices that once employed large teams are now operating with significantly leaner workforces. Isn’t that a structural threat?

Raj: It depends entirely on the lens you apply.

Companies are built to expand, not merely to optimise costs. AI is fundamentally a scale multiplier. It allows organisations to increase throughput, improve speed, and unlock new revenue streams.

Yes, certain roles—especially repetitive and process-heavy functions—will see displacement. We should not pretend otherwise. But at the same time, AI enables expansion at a scale that was previously not feasible. The structural shift is toward productivity amplification, not necessarily contraction.


Angad: What we are seeing in enterprises is a growing focus on revenue per employee rather than headcount reduction.

Take a simple example: if a property manager earlier handled 10 properties, with AI assistance that number could rise to 80 or even 100. The metric shifts from manpower volume to output per individual.

The objective, in most boardrooms, is not fewer employees — it is significantly higher productivity and better ROI from each employee.

What should startups, companies and governments do now to ensure AI remains fair, transparent and free from bias?

Raj: The first principle is that humans must control the “why”. AI can process information and generate outputs, but it cannot be allowed to define ethical intent. That responsibility must remain with people.

Going forward, we will see greater demand for roles centred around AI governance—ethicists, compliance specialists, oversight professionals. These are functions that cannot be automated away.

Startups and governments need to focus on two priorities. One, how do we responsibly manage and reskill human capital that may be displaced by automation? And two, how do we institutionalise ethical frameworks that ensure human supervision remains central to AI systems?


Bias, misuse and opacity are ultimately governance challenges. If humans define the objectives and guardrails clearly, AI systems can operate within those boundaries.

Angad: Government has a critical role in setting those guardrails. We have already seen global AI platforms introduce stricter monitoring systems and content safeguards tailored to different geographies and age groups.

This evolution is not unique to AI. If you look at digital platforms in their early years, content moderation was reactive. Over time, systems became proactive—violations are now detected at the source.

AI safety will follow a similar trajectory. Industry, regulators and civil society will need to work together to ensure that innovation and responsibility move in parallel.

Raj: Regulatory frameworks are already tightening. Response timelines for digital compliance have shortened significantly in recent months. These are early steps, but they indicate intent.

AI governance is not static. It will evolve. The key is to build adaptable frameworks that can keep pace with technological change.


Angad: There will always be a learning curve—and occasionally a “cat and mouse” phase between misuse and enforcement. But overall, we are optimistic that the ecosystem will mature in the right direction.

What excites you most about the Indian market, and what challenges do you foresee as you expand here?

Raj: We are living in extraordinary times. The scale at which AI can transform industries—from governance to manufacturing to financial services—is unprecedented. India, with its digital public infrastructure and large enterprise base, presents a massive opportunity for applied AI.

But the real challenge is not the excitement around AI. It is data.

If you want to reduce bias, eliminate hallucinations and improve reliability, you must start with clean, accurate and structured data. Data is the beacon of truth. If the underlying data is flawed, incomplete or misleading, the outputs will inevitably reflect those flaws.

If you repeatedly feed incorrect information into a system, it will confidently reproduce that inaccuracy. That is how bias and misinformation get amplified.


So for us, the biggest challenge in India—and globally—is ensuring data integrity. Sanitising, structuring and validating datasets is far more critical than simply scaling compute power. Without trustworthy data, even the most advanced models will produce unreliable results.
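The data-integrity argument above amounts to checking records before they ever reach a model. A minimal sketch of such a pre-ingestion check, with invented field names and sample records purely for illustration:

```python
# Hypothetical pre-ingestion validation: incomplete or malformed records
# are dropped before the dataset is used, rather than cleaned up afterwards.

REQUIRED_FIELDS = ("id", "text", "source")

def validate_record(record: dict, required: tuple = REQUIRED_FIELDS) -> bool:
    """A record passes only if every required field is present and non-empty."""
    return all(record.get(field) not in (None, "") for field in required)

records = [
    {"id": 1, "text": "verified filing", "source": "registry"},
    {"id": 2, "text": "", "source": "scrape"},   # empty text: rejected
    {"id": 3, "text": "orphan entry"},           # missing source: rejected
]

clean = [r for r in records if validate_record(r)]
print(len(clean))  # 1 of 3 records survives
```

Real pipelines add deduplication, provenance checks and schema validation on top of this, but the principle is the one Gopalakrishnan describes: flawed inputs are kept out at the source instead of being corrected downstream.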

In many ways, the future of AI will be determined not just by algorithms, but by the quality of the data we choose to build them on.

Does India’s people-centric AI approach risk leaving it behind in the global innovation race?

Raj: Why should the two be mutually exclusive? There is no inherent contradiction between people-centric AI and enterprise- or technology-driven AI. Artificial intelligence does not exist in isolation—it operates within human systems and is ultimately meant to improve human outcomes.

If policy frameworks prioritise agriculture, healthcare, education or public welfare, that does not mean innovation is being sidelined. In fact, anchoring AI in real-world societal needs ensures that innovation remains relevant and scalable.

A people-first approach is not a weakness. I would argue it is a strategic advantage. It aligns technological progress with long-term socio-economic impact rather than short-term commercial gain.

Angad: I would also question the assumption that India is neglecting enterprise AI. The country is simultaneously building solutions for corporations, startups and public-sector use cases.

If you look at global markets such as the United States or China, much of the AI momentum has been driven by large corporations. India, on the other hand, is attempting a more balanced model—combining enterprise innovation with population-scale deployment.

In that sense, India’s approach may not be slower; it may simply be more inclusive.
