EDITORIAL 29 December 2025

Model conduct: On India, AI use

Source: The Hindu

Context & The Gist

The article addresses India’s approach to regulating Artificial Intelligence (AI). It is in the news because China recently unveiled draft rules for AI, particularly concerning emotionally interactive services, prompting a comparison with India’s less intrusive but also less comprehensive regulatory framework. The central argument is that India needs to move beyond simply regulating adjacent risks (like deepfakes and financial fraud) and articulate a clear duty of care regarding AI product safety, especially concerning psychological harms, while simultaneously fostering domestic AI capabilities.

The article suggests India should avoid a ‘regulate first, build later’ approach, given its current lack of frontier AI model development capacity. It advocates for a strategy that prioritizes access to resources, workforce upskilling, and assertive regulation of downstream AI applications, rather than attempting to control the upstream development of models largely built and owned by foreign entities.

Key Arguments & Nuances

  • Reactive vs. Proactive Regulation: India’s current approach is largely reactive, responding to risks as they emerge (e.g., deepfakes) through existing laws. The article argues for a more proactive stance, defining a duty of care for AI product safety.
  • Balancing Regulation & Innovation: The article cautions against over-regulation that could stifle the development of AI within India, especially given its current dependence on foreign models.
  • Downstream vs. Upstream Regulation: Focusing on regulating how AI is *used* (downstream) is seen as more practical than attempting to control the *creation* of AI models (upstream), at least in the short term.
  • China’s Approach as a Counterpoint: China’s proposed rules on emotionally interactive AI are presented as an example of potentially overly intrusive regulation, highlighting the trade-offs between safety and individual freedom.
  • Importance of Computational Resources & Upskilling: Developing India’s own frontier AI models requires significant investment in computational infrastructure and a skilled workforce.

UPSC Syllabus Relevance

  • GS Paper II: Governance – Government policies and interventions; issues relating to the development and management of new technology.
  • GS Paper III: Science & Technology – developments and their applications and effects in everyday life.
  • GS Paper III: Science & Technology – Awareness in the fields of IT and computers.

Prelims Data Bank

  • IT Act, 2000 & IT Rules, 2021: The Information Technology Act, 2000 and the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 are currently used to regulate AI use in India, chiefly through due diligence requirements for platforms.
  • Digital Personal Data Protection Act, 2023 & Rules, 2025: Relevant to data privacy and protection aspects of AI.
  • RBI’s FREE-AI Framework: The Framework for Responsible and Ethical Enablement of AI, which governs model risk in credit and financial services.
  • SEBI Guidelines: Accountability measures for regulated entities using AI tools.
  • Nvidia: Leading manufacturer of GPUs crucial for AI development; highlighted in the context of the US-China AI race.

Mains Critical Analysis

The article highlights a critical juncture in India’s AI policy. The core issue is how to balance the need for responsible AI deployment – mitigating risks like psychological harm and misinformation – with the imperative to foster innovation and build domestic AI capabilities. A PESTLE analysis reveals the following:

  • Political: Government’s role in setting regulatory frameworks and promoting AI development through initiatives like the AI Mission.
  • Economic: Investment in computational infrastructure, upskilling the workforce, and attracting private investment in AI.
  • Social: Addressing potential societal impacts of AI, including job displacement and the spread of misinformation.
  • Technological: Bridging the gap in frontier AI model development and ensuring access to cutting-edge technology.
  • Legal: Developing a comprehensive legal framework that addresses AI-specific risks and liabilities.
  • Environmental: Addressing the energy consumption of large AI models and promoting sustainable AI practices.

A purely reactive regulatory approach carries significant implications: it risks stifling innovation, deepening dependence on foreign technology, and leaving India exposed to AI’s potential harms. A critical gap lies in the absence of a clear duty of care for AI product safety, particularly concerning psychological well-being. The article rightly argues for more assertive regulation of downstream applications, focusing on incident reporting and monitoring of model behaviour, rather than attempting to control the upstream development of AI models.

Value Addition

  • National Strategy for Artificial Intelligence (2018): India’s first AI strategy, focusing on ‘AI for All’.
  • AI Task Force (2018): Led by NITI Aayog, it laid the groundwork for India’s AI policy.
  • Justice B.N. Srikrishna Committee (2018): Report on Data Protection, which influenced the Digital Personal Data Protection Act, 2023.
  • Quote: “The key is not to stop progress, but to guide it.” – This sentiment encapsulates the article’s argument for a balanced approach to AI regulation.

Context & Linkages

AI models are being rolled out, guardrails and hygiene norms must follow

This past article highlighted the immediate concerns surrounding data security and misuse with the rapid adoption of AI models in India, including government restrictions on tools like ChatGPT. It underscores the urgency of establishing “guardrails and hygiene norms” – a point directly echoed in the current article’s call for a more proactive regulatory approach and a duty of care for AI product safety.

Express View: AI disclosure draft rules — a move in the right direction

The draft rules on AI-generated content discussed in this article demonstrate India’s initial steps towards regulating AI, specifically addressing misinformation and fraud. This aligns with the current article’s acknowledgement of existing regulations targeting adjacent risks, but emphasizes the need to go further and address broader safety concerns beyond unlawful content.

Playing catch-up with China in the AI race

This article emphasizes the competitive landscape in AI development, particularly the challenge posed by China. It reinforces the current article’s argument that India must avoid a ‘regulate first, build later’ approach, as it risks falling further behind in the development of frontier AI models.

The Way Forward

  • Establish a Clear Duty of Care: Define legal obligations for AI developers and deployers regarding product safety, particularly concerning psychological harms.
  • Invest in Computational Infrastructure: Increase access to high-performance computing resources for AI research and development.
  • Upskill the Workforce: Launch large-scale training programs to develop a skilled AI workforce.
  • Promote Public Procurement of AI: Encourage government agencies to adopt and develop AI solutions.
  • Foster Collaboration: Encourage collaboration between academia, industry, and government to accelerate AI innovation.
  • Develop Incident Reporting Mechanisms: Require companies to report incidents related to AI model behavior and potential harms.
