Government Insights on AI-Generated Content: The Latest Updates from India

Responsibility for AI-Generated Content on X
Government’s Stance on AI-generated Content
The government of India is currently examining whether the social media platform X (formerly known as Twitter) should be held accountable for the content produced by its artificial intelligence tool, Grok. A source within the government has indicated that a clear legal standpoint regarding this matter will be established soon. This inquiry follows instances where users posed questions about Indian politicians to Grok, resulting in controversial responses.
Understanding Grok: The AI Tool
Grok operates as an artificial intelligence tool on the X platform, capable of responding to user inquiries. However, its recent outputs have drawn criticism and raised concerns about the potential for misinformation or harmful content, particularly related to public figures.
Legal Scrutiny of Content Responsibility
The government source indicated that, based on initial assessments, X could indeed be held responsible for the content generated by Grok. Nevertheless, this view remains subject to thorough legal examination. The Ministry of Electronics and Information Technology is currently engaging with X to better understand how Grok operates and what its outputs imply for the platform's obligations.
Government Regulations on AI Content
In response to previous incidents—specifically, when Google’s AI tool, Gemini, made disparaging comments about Prime Minister Narendra Modi—the Indian government acted swiftly to implement guidelines for managing content generated by AI. These guidelines are designed to ensure that social media platforms adhere to specific standards and do not propagate harmful information.
Section 79 of the IT Act and Legal Challenges
X is also involved in a legal dispute with the Indian government over Section 79 of the Information Technology Act. This section deals with the liability of intermediary platforms regarding the content posted by users. X has filed a lawsuit in the Karnataka High Court, arguing that the government’s interpretation of Section 79(3) is unlawful and infringes on online free speech. The court’s ruling on this matter will ultimately clarify the boundaries of responsibility for social media platforms.
Key Points on Section 79:
- Protection from Liability: Section 79(1) offers protection to social media platforms against legal action for content posted by users.
- Conditions for Protection: Section 79(2) outlines specific conditions that intermediaries must meet to retain this protection.
- Liability Risks: Failure to remove objectionable content within 36 hours of being notified can cost a platform its safe-harbor protection and expose it to legal action under other laws, including the Indian Penal Code.
Legal Interpretations and Content Blocking
X’s legal battle highlights concerns that the government may be attempting to establish a parallel content-blocking mechanism that bypasses the lawful procedure set out in Section 69A of the IT Act. X contends that this approach contradicts the Supreme Court’s landmark 2015 ruling in the Shreya Singhal case, which held that content may be blocked only through the structured legal process laid down in Section 69A.
Implications for Social Media Platforms
The ongoing discussions and legal battles underscore the evolving landscape of AI and social media regulations in India. As authorities refine their guidelines and legal frameworks, social media platforms like X must navigate complex obligations related to user-generated and AI-generated content.