by sayum
05 December 2025 8:37 AM
“We are aware of it, we have seen the morphed video of us,” remarked Chief Justice of India BR Gavai, responding to concerns over the misuse of artificial intelligence during the hearing of a PIL seeking judicial regulation of AI. On November 10, 2025, in a significant moment reflecting the judiciary’s growing concern over digital misinformation and emerging technologies, a Supreme Court Bench of Chief Justice BR Gavai and Justice K Vinod Chandran heard a Public Interest Litigation (PIL) seeking the formulation of guidelines to regulate the use of Artificial Intelligence (AI) in the Indian judicial system.
In a telling exchange during the hearing, CJI Gavai acknowledged the circulation of a morphed video on social media platforms, which falsely depicted a courtroom incident involving him. The comment came as the petitioner’s counsel began outlining the risks posed by unregulated AI technologies, especially Generative AI (GenAI).
“Even the Supreme Court Is Not Immune”: Morphed Videos and Data Hallucination Trigger Urgency for Regulation
As the petitioner’s counsel opened submissions by warning that AI is increasingly being used in court processes despite its inherent risks, the Chief Justice interjected to share that he himself had recently been targeted by such technology.
“We are aware of it, we have seen the morphed video of us (two),” the CJI said, referring to a fabricated video that falsely portrayed a shoe-throwing incident in court and appears to have been generated or altered using AI tools.
The Bench, acknowledging the gravity of the issue, posted the matter for further hearing in two weeks. However, the Chief Justice’s remark highlighted how even the apex court is now confronting the real-world consequences of AI misuse.
Why the Petitioner Seeks Judicial Oversight on AI: Risks of “Data Opaqueness” and “Algorithmic Bias”
The petition, filed with the assistance of Advocate-on-Record Abhinav Shrivastava, urges the Supreme Court to frame a policy regulating the deployment and use of AI in the judicial ecosystem, warning that unrestricted use of Generative AI (GenAI) could severely compromise transparency, fairness, and public trust in justice delivery.
It stresses that GenAI, which operates through machine learning (ML), is designed to learn from data by identifying patterns rather than following explicit instructions. However, this process—termed “datafication”—often imports and amplifies systemic biases, embedding them into AI algorithms in ways that cannot be easily identified or corrected.
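To make the petition’s point concrete, consider a minimal sketch (illustrative only, and not drawn from the plea itself): a toy model that “learns” nothing but the frequency of past outcomes. If the historical record it trains on is skewed, that skew resurfaces as the model’s rule, even though no one ever wrote a biased instruction. The group names and outcomes below are hypothetical.

```python
# Minimal illustrative sketch (not from the petition): a toy "model"
# that learns outcomes purely from historical frequencies. A skewed
# record becomes a skewed rule, with no explicit bias coded anywhere.
from collections import Counter, defaultdict

# Hypothetical historical outcomes, skewed against "group_b"
history = [
    ("group_a", "granted"), ("group_a", "granted"), ("group_a", "denied"),
    ("group_b", "denied"), ("group_b", "denied"), ("group_b", "granted"),
]

# "Training": tally past outcomes per group
counts = defaultdict(Counter)
for group, outcome in history:
    counts[group][outcome] += 1

def predict(group: str) -> str:
    """Predict the most frequent historical outcome for a group."""
    return counts[group].most_common(1)[0][0]

print(predict("group_a"))  # granted -- the historical majority
print(predict("group_b"))  # denied  -- the historical skew, now a "rule"
```

Real GenAI systems are vastly more complex, but the mechanism the petition describes is the same in kind: patterns in the training data, biased ones included, become the behaviour of the system.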
The plea explains:
“AI integrated into the Judiciary and Judicial functions should have data that is free from bias, and data ownership should be transparent enough to ensure stakeholders’ liability. One of the biggest red flags of such integration is Data Opaqueness.”
The petitioner warns against the "black box" nature of GenAI systems — algorithms whose internal logic is not fully comprehensible even to their developers. This opacity makes it extremely difficult to detect flawed or manipulated outputs, especially in an unsupervised learning environment.
Fabricated Case Laws and AI-Induced Judicial “Hallucinations” Pose Threat to Article 14 and 19 Rights
The plea highlights a disturbing trend: AI-generated hallucinations — instances where AI creates fake legal citations, fabricated court rulings, or misrepresents judicial observations — which may find their way into the legal process, either inadvertently or maliciously.
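One commonly discussed safeguard, sketched below with hypothetical data, is to screen AI-drafted text for citations that cannot be matched against a verified registry of real cases before the draft enters the record. The registry, the regex pattern, and the example citations here are assumptions for illustration, not part of the petition.

```python
# Minimal sketch, with hypothetical data: flagging citations in an
# AI-generated draft that do not appear in a verified registry.
import re

# Hypothetical registry of verified citations (a real system would query
# an authoritative source such as the court's own records).
VERIFIED_CITATIONS = {
    "(2017) 10 SCC 1",  # e.g. Puttaswamy v. Union of India
}

def find_unverified(draft: str) -> list[str]:
    """Return SCC-style citations in the draft absent from the registry."""
    cited = re.findall(r"\(\d{4}\)\s+\d+\s+SCC\s+\d+", draft)
    return [c for c in cited if c not in VERIFIED_CITATIONS]

draft = "As held in (2017) 10 SCC 1 and in (2031) 99 SCC 777, ..."
print(find_unverified(draft))  # ['(2031) 99 SCC 777'] -- likely fabricated
```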
This, the petitioner contends, would directly affect Article 14 of the Constitution, which guarantees equality before law, as decisions could be based on flawed or imaginary legal foundations.
Furthermore, it argues that such AI-induced distortions infringe upon citizens’ “right to know” under Article 19, a critical component of freedom of expression guaranteed under Article 19(1)(a). If court records, orders, or arguments are manipulated or misrepresented using AI tools, the public’s access to accurate legal information is fundamentally compromised.
Judicial Data Ownership and Cybersecurity Are Central to AI Oversight
The petition also raises concerns about cybersecurity vulnerabilities arising from AI integration in court systems. In the absence of proper safeguards, judicial data — including confidential or sensitive information — may be exposed to unauthorised access, algorithmic manipulation, or targeted misinformation campaigns.
It argues that judicial data should remain under the ownership and control of the judiciary and that transparency in AI processes must be non-negotiable.
Conclusion
The Supreme Court’s acknowledgment of a morphed video targeting the Chief Justice himself has added a deeply personal and urgent dimension to the ongoing debate on regulating AI in the legal domain. As Generative AI tools increasingly intersect with the justice system, both in process automation and public perception, the call for judicial guidelines and protective frameworks becomes critical.
By initiating a broader conversation on the risks of “black box” technologies, algorithmic bias, and data hallucination, the Court appears poised to lay the groundwork for India’s first institutional policy on AI in the judiciary, a development that could set a global benchmark for democratic oversight of emerging legal technologies.
Date of Hearing: November 10, 2025