by Admin
08 April 2026 9:23 AM
"The core of adjudication — the weighing of evidence, interpretation of law, application of legal principles to facts — belongs exclusively to the domain of the human mind"
The High Court of Gujarat has issued a comprehensive institutional policy governing the use of Artificial Intelligence in judicial and court administration, applicable to all judicial officers, court staff, legal assistants, interns, and contractual employees across the High Court and the entire District Judiciary under its supervision. The policy comes against the backdrop of documented global instances of AI-generated fictitious judgments and judicial warnings against unverified AI citations. Issued in exercise of powers under Articles 225 and 227 of the Constitution of India, the policy explicitly frames AI as a decision-support tool — never a replacement for judicial reasoning.
The policy addresses the following institutional questions: what constitutes permissible and prohibited use of AI tools within the judiciary; how confidential litigant data must be protected when AI tools are used; and what verification standards must be met before AI-generated content can be acted upon in any judicial or administrative process.
The Gujarat High Court opens its policy with a stark warning against the risk of over-reliance on technology. While acknowledging that AI offers unparalleled speed and efficiency in handling voluminous data, the Court emphasises that the constitutional mandate of dispensing justice through human conscience, impartiality, and personal accountability demands the highest degree of caution. The policy warns that unregulated AI use carries the grave risk of "less use of human mind" and "unintended biased decision making," leading to a "subtle erosion of public trust in the human-centric nature of adjudication."
"Unregulated or unchecked use of AI carries the grave risk of gradual over-reliance on AI, less use of human mind, unintended biased decision making, which may cause subtle erosion of public trust in the human-centric nature of adjudication."
The Court draws a hard constitutional line at the threshold of decision-making. The policy absolutely prohibits the use of AI — directly or indirectly — for any aspect of judicial decision, adjudication, reasoning, application of law, interpretation of facts, weighing of arguments, determination of rights and liabilities, sentencing, bail considerations, interim orders, or final judgments. Critically, the prohibition extends to the use of AI for sorting, classifying, or evaluating evidence — including summarisation of depositions, credibility assessment, and relevance filtering.
"Artificial intelligence shall never be employed for any form of decision-making, judicial reasoning, substantive order drafting or judgment preparation, bail/sentencing considerations, or any substantive adjudicatory process."
On permitted uses, the policy adopts a deliberately narrow scope. AI is allowed for legal research, retrieval and analysis of judgments, extraction of ratio decidendi, and identification of precedents — but only with human conscience applied throughout and with outputs independently verified against approved journals and authoritative sources. AI may also assist in improving the language, structure, and clarity of draft orders and judgments, provided the substantive legal analysis and reasoning remain entirely those of the judge. Administrative functions such as cause list management, equitable distribution of cases, and statistical reporting are also permissible, but only on the basis of anonymised metadata.
"Such research work must be supported and confirmed by comparing with the approved journals of the case laws."
The policy imposes sweeping confidentiality protections. No confidential case information, personal data of parties or witnesses, privileged communications, or sensitive data as defined under the Digital Personal Data Protection Act, 2023 shall ever be entered into any public AI tool. Public AI tools — including free-tier versions of large language models such as ChatGPT, Gemini, Copilot, DeepSeek, Claude, and Grok — are restricted to general, non-case-specific research only. Even in approved enterprise deployments, entry of witness identities in pending criminal matters and information subject to court-imposed confidentiality orders remains strictly prohibited.
"No confidential information or data shall be entered into any public AI tool."
A robust verification framework forms the backbone of the policy's reliability safeguards. Every case citation, statutory provision, and legal proposition generated by an AI tool must be independently verified against the full text of the original judgment or statute from an authoritative source — AIJEL, SCC Online, AIR, the Supreme Court website, or official government gazettes — before use. The policy categorically states that a correctly formatted or internally consistent citation shall not, by itself, be treated as evidence of its existence or accuracy — a direct response to the well-documented problem of AI hallucinations generating fictitious but plausible-sounding judgments.
"The fact that a citation appears correctly formatted or internally consistent shall not be treated as evidence of its existence or accuracy."
On personal accountability, the Court leaves no ambiguity. Every judge remains personally responsible for every order, judgment, and observation issued in their name — a responsibility that cannot be delegated to, shared with, or diminished by any AI tool. Users cannot disclaim responsibility by attributing errors to AI. Any AI-generated output, once signed or authenticated by a user, becomes the sole and exclusive responsibility of that user. Legal Assistants and Research Associates using AI to assist a judge must ensure the judge is informed of such use.
"The use of AI does not constitute a defence to a finding of error, misconduct, or professional negligence. Users cannot disclaim responsibility by attributing errors to an AI tool."
Violations of any provision of the policy constitute misconduct and attract departmental and disciplinary proceedings under applicable service rules, in addition to civil and criminal liability under the Information Technology Act, 2000, the DPDP Act, 2023, and the Bharatiya Nyaya Sanhita, 2023. The policy will remain in force until revised by the Gujarat High Court or superseded by directions from the Supreme Court of India or its e-Committee.
Date of Policy: 2025 (as issued by the High Court of Gujarat)
Issuing Authority: High Court of Gujarat
Powers Exercised Under: Articles 225 and 227, Constitution of India
Applicable Laws: Information Technology Act, 2000; Digital Personal Data Protection Act, 2023; Contempt of Courts Act, 1971; High Court of Gujarat Rules, 1993; Bharatiya Nyaya Sanhita, 2023
Institutions Covered: High Court of Gujarat Registry, Gujarat State Judicial Academy, Gujarat High Court Arbitration Centre, Gujarat State Legal Services Authority, and entire District Judiciary of Gujarat