Indiscriminate Use of AI in Legal Submissions
The indiscriminate use of AI, particularly generative AI, to draft legal submissions has led to the inclusion of non-existent, “hallucinated” case laws in court filings, globally and in India, attracting severe judicial reprimand and exemplary costs.
Courts are emphasizing that the responsibility for verification rests solely with the legal professional, not the AI tool.
Why Generative AI Produces Hallucinated Case Laws
These hallucinations occur because generative AI tools such as ChatGPT predict words from statistical patterns in their training data rather than from verified facts; a toy sketch after the following list illustrates the mechanism. The concerns most commonly noted are:
- The evidential value of ChatGPT’s responses is low
- It can produce incorrect responses
- It may generate fictional case laws
- It may produce fabricated data
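To make the mechanism concrete, here is a minimal, purely illustrative Python sketch, not ChatGPT’s actual implementation: a toy “model” that assembles a citation-shaped string out of fragments that are common in legal text. All party names, reporters and outputs below are hypothetical. The output looks like a real case name precisely because it follows the pattern of one; nothing in the process checks whether the case exists.

```python
import random

# Toy illustration only: a real LLM predicts tokens from learned
# probabilities, but the failure mode is the same -- it composes
# text that *looks* like a citation without consulting any database.

# Hypothetical fragments of the kind that recur in Indian case names.
PARTY_NAMES = ["Jyoti Tulsiani", "Ramesh Kumar", "Elegant Associates",
               "Sunrise Builders", "State of Maharashtra"]
REPORTERS = ["AIR", "SCC", "Bom CR"]

def plausible_citation(rng: random.Random) -> str:
    """Assemble a citation-shaped string from high-frequency fragments.

    Nothing here consults a law report; plausibility, not existence,
    drives the output -- which is exactly why hallucinated citations
    read so convincingly.
    """
    party_a, party_b = rng.sample(PARTY_NAMES, 2)
    year = rng.randint(1990, 2024)
    reporter = rng.choice(REPORTERS)
    page = rng.randint(1, 999)
    return f"{party_a} v. {party_b}, {year} {reporter} {page}"

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed for reproducible output
    for _ in range(3):
        print(plausible_citation(rng))
```

A real large language model works at the level of learned token probabilities rather than hand-picked fragments, but the failure mode is the same: plausibility is rewarded, and existence is never checked.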
Judicial Caution on the Use of AI in India
Indian courts have emphasized “utmost caution” regarding AI usage.
While AI is acknowledged for legal research, reliance on unverified ChatGPT-generated content for final submissions is considered reckless.
AI-assisted drafting of judgments in India raises similar verification concerns.
Alarming Rise of Non-Existent Case Citations
There is an alarming rise in the number of non-existent cases being cited in Court.
Reasons for Reliance on AI Platforms
One of the major factors is the heavy work pressure on lawyers as well as on judges of the subordinate judiciary, coupled with the fact that they generally do not have access to expensive law journals and online legal portals; they are therefore sometimes constrained to resort to AI platforms for legal research and citations.
Judicial Response to Non-Existent Citations
It cannot be denied that these AI platforms often produce non-existent citations.
The higher Courts and Tribunals deprecate this practice and often impose exemplary costs for misleading the Courts, even where the lawyers never intended to mislead and were unaware of the fabrication.
Similarly, enquiries are conducted against members of the subordinate judiciary in respect of non-existent citations appearing in the judgments and orders pronounced by them.
Key Legal Questions Arising
The question therefore arises: what should be the impact of quoting such non-existent citations?
- Should the Court impose costs on the lawyer or the client?
- Would such non-existent citations in the written submissions compel the judge to reject those submissions?
- Would non-existent citations in a judgment of the subordinate judiciary invalidate the judicial order?
Let us examine these aspects with reference to the case law laid down by various Courts, which have answered these concerns judicially.
Judicial Precedent on AI-Generated Hallucinations
1. Mr. Deepak s/o Shivkumar Bahry v. Heart & Soul Entertainment Ltd. (Bombay High Court)
| Neutral Citation | 2026:BHC-AS:828 |
|---|---|
| Court | Bombay High Court |
| Costs Imposed | ₹50,000 |
| Reason | Submission of AI-generated arguments citing a non-existent case |
The Bench imposed costs of ₹50,000 on the litigant (Heart & Soul Entertainment Ltd., appearing in person) for submitting AI-generated arguments citing a non-existent case.
The order arose from a leave and license eviction dispute in Mumbai’s Oshiwara; the Court upheld the eviction while deprecating the “dumping” of unverified AI content on the Court.
The Court directed that the costs be paid to the High Court Employees Medical Fund within two weeks of the judgment.
Observations of the Court
The Court in strong words observed thus:
“A strong pointer is seen from a reference made to one alleged caselaw ‘Jyoti w/o Dinesh Tulsiani Vs. Elegant Associates’. Neither citation is given nor a copy of judgment is supplied by the Respondent. This Court and its law clerks were at pains to find out this caselaw but could not find. This has resulted in waste of precious judicial time. If an AI tool is used in aid of research, it is welcome; however, there is great responsibility upon the party, even an advocate using such tools, to cross verify the references and make sure that the material generated by the machine / computer is really relevant, genuine and in existence. This Court finds that the Respondent has simply filed written submissions by signing them without verifying its contents.”
Warning Against Misuse of AI
The Court expressed concern over this practice, warned against such misuse, and observed thus:
“This practice of dumping documents / submissions on the Court and making the Court go through irrelevant or non-existing material must be deprecated and nipped at bud. This is not assistance to the Court. This is a hurdle in swift delivery of justice. This Court will not take such practices kindly and it is going to result in costs. If an advocate is found to be indulging in such practice, then even stricter action of referring to Bar Council may follow.”
Global Prevalence of AI-Generated Hallucinated Case Laws
2. Use of Hallucinated Case Laws in the United States
The practice of quoting non-existent case laws exists almost across the globe, as dependence on AI platforms increases every day.
It would be worthwhile to examine some such cases from the USA.
In September 2025, Los Angeles attorney Amir Mostafavi was fined $10,000 by a California appellate court for submitting a brief containing 21 fabricated case citations generated by ChatGPT.
| Jurisdiction | California, USA |
|---|---|
| Penalty Imposed | $10,000 |
| Total Cases Cited | 23 |
| Fabricated Citations | 21 |
The court warned that lawyers must personally verify all citations; the fine marks a significant exemplary penalty for citing hallucinated case laws.
It is worth mentioning that, of the 23 cases cited in the opening brief, 21 were invented by the AI tool.
The $10,000 penalty is considered the largest issued by a California court for AI-generated fabrication.
The attorney had used ChatGPT to “improve” his appeal, but failed to review the AI-generated content before filing.
The court emphasized that attorneys are responsible for verifying every citation and that fake legal authority is a growing issue.
3. ByoPlanet International, LLC v. Johansson and Gilstrap (Southern District of Florida)
In yet another landmark case, decided in August 2025, the Southern District of Florida imposed nearly $86,000 in sanctions against plaintiffs’ counsel in ByoPlanet International, LLC v. Johansson and Gilstrap.
It is the largest sanction to date for filing hallucinated AI-generated legal authority, and a watershed moment for the profession.
| Court | Southern District of Florida |
|---|---|
| Sanctions Imposed | Nearly $86,000 |
| Nature of Misconduct | Hallucinated AI-generated legal authority |
The Court did not dismiss this blunder as an inadvertent error or a misunderstanding of new technology.
The court cited repeated, systemic and bad-faith misuse of generative AI, despite multiple warnings, motions to dismiss and explicit notice that citations were false.
The result involves dismissed cases, fee-shifting sanctions and, most significantly, a judicial opinion that will be cited for years.
Indian Judicial Approach to AI-Generated Citations
4. Gummadi Usha Rani vs Sure Mallikarjuna Rao
Gummadi Usha Rani vs Sure Mallikarjuna Rao (CRP No. 2487/2025) was decided by the Andhra Pradesh High Court on 21 January 2026.
The Court ruled that non-existent citations generated by Artificial Intelligence (AI) tools do not automatically invalidate judicial orders when the underlying legal reasoning remains sound and correctly applied.
Facts of the Case
The brief facts of the case are that the trial court (V Additional Junior Civil Judge, Vijayawada) cited four fictitious judgments in its judgment dated August 19, 2025.
When the High Court sought a report on September 26, 2025, the judicial officer admitted that the citations were AI-generated and had been incorporated in good faith, without verification.
Observations of the High Court
The Bench of the High Court expressed serious concern over the risks involved in relying on AI tools.
The Bench elucidated that Artificial Intelligence may not have complete access to all relevant laws; there is therefore a fair chance that it may fail to grasp the true import of a question of law and may ignore important and binding precedents.
The Bench warned about indiscriminate use of AI tools and observed thus:
“While AI tools may appear to provide convincing and effective answers, there is a real risk that such responses may be factually or legally incorrect. In some cases, AI may even generate judgments that do not exist or wrongly apply judgments that are unrelated to the issue at hand. This is a matter of grave concern”.
The court further warned the lawyer community as well as the judges of the subordinate judiciary that excessive reliance on AI could compromise privacy and erode public confidence in the judicial system.
“Judges, too, would be compelled to devote valuable time to verify the correctness of AI-generated citations, resulting in delays in the delivery of justice,” the Bench maintained.
Legal Principle Laid Down
However, the Court held that, in spite of the quoting of non-existent case laws, if the arguments of the litigant or the order of the judicial officer are otherwise sound, the same must be judicially respected.
The Court observed thus:
“Mere mentioning of the non-existent citations/rulings generated by Artificial Intelligence in the order would not vitiate the order if the law as considered in the order is the correct law of the land and there is no fault in applying the correct law, correctly to the facts of the case.”
Judicial Warnings from England and Wales
5. Venkateshwarlu Bandla Vs. Solicitors Regulation Authority
Venkateshwarlu Bandla Vs. Solicitors Regulation Authority, Case No. AC-2024-LON-003457, decided by the England and Wales High Court (Administrative Court) on 13.05.2025, carries warnings on AI risks that are relevant globally and in India.
The judgment buttresses the caution that unverified AI and fake citations waste judicial resources and erode the integrity of the judicial process.
6. Frederick Ayinde v. London Borough of Haringey
In Frederick Ayinde v. London Borough of Haringey (MANU/UKAD/0304/2025), it was observed that lawyers must verify that the authorities cited at the bar are not fictitious, and must thus refrain from misleading the courts.
Judicial Guidance on AI-Generated Pleadings and Citations
7. Annaya Kocha Shetty v. Laxmibai Narayan Satose
Annaya Kocha Shetty v. Laxmibai Narayan Satose ([2025] 5 S.C.R. 58; 2025 INSC 466)
The Apex Court acknowledged the usefulness of AI-generated statements in enhancing efficiency and efficacy, but warned against lengthy pleadings and repetition.
The Court observed thus:
“Courts are also confronted with AI-generated or computer-generated statements. While technology is useful in enhancing efficiency and efficacy, the placid pleadings will disorient the cause in a case. It is time that the approach to pleadings is re-invented and re-introduced to be brief and precise.”
8. Arabyads Holdings Limited v. Gulrez Alam Marghoob Alam
Arabyads Holdings Limited v Gulrez Alam Marghoob Alam [2025] ADGMCFI 0032 (Abu Dhabi Global Market Courts)
Indemnity costs were awarded against a legal firm (MIO Legal Consultants) for including false legal authorities in its defence, with the judge ruling that the failure to verify AI-generated research is reckless, regardless of any intention to mislead.
| Court | Abu Dhabi Global Market Courts |
|---|---|
| Penalty | Indemnity Costs |
| Reason | Failure to verify AI-generated legal research |
9. UK High Court Case (2025)
In an £89 million damages claim against Qatar National Bank, 18 of the 45 cases cited were found to have been fabricated by AI, prompting the court to warn lawyers against using such tools without proper checks.
| Total Cases Cited | 45 |
|---|---|
| Fabricated Cases | 18 |
| Claim Value | £89 Million |
10. Mata v. Avianca, Inc.
Mata v. Avianca, Inc. (2023) (USA)
A seminal case in which a lawyer used ChatGPT for research, resulting in fake cases being submitted to a federal court in New York.
The lawyer was fined for acting in bad faith.
Misuse of Generative AI in Indian Insolvency Proceedings
11. Omkara–Gstaad Dispute (10.12.25)
In the Omkara Assets Reconstruction vs. Gstaad Hotels insolvency dispute under Section 7 of the IBC, India’s Supreme Court (Justices Dipankar Datta and A G Masih) identified a major misuse of generative AI in a rejoinder filed by counsel for Gstaad promoter Deepak Raheja, which cited hundreds of fabricated, non-existent case laws and misreported real precedents.
Counsel C A Sundaram admitted AI assistance, apologized, and sought withdrawal, while Omkara’s Neeraj Kishan Kaul highlighted the issue.
The Court refused to dismiss the appeal outright, opting to proceed on the merits while sternly noting that the misuse “cannot be brushed aside,” against the backdrop of NCLT / NCLAT rulings admitting insolvency against Gstaad and Neo Capricorn Plaza.
Repercussions include heightened judicial alarm over AI “hallucinations” eroding legal credibility, echoing US cases such as Mata v. Avianca and the sanctions imposed there. The episode has prompted calls for independent verification of AI outputs by lawyers, potential new disclosure rules and ethical guidelines from bar bodies, and it serves as a precedent for balancing AI assistance with human accountability in Indian pleadings.
Key Takeaways
- Liability: The lawyer of record is solely liable for the content, even if generated by AI.
- Professional Duty: Failing to verify citations is a breach of professional conduct.
- Consequences: Courts are imposing “wasted costs” (indemnity costs) to punish the reckless use of AI tools (a minimal verification sketch follows this list).
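Since the duty of verification rests on the lawyer, one practical safeguard is to extract every citation from a draft and flag anything that cannot be matched against a trusted source before filing. The Python sketch below is a minimal illustration of that workflow under stated assumptions: the verified index, the regular expression and the sample draft are all hypothetical, and a real workflow would check against an authoritative source such as a court registry or a licensed law report rather than a hard-coded set.

```python
import re

# Hypothetical index of citations already verified against an
# authoritative source (a court registry or licensed law report).
VERIFIED_CITATIONS = {
    "2025 INSC 466",
    "[2025] ADGMCFI 0032",
}

# Loose, illustrative pattern for common neutral-citation shapes,
# e.g. "2025 INSC 466" or "[2025] ADGMCFI 0032". A real workflow
# would need jurisdiction-specific patterns.
CITATION_RE = re.compile(r"\[?\d{4}\]?\s+[A-Z]{2,10}\s+\d{1,4}")

def unverified_citations(draft: str) -> list[str]:
    """Return every citation in the draft that is absent from the index."""
    found = CITATION_RE.findall(draft)
    return [citation for citation in found
            if citation not in VERIFIED_CITATIONS]

if __name__ == "__main__":
    # Hypothetical draft containing one real-format verified citation
    # and one fabricated citation.
    draft = ("As held in 2025 INSC 466 and in 2024 FAKE 123, "
             "the tenant must vacate.")
    for citation in unverified_citations(draft):
        print(f"UNVERIFIED: {citation} -- check before filing")
```

The design choice matters: the tool only flags what it cannot confirm. The confirming, like the filing, remains a human responsibility.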
Written By: Inder Chand Jain
Ph no: 8279945021, Email: [email protected]


