
AI Misuse In Supreme Court Filings In India Raises Accountability Questions

The Supreme Court has uncovered a rare case of artificial intelligence misuse in litigation, after a rejoinder filed in a corporate dispute allegedly relied on hundreds of fictitious case laws. The discovery has raised deep concern within India’s legal system, with the bench stressing that the matter is too serious to overlook and must still be examined on its merits.

The controversy has emerged in the ongoing battle between Omkara Assets Reconstruction Private Limited and Gstaad Hotels Private Limited, linked to promoter Deepak Raheja. Arguments unfolded before a bench of Justices Dipankar Datta and A. G. Masih, which is hearing the dispute involving high financial stakes and complex recovery issues.


Artificial Intelligence Misuse In Supreme Court Filings

The alleged misuse of artificial intelligence came to light when senior advocate Neeraj Kishan Kaul, appearing for Omkara Assets Reconstruction, flagged serious irregularities in the rejoinder filed by the opposite side. Kaul informed the bench that the document cited numerous judicial precedents that could not be traced in any legal database.

According to Kaul, a few case titles mentioned in the rejoinder were genuine, yet the legal propositions linked to those rulings were wholly invented. Kaul argued that the pattern pointed to intentional use of AI tools to manufacture support, and added that this was not merely "an AI mistake" but a conscious attempt to mislead the Supreme Court.

Artificial Intelligence Risks For The Judicial Process

Kaul cautioned the judges about the pressure on the higher judiciary, observing that "a bench hears 70–80 matters a day. If the court unknowingly relies on AI-generated falsehoods, the results for the judicial system could be disastrous." The submission highlighted the danger of fabricated precedents slipping into arguments amid heavy court dockets.

Senior advocate C. A. Sundaram, representing Gstaad Hotels promoter Deepak Raheja, did not attempt to justify the controversial rejoinder. Instead, Sundaram expressed deep personal embarrassment, describing the incident as among the most difficult moments of a long legal career, while placing an affidavit from the advocate-on-record before the court.

Artificial Intelligence And Accountability Before The Court

The affidavit, read in court, carried an unconditional apology from the advocate-on-record and requested permission to withdraw the rejoinder entirely. However, the document also mentioned that the rejoinder had been prepared under the guidance of the litigant, prompting the bench to question where responsibility truly lay within the team handling the case.

The judges responded with a pointed observation: "We cannot simply brush it aside." The remark indicated that the court intends to examine how the document was produced and who directed the use of AI. Nonetheless, the bench decided that the underlying corporate dispute would continue to be heard on its merits, separate from any possible disciplinary angle.

The episode has triggered a broader discussion across legal circles on the unchecked use of artificial intelligence in drafting court documents. AI tools are often promoted for speeding up research and helping with backlog, but this case has underlined how unverified outputs can contaminate pleadings and affect judicial trust when misused by litigants or lawyers.

Legal experts have outlined several specific concerns about such artificial intelligence use in litigation, which are summarised below for clarity.

  • Unverified AI outputs: AI-generated text may appear credible but contain inaccurate or invented material.
  • Fabricated precedents: Non-existent case laws can be cited and escape notice during busy hearings.
  • Overreliance on tools: Dependence on AI may erode diligence, professional ethics and judicial integrity.

For many practitioners, the Omkara Assets Reconstruction versus Gstaad Hotels matter now stands as a warning that technology cannot replace human responsibility in legal practice. The Supreme Court is expected to consider whether the false filing attracts disciplinary consequences, even as the main commercial dispute continues through the regular judicial process.
