AI AND THE ADMISSIBILITY AND RELIABILITY OF AI-GENERATED EVIDENCE IN INDIAN COURTS: LEGAL CHALLENGES IN THE DIGITAL AGE
ABSTRACT
The integration of Artificial Intelligence (AI) into the legal system, from judicial assistance to forensic evidence analysis, presents a new set of challenges to established legal principles in India. While AI offers transformative benefits like increased efficiency and faster justice delivery through platforms such as SUPACE, its use as a source of evidence raises complex legal and constitutional questions. This article analyses the legal and constitutional implications of AI-generated evidence under Indian law. It examines the admissibility and reliability of such evidence in the context of the Bharatiya Sakshya Adhiniyam, 2023, and the Information Technology Act, 2000, highlighting their limitations in addressing issues like algorithmic bias, the “black box” problem, and error rates. The article further explores how the use of AI evidence intersects with fundamental constitutional rights, specifically the right against self-incrimination (Article 20(3)) and the right to privacy and a fair trial (Article 21). By categorizing AI evidence into human-input, purely automated, and hybrid outputs, it explains the distinct challenges each type poses. Finally, the article suggests key reforms to ensure a balanced and accountable use of AI in Indian courts, including amending the BSA, mandating transparency, establishing certified expert testimony, and maintaining robust judicial oversight. The conclusion argues that without a clear legal framework, AI risks undermining, rather than strengthening, the core tenets of a just and fair legal system.
INTRODUCTION
Artificial Intelligence has emerged as a transformative force in almost every field and profession, and the legal industry is no exception. AI is being used across industries and services: in medicine for early disease detection and surgical assistance, in Indian defence agencies, and in many departments of government. In the legal profession, AI supports legal practice, legal research, translation, and case tracking, and is even shaping how judges approach decision-making, speeding up justice delivery. Beyond this, AI plays a significant role in digital forensics, fingerprint and facial recognition, voice and audio analysis, pattern recognition, and ballistic analysis, and lawyers and judges use these tools in investigation and evidence collection. Platforms like SUPACE (Supreme Court Portal for Assistance in Courts Efficiency) and other AI-driven legal systems help the judiciary deliver justice faster.
Alongside these benefits, however, the integration of AI into the legal system raises unresolved concerns: the privacy of parties may be compromised, and sensitive information may leak if AI systems are not properly secured and are left with loopholes. At the same time, it brings up complex issues of constitutionality, legal admissibility, and the protection of individual rights.
This article sheds light on the evolving landscape where technology meets the law, offering a comprehensive perspective on the intricate relationship between AI and the Indian legal system. It examines the admissibility of AI-generated evidence and explores how it may affect legal safeguards and existing legal norms, especially Articles 20(3) and 21 of the Indian Constitution. It also considers the Bharatiya Sakshya Adhiniyam, 2023 and other pertinent laws, and how they interlink with AI.
RESEARCH GAP
The use of AI in the various kinds of legal work mentioned above, including platforms like SUPACE, has been increasingly reported; AI has even been used for predicting case outcomes and risk assessment. The open question is how the entry of Artificial Intelligence into legal work and processes, including evidence gathering and presentation, affects constitutional and legal rights in India.
RESEARCH OBJECTIVES
This research is guided by the following objectives:
- To study the use of AI in legal processes in India, particularly in evidence gathering and judicial assistance.
- To analyse the admissibility and reliability of AI-generated evidence under Indian evidence law, including the Bharatiya Sakshya Adhiniyam, 2023.
- To consider the constitutional implications of such evidence, focusing on Articles 20(3) and 21 of the Constitution.
- To examine the role of the Information Technology Act, 2000 in regulating digital and AI-related evidence.
- To suggest reforms that can ensure the fair and accountable use of AI in Indian courts.
A Brief History of Artificial Intelligence
The origins of AI can be traced to the mid-20th century. The field formally began in 1956 at the Dartmouth Conference, where researchers like John McCarthy and Marvin Minsky explored the idea of creating machines that could simulate human intelligence. Early AI focused on symbolic reasoning and problem-solving, with significant developments in natural language processing and machine learning emerging in later decades.
By the 1980s and 1990s, AI began to find practical applications in fields like medicine, finance, and logistics. The rise of machine learning and deep learning in the 21st century, fuelled by massive computational power and data availability, has enabled today’s advanced AI tools. In the legal field, AI now supports predictive analytics, e-discovery, contract review, and judicial assistance. Its integration into courts worldwide, including India, marks the next phase of this evolution.
ADMISSIBILITY AND RELIABILITY OF AI AND AI-GENERATED EVIDENCE IN INDIAN COURTS
The Indian Evidence Act, 1872 laid the foundation of the law of evidence. As technology advanced, Sections 65A and 65B were added to govern the admissibility of electronic records, requiring certification for their acceptance in court. The Supreme Court in Anvar P.V. v P.K. Basheer and Arjun Panditrao Khotkar v Kailash Kushanrao Gorantyal confirmed the importance of strict compliance with these provisions.
The Bharatiya Sakshya Adhiniyam, 2023 (BSA) replaced the Evidence Act and modernised evidentiary rules by explicitly recognising digital and electronic records as documents. While this reform makes it easier to present digital material, it still does not provide detailed guidance on AI-generated outputs, leaving courts to navigate questions of reliability and authentication on a case-by-case basis. The Information Technology Act, 2000 also plays an important role: it establishes presumptions regarding the veracity of secure electronic records (Section 85B) and permits cyber forensics labs to be designated as official Examiners of Electronic Evidence (Section 79A). Useful as these provisions are, they were not created with AI in mind, and they do not deal with issues like algorithmic bias, explainability, or responsibility for AI errors.
Admissibility and Reliability of AI Evidence
For evidence to be admitted in court, it must meet two fundamental requirements: it must be relevant to the case and reliable in terms of accuracy and credibility. While this test is straightforward for traditional forms of evidence, the rise of AI-generated material has made both requirements significantly harder to apply.
Relevance
AI-generated evidence can certainly be relevant if it directly relates to the facts in issue. For instance, an AI-based transcription of a dying declaration could be admitted under Section 32 of the Indian Evidence Act (now reflected in the Bharatiya Sakshya Adhiniyam, 2023). Similarly, facial recognition outputs that place an accused at the scene of a crime might appear highly relevant. However, relevance alone does not guarantee admissibility. Courts must still be convinced that the evidence is both authentic and credible.
Reliability
Reliability is where the greatest challenge lies. AI systems frequently function as “black boxes,” producing outputs without transparent reasoning. This lack of explainability makes it difficult for courts to test whether the evidence is trustworthy.
Concerns arise in several areas:
- Opacity of reasoning – Most AI systems, particularly those using deep learning, cannot provide a clear explanation of how they reached a conclusion. For example, if an AI tool identifies a suspect from CCTV footage, the court may be unable to scrutinise the reasoning behind the match.
- Bias in data – AI systems reflect the data on which they are trained. If the training data carries social or historical biases, the outputs may disproportionately disadvantage certain groups, raising constitutional concerns under Article 14 and Article 21.
- Error rates – Even advanced AI systems are not error-free. In criminal trials, where the stakes are high, a wrongful conviction based on a flawed AI output could amount to a miscarriage of justice (a short numerical illustration follows this list).
- Manipulation risks – AI can also be misused to generate false material, such as deepfakes. Courts must therefore be cautious in admitting AI evidence without strong safeguards against fabrication.
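To make the error-rate concern concrete, the following is a minimal illustrative sketch; the accuracy figures and population size are assumptions chosen for illustration, not data from any real system or case. Even a facial-recognition tool that is “99% accurate” will, when screening a database of a million faces for a single genuine match, flag roughly ten thousand innocent people, so a bare “match” proves very little on its own.

```python
# Illustrative base-rate calculation (hypothetical numbers, not real data):
# why a low nominal error rate can still mislead when an AI system screens
# a very large population for a single genuine match.

def expected_matches(population: int, true_matches: int,
                     sensitivity: float, false_positive_rate: float) -> None:
    """Print the expected true and false matches, and the chance that
    any given flagged person is actually the genuine match."""
    true_hits = true_matches * sensitivity
    false_hits = (population - true_matches) * false_positive_rate
    precision = true_hits / (true_hits + false_hits)
    print(f"Expected true matches : {true_hits:.2f}")
    print(f"Expected false matches: {false_hits:.0f}")
    print(f"Chance a flagged person is the real match: {precision:.4%}")

# A '99% accurate' system (1% false-positive rate) searching 1,000,000
# faces for one genuine match flags about 10,000 innocent people, so the
# probability that any single 'match' is correct is under 0.01%.
expected_matches(population=1_000_000, true_matches=1,
                 sensitivity=0.99, false_positive_rate=0.01)
```

This is the same base-rate reasoning a court applying a Daubert-style error-rate inquiry would need: a system’s headline accuracy says little until it is weighed against the size of the population being searched.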
Comparative Insights
Other jurisdictions have developed frameworks to assess novel forms of scientific and technological evidence. A leading example is the ‘Daubert standard’ in the United States (Daubert v Merrell Dow Pharmaceuticals, Inc 509 US 579 (1993)), which sets out factors for admissibility:
- Whether there is a known error rate
- Whether the methods used are generally accepted as reliable within the relevant scientific or technical community familiar with the methodology
- Whether the methodology has been subject to peer review by others knowledgeable in the specified field
- If standard procedures or protocols are applicable to the methodology, whether they were complied with
Similar principles could be applied to AI, even though Indian courts have not yet created a comparable framework. For example, in accordance with Section 39 of the Bharatiya Sakshya Adhiniyam (the provision on expert opinion, corresponding to Section 45 of the Indian Evidence Act), courts may demand proof of error rates, independent algorithm validation, and expert testimony outlining the system’s operation before admitting AI-generated outputs.
The Indian Point of View
Currently, electronic records are recognised by Indian law through the Information Technology Act, 2000 and the Bharatiya Sakshya Adhiniyam, 2023, but AI-generated evidence is not specifically regulated. In cases like Anvar P.V. v P.K. Basheer and Arjun Panditrao Khotkar v Kailash Kushanrao Gorantyal, courts have already struggled with the admission of digital records and have insisted on rigorous adherence to certification requirements.
The challenge with AI will be to ensure transparency, accountability, and fairness in addition to certification.
Reliance on AI-generated outputs risks admitting evidence that is problematic from a scientific or constitutional standpoint unless a framework is established. It may therefore be crucial to create a judicial standard, possibly a localised version of the Daubert test, an “Indian Daubert.”
Constitutional Implications
AI’s impact on fundamental rights can be assessed from multiple perspectives. One approach involves examining the use of AI and AI-generated evidence as primary sources of evidence. Another perspective delves into the role of AI in gathering and processing additional evidence.
The role of AI in assisting electronic and scientific evidence:
- AI is being used to analyze complex forensic data. For example, in DNA analysis, highly sensitive modern techniques can detect DNA from multiple people at a crime scene, making it difficult to identify individual contributors. Researchers are using machine learning to help separate and identify these profiles, making the evidence more reliable. Similarly, AI algorithms are being developed to analyze gunshot patterns, helping investigators more accurately determine what happened. This AI-assisted analysis can make scientific evidence stronger and more trustworthy.
Navigating Constitutional Rights
Article 21 of the Indian Constitution states that “No person shall be deprived of his life or personal liberty except according to a procedure established by law.” Courts have held that the right to privacy is recognised under Article 21, but it is not absolute and must be balanced against the public interest. Under Article 21, an invasion of privacy must be justified on the basis of a law that stipulates a procedure which is fair, just, and reasonable, and the law must also be valid with reference to the encroachment on life and personal liberty under Article 21 (Justice K.S. Puttaswamy v Union of India). Further, when a conflict arises between the right to privacy and the right to a fair trial (at all stages: investigation, inquiry, trial, appeal, revision, and retrial), both falling within the ambit of Article 21, the right to privacy may have to yield to the right to a fair trial. Moreover, the mere fact that evidence was collected illegally does not render it inadmissible in court (Pooran Mal v Director of Inspection). Though the right to privacy is a serious concern in the digital arena, the mere use of AI-based evidence will not amount to a breach of privacy.
Article 20(3) upholds the right against self-incrimination: “No person accused of any offence shall be compelled to be a witness against himself.” In Selvi v State of Karnataka, the Supreme Court held that evidence-gathering techniques such as narcoanalysis, brain mapping, lie detection, and polygraph tests violate Articles 20(3) and 21 of the Constitution. These tests cannot be regarded as completely truthful and reliable; there remains a real chance of false results and error, which conflicts with the standard of proof “beyond reasonable doubt” that is essential in criminal trials. However, where the accused voluntarily undergoes such a test, the results may be admitted in consonance with Section 27 of the Indian Evidence Act. Where AI devices and their methodologies further substantiate the reliability, authenticity, and trustworthiness of such techniques, their use would not amount to self-incrimination; rather, it would help ensure that the trial is speedy and that guilt is proved beyond a reasonable doubt. As discussed above, this would make scientific evidence stronger and more reliable, bringing significant change to the criminal trial.
TYPES OF AI EVIDENCE
Legal experts have identified three main categories of AI evidence, and each raises unique questions about its admissibility and reliability in court.
- Human-input recordings, where AI simply processes information that a person has provided. A great example is an AI that transcribes a voice recording. The core evidence is the original recording, and the AI’s role is to make it more accessible. The main legal challenge here is proving that the AI’s processing is accurate and has not introduced errors, since a simple mistake in transcription could change the meaning of a key conversation.
- Purely automated outputs, which are generated by machines with little to no human involvement. Think of an automatic number plate recognition system. In these cases, the evidence is treated much like traditional machine data, so the key questions for the court revolve around whether the system was properly calibrated, maintained, and whether the data could have been tampered with. Proving the reliability of such a system is crucial, as the machine itself cannot be cross-examined.
- Hybrid outputs, which are created when a predictive algorithm combines human input with machine analysis. This is where AI truly gets tricky, as seen in predictive policing or sentencing algorithms. The biggest issue is the “black box” problem: it is often impossible to understand how the algorithm reached its conclusion, making it incredibly difficult for a defendant to challenge the evidence. Furthermore, these systems are trained on historical data, which can contain and amplify existing human biases, raising serious concerns about fairness and equality. As a result, the admissibility of hybrid evidence poses the greatest challenge to a fair and just legal system, requiring a careful balancing of technological benefits against fundamental constitutional rights.
SUGGESTIONS:
1. Amend the Bharatiya Sakshya Adhiniyam (BSA) to include clear provisions on AI evidence.
The BSA, which replaced the old Indian Evidence Act, has made some strides in addressing electronic and digital evidence. However, it needs to be updated to specifically deal with the unique challenges of AI-generated evidence. Currently, AI
outputs are often shoehorned into existing categories of “electronic records,” which is an inadequate approach. The new law should create a distinct category for AI evidence, classifying it based on its type (e.g., automated, hybrid) and the level of human intervention. It should set out specific rules for how this evidence is to be collected, preserved, authenticated, and presented in court. This will provide legal certainty, clarity for judges, and a predictable framework for both prosecution and defense. Without a specific legal framework, the admissibility of AI evidence will continue to be a source of confusion and inconsistent judicial decisions.
2. Mandate transparency so parties can challenge AI outputs.
The “black box” problem of AI, where the reasoning behind an algorithm’s output is opaque, is one of the most significant barriers to a fair trial. To combat this, the law must mandate a “right to explanation” for AI-generated evidence. This means that if AI is used to produce evidence, the prosecution or the party presenting it must be able to explain how the AI arrived at its conclusion. This includes disclosing the algorithm used, the dataset it was trained on, and any known limitations or error rates. This transparency is essential for procedural fairness, as it allows the opposing party to scrutinize the evidence, identify potential biases, and mount an effective defense. Without this, a defendant could be convicted based on evidence they cannot meaningfully question, which is a clear violation of due process.
3. Rely on expert testimony from certified forensic specialists (s 39 BSA).
The complexity of AI systems requires that judges and lawyers rely on the opinions of experts, as provided for in Section 39 of the BSA (which carries forward Section 45 of the Indian Evidence Act). However, the law needs to go further by establishing a formal process for certifying and regulating forensic specialists in AI. These experts would not only have to be knowledgeable about the technology but also understand the legal and ethical implications of its use in court. Their role would be to provide independent analysis of the AI system’s reliability, potential biases, and the validity of its output. By having a pool of certified experts, the court can ensure that the scientific rigour of the evidence is properly evaluated, moving beyond a simple trust in technology. This will ensure that the quality, not just the quantity, of evidence is what matters in a trial.
4. Ensure privacy compliance with the Digital Personal Data Protection Act, 2023.
The widespread use of AI in investigations, such as for facial recognition or data analysis, involves the collection and processing of vast amounts of personal data. The Digital Personal Data Protection Act (DPDPA), 2023, provides a
framework for protecting personal data and can serve as a foundation. However, the law needs to clarify how it applies to state agencies and law enforcement. Specific provisions should be enacted to govern how personal data can be collected by AI systems for legal purposes, requiring clear consent or a lawful basis, and ensuring that the data is used only for the purpose for which it was collected. The Act’s principles of purpose limitation and data minimization are particularly relevant here. Without strong privacy safeguards, the use of AI in law enforcement could lead to mass surveillance and an erosion of individual rights, making the justice system more intrusive rather than more just.
5. Maintain judicial oversight, ensuring AI remains assistive, not determinative.
The final and most crucial safeguard is the role of the judge. AI must always be treated as a tool to assist judicial decision-making, never to replace it. The ultimate responsibility for a verdict must remain with a human judge who can weigh all the evidence, including the AI’s output, in its proper context. The court must have the discretion to admit or reject AI evidence, especially if its reliability is in doubt or if it violates constitutional rights. This principle ensures that human judgment, empathy, and the consideration of nuance (qualities that AI lacks) remain at the core of the judicial process. Judicial oversight is the ultimate check and balance, preventing the justice system from becoming a purely algorithmic one where human lives and liberties are decided by code.
CONCLUSION
Artificial Intelligence is already influencing Indian legal practice, from SUPACE in the Supreme Court to forensic tools in criminal investigations. Yet, when it comes to evidence, AI poses difficult challenges. The IT Act, 2000 and the Bharatiya Sakshya Adhiniyam, 2023 provide a framework for admitting electronic records, but neither adequately addresses the special problems of AI—its opacity, potential for bias, and risks to privacy and fairness.
The constitutional protections of Article 20(3) and Article 21 remain vital safeguards. They remind us that efficiency cannot come at the cost of rights. If AI is to play a greater role in evidence gathering and presentation, Indian law must evolve to ensure transparency, reliability, and accountability. Without such reforms, the risk is that AI will weaken, rather than strengthen, the justice system.
References
- Abinaya S, ‘Admissibility and Reliability of AI-Generated Evidence in Indian Courts’ (2024) 10(4) International Journal of Law, Legal Research
- Human Rights Law Review, ‘Artificial Intelligence in the Legal Sector’ (2024) https://humanrightlawreview.in/wp-content/uploads/2024/08/Artificial-Intelligence-in-the-Legal-Sector.pdf accessed 26 September 2025.
- Lawful Legal, ‘The Role of Artificial Intelligence in the Indian Legal System: Challenges and Prospects’ (2023) https://lawfullegal.in/the-role-of-artificial-intelligence-in-the-indian-legal-system-challenges-and-prospects/ accessed 26 September 2025.
- Lawful Legal, ‘The Role of AI in the Indian Legal System’ (2023) https://lawfullegal.in/the-role-of-ai-in-the-indian-legal-system/ accessed 26 September 2025.
- Anvar P.V. v P.K. Basheer (2014) 10 SCC 473.
- Arjun Panditrao Khotkar v Kailash Kushanrao Gorantyal (2020) 7 SCC 1.
- Selvi v State of Karnataka (2010) 7 SCC 263.
- Justice K.S. Puttaswamy v Union of India (2017) 10 SCC 1.
- Pooran Mal v Director of Inspection (1974) 1 SCC 345.
- Bharatiya Sakshya Adhiniyam, 2023.
- Information Technology Act, 2000.