In August 2025, Maharashtra Police used AI to solve a hit-and-run case in record time. A truck identifiable only by a “red mark” would ordinarily be close to impossible to trace, yet video-analytics software scanned thousands of clips and identified the vehicle within 36 hours. The episode shows how AI can act as a powerful searchlight, surfacing hidden clues far faster than any human reviewer could.
Catching a suspect, however, is only the first step. Under India’s new evidence law, the Bharatiya Sakshya Adhiniyam (BSA), a computer’s “guess” is not enough for a conviction. Lawyers and judges must now work out how to turn a machine’s output into admissible proof: a 90% facial recognition match, for instance, must be supported by corroborating evidence before a court will accept it.
From Investigative “Lead” to Trial Exhibit
The Nagpur case shows how AI helps police catch suspects quickly. However, experts warn that technology is just a tool, not a replacement for human judgment. Under the new BSA law, a digital lead must pass many strict tests before a judge will ever allow it to be used as evidence.
For an AI match to count in court, it must satisfy the traditional rules of evidence. A judge will not simply accept the computer’s answer. Instead, the court asks how the AI reached its result, who was operating the software, and whether the original video files were altered or tampered with in any way.
To be trusted, AI results must be repeatable. This means another expert should be able to get the same answer using the same data. If the process is a “black box” that nobody can explain, the AI’s finding remains just a clue for the police, not proof for a trial.
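As a concrete illustration, the sketch below shows one way an examiner might demonstrate repeatability: hash the input so both experts can confirm they analysed byte-for-byte identical data, then compare the outputs of two independent runs. This is a minimal sketch; the function names and the JSON output format are illustrative assumptions, not a mandated procedure.

```python
import hashlib
import json

def sha256_of_file(path: str) -> str:
    """Hash the evidence file so two experts can confirm they
    analysed byte-for-byte identical input."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def runs_match(result_a: dict, result_b: dict) -> bool:
    """Check that two independent analysis runs produced the same
    findings, by hashing a canonical JSON serialisation of each."""
    def canon(r: dict) -> str:
        return hashlib.sha256(
            json.dumps(r, sort_keys=True).encode()
        ).hexdigest()
    return canon(result_a) == canon(result_b)
```

If the second expert’s run yields a different hash on the same input, the finding is not repeatable, and under the logic above it stays a police lead rather than trial-ready proof.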
The New Statutory Ledger: Understanding the BSA
The Bharatiya Sakshya Adhiniyam (BSA) has brought India’s evidence rules into the digital age, replacing the Indian Evidence Act of 1872 to better accommodate technologies like AI. Three changes are central to how digital evidence now works in court:
- Digital Files Are Real Documents: Computer records were long treated as secondary to paper. The BSA now gives them Documentary Parity: a digital file, such as an email or a system log, carries the same evidentiary weight as a signed paper document.
- Original Digital Evidence: Under Section 57, the law now recognizes certain digital files as Primary Evidence. For example, an original forensic image of a hard drive can be treated like the “original” source. This makes it easier for courts to trust digital data without needing to find a physical “master copy” that might not exist.
- The Power of Certificates: Even with these updates, the law is strict about how digital evidence reaches the courtroom. Under Sections 62 and 63, if you cannot produce the actual computer or server, you must supply a mandatory certificate: a formal assurance that the device was working properly and that the data has not been altered. Without this certificate, the judge can simply refuse to look at the AI’s findings (a sketch of the fields such a certificate covers follows this list).
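By way of illustration only, the sketch below models the ingredients such a certificate typically covers as a simple data structure. The class name and fields are assumptions drawn from the statutory description above, not an official schema or prescribed form.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ElectronicRecordCertificate:
    """Illustrative fields a BSA Section 63-style certificate covers
    (hypothetical representation, not an official format)."""
    record_description: str          # identifies the record and how it was produced
    device_particulars: str          # the computer or server involved
    device_operating_properly: bool  # the device was working correctly at the time
    record_sha256: str               # integrity check: the data has not been changed
    signatory_name: str              # person responsible for the device or system
    signatory_designation: str
    date_signed: date
```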
The Ghost in the Machine: Who Speaks for AI?
One of the biggest challenges with using AI in court is the “Black Box” problem: a system can give an answer without revealing how it reached that conclusion. Since a computer program cannot be cross-examined in the witness box, the law needs a way to explain the machine’s logic to a judge.
The solution lies in Section 79A of the IT Act, which provides for notified “Examiners of Electronic Evidence.” These trained experts serve as the official voice of the technology, translating complex technical processes into testimony the court can understand and rely on to reach a fair decision.
An examiner’s job goes far beyond handing over a printout. They must show that the digital evidence was never tampered with and explain the system’s known error rates. They also look for “biases”: hidden flaws in the software that might produce unfair or incorrect results against certain groups of people.
Without this human expert to explain the process, an AI’s findings are treated as just a “hint” for the police to follow. On its own, the machine’s output cannot be used as final proof in a trial. To be accepted by a judge, the “Ghost in the Machine” must have a qualified person to speak for it and verify its work.
The Four Pillars of Admissibility
To give AI evidence the best chance of being accepted in court, lawyers follow four main rules:
- Human Checking: AI output is never enough on its own. Every machine-generated result must be verified by investigators and backed by real-world proof, such as witness statements, physical exhibits, or phone records.
- Tracking the Data: There must be a clear chain of custody showing where the data went. Every step, from the original video to the final AI report, must be recorded with digital signatures and time stamps to prove nothing was changed (a minimal sketch follows this list).
- Extra Support: Courts do not trust bare percentages or “matches.” Even if an AI reports a 90% match, the police must gather independent evidence to establish guilt beyond reasonable doubt.
- The Official Certificate: The law requires a dedicated certificate for any digital evidence. This document is the “golden ticket”: without it, the court may refuse to look at the AI findings at all.
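The second pillar, chain-of-custody tracking, lends itself to a short sketch: every handling step is appended to a log with a UTC timestamp and a SHA-256 hash of the file at that moment. The log format and function names here are illustrative assumptions; a real deployment would also attach the digital signatures the pillar describes.

```python
import hashlib
import json
from datetime import datetime, timezone

def file_sha256(path: str) -> str:
    """Fingerprint the evidence file at this point in its journey."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_custody_event(log_path: str, evidence_path: str,
                      action: str, handler: str) -> None:
    """Append one timestamped, hash-stamped entry to the custody log."""
    entry = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "evidence_file": evidence_path,
        "sha256": file_sha256(evidence_path),
        "action": action,   # e.g. "seized", "copied to lab", "run through FR model"
        "handler": handler,
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
```

Because each entry re-hashes the file, any alteration between two logged steps shows up as a hash mismatch, which is exactly the tampering question a court will ask.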
Privacy in the Age of Algorithms
Using AI in police work does not happen in a vacuum. The Digital Personal Data Protection Act (DPDPA) sets rules for how personal data must be handled. While the police enjoy certain exemptions for investigations, they must still act proportionately: they may collect and use only what is genuinely necessary for the case.
Defense lawyers are now watching closely for something called “function creep.” This happens when data is collected for one simple reason but is later used by AI for a completely different purpose without permission. If the police use data in a way they weren’t supposed to, a judge might decide that the evidence is less reliable or shouldn’t be used at all.
Finally, the rules of disclosure ensure that accused persons can see the digital evidence against them. They may demand forensically identical copies of the files to run their own tests. This sets up a “battle of the experts” in the courtroom, where both sides use technology to check the facts and keep the trial fair.
A New 12-Step Strategy for Digital Evidence
To bridge the gap between technology and the law, experts have proposed a 12-step protocol for investigators, designed to ensure AI evidence is handled correctly from the very start. For example, officers must record their investigative goals before deploying an AI tool, guarding against the temptation to reshape the narrative later to fit whatever the tool returns.
The protocol also requires detailed record-keeping. Police must document the exact version of the AI software used and the specific settings they chose. To make the evidence even stronger, they should send copies of the digital files to independent labs. This proves that the results are consistent and can be repeated by other experts.
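The sketch below illustrates this kind of record-keeping: a small manifest capturing the tool, its version, the chosen settings, and a hash of every input, so an independent lab can attempt the same run. Every name and value in it is hypothetical, offered only to make the documentation requirement concrete.

```python
import json

def write_run_manifest(path: str, tool: str, version: str,
                       settings: dict, input_hashes: dict) -> None:
    """Record which tool, version, and settings produced an AI result,
    plus a hash of every input file, so the run can be repeated."""
    manifest = {
        "tool": tool,
        "version": version,
        "settings": settings,
        "input_sha256": input_hashes,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(manifest, f, indent=2)

# Hypothetical usage; every value below is illustrative, not a real case record.
write_run_manifest(
    "run_manifest.json",
    tool="vehicle-reid",
    version="2.3.1",
    settings={"match_threshold": 0.90, "checkpoint": "reid-base"},
    input_hashes={"cctv_clip_014.mp4": "ab12...ef"},
)
```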
Ultimately, while new laws like the BSA allow for better technology, the standards for proof remain very high. AI is meant to make police work faster, but it is not a shortcut. Every piece of digital evidence must still follow the strict rules of the court to be considered fair and trustworthy.
Recent Court Rulings on Digital / AI Evidence
Recent rulings from the Supreme Court and various High Courts demonstrate a growing caution toward AI, shifting from general curiosity to strict judicial oversight.
The “Hallucination” Trap – Fake Precedents: In a landmark 2026 observation, the Supreme Court of India addressed the dangerous rise of “AI hallucinations.” The Court ruled that relying on AI-generated case law that does not exist is not a mere technical error but professional misconduct. This followed a startling instance in which a lower court cited several judgments, complete with plausible-sounding citations, that had been entirely fabricated by a Large Language Model (LLM). The apex court emphasized that while AI can assist in research, the duty to verify every citation remains a human one.
Protecting Personality Rights from Deepfakes: The Delhi High Court has been at the forefront of tackling AI-generated misinformation, particularly deepfakes. In a significant personality rights case involving Gautam Gambhir, the Court issued an omnibus injunction to prevent the unauthorized creation and distribution of AI-generated videos and images using his likeness. The Court noted that the ease of creating “digital clones” poses a direct threat to a person’s reputation and the integrity of evidence, reinforcing that AI-generated content cannot be treated as a “fair use” shortcut when it infringes on individual rights.
Reliability in Administrative Law: The Bombay High Court recently scrutinized the use of AI in administrative decision-making. In a key tax matter, a substantial recovery notice was stayed because the department’s reasoning rested on AI-generated summaries and “rulings” that turned out to be legally inaccurate. The Court clarified that while the government may use technology to flag discrepancies, the final “application of mind” must be human. This builds on Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal (2020), which insists that the source and integrity of any digital record be strictly proven through the certificate now required by Section 63 BSA (formerly Section 65B of the Indian Evidence Act).
Judicial Caution and Human Oversight: Across these cases, the judiciary has sent a clear message: AI is a “lead” generator, not a judge. These rulings reinforce the “Four Pillars” of admissibility, ensuring that no AI output—whether it is a legal summary or a facial recognition match—is accepted without human verification, expert testimony, and a clear, documented chain of custody.
Problems in Presenting “Silicon Evidence” in Courts
The use of digital or AI-generated evidence—often called “silicon evidence”—faces a major challenge in authenticity and admissibility. Courts require strict compliance with legal standards such as Section 65B of the Indian Evidence Act, 1872 (now Section 63 of the Bharatiya Sakshya Adhiniyam, 2023 – BSA), which mandates certification for electronic records. In practice, this becomes complicated when dealing with cloud-stored data, AI-generated outputs, or blockchain records. For example, CCTV footage from a private server may be rejected if the proper certification is missing, even if it clearly shows the accused at the scene. Similarly, AI-generated reports (like predictive policing outputs) may be questioned because the underlying algorithm is not transparent or easily verifiable.
Another critical issue is the lack of technical expertise among legal stakeholders. Judges, lawyers, and even investigators may not fully understand how complex algorithms, metadata, or digital forensics operate. This creates a gap between technological evidence and its legal interpretation. For instance, in cases involving deepfake videos or manipulated audio, the defence may challenge the reliability of forensic analysis, and the court may struggle to assess competing expert opinions. Without adequate training or independent technical experts, courts risk either over-relying on or completely disregarding crucial digital evidence.
Finally, concerns of data integrity, privacy, and potential manipulation further complicate the use of silicon evidence. Digital evidence is highly susceptible to tampering, hacking, or unauthorized alteration if the chain of custody is not strictly maintained. For example, a WhatsApp chat presented as evidence can be disputed on grounds of editing or lack of original device verification. Additionally, the use of AI tools for surveillance or evidence gathering may raise constitutional concerns under the right to privacy, as recognized in Justice K.S. Puttaswamy v. Union of India. Thus, while silicon evidence offers powerful capabilities, its reliability and legality remain subject to significant scrutiny in courtroom proceedings.
Conclusion
India is updating its legal system to handle AI on its own terms. Rather than treating AI as a magic solution, the courts approach it as a complex digital record that demands careful human verification. To be used at trial, AI results must satisfy strict rules, including the mandatory certificate proving the data is genuine and unaltered.
This approach ensures that technology assists the law rather than replacing it. When AI evidence is presented, a qualified expert must be on hand to explain how it works. By insisting on these steps, India keeps new technological tools fair and reliable for everyone.


