Introduction
Artificial intelligence has rapidly transformed various sectors, including healthcare, transportation, finance, and law enforcement. While AI systems enhance efficiency and decision-making, their increasing autonomy raises serious concerns when criminal harm occurs. Traditional criminal law is based on human conduct and intent, making it difficult to attribute liability to artificial intelligence. This research paper examines the challenges AI poses to established principles of criminal liability and explores possible legal approaches to ensure accountability in the age of intelligent machines.
Legal Framework and Provisions
At present, no legislation in India specifically governs the criminal liability of artificial intelligence. Existing criminal statutes, such as the Indian Penal Code, 1860 (now replaced by the Bharatiya Nyaya Sanhita, 2023), are premised on human intention and conduct. Liability arising from AI is therefore generally addressed through the principles of vicarious liability, negligence, and corporate criminal liability. Emerging frameworks, such as data protection laws and proposed AI regulations, seek to address the accountability gaps created by autonomous technologies.
Applicable Acts and Laws
- Bharatiya Nyaya Sanhita (BNS), 2023 – Governs criminal liability, including negligence, abetment, and intent. Human actors responsible for AI actions are punished under relevant sections.
- Information Technology Act, 2000 – Addresses unauthorized access, data breaches, and misuse of digital/AI systems. Sections 43 and 66 are often cited.
- Indian Penal Code, 1860 (for older cases) – Principles of criminal liability, including negligence, abetment, and corporate responsibility, can be applied to AI-related incidents.
- Emerging AI Regulations – Proposed policies and guidelines (e.g., NITI Aayog AI policy) may influence future liability frameworks.
Punishments in India – Relevant Sections
Since AI cannot be punished, liability is fixed on humans or corporations under existing laws.
| Law / Section | Nature of Liability | Punishment |
|---|---|---|
| Section 106, Bharatiya Nyaya Sanhita (BNS), 2023 | Culpable mental state for offences committed through AI | Imprisonment or fine, depending on the offence |
| Section 111, BNS, 2023 | Rash or negligent acts causing harm | Imprisonment up to two years, or fine, or both |
| Sections on Abetment (BNS) | Use of AI as a tool to commit an offence | Same punishment as the principal offence |
| Section 43 & Section 66, Information Technology Act, 2000 | Unauthorized access or misuse of AI-driven systems | Compensation, imprisonment up to three years, or fine |
| Corporate Criminal Liability (BNS & IT Act) | Offences committed through corporate-controlled AI systems | Heavy fines, compensation, and regulatory sanctions |
Case Law Summary
1. Mrs. X vs Union of India and Ors (26 April 2023)
Facts of the Case
- The petitioner, Mrs. X, was a victim of non-consensual intimate images (NCII) circulated online.
- The accused coerced her, took explicit images, and leaked them on websites and a YouTube channel.
- She attempted to get these images removed through intermediaries such as Google and Microsoft, but they failed to act promptly.
- She filed a writ petition under Article 226 of the Constitution and Section 482 CrPC seeking:
  - Protection of her rights
  - Removal of the offensive content
  - Registration of an FIR
Issues Considered
- Whether intermediaries are legally obliged to remove NCII promptly under the Information Technology Act.
- How to protect the fundamental rights of victims, including privacy and dignity.
- What guidelines courts can issue for swift action against online sexual abuse.
Arguments
- The petitioner argued that immediate removal was necessary to prevent irreparable harm.
- She contended that failure of intermediaries violated her right to privacy and dignity.
- The Union of India and intermediaries claimed that existing IT Act guidelines regulate content removal.
- They cautioned against judicial overreach.
- In weighing these arguments, the Court had to balance freedom of speech with the right to privacy and dignity, emphasizing that timely compliance is necessary to prevent further trauma to victims.
Judgment
- The Court directed intermediaries to act promptly in removing NCII content.
- It strengthened complaint and takedown mechanisms under the IT Act.
- The judgment highlighted the protection of victims’ privacy and dignity.
- It ensured speedy redressal of online sexual abuse cases.
- The case was disposed of with guidelines for authorities and intermediaries to follow, reducing harm to victims.
2. Shri Harish Chandra Singh Rawat vs Union of India & Another
Court, Bench, and Date
| Court | High Court of Uttarakhand at Nainital |
|---|---|
| Judgment Date | 21 April 2016 |
| Bench | Chief Justice K.M. Joseph and Justice V.K. Bist |
Background and What the Case Was About
Harish Chandra Singh Rawat, then Chief Minister of Uttarakhand, challenged the proclamation of President’s Rule in Uttarakhand issued on 27 March 2016 under Article 356 of the Indian Constitution.
Rawat’s petition argued that the President’s Rule proclamation and the Union Government’s recommendation were unconstitutional, illegal, and based on false or insufficient material. He alleged that the process leading to the invocation of Article 356 was mala fide and aimed at toppling a democratically elected Congress government.
He sought orders to:
- Quash the Article 356 proclamation
- Restore his government
- Invalidate all consequential acts and orders passed during President’s Rule
The case arose during a political crisis in the Uttarakhand Assembly involving disputes over a money (appropriation) bill, demands for a division of votes, and alleged attempts by rebel MLAs to destabilize the government.
Key Legal Issues Raised
Whether There Was a Constitutional Breakdown
- Rawat argued that the Centre’s action under Article 356 was unjustified.
- He claimed there was no valid loss of majority in the Assembly.
- The Governor and the Union Government relied on a procedural dispute before the Speaker and the contested passage of the appropriation bill to conclude that there was a constitutional breakdown.
Allegations of Horse-Trading and Political Motive
- The petitioner alleged that a doctored video and other evidence were improperly used.
- He claimed these materials were unverified and unjustified grounds for imposing President’s Rule.
Procedural Issues Before President’s Rule
- Rawat argued he was never given a fair opportunity to respond.
- He asserted that the chronology of events showed unfair conduct by the Governor and the Centre.
Respondents’ (Union and State) Position
- The Union of India asserted that sufficient material existed to justify invoking Article 356.
- They relied on a representation by 27 MLAs demanding a division of votes.
- This was said to indicate a potential loss of confidence.
- They argued that continuation of the government without a confidence vote amounted to constitutional breakdown.
- They also claimed the petitioner had misleadingly presented facts regarding voting and majority.
Outcome (Judgment)
The publicly available excerpts do not fully record the final order of the High Court: they contain the oral judgment and detailed arguments up to a point, but do not clearly state whether the writ petition was allowed or dismissed.
In Article 356 cases, courts typically examine:
- Whether the material justified the President’s satisfaction
- Whether constitutional processes were followed
- Whether there was a real breakdown of constitutional machinery
To determine the exact ruling, the complete judgment must be obtained from an official High Court repository.
Legal Importance
This case is an example of judicial review of Article 356 (President's Rule) and echoes the principles laid down in S.R. Bommai vs Union of India:
- Article 356 must be used sparingly.
- It can be invoked only when a genuine breakdown of constitutional machinery exists.
- Courts act as “sentinels on the qui vive” to protect federal structure and prevent arbitrary dismissal of elected governments.
3. Shilpa Shetty Kundra vs Getoutlive.in & Ors — Interim Application (L) 38469 of 2025
Facts of the Case
Parties Involved
- Applicant: Shilpa Shetty Kundra, a Bollywood actor and public figure.
- Respondents: Getoutlive.in and multiple other websites/platforms (Respondent Nos. 1–28), including governmental bodies, namely the Ministry of Electronics and Information Technology (MeitY) and the Department of Telecommunications (DoT).
Nature of Complaint
- Unauthorized use of the applicant’s photographs and likeness.
- AI-generated deepfake content depicting her in obscene/sexually explicit scenarios.
- Content circulated across multiple online platforms without consent.
Alleged Harms
- Violation of her fundamental rights under Article 21 (privacy, dignity).
- Irreparable harm to reputation and public image.
- Harassment and humiliation due to online circulation.
Reliefs Sought
- Immediate takedown of infringing URLs/content.
- Blocking access to AI-generated obscene deepfake content.
Issues Raised
- Prima Facie Violation of Personality Rights: Whether the AI-generated deepfake content infringes on Shilpa Shetty’s rights to privacy, dignity, and personality.
- Interim Relief: Whether the Court should direct takedown/blocking of content pending final adjudication.
- Role of Government and Platforms: Responsibility of hosting platforms and government bodies (MeitY and DoT) to prevent unlawful circulation of content.
Interim Judgment / Directions
Protection of Privacy and Digital Identity
- Court recognized prima facie violation of privacy, dignity, and reputation.
- Noted that personal identity cannot be reconstructed or circulated in harmful ways without consent.
Takedown Orders
- All defendants directed to delete infringing URLs/content immediately.
- MeitY and DoT directed to block all links and websites infringing on applicant’s rights.
Substantive Issues Reserved
- Final determination on personality rights claims and AI/deepfake issues deferred for regular hearing.
- Interim order focused solely on urgent protection.
Legal Principles Highlighted
Personality Rights and Privacy
- Right to privacy, dignity, and reputation is constitutionally protected.
- Digital selfhood and image must be safeguarded, especially for women public figures.
AI and Deepfakes
- Unauthorized creation and dissemination of AI-generated content constitutes prima facie violation of personality rights.
- Courts willing to intervene promptly to prevent irreparable harm.
Contemporary Significance
- Demonstrates India’s judiciary responding to modern digital threats.
- Reinforces legal accountability for AI misuse, deepfakes, and online harassment.
- Highlights growing importance of interim reliefs to safeguard reputation and privacy in the digital age.
Impact on Society
The rise of artificial intelligence has transformed everyday life, but it also poses significant risks to society when misused. AI-driven decisions in healthcare, finance, law enforcement, and social media can cause harm, including privacy violations, financial fraud, wrongful arrests, or dissemination of fake content. The inability to hold AI itself accountable shifts responsibility to developers, operators, and corporations, creating legal and ethical dilemmas.
This uncertainty can erode public trust in technology, hinder adoption, and increase social anxiety. Clear legal frameworks and accountability mechanisms are essential to balance innovation with societal protection.
Conclusion
Artificial intelligence is reshaping society, offering unprecedented efficiency and innovation, yet it simultaneously challenges traditional concepts of criminal liability. Since AI lacks consciousness and intent, existing criminal law cannot directly punish AI systems. Consequently, responsibility falls on human actors, including developers, operators, and corporations, highlighting gaps in current legal frameworks.
Indian laws, such as the Bharatiya Nyaya Sanhita, 2023, and the Information Technology Act, 2000, are being applied to address AI-related harm, but these provisions are often inadequate to fully capture the complexities of autonomous decision-making. International examples, such as self-driving car accidents, demonstrate the global struggle to assign liability appropriately.
To safeguard society and foster responsible AI development, there is an urgent need for comprehensive legislation, clear regulatory guidelines, and ethical standards that ensure accountability without stifling innovation. Establishing these frameworks will protect public safety, uphold justice, and promote trust in AI technologies.
References
- Source: Indian Kanoon – https://share.google/GWGCRSoqxCTg0C2vS
- Source: Indian Kanoon – https://share.google/MvEGTgY4i19JDpauL
- Source: Indian Kanoon – https://share.google/UPpp5QEak3Bt2fypB
Written By: A.Kherin Trufina