1. Introduction
1.1 Background And Development Of Artificial Intelligence In India
Artificial Intelligence (AI) has emerged as one of the most disruptive and transformative technologies of the twenty-first century, reshaping industries, governance, and society at large. Across the world, AI is being applied in high-stakes domains such as healthcare, financial services, education, and law enforcement, where it promises increased efficiency, predictive precision, and creative solutions to long-standing problems. India, with its fast-growing digital economy and highly skilled workforce, has embraced this technological revolution with remarkable speed.
National programs like Digital India, Make in India, and Startup India have encouraged an ecosystem for technological innovation and entrepreneurship, which has allowed AI to penetrate various segments of the economy.
AI applications in India are already evident across various fields:
- Hospitals use predictive AI models for treatment planning and early diagnosis
- Fin-tech firms leverage machine learning algorithms for risk assessment and detecting fraud
- E-commerce businesses offer personalized suggestions based on AI algorithms
- Law enforcement agencies increasingly depend on AI-based surveillance and face recognition technology
However, as AI adoption picks up pace, it also raises significant ethical, social, and legal issues. Concerns about accountability for harm caused by AI systems, algorithmic bias, misuse of personal data, and the absence of robust mechanisms for human oversight point to large gaps in governance. While Indian courts have in the past addressed questions of technology through enabling statutes, India does not yet have an overarching legislative framework specifically designed to govern AI. The absence of such a framework exposes individuals and institutions to risk and creates uncertainty for innovators and corporations.
2. Current Legal And Regulatory Framework
2.1 Overview Of Existing Indian Legislation Regulating Technology And Data
Currently, AI is indirectly governed by wider laws related to technology, data protection, and intellectual property. The Information Technology Act, 2000 is the main cyber law of India, establishing a legal regime for electronic governance, recognition of digital records, prevention of cybercrime, and intermediary liability. The Act does not, however, specifically deal with the unique challenges of AI, such as autonomous decision-making, liability for harm by algorithms, or regulation of adaptive machine-learning systems.
Similarly, the Digital Personal Data Protection Act, 2023 regulates the processing of personal data and protects privacy. Although highly relevant to AI, which depends on large-scale datasets, the Act contains no provisions on algorithmic transparency, explainability, or accountability in automated decision-making.
Intellectual property is regulated by the Copyright Act, 1957, which assumes human authorship. The emergence of AI-generated content creates uncertainty, since Indian law has not yet settled whether an AI can be considered an “author” or who owns the rights in such works.
Summary Of Current Legal Coverage
| Law | Scope | Gap In AI Context |
|---|---|---|
| Information Technology Act, 2000 | Cyber law, digital records, cybercrime | No AI-specific liability or autonomous system regulation |
| Digital Personal Data Protection Act, 2023 | Data protection and privacy | No algorithmic transparency or explainability provisions |
| Copyright Act, 1957 | Intellectual property rights | No clarity on AI-generated works ownership |
Collectively, these statutes govern AI only incidentally and are inadequate to address its multifaceted and novel risks.
2.2 Statutory Provisions Lacking For Addressing AI-Specific Challenges
Notwithstanding the existence of generic technology legislation, significant gaps remain in addressing AI-specific challenges.
- Liability Issues: Current laws do not specify who should be held accountable if an AI system does harm
- Intellectual Property Ambiguity: It is unclear whether AI-generated works can be copyrighted and under whose name
- Automated Decision-Making Risks: No protections ensuring fairness, transparency, or accountability
- Impact Areas: Loan approvals, hiring decisions, criminal risk assessments
The lack of such protection underlines the pressing need for AI-specific law.
2.3 Comparative Analysis: Global AI Regulations Vs. Indian Scenario
Comparative analysis of regulatory developments across the globe highlights the regulatory gap in India.
| Region | Regulatory Framework | Key Features |
|---|---|---|
| European Union | AI Act (2024) | Risk-based classification (minimal to unacceptable risk), strict compliance for high-risk AI |
| United States | NIST AI Risk Management Framework | Voluntary guidelines focusing on transparency and accountability |
| China | AI & Algorithm Regulations | Strict control over generative AI, deepfakes, and recommendation systems |
| India | No dedicated AI law | Fragmented, piecemeal legal approach |
India, by contrast, relies on piecemeal frameworks and has not enacted a dedicated AI law, leaving businesses and citizens without clear protections or guidelines.
3. Legal Issues Caused by AI Tools
3.1 Data Protection and Privacy Issues
AI’s dependence on enormous datasets raises urgent issues of data protection and privacy. In Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), the Supreme Court recognized the right to privacy as a fundamental right under Article 21 of the Constitution. Yet AI systems often lack transparency, gathering sensitive personal information without adequate safeguards or informed consent. Mass surveillance, unauthorized profiling, and large-scale data breaches are worsened by opaque consent requirements under existing law. This creates tension between technological advancement and constitutional safeguards, underscoring the need for stronger protections for data autonomy and accountability.
Key Concerns
- Mass surveillance and unauthorized data collection
- Lack of informed user consent
- Opaque AI data processing systems
- Increased risk of data breaches
3.2 Algorithmic Bias and Discrimination
A further important challenge is algorithmic bias. Because AI systems learn from historical data, they can perpetuate and amplify existing social disparities. In India, this can manifest as caste, religious, gender, or economic discrimination. Recruitment algorithms, for example, can inadvertently disadvantage marginalized groups if trained on biased datasets. Although Article 14 of the Constitution guarantees equality and prohibits arbitrariness, Indian courts have yet to rule directly on algorithmic discrimination. In the absence of statutory requirements for fairness audits, algorithmic transparency, and explainability, the danger of entrenching systemic bias goes unchecked.
Examples of Algorithmic Bias
| Area | Potential Bias |
|---|---|
| Recruitment | Discrimination against marginalized communities |
| Finance | Unfair loan approvals or denials |
| Law Enforcement | Biased profiling and surveillance |
3.3 Intellectual Property and Ownership Issues
Intellectual property presents yet another legal issue. In Eastern Book Company v. D.B. Modak (2008), the Supreme Court reiterated the requirement of human creativity for works to qualify for copyright protection. Works created autonomously by AI therefore fall outside existing protection schemes. Patent law poses similar problems: inventorship is currently limited to human creators, leaving AI-generated inventions in a state of legal ambiguity. These open questions hinder innovation and commercialization, as companies are unclear about their rights in AI-created products.
Key Intellectual Property Challenges
- Absence of legal recognition for AI-generated works
- Unclear ownership rights
- Limitations in patent laws for AI inventorship
- Barriers to commercialization
3.4 Liability and Responsibility for AI-Based Decisions
Liability for harm caused by AI remains the most challenging issue. In Avnish Bajaj v. State (2005), the Delhi High Court examined intermediary liability under the IT Act in the context of e-commerce. But when an autonomous AI system makes an injurious decision, such as a driverless car causing an accident or an AI-driven platform unjustly denying a loan, it is not yet clear who should bear the liability. Existing legal structures give no clear answers, creating risks of both accountability gaps and ineffective remedies for victims.
Key Liability Concerns
- Unclear accountability for AI decisions
- Gaps in existing legal frameworks
- Difficulty in assigning responsibility (developer, user, or platform)
- Limited remedies for victims
4. Judicial and Policy Perspectives
4.1 Indian Judicial Approach Towards Emerging Technologies
The Indian judiciary has traditionally taken a cautious approach to the regulation of technology. In Shreya Singhal v. Union of India (2015), the Supreme Court struck down Section 66A of the IT Act for vagueness and infringement of the freedom of speech, highlighting the risks of overbroad technological regulation. Courts have not yet directly addressed AI-related controversies. As artificial intelligence becomes more deeply integrated into governance, commerce, and personal life, judicial interpretation of constitutional rights will play a key role in shaping AI regulation.
4.2 Government Policies and NITI Aayog Guidelines Role
The Government of India has begun working towards AI governance through policy initiatives. NITI Aayog’s National Strategy for Artificial Intelligence (2018) envisions “AI for All” and identifies priority sectors in healthcare, agriculture, education, smart mobility, and smart cities. These, however, are policy documents without binding legal force. The Draft Digital India Act (2023) aims to replace the IT Act with new provisions but still does not include AI-specific safeguards. India’s regulatory initiatives thus remain policy-driven and aspirational rather than binding and enforceable.
Priority Sectors Identified
- Healthcare
- Agriculture
- Education
- Smart Mobility
- Smart Cities
4.3 International Best Practices and Their Relevance for India
International experience is instructive. The EU AI Act illustrates a coherent, risk-based approach. The OECD AI Principles (2019), which emphasize transparency, accountability, and human oversight, offer a soft-law template that could be integrated into Indian law. The contrasting strategies of the United States, with its market-led innovation, and China, with tight state control, mark the two poles of the regulatory spectrum. India should adopt a hybrid model that encourages innovation while protecting constitutional rights and democratic values.
Global AI Regulation Comparison
| Region | Approach |
|---|---|
| European Union | Risk-based regulatory framework (EU AI Act) |
| United States | Market-driven innovation |
| China | State-controlled regulation |
| India | Hybrid approach (emerging) |
5. Need for a Comprehensive Statutory Framework
5.1 Importance of Dedicated AI Legislation
Given AI’s distinctive risks, overreliance on dispersed regulation is insufficient. A dedicated AI law would ensure legal clarity, create liability frameworks, and safeguard fundamental rights. Without it, AI adoption threatens to erode democratic values, create legal uncertainty, and undermine public confidence in technology.
5.2 Suggested Regulatory Models and Control Mechanisms
A comprehensive AI law could establish an AI Regulatory Authority of India to ensure compliance and accountability. It could take a risk-based approach similar to the EU AI Act, imposing stricter safeguards on high-risk applications in healthcare, policing, and finance. Requirements for algorithmic transparency, periodic fairness audits, and explainability frameworks would make automated decision-making accountable. The legislation could also mandate human oversight in high-risk or sensitive situations to prevent excessive reliance on autonomous systems.
- Creation of a central regulatory authority for AI governance
- Risk-based classification of AI systems
- Mandatory algorithmic transparency and explainability
- Periodic fairness and bias audits
- Human oversight in high-risk applications
5.3 Equilibrium between Innovation, Core Rights, and Ethics
One of the most serious challenges in regulatory design is striking a balance between innovation and constitutional rights. Over-regulation can stifle innovation, drive away investment, and undermine India’s competitive edge in the global AI economy. Under-regulation can endanger the rights to privacy, equality, and justice. Judicial supervision under Articles 14, 19, and 21 of the Constitution will be necessary to keep regulation balanced and in tune with India’s constitutional values.
| Regulatory Approach | Impact on Innovation | Impact on Rights |
|---|---|---|
| Over-Regulation | Stifles innovation and investment | Strong protection of rights |
| Under-Regulation | Encourages rapid innovation | Risks violation of fundamental rights |
| Balanced Regulation | Sustainable growth and innovation | Protection aligned with constitutional values |
6. Conclusion
6.1 Key Findings
The research demonstrates that India lacks an overarching, AI-centred legal framework capable of meeting the multifaceted challenges that artificial intelligence presents today. While existing statutes such as the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023, and the Copyright Act, 1957 offer limited regulatory coverage, they lack provisions addressing issues specific to AI, such as autonomous decision-making, liability for AI-caused harm, and the protection of AI-generated intellectual property. Judicial decisions, particularly those concerning privacy, liability, and freedom of expression, reflect the courts’ sensitivity to issues of constitutional relevance in the digital landscape. None of these decisions, however, squarely addresses AI-specific challenges, leaving substantial legal and ethical questions unanswered. This lacuna underscores the need for India to establish a holistic legal framework that is both future-proof and constitutionally grounded.
- No comprehensive AI-specific legal framework in India
- Existing laws provide limited and indirect coverage
- Lack of clarity on AI liability and accountability
- Judicial approach is evolving but not AI-focused
- Urgent need for a future-ready legal structure
6.2 Recommendations
Against this background, establishing a comprehensive AI-specific legislative framework has become imperative. The framework should take a risk-based approach to regulation, similar to the European Union’s AI Act, providing proportionate safeguards based on the potential impact of AI systems. Judicial and institutional capacity must also be strengthened so that courts and regulatory authorities are equipped with both the technical and legal expertise to handle AI-related disputes. Above all, any regulation of AI in India must be grounded in constitutional principles, ensuring that the deployment of AI technologies remains consistent with the values of equality, liberty, and dignity. Balancing innovation with the protection of fundamental rights will be critical if India is to realize AI’s transformative potential while mitigating its risks.
- Adopt a risk-based AI regulatory model
- Establish a dedicated AI legislation in India
- Strengthen judicial and institutional capacity
- Ensure alignment with constitutional principles
- Promote innovation while safeguarding fundamental rights
References
- Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 SCC 1 (India).
- Shreya Singhal v. Union of India, (2015) 5 SCC 1 (India).
- Avnish Bajaj v. State (NCT of Delhi), (2005) 3 Comp LJ 364 (Del) (India).
- Eastern Book Company v. D.B. Modak, (2008) 1 SCC 1 (India).
- Information Technology Act, No. 21 of 2000, India Code (2000).
- Digital Personal Data Protection Act, No. 22 of 2023, India Code (2023).
- Copyright Act, No. 14 of 1957, India Code (1957).
- NITI Aayog. (2018). National Strategy for Artificial Intelligence. Government of India. https://www.niti.gov.in
- European Union. (2024). Artificial Intelligence Act. Official Journal of the European Union.
- Organisation for Economic Co-operation and Development. (2019). OECD Principles on Artificial Intelligence. OECD Publishing.
Written By: Nayan Gupta, Pursuing LLB (Hons.), School of Law, Model Institute of Engineering and Technology (MIET), Jammu


