Introduction
Article 15 of the Indian Constitution protects citizens against discrimination on grounds of religion, race, caste, sex, or place of birth. The tension between these constitutional principles and artificial intelligence raises serious questions, because errors arising from bias in automated decision-making can undermine fundamental rights; how far they do so depends largely on the ethical safeguards built into such systems.
AI Bias and Discrimination
Examples of Bias in AI Systems
- An article by the Australian Human Rights Commission demonstrated that if an AI system is trained on data that divides people into groups, it may, during a selection process, favour candidates from one group rather than selecting candidates solely on merit.
- Amazon’s machine learning experts had to discard an AI recruiting model because it was biased against women.
- Google’s incident of labeling images of Black people as “gorillas” likewise illustrates this discriminatory tendency.
Causes of Bias in AI
| Source of Bias | Explanation |
|---|---|
| Training Data | Discrimination can arise from how training data are collected and how target variables and class labels are defined. |
| Feature Selection | Improper selection of features can embed systemic bias. |
| Proxy Variables | AI may unintentionally use proxies for protected characteristics. |
Gaps in the legal framework are a major reason why such social biases go unchecked.
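The proxy-variable mechanism described in the table can be sketched in a few lines of Python. The applicants, weights, and threshold below are entirely hypothetical; the point is that a model which never sees a protected attribute can still discriminate through a correlated feature such as a postcode:

```python
# Hypothetical illustration: the screening model never receives the
# protected attribute "group", yet a correlated feature (postcode)
# acts as a proxy and reproduces the historical bias.

# Each applicant: (merit score 0-100, postcode, group)
applicants = [
    (85, "110001", "A"), (82, "110001", "A"), (78, "110001", "A"),
    (85, "700001", "B"), (82, "700001", "B"), (78, "700001", "B"),
]

# Suppose skewed historical hiring data taught the model a negative
# weight for postcode "700001", even though merit is identical.
postcode_weight = {"110001": 0.0, "700001": -10.0}

def screening_score(merit, postcode):
    """Model output: merit plus the learned postcode adjustment (the proxy)."""
    return merit + postcode_weight[postcode]

THRESHOLD = 75
selected = [(g, m) for m, p, g in applicants if screening_score(m, p) >= THRESHOLD]

# Selection rate per group: equal merit, unequal outcomes.
rate_a = sum(1 for g, _ in selected if g == "A") / 3
rate_b = sum(1 for g, _ in selected if g == "B") / 3
print(rate_a, rate_b)  # group A is selected far more often despite equal merit
```

Auditing for this kind of disparity requires comparing outcome rates across groups, which is precisely the sort of algorithmic check current Indian law does not mandate.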
Privacy and Surveillance Concerns
Similar concerns arise over the privacy of personal data. For instance, the Delhi Police used an automated facial recognition system to identify protesters in Delhi during the anti-citizenship law protests, raising serious questions about its legal validity. Such use of AI runs counter to the Supreme Court judgment in K.S. Puttaswamy vs. Union of India, which held that Article 21 of the Constitution guarantees citizens’ right to privacy and that any intrusion by the state must remain within specified limits.
Misuse of Facial Recognition Technology
The Delhi Police claim that their legal basis for using this facial recognition system is the Delhi High Court’s judgment in Sadhan Halder vs. Government of NCT of Delhi. In that case, however, the court had permitted the police to use the technology solely for tracing missing children; instead, the Delhi Police deployed it for mass surveillance.
- Facial data of citizens protesting against the government were utilized.
- This potentially deprived them of their fundamental right to freedom of expression.
- The use of AI for identifying protesters in assemblies, protests, and gatherings puts their personal safety at risk.
Legal Gaps in Indian Law
At present, Indian law lacks clear provisions to address these emerging concerns. The vast amount of data required for AI decision-making is drawn from the public’s everyday digital activity. Illegal data collection, algorithmic bias, and data leaks constantly undermine individuals’ right to privacy.
Digital Personal Data Protection Act, 2023
The Digital Personal Data Protection Act, 2023 seeks to protect citizens’ personal data and ensure its proper use in the digital domain. Data fiduciary companies may collect personal data online only with users’ consent, and the Act also covers data collected offline and later digitized. However, personal data processed by an individual for personal purposes remains outside the reach of data fiduciary companies.
Key Features of the Act
- The new law lays special emphasis on personal privacy.
- It also covers the transfer of personal data outside India, allowing transfer of Indian citizens’ personal data abroad under special conditions when necessary.
- If a data fiduciary fails to implement adequate security measures, a fine of up to ₹250 crore may be imposed.
- In case of failure to notify a data breach, a financial penalty of up to ₹200 crore can be levied.
Limitations of the Act in the Context of AI
However, the Digital Personal Data Protection Act has certain limitations in the context of AI. Artificial intelligence does not merely process data — it learns and trains from it. The Act does not adequately regulate algorithmic transparency or bias.
Accountability Challenges
Determining who should be held responsible for AI-driven actions is a complex issue: as AI-powered technologies become increasingly autonomous, accountability grows ambiguous. The current data protection law is general-purpose legislation and provides no specific guidelines on how AI models should be audited or how bias should be prevented.
In this context, sector-specific guidelines are urgently needed. Restricting AI ethics only to privacy will not be sufficient to fully safeguard citizens’ fundamental rights.
Written By: Saurav Sarker, Student, University of Burdwan.
Phone: 7001932926


