The Development Of Hyper Realistic Fake Media And The Crisis Of Consent
The development of hyper-realistic fake media, commonly known as deepfakes, has triggered a profound ontological crisis in the digital era, particularly with regard to the sanctity of personal consent and the credibility of audio-visual testimony.
In India, the spread of generative artificial intelligence (AI) has moved past the phase of technological novelty and become a pervasive threat to personal dignity, democratic integrity, and even national security, in a country with a digital population of well over 850 million users and a dynamic, volatile social media landscape.
At its core, the deepfake phenomenon disrupts the established concept of consent: when a person's image, voice, or identity can be precisely copied and manipulated without their permission, the very principle of informational self-determination is compromised.
This discussion outlines India's complex legislative environment, considering the overlap between constitutional protections, new legal frameworks, and critical judicial interventions that aim to balance technological innovation with the fundamental right to human dignity in the digital realm.
The Technological Genesis And The Ontological Threat
Understanding the legal issue first requires appreciating the technical process of deepfake production, which is chiefly enabled by Generative Adversarial Networks (GANs).
This architecture pits two neural networks against each other: a generator that produces synthetic content, and a discriminator that attempts to distinguish the fake from the real.
Through repeated iterations, the output can become so faithful that the manipulation is hard to discern even for human senses and most conventional forensic instruments.
This has democratised the production of falsified information: actors with minimal technical skill can now create believable fake content within hours, at a cost of around fifty dollars.
Key Characteristics Of Deepfake Technology
- Use of Generative Adversarial Networks (GANs)
- High fidelity outputs difficult to detect
- Low cost and rapid production timelines
- Accessibility to individuals with minimal technical skills
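The adversarial dynamic described above can be sketched in a few lines. The toy example below (a one-dimensional "generator" and a logistic-regression "discriminator"; all variable names and hyperparameters are illustrative, and nothing here resembles production deepfake synthesis) shows how the two networks compete: the discriminator learns to separate real samples from fakes, while the generator learns to shift its output until the discriminator can no longer tell the difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator must learn to mimic them.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b (a toy two-parameter "network").
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) (logistic regression).
w, c = 0.1, 0.0

lr, n = 0.01, 64
for step in range(3000):
    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    xr = real_batch(n)
    xf = a * rng.normal(size=n) + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    # Gradients of the binary cross-entropy loss w.r.t. w and c
    gw = np.mean((dr - 1.0) * xr) + np.mean(df * xf)
    gc = np.mean(dr - 1.0) + np.mean(df)
    w, c = w - lr * gw, c - lr * gc

    # --- Generator step: push D(fake) -> 1 (fool the discriminator) ---
    z = rng.normal(size=n)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    # Non-saturating generator loss -log D(fake); chain rule through D
    ga = np.mean((df - 1.0) * w * z)
    gb = np.mean((df - 1.0) * w)
    a, b = a - lr * ga, b - lr * gb

# After training, the generator's offset b has drifted toward the real mean
# of 4: the generator's fakes have moved into the region the discriminator
# scores as "real", which is the essence of GAN training.
print(f"generator offset b = {b:.2f}")
```

In a real deepfake pipeline the "generator" is a deep convolutional network producing faces or voices rather than a single affine map, but the alternating two-player optimisation is the same, which is why each iteration makes the fakes harder for any fixed detector to catch.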
The crisis of consent is worsened by the "Liar's Dividend": a psychological and social process in which the mere existence of deepfakes allows people to dismiss genuine evidence as fake, undermining the shared reality that legal and democratic procedures require.
India's vulnerability is particularly acute: smartphone penetration is substantial and the socio-cultural environment is varied, so misinformation can rapidly trigger real-world communal tensions.
The journey from Hippolyte Bayard's staged fake photograph of 1840 to AI-transformed real-time video calls in 2025 shows how the manipulation of reality has become exponentially more accessible and more harmful.
Constitutional Jurisprudence: The Right To Digital Dignity
India's legal response to the deepfake crisis rests on a broad reading of Article 21 of the Constitution, which guarantees the Right to Life and Personal Liberty.
In Justice K.S. Puttaswamy v. Union of India (2017), privacy was recognised as a fundamental right, encompassing informational privacy: the right to control one's personal identity.
Deepfakes represent an unparalleled infringement of this right: they appropriate the "persona", the intangible set of attributes that defines a person's public and private identity, without permission.
The courts have consistently held that any unauthorised use of AI-created replicas infringes an individual's dignity.
In high-profile celebrity cases, the courts have stressed that the rights to life and livelihood are compromised when a person's likeness is used in obscene material or fraudulent endorsements.
It is this constitutional umbrella that gives modern personality rights their teeth, treating digital likeness not as mere data but as an aspect of personhood.
Friction Between Expression And Personality Rights
A long-standing dilemma in regulating deepfakes is the conflict between Article 19(1)(a) (Freedom of Speech and Expression) and an individual's right to digital integrity.
Indian courts generally apply a functional balancing test: when synthetic media is used commercially, for malicious impersonation, or to create non-consensual intimate images (NCII), the individual's right to dignity and privacy takes precedence.
Conversely, satire, parody, and genuine criticism remain protected, provided they do not tarnish the person's reputation.
This distinction is essential in the AI era, when the boundary between creative parody and identity theft is increasingly blurred.
Statutory Evolution: From The IT Act To The BNS And BSA
India's legislative framework is evolving rapidly.
The Information Technology (IT) Act, 2000 served as the principal instrument for addressing cybercrime for over two decades.
Its technology-neutral language, however, often failed to capture the specifics of AI-generated harms.
The introduction of the Bharatiya Nyaya Sanhita (BNS) and the Bharatiya Sakshya Adhiniyam (BSA) on July 1, 2024 marked a historic modernisation of the criminal and evidentiary landscape.
The IT Act And The 2025 Amendments
Sections 66C (Identity Theft), 66D (Cheating by Personation), and 66E (Violation of Privacy) of the IT Act provide a foundational basis for prosecuting deepfake offences.
Nevertheless, the most influential regulatory changes have come through the IT Rules, 2021 and the proposed 2025 amendments.
These regulations require intermediaries, meaning social media platforms and AI tool providers, to exercise proactive due diligence in removing harmful synthetic content.
Proposed 2025 Draft Amendments
The 2025 draft amendments propose the first statutory definition of "synthetically generated information" (SGI), covering any material that has been artificially generated or altered and appears reasonably authentic.
Codified labelling requirements and metadata traceability would extend responsibility beyond passive hosting platforms to the active providers of generative tools.
Crime Liability and the Bharatiya Nyaya Sanhita (BNS)
The BNS 2023 replaces the colonial-era Indian Penal Code and directly addresses digital-age crimes. Section 319 (Cheating by Personation) and Section 336 (Electronic Forgery) are now the key provisions under which deepfake creators are prosecuted. Section 356 extends the established rules against defamation to synthetic media, making harmful AI-generated communications no less serious than written or spoken libel.
Evidence and the Bharatiya Sakshya Adhiniyam (BSA)
The admissibility of digital evidence is arguably the most challenging issue deepfakes pose for the judiciary. The BSA 2023 elevates electronic records to primary evidence, but imposes demanding authentication requirements in Section 63. To be admissible, a digital record must be supported by a certificate signed by the person in charge of the device and by an expert, verifying the integrity of the data and the chain of custody. Despite these safeguards, the inability of forensic laboratories to keep pace with GAN-based manipulation remains a critical weak point in the criminal justice system.
Key Authentication Requirements Under the BSA
- Electronic records are treated as primary evidence.
- A certificate must be signed by the person in charge of the device.
- An expert must verify the integrity of the data.
- The chain of custody must be established.
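The hash-and-certificate workflow behind these requirements can be illustrated with a minimal sketch. The helper names below (`make_custody_record`, `verify_integrity`) are hypothetical and not part of any statutory or official toolchain; the sketch simply shows why a cryptographic hash recorded at seizure time lets a court later confirm that a file has not been altered by even a single byte.

```python
import hashlib
from datetime import datetime, timezone

def sha256_of_file(path, chunk_size=8192):
    """Stream the file in chunks so large video evidence is hashed
    without loading it into memory all at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

def make_custody_record(path, handler):
    """An illustrative Section 63-style certificate fragment:
    who handled the record, when, and the content hash."""
    return {
        "file": path,
        "sha256": sha256_of_file(path),
        "handler": handler,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def verify_integrity(path, record):
    """Verification fails if even one byte of the file has changed
    since the custody record was created."""
    return sha256_of_file(path) == record["sha256"]
```

The hash check only proves the file is unchanged since the record was made; it cannot prove the recording was genuine to begin with, which is exactly the gap that GAN-based manipulation exploits and that expert forensic analysis must fill.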
Personality Rights: Identity Protection Case Studies
Judicial recognition of personality rights has become a leading trend in India, driven by high-profile litigation in the Delhi and Bombay High Courts. These cases show how celebrities are using the law to regain control over their online images.
The Baritone Precedent and Amitabh Bachchan
In 2022, veteran actor Amitabh Bachchan obtained a landmark John Doe order against the world at large, barring the unauthorised use of his name, voice, image, and even his distinctive baritone. The court recognised that for a personality such as Bachchan, his voice and image are commercial properties that cannot be used without his express permission. The decision paved the way for a series of petitions by actors such as Abhishek Bachchan and Aishwarya Rai Bachchan, seeking to take down AI chatbots and YouTube channels that used their likenesses to produce so-called personalised, and often vulgar, content.
Jhakaas Protection and Anil Kapoor
In 2023, the Delhi High Court granted Anil Kapoor an injunction specifically against the use of deepfakes and generative AI to morph his images into different characters. Notably, the court also protected his signature catchphrase "jhakaas", holding that even distinctive gestures and verbal mannerisms form part of a celebrity persona that can be defended. The court observed that in the era of AI, it cannot turn a blind eye to the possibility of fame being exploited to damage a person's livelihood and dignity.
The Protection of Artistic Voice and Arijit Singh
The Bombay High Court's intervention in the Arijit Singh case (2024) highlighted how AI voice cloning threatens the music industry. The court barred the use of AI-generated versions of Singh's voice to produce what it regarded as fake songs, holding that such misappropriation would harm the artist's career and that digital tools could otherwise be exploited by unscrupulous actors.
Deepfakes in the Democratic Arena: The 2024 General Election
The 2024 Indian General Election was a colossal real-world experiment in deepfake regulation. Political parties across the spectrum used AI both innovatively, to reach voters, and maliciously, to spread disinformation, provoking a response from the Election Commission of India (ECI).
The Dual Edge of Political AI
AI enabled politicians to overcome language barriers in a nation with 22 official languages and thousands of dialects. Deepfake impersonations and hologram avatars delivered personalised messages to electorates in their local languages. But the same technology was deployed for electoral sabotage. One prominent incident involved a deepfake of the Home Minister, Amit Shah, falsely claiming that the BJP planned to scrap reservation policies. In response, the ECI issued a 3-hour takedown directive for political misinformation under the Model Code of Conduct (MCC).
Ethical Consent and Posthumous Politics
A controversial trend during the 2024 elections was the use of deepfakes to resurrect deceased political leaders. The DMK deployed a deepfake of the late leader Muthuvel Karunanidhi, who died in 2018, to campaign for his son, MK Stalin. Although technically legal, such uses raise intense moral dilemmas about the consent of the dead and the manipulation of voter emotions through digital ghosts of popular leaders.
The Bhagwant Mann Case: The Global Takedown
In October 2025, a Mohali Magistrate passed an important order directing the worldwide removal of deepfakes targeting the Chief Minister of Punjab, Bhagwant Mann. The court held that because the content was uploaded from a foreign country (Canada) and threatened public order and the dignity of a high-ranking official, local blocking was inadequate. The judge directed Google and Meta to ensure worldwide, comprehensive, and effective removal, marking a shift in how the judiciary approaches jurisdiction in the digital age.
The Financial Scam Ecosystem: Digital Arrests and Espionage
The financial impact of deepfakes in India operates within a broader criminal ecosystem. The country's cybercrime losses exceeded ₹22,845 crore, an 890 percent increase over 2022.
The “Digital Arrest” Trap
Among the most insidious schemes is the so-called "Digital Arrest", in which scammers deploy deepfakes of senior police or CBI officials on video calls. Victims are threatened with fake FIRs and told not to leave the camera's view while "investigations" into supposed drug trafficking or money laundering proceed. The use of hyper-realistic avatars of familiar officials makes these threats terrifyingly credible.
Corporate Vishing and Espionage
Voice cloning and video-conference fraud increasingly target the Indian corporate sector. Projections indicate that AI-facilitated financial fraud will exceed ₹20,000 crore by 2025. The Hong Kong CFO scam, in which an entire video conference was populated with deepfakes and 25 million dollars was stolen, has since become a textbook example for Indian organisations, which are now introducing multimodal, multi-factor verification to counter vishing (voice phishing).
Gender-Based Abuse And The Threat To Public Safety
Non-consensual intimate imagery (NCII) is the most widespread and harmful use of deepfakes. Studies consistently indicate that 96% of all deepfake material online is pornographic, and nearly 100% of the victims are women.
The Targeted Harassment of Women
In late 2023, high-profile cases involving actors such as Rashmika Mandanna and Katrina Kaif placed the problem of "deepnudes" at the centre of national debate in India. The damage, however, extends far beyond celebrities. Sexual deepfakes are used against private individuals for sextortion, cyberbullying, and reputational destruction, in most cases causing serious psychological trauma. According to 2024-25 data, deepfakes have become a weapon of mass gender-based violence, increasingly used to silence women in public life.
Takedown Effectiveness And The GAC
The government formed the Grievance Appellate Committee (GAC) in response to the failure of many platforms to act swiftly when harassment was reported. This web portal enables victims to appeal the decisions (or inaction) of platform grievance officers.
- The GAC has become a dedicated, free-to-access justice mechanism.
- Intermediaries have been compelled to comply with strict 24-hour removal obligations for sexual content and impersonation.
Hurdles To Forensic Analysis And The Admissibility Of Synthetic Media
The Bharatiya Sakshya Adhiniyam (BSA), 2023 was intended to bring procedural transparency to digital evidence. The technical reality of deepfakes, however, places major strain on the judicial system.
The Authentication Problem
Under the BSA, electronic records are not automatically credible. To confirm that a file has not been modified, courts must verify timestamps, logs, hash values, and metadata.
Deepfakes, however, are designed to evade such forensic checks. Newer GAN variants can produce plausible metadata and believable artefacts that deceive human assessors and even most automated detection systems.
Institutional Gaps
The lack of digital forensic infrastructure, especially in lower courts and rural regions, is a barrier to implementing the BSA.
Many judicial officers remain unfamiliar with the specifics of AI manipulation, creating the risk of either:
- Admitting fabricated material as primary evidence, or
- Rejecting genuine recordings out of an abundance of caution, the "Liar's Dividend" in action.
Digital Personal Data Protection (DPDP) Act And AI
The DPDP Act, 2023 provides an additional layer of protection by governing the raw material of deepfakes: personal data.
Consent As An Obstacle To Deepfakes
Section 6 of the DPDP Act requires that personal data be processed only with the individual's explicit consent.
Since deepfake models require large quantities of facial and biometric data for training, scraping images from social media platforms without authorisation is a significant violation.
Data fiduciaries (platforms or creators) who fail to take appropriate measures to deter such misuse may be fined up to ₹250 crore.
The Act supplements the IT Rules by placing a substantial financial burden on those who allow their platforms to become data goldmines for AI-powered harassment.
Comparative Analysis: India And The Global View
As India advances towards a dedicated Artificial Intelligence Regulatory Act, it is instructive to compare its approach with other key jurisdictions.
| Jurisdiction | Regulatory Approach |
|---|---|
| India | A hybrid model combining a principles-based approach with strict intermediary liability. |
| European Union | The EU AI Act categorises AI systems by risk, outlawing AI systems that pose unacceptable risk and imposing stringent regulation on high-risk systems. |
India’s Hybrid Model
Unlike the EU AI Act, which categorises AI systems by risk, India has emphasised outcomes: the harm caused by synthetic content.
The 10% labelling requirement, the most prescriptive of India's proposed IT Rules amendments, is among the strictest in the world and reflects the urgency with which India seeks to protect its population against the growing deepfake crisis.
Looking Ahead: The Road To Reform
India's enforcement against deepfakes is currently a battle against legislative inertia, fought through judicial activism.
Although the High Courts have plugged the gap with strong interim orders, a lasting solution cannot rest on reactive takedowns alone.
The Case for Codifying Personality Rights
Researchers and practitioners have urged the codification of personality rights in a single statute. This would offer greater certainty to celebrities and creators alike, with clear lines drawn between commercial misappropriation and creativity. With fame itself becoming currency in the digital era, the right to be oneself needs to be guarded against a technology capable of fabricating any identity on demand.
Advancing Cyber Forensics
The effectiveness of the BSA 2023 will depend on substantial investment in cyber-forensic capability. This includes developing standardised forensic procedures for AI detection and accrediting specialised laboratories that can issue hash and source-verification certificates able to withstand technical scrutiny in court.
International Co-Operation and Harmonisation of Jurisdictions
As the Bhagwant Mann case shows, deepfakes are an international menace. A content creator in Canada can disrupt the social order in Punjab within a few clicks. Robust regulation will ultimately require international procedures for cooperative investigation and cross-border sharing of forensic resources, as national legislation alone grows ever less effective against anonymous, decentralised attackers.
Conclusions And Policy Recommendations
India's digital consent crisis marks a paradigm shift in how the digital state and the citizen interact. The deepfakes that proliferated between 2023 and 2025 exposed these gaps, but they also catalysed a consolidated judicial and regulatory response.
Mandatory Technical Standards
Government bodies should move beyond advisories and enforce technical requirements such as C2PA (Coalition for Content Provenance and Authenticity) standards for all generative AI systems operating in India. This would ensure that traceable metadata is embedded in content at the moment of creation.
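As a rough illustration of provenance metadata of the kind C2PA standardises, the sketch below binds a "synthetically generated" claim to a content hash and signs it. Everything here is a simplifying assumption: real C2PA uses X.509 certificate-based signatures and a binary manifest format embedded in the media file, not an HMAC over JSON, and all names below are hypothetical.

```python
import hashlib
import hmac
import json

# Stand-in signing key; real C2PA uses certificate-based signatures.
SIGNING_KEY = b"issuer-private-key-placeholder"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Bind a 'made by AI' claim to the content hash and sign the claim."""
    manifest = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "synthetic": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Both checks must pass: the claim is genuinely signed AND the
    content still matches the hash the claim was made about."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    sig_ok = hmac.compare_digest(expected, record["signature"])
    hash_ok = record["manifest"]["content_sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and hash_ok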
Expedited Takedowns For Non-Public Figures
Although celebrities have the resources to litigate, the 36-hour window is frequently too long for private individuals facing sexual harassment. The GAC should create a dedicated 2-hour priority lane for verifiable NCII material.
Judicial Training Programs
The performance of the BSA 2023 depends on the technical literacy of the bench. AI and digital-evidence training at the National and State Judicial Academies should take priority, to prevent the authority of criminal trials being eroded by the "Liar's Dividend".
Corporate Liability for Training Data
The DPDP Act should impose strict liability on intermediaries for the provenance of the data on which their generative models are trained. Where a model has been trained on biometric data obtained without consent, the platform should face the maximum penalty of ₹250 crore, to deter informational capitalism at the cost of privacy.
Concluding Observation
The fight against deepfakes is, ultimately, the reclamation of the right to a dignified, consensual digital existence. In a nation where celebrity is currency and the self is ever more fluid, the law must remain the shield that prevents personality from being traded as a commodity in an artificial marketplace.


