Introduction
The rapid advancement of artificial intelligence (AI) and machine learning technologies has given rise to sophisticated tools capable of generating hyper-realistic media, commonly known as deepfakes. These digitally manipulated videos, images, or audio recordings typically superimpose an individual’s face onto another’s body or mimic their voice, creating content that appears authentic but is entirely fabricated. While deepfakes have potential for creative and benign uses, their misuse, particularly without the consent of the depicted individual, poses significant ethical, legal, and societal challenges. This essay argues that creating deepfakes of others without consent should be made illegal, focusing on the risks to individual privacy, the potential for reputational harm, and the broader societal implications of unchecked deepfake proliferation. Drawing on existing legal frameworks and emerging policy discussions in the field of AI law, this analysis explains why legislative intervention is necessary to address this pressing issue in the digital age.
The Threat to Individual Privacy and Autonomy
At the heart of the deepfake debate lies the fundamental issue of privacy. Deepfakes can replicate an individual’s likeness with alarming precision, often using publicly available data such as social media images or videos. Once created, these falsified media can be disseminated widely without the subject’s knowledge or permission, violating their right to control their own image and identity. As Solove (2006) argues, privacy is not merely about seclusion but encompasses the right to manage one’s personal information and public persona. The unauthorised creation of deepfakes directly undermines this autonomy, leaving individuals vulnerable to exploitation.
A notable example is the non-consensual use of deepfakes in pornography, where women, in particular, have been disproportionately targeted. Research by Sensity AI (2019) indicates that over 90% of deepfake content online is pornographic, with the vast majority featuring women who did not consent to such depictions. This not only invades personal privacy but also perpetuates harm by objectifying individuals and exposing them to potential harassment or blackmail. Current UK laws, such as the Data Protection Act 2018, provide some protection against the misuse of personal data, but they do not explicitly address the unique challenges posed by deepfakes. Specific legislation is therefore arguably needed to criminalise non-consensual deepfake creation as a distinct violation of privacy rights.
Reputational Harm and the Spread of Misinformation
Beyond privacy, deepfakes pose a severe risk to personal and professional reputations. A fabricated video depicting an individual engaging in unethical or illegal behaviour can spread rapidly online, causing irreparable damage before the truth is established. Unlike traditional forms of defamation, deepfakes are often convincing enough to deceive even discerning viewers, making it difficult for victims to prove that the content is fake. As Chesney and Citron (2019) warn, this technology has the potential to “weaponise” falsehoods, targeting individuals for personal or political gain.
In a political context, for instance, deepfakes could undermine public trust by portraying politicians making inflammatory statements or engaging in misconduct. While no major UK-specific incidents have been widely documented at the time of writing, global examples—such as manipulated videos of world leaders—highlight the potential for chaos. The absence of explicit legal prohibitions in the UK against creating such content without consent leaves individuals and institutions vulnerable. Existing defamation laws may offer post hoc remedies, but they are reactive rather than preventive. Therefore, proactive legislation criminalising non-consensual deepfake creation could serve as a deterrent, protecting individuals from reputational harm before it occurs.
Societal Implications and the Erosion of Trust
On a broader scale, the unchecked proliferation of deepfakes threatens to erode trust in digital media as a whole. If the public can no longer distinguish between genuine and fabricated content, the integrity of information ecosystems, already strained by fake news and misinformation, will be further compromised. Indeed, as Wardle and Derakhshan (2017) note in their analysis of information disorder, visual media plays a critical role in shaping public perception, and deepfakes exacerbate the challenges of verification. This loss of trust has far-reaching implications, from influencing elections to amplifying social divisions.
Moreover, the psychological impact on society cannot be overlooked. Victims of non-consensual deepfakes often experience significant distress, anxiety, and social stigma, particularly when content is shared widely. The lack of legal recourse in many jurisdictions, including under current UK law, compounds this harm by leaving victims without clear avenues for justice. A legislative ban on creating deepfakes without consent could therefore serve a dual purpose: protecting individual well-being and safeguarding societal trust in digital communications. While some might argue that such laws risk stifling innovation or free expression, these concerns must be balanced against the tangible harms caused by malicious deepfake use.
Legal and Practical Challenges in Implementation
Despite the clear need for regulation, implementing laws against non-consensual deepfake creation presents several challenges. First, defining the scope of such legislation is complex. Should it apply only to malicious content, or to all non-consensual deepfakes, including those created for satire or entertainment? Additionally, enforcing these laws across borders is problematic, as deepfakes are often shared on platforms hosted outside national jurisdictions. The UK’s Online Safety Bill, under parliamentary scrutiny as of 2023, aims to address harmful online content, but its provisions for deepfakes remain ambiguous and require further specificity (UK Government, 2023).
Furthermore, there is the issue of detection and attribution. Deepfake technology evolves rapidly, often outpacing forensic tools designed to detect manipulation. While AI-driven detection methods are improving, they are not foolproof, complicating efforts to hold creators accountable. Nonetheless, these challenges should not deter legislative action but rather spur collaboration between policymakers, technologists, and legal experts to develop robust frameworks. A starting point could be aligning UK laws with emerging EU regulations, such as the AI Act, which categorises high-risk AI systems and imposes strict obligations on developers (European Commission, 2021).
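To make concrete why detection offers only probabilistic evidence, consider the general shape of a frame-level detection pipeline: a classifier assigns each sampled frame a probability of manipulation, and those scores are aggregated and compared against a threshold. The Python sketch below is purely illustrative; detect, score_fn, and the threshold value are hypothetical names introduced here, no real detection library’s API is assumed, and in practice score_fn would be replaced by inference from a trained model.

```python
# Illustrative sketch only: the scorer is a hypothetical stand-in for a
# trained model's inference call. The point is structural: detection
# aggregates per-frame probabilities into a confidence score, not a
# definitive verdict.

from typing import Callable, List, Tuple


def detect(frames: List[bytes],
           score_fn: Callable[[bytes], float],
           threshold: float = 0.7) -> Tuple[float, str]:
    """Aggregate per-frame manipulation scores into an overall judgement.

    score_fn is assumed to return P(frame is manipulated) in [0, 1].
    """
    # A single high-scoring frame is weak evidence, so detectors
    # typically average or vote across many frames.
    scores = [score_fn(frame) for frame in frames]
    mean_score = sum(scores) / len(scores)
    # The thresholded verdict is probabilistic: a score just below the
    # threshold does not certify authenticity, which is one reason
    # detection alone cannot settle questions of legal attribution.
    verdict = "likely manipulated" if mean_score >= threshold else "inconclusive"
    return mean_score, verdict


if __name__ == "__main__":
    # Dummy scores standing in for real model outputs on three frames.
    dummy_scores = iter([0.62, 0.81, 0.74])
    frames = [b"frame-1", b"frame-2", b"frame-3"]
    score, verdict = detect(frames, lambda _frame: next(dummy_scores))
    print(f"mean score {score:.2f} -> {verdict}")  # mean score 0.72 -> likely manipulated
```

Even with a well-trained model supplying the per-frame scores, the output remains a confidence estimate rather than proof, which is why detection can support, but not substitute for, legal mechanisms of accountability.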
Conclusion
In conclusion, the creation of deepfakes without consent represents a profound threat to individual privacy, reputation, and societal trust. The hyper-realistic nature of this technology, coupled with its potential for misuse, necessitates urgent legal intervention in the UK. By criminalising non-consensual deepfake creation, lawmakers can protect vulnerable individuals from exploitation, deter malicious actors, and preserve confidence in digital media. While challenges in defining and enforcing such laws remain, they are not insurmountable and must be addressed through coordinated policy efforts. Ultimately, the ethical imperative to safeguard personal autonomy and public well-being outweighs the risks of overregulation, making a strong case for legislation in this rapidly evolving domain of AI law. This issue, if left unchecked, could redefine the boundaries of harm in the digital age, with consequences that extend far beyond the individual to the very fabric of society.
References
- Chesney, R. and Citron, D. (2019) Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics. Foreign Affairs, 98(1), pp. 147-155.
- European Commission (2021) Proposal for a Regulation on Artificial Intelligence (AI Act). European Commission.
- Sensity AI (2019) The State of Deepfakes: Landscape, Threats, and Impact. Sensity AI Research Report.
- Solove, D. J. (2006) A Taxonomy of Privacy. University of Pennsylvania Law Review, 154(3), pp. 477-564.
- UK Government (2023) Online Safety Bill: Supporting Documents. UK Government.
- Wardle, C. and Derakhshan, H. (2017) Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking. Council of Europe Report.

