Regulation of Deepfakes and Public Expression: Balancing Freedom of Expression and Protection from Disinformation
Abstract
This study examines the regulatory challenges posed by the rapid proliferation of deepfake technology in Indonesia, particularly its impact on public communication, democratic processes, and information integrity. The research analyzes the adequacy of Indonesia’s current legal framework, primarily the Electronic Information and Transactions Law (ITE Law), the Personal Data Protection Law (PDP Law), and relevant provisions of the Criminal Code, in addressing deepfake-related harms, and evaluates the extent to which these regulations embody the principles of responsive regulation. Using a normative juridical method, the study employs statutory, conceptual, and comparative approaches to scrutinize existing laws and assess their capacity to respond adaptively to emerging technological risks. The findings reveal that although Indonesia possesses several legal instruments that can be applied to deepfake misuse, these regulations remain fragmented, reactive, and insufficiently aligned with the evolving nature of AI-generated disinformation. Analysis through the lens of Responsive Regulation Theory indicates that the government’s current stance relies predominantly on punitive, deterrence-oriented measures, with limited engagement with the collaborative, restorative, and preventive strategies that are essential for addressing complex digital harms. The study concludes that Indonesia requires a more integrated and anticipatory regulatory model, including the development of a lex specialis for AI-generated content, mandatory transparency standards for generative-AI platforms, strengthened digital forensic capacities, and enhanced public digital literacy. Such a framework would enable the law to balance freedom of expression with the need to protect the public sphere from manipulation, ensuring that regulatory responses remain proportionate, flexible, and technologically informed.