The world has witnessed remarkable advances in artificial intelligence, most notably "deepfake" technology, which generates fake video or audio recordings that are difficult to distinguish from authentic ones. While this technology has legitimate applications, its use for harmful purposes such as defamation, blackmail, and media misinformation has become a growing threat to individuals and society.

In this context, a key question arises: Can creators of such harmful content be held legally accountable under objective liability, even without proof of fault or criminal intent?

Objective liability is a form of civil liability that does not require proof of fault; it is sufficient that damage has occurred and that a causal link exists between the act and the harm. This type of liability is often imposed in cases involving high-risk activities or the use of advanced technological or industrial means. It rests on the notion of "risk-bearing" or "social risk," meaning that those who use dangerous or complex tools must bear the consequences of their use.

Deepfake technologies are among the tools that carry significant technical and social risks, including:

- Violation of the right to one's image and privacy.
- Harm to reputation and dignity.
- Spreading false news that threatens public security or influences election outcomes.
- Manipulating sexual or political content with the intent to harm or defame.

It is often difficult, if not impossible, to identify the perpetrator or prove malicious intent, which renders fault-based liability under traditional tort law ineffective.