Project Home

Authors

Tasneem Wael, Adel Ahmed, Muhhamed Adel   

Publishing Date

10 November 2021

Abstract

The identification of manipulated videos is a critical issue in multimedia forensics that has received considerable attention in recent years. However, a common limitation of published research is that forensic analysis is usually performed on data before it is shared on social networks. This project tackles the more challenging scenario of manipulated videos that have been uploaded to social media platforms. To this end, a large-scale performance evaluation using general-purpose deep learning networks has been carried out. The project shows that the output of differently trained networks can carry useful forensic information for identifying the specific technique used for visual manipulation, for both shared and non-shared data.

1.1 Background

A deepfake is synthetic media in which a person's face in an existing image or video is replaced with someone else's. Although visual manipulation dates back to 19th-century photography, deepfakes are produced with modern artificial intelligence and machine learning algorithms that alter visual or audio content. In particular, they involve training generative neural network architectures such as autoencoders and generative adversarial networks (GANs). Deepfakes have been used in blackmail, pornography, politics, art, internet memes, acting, social media, and sock-puppet accounts. Because the technology is constantly improving, it creates ever-growing opportunities for scams and fraud and undermines the credibility and authenticity of visual media. Several social media platforms have already responded by developing technologies to detect deepfakes.
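To make the autoencoder mechanism concrete, the following minimal PyTorch sketch illustrates the shared-encoder, twin-decoder setup behind classic face-swap deepfakes. All layer sizes and the 64x64 input resolution are illustrative assumptions, not any specific production model.

```python
# Minimal sketch of a face-swap autoencoder: one shared encoder,
# two person-specific decoders. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),                                        # pixel values in [0, 1]
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training reconstructs each person through their own decoder; the swap
# happens at inference by routing person A's code through decoder B.
face_a = torch.rand(1, 3, 64, 64)        # placeholder input image
swapped = decoder_b(encoder(face_a))     # A's pose/expression, B's face
```

Because the single encoder must represent both faces in the same latent space, it tends to learn identity-agnostic features such as pose, lighting, and expression; decoding with the other person's decoder then re-renders those features with the target identity, which is what produces the swap.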

1.2 Motivation

Deepfakes first appeared in 2017, when a Reddit user superimposed celebrities' faces onto pornographic videos. Deepfakes are commonly used to harm individuals and to threaten national security. For example, the recent Saudi-Qatar crisis may have been fueled by a hack in which someone injected fake stories (with fabricated quotes attributed to Qatar's emir) into a Qatari news site. In another viral video, the former president of the United States, Barack Obama, appeared to insult public figures, and recent videos have placed celebrities' faces in harmful, non-consensual pornography. Many deepfake detection methods have been proposed since the problem emerged, and several have succeeded, such as GAN-architecture-based applications. Deepfake detection techniques are essential to protect the image of public figures and to prevent the misuse of their likenesses in harmful content.

1.3 Problem Statement

Blackmailers might use fake videos to extract money or confidential information from public figures and celebrities. Reputations could be decimated even if the videos are ultimately exposed as fakes, since the salacious content will already have spread rapidly. Moreover, the spread of deepfakes threatens to erode the trust necessary for democracy to function effectively, as the public may become more willing to disbelieve true but uncomfortable facts.