

Orbsen | Deepfake Detection
Supported by the Monash AI Institute, this joint project brings together the Faculties of Arts, Law, and Information Technology to research deepfake technology from multiple angles—technological, criminological, sociological, and legal. AiLECS is leading the technological component of this project.
Deepfakes are created using advanced machine learning techniques – especially autoencoders and generative adversarial networks (GANs) – that learn to generate realistic fake images, videos, and audio by training on huge amounts of data. Unfortunately, criminals exploit this capability, and a major concern is the role of deepfakes in spreading misinformation and enabling technology-facilitated abuse.
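To make the autoencoder idea concrete: face-swap deepfakes are typically built by training one shared encoder together with a separate decoder per identity, then decoding identity A's encoding with identity B's decoder. The sketch below is a deliberately tiny linear version in NumPy; the synthetic vectors, dimensions, and training loop are illustrative assumptions, not the method of any real deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": 16-dim vectors for two identities (synthetic stand-ins
# for real images; everything here is illustrative).
face_a = rng.normal(0.0, 1.0, size=(200, 16))
face_b = rng.normal(0.0, 1.0, size=(200, 16)) + 1.0

# One shared encoder, one decoder per identity (all linear here).
W_enc = rng.normal(0, 0.1, size=(16, 4))    # shared encoder
W_dec_a = rng.normal(0, 0.1, size=(4, 16))  # decoder for identity A
W_dec_b = rng.normal(0, 0.1, size=(4, 16))  # decoder for identity B

def recon_loss(X, W_dec):
    """Mean squared reconstruction error through the autoencoder."""
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

loss_before = recon_loss(face_a, W_dec_a)

lr = 0.01
for _ in range(500):
    for X, W_dec in ((face_a, W_dec_a), (face_b, W_dec_b)):
        Z = X @ W_enc        # encode
        err = Z @ W_dec - X  # reconstruction error
        # Gradient descent on the squared error (up to a constant factor).
        grad_dec = Z.T @ err / len(X)
        grad_enc = X.T @ (err @ W_dec.T) / len(X)
        W_dec -= lr * grad_dec   # in-place update keeps the alias valid
        W_enc -= lr * grad_enc

loss_after = recon_loss(face_a, W_dec_a)

# The "deepfake" step: encode identity A, decode with identity B's
# decoder, rendering A's pose/expression as identity B.
swapped = face_a @ W_enc @ W_dec_b
```

Real systems replace the linear maps with deep convolutional networks, but the shared-encoder/per-identity-decoder structure is the same.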
Project Orbsen is developing and testing a deepfake detection system using an audio-visual deepfake dataset that includes a diverse range of people and various types of deepfake manipulation. We are doing this using deep learning algorithms such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). These algorithms can automatically identify signature traits, or ‘fingerprints’, left behind when data is manipulated, making it easier to detect whether content is real or fake.
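One intuition behind those ‘fingerprints’: generative upsampling and blending tend to suppress the high-frequency sensor noise present in genuine camera footage, and the early layers of a forensic CNN often learn high-pass filters that expose this. The sketch below hand-codes a single Laplacian filter as a stand-in for such a learned layer; the synthetic “real” and “manipulated” patches are illustrative assumptions, not Orbsen's actual model or data.

```python
import numpy as np

def high_pass_residual(img):
    """Convolve with a Laplacian kernel -- a hand-crafted stand-in for
    the kind of high-pass filter a forensic CNN's first layer can learn."""
    k = np.array([[ 0, -1,  0],
                  [-1,  4, -1],
                  [ 0, -1,  0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

rng = np.random.default_rng(1)
# A "real" patch carrying natural sensor noise...
real = rng.normal(0.5, 0.05, size=(32, 32))
# ...and a "manipulated" patch: the same content but repeatedly smoothed,
# mimicking how blending suppresses high-frequency noise.
fake = real.copy()
for _ in range(3):
    fake[1:-1, 1:-1] = (fake[:-2, 1:-1] + fake[2:, 1:-1] +
                        fake[1:-1, :-2] + fake[1:-1, 2:]) / 4

# Mean residual energy: the smoothed patch leaves a weaker
# high-frequency trace -- one simple statistical fingerprint.
energy_real = np.mean(high_pass_residual(real) ** 2)
energy_fake = np.mean(high_pass_residual(fake) ** 2)
```

A trained detector learns many such filters jointly (and, with RNNs, tracks inconsistencies across video frames), but the underlying signal is the same kind of residual statistic.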
This will help law enforcement accelerate the identification of deepfake material, disrupt its further distribution, and quickly identify people at risk of real harm who need to be protected.