Constella Intelligence

Constella’s Deepfake Video Detection Capabilities Provide Enhanced Brand and Executive Protection Solutions

Last year, Constella Intelligence introduced new proprietary capabilities for the detection of synthetic media such as deepfake videos. These capabilities allowed us to expand our integrated offering of cyber-intelligence and digital risk protection services (DRPS), which includes solutions for Executive, Brand, Identity, Fraud, Threat, and Geopolitical Protection.

Deepfakes are synthetic media created with a machine learning (ML) technique known as a Generative Adversarial Network (GAN), in which a person in an existing image or video is replaced with someone else’s likeness. As the technology used to produce deepfakes improves at an astounding rate, experts predict that synthetic media, like deepfake videos and images, will soon be indistinguishable from real content. Already, and to an increasing degree in the coming months and years, deepfakes are poised to grow from an Internet anomaly into a major social, business, and political risk, threatening the financial health and reputational security of organizations, high-profile individuals, and even democratic processes.
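The adversarial idea behind a GAN can be illustrated with a deliberately minimal sketch: a two-parameter "generator" and a logistic "discriminator" trained against each other on one-dimensional data. Everything here (the affine generator, the Gaussian target, the learning rate) is a toy stand-in for the deep networks a real deepfake generator uses; the sketch only shows the alternating update scheme in which each model improves by exploiting the other's weaknesses.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a Gaussian the generator must learn to imitate.
real = rng.normal(4.0, 1.0, size=256)

# Toy generator: an affine map of noise z ~ N(0, 1). A deepfake generator
# would be a deep network, but the adversarial setup is the same.
g_a, g_b = 1.0, 0.0

# Toy discriminator: logistic regression on a scalar input.
d_w, d_c = 0.0, 0.0

lr = 0.05
for step in range(200):
    z = rng.normal(size=256)
    fake = g_a * z + g_b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (hand-derived gradients of the binary cross-entropy loss).
    p_real = sigmoid(d_w * real + d_c)
    p_fake = sigmoid(d_w * fake + d_c)
    d_w -= lr * (-((1 - p_real) * real).mean() + (p_fake * fake).mean())
    d_c -= lr * (-(1 - p_real).mean() + p_fake.mean())

    # Generator step: adjust (a, b) so the discriminator scores the fakes
    # as real (the non-saturating generator loss, -log D(G(z))).
    p_fake = sigmoid(d_w * (g_a * z + g_b) + d_c)
    g_a -= lr * (-((1 - p_fake) * d_w * z).mean())
    g_b -= lr * (-((1 - p_fake) * d_w).mean())

# After training, generated samples drift toward the real distribution.
gen_mean = (g_a * rng.normal(size=1000) + g_b).mean()
print(f"mean of generated samples after training: {gen_mean:.2f}")
```

In a full-scale GAN the same dynamic plays out with deep convolutional networks over pixels rather than scalars, which is what makes the resulting fakes so hard to spot by eye.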

Our team is currently developing and actively applying AI techniques to identify fake content that is difficult for a human observer to detect. Given the current and projected environment, in which video deepfakes will increasingly pose a risk to companies and individuals, Constella’s team of data scientists has piloted several algorithms for the early detection of video and audio content with a high probability of alteration or manipulation. Constella’s emphasis has been not only on developing and training high-precision, accurate models but also on creating an architecture that allows real-time integration and deployment in diverse existing analytics environments. Furthermore, Constella’s data scientists are exploring the creation of unique data sets to train our own models, as the industry has only recently produced reference data sets and public benchmarks for this purpose.

A range of advanced AI-enabled techniques makes it possible to identify the otherwise imperceptible footprints that the tools used to produce deepfakes leave behind. Constella’s goal has been to implement a solution that produces detailed reports from image or video analysis, highlighting information about the faces detected in each frame of a video and the aggregate probability that the content has been altered or manipulated. Constella’s solution has been carefully designed for fluid integration with different state-of-the-art deepfake detection approaches, so that it can be continually updated to reflect the latest trends in content manipulation, ensuring accuracy and adaptability as new deepfake generation techniques emerge. In addition to implementing algorithms for video pre-processing and face detection, Constella’s Deepfake Detection technology is trained on a set of models based on an EfficientNet B7 network, a class of convolutional neural network whose modified architecture performs well when scaled to different dimensions (depths/widths), improving results across different image sizes. This architecture also allows existing AI models to be trained and refined with proprietary datasets.
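A pipeline that stays open to different detection approaches might be organized as in the following sketch, where per-frame scoring models are interchangeable callables behind a stable interface. The class and function names are hypothetical illustrations, not Constella's actual API; a production scorer would wrap a trained network such as an EfficientNet-B7 classifier rather than the stubs used here.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

# A per-frame scorer maps a (face-cropped) frame to a manipulation probability.
# In practice each scorer would wrap a trained model; here they are stubs.
FrameScorer = Callable[[bytes], float]

@dataclass
class EnsembleDetector:
    """Combines several interchangeable per-frame models, so newer
    detection approaches can be swapped in without changing the pipeline."""
    scorers: Sequence[FrameScorer]

    def score_frame(self, frame: bytes) -> float:
        # Average the ensemble's probabilities for a single frame.
        return sum(s(frame) for s in self.scorers) / len(self.scorers)

    def score_video(self, frames: Sequence[bytes]) -> List[float]:
        # One manipulation probability per frame of the video.
        return [self.score_frame(f) for f in frames]

# Stub scorers standing in for trained networks (illustrative only).
conservative_model = lambda frame: 0.05
sensitive_model = lambda frame: 0.40

detector = EnsembleDetector(scorers=[conservative_model, sensitive_model])
probs = detector.score_video([b"frame0", b"frame1"])
```

Keeping the scorer interface this narrow is what lets newly trained models be added alongside (or in place of) older ones as manipulation techniques evolve.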

Constella’s deepfake detection model uses AI/ML algorithms that analyze videos to assess the probability of manipulation or alteration from the original content in each frame. In the example above, the probabilities of alteration registered for each frame indicate that the video analyzed was not manipulated from its original content.
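The per-frame reporting described in the captions can be sketched as a simple aggregation step: collect each frame's manipulation probability, flag the frames above a threshold, and roll the result up into a verdict. The threshold values and report fields below are illustrative assumptions, not Constella's actual parameters or report schema.

```python
def aggregate(frame_probs, flag_threshold=0.7, min_flagged_fraction=0.1):
    """Roll per-frame manipulation probabilities up into a summary report.
    Thresholds here are illustrative, not Constella's actual settings."""
    flagged = [i for i, p in enumerate(frame_probs) if p >= flag_threshold]
    return {
        "mean_probability": sum(frame_probs) / len(frame_probs),
        "flagged_frames": flagged,
        "manipulated": len(flagged) / len(frame_probs) >= min_flagged_fraction,
    }

# A video whose frames all score low is reported as unaltered...
clean = aggregate([0.03, 0.05, 0.02, 0.04])
# ...while one with several high-scoring frames is flagged as manipulated.
altered = aggregate([0.04, 0.91, 0.88, 0.95, 0.06])
```

Requiring a minimum fraction of flagged frames, rather than a single high-scoring frame, is one simple way to keep isolated per-frame noise from tipping the overall verdict.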

The current version of Constella’s deepfake detection technology enables our analysts to upload images or videos for analysis, or to specify links from public platforms such as Twitter or YouTube. APIs for integration into Analyzer, Constella’s proprietary platform for the analysis of billions of data points in the public digital sphere, will enable automatic analysis of trending images and videos for alteration, with additional analyses triggered at the click of a button. Integrated into a comprehensive monitoring program that continuously analyzes the footprint of an organization and its key individuals across billions of data points and thousands of sources in the digital sphere, this capability allows deepfakes designed to extract sensitive information, damage corporate reputation, or influence public opinion to be detected early and remediated as part of a broader security strategy. In this way, extensive monitoring combined with premium, AI-enabled detection technology can enhance any digital risk protection program focused on proactive Brand and Executive Protection.

Constella’s deepfake detection model uses AI/ML algorithms that analyze videos to evaluate the probability of manipulation or alteration from the original content in each frame. Analysis of the reference video indicates that several of the frames analyzed demonstrate a sufficient probability of alteration to be flagged as manipulated.

At Constella Intelligence, we understand that deepfakes, like many other adversarial challenges, are a constantly moving target, in some ways similar to what spam and identity theft meant to our clients in earlier eras.

For more information on the increasingly relevant risks of deepfakes and the specific implications for organizations and executives operating in this space, we invite you to learn more about our solutions for Executive Protection and Brand Protection enabled by Analyzer. We also invite you to contact Constella’s team for a demo and a discussion on how your business can be vigilant and prepared amidst a new, complex risk landscape.