Constella Intelligence

Addressing the Multi-Faceted Risks of Deepfakes

The novelty surrounding deepfake videos has garnered more intrigue than fear in years past, but synthetic media technologies are continuing to markedly improve – to the point where experts predict deepfakes will become indistinguishable from real content. Enterprises and executives would be wise to prepare now and safeguard against the social, business, and political threats associated with deepfakes.

To help us explore these challenges further, Alex Romero, COO and Co-Founder at Constella Intelligence, offered a unique perspective into the risks deepfakes pose to companies’ reputational and financial health.

When did deepfakes first gain notoriety? Have they always been used for malicious purposes?

Deepfakes refer specifically to a type of synthetic media involving the manipulation or swapping of a person’s likeness with another’s in an image or video. The term itself didn’t really exist before three or four years ago when videos that used this open-source technology began to gain traction on sites like Reddit. At first, the term was understood to refer to pornographic material that incorporated this face-swapping technology in specific online forums and other spaces. Since then, the notion of a “deepfake” has grown to encompass a broader range of applications of synthetic media that fundamentally produce realistic images or videos of people that either don’t exist or were not the original subjects of the manipulated content.

More recently, deepfakes have stepped into the audio realm, with producers of synthetic content adeptly recreating individuals’ voices from surprisingly little original material. Within the wide range of applications of this technology, there are also positive uses, particularly for businesses that leverage these capabilities for marketing and branding purposes. Many well-known brands have utilized synthetic media to circumvent the physical and in-person limitations imposed by the COVID-19 pandemic and continue producing high-impact media critical to their businesses. However, as these technologies improve, the capacity of malign actors to inflict harm on companies and high-profile individuals grows in parallel.

What impact can deepfakes have on brand and executive reputation?

In mid-2019, Moody’s published a research announcement declaring that artificial intelligence (AI) is making it easier to damage companies via fake videos and images, and that, as AI advances, these deepfakes could harm companies’ creditworthiness. Serious stuff. We already know that synthetic media can be employed as another module or expression of content that can effectively propagate misinformation to mislead audiences and influence public opinion. These capabilities create risks for organizations and individuals from both financial and reputational perspectives.

Social engineering—or duping employees into sharing confidential information—can lead to exposure of sensitive data or the facilitation of unapproved transactions. There have already been cases of corporate funds being transferred to criminals using synthetic audio content to impersonate high-level executives seeking additional credentials or the direct transfer of funds by employees. On a macro level, the ability to influence public opinion at key moments can negatively affect stock prices and even client or consumer confidence, not to mention the integrity of electoral processes and citizen confidence in public institutions in general, as many experts have noted.

Most importantly, deepfakes are just one building block of multiple ways in which an executive or brand can be attacked by leveraging current capabilities within the digital ecosystem.

“We predict that as these digital attacks become more sophisticated, they will employ deepfakes as an additional building block in distributed, multi-layered efforts to target high-profile individuals and brands.”

Has Constella Intelligence observed a rise in deepfake content over the course of the past few years?

Although the applications of deepfakes are currently diverse, there is no doubt that the overall volume of deepfake content is exploding. About a year ago, Forbes reported that the number of deepfake videos online had nearly doubled from 2019 to 2020. Mind you, this only refers to videos, not taking into account the proliferation of still images and audio content that can also be employed for an expansive range of uses.

What is driving the proliferation of synthetic media content? Well, due to the open-source nature of the algorithms and programs used to produce deepfakes, both the quality and the ease of production appear to be improving at a much higher rate than initially anticipated. Even the technology’s own creators and pioneers have acknowledged this in recent public remarks.

“At Constella, through constant, real-time monitoring of the digital sphere, we have been able to establish early warning systems for our clients that immediately alert them when content related to their brands or executives begins circulating.”

Any relevant content identified is simultaneously analyzed to evaluate the probability of manipulation, offering threat intelligence, risk and reputation, and cyber intelligence teams an invaluable resource at a time when synthetic media can inflict damage in a matter of moments.

Although it can be difficult for a human observer to recognize fake content, what are some warning signs?

One project designed to identify techniques to counteract AI-generated misinformation, the Detect Fakes project at the MIT Media Lab, offers several helpful tips for spotting deepfakes. Their research determined that—given the quality of the technology and algorithms used to generate deepfakes today—there are a number of subtle hints that can indicate synthetic manipulation of digital media content. Most experts strongly recommend paying close attention to the face, as high-end deepfake manipulations nearly always involve facial modifications or transformations: skin on the cheeks and forehead that appears too smooth or too wrinkled relative to the rest of the face, unnatural blinking patterns, and lip movements that don’t quite match the audio are all common tells. Despite increased vigilance from everyday users, the rate of progress in the quality of deepfakes is astounding, making it increasingly challenging for any single person to identify well-produced synthetic media.
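An automated analogue of the “watch the face” advice is a temporal-consistency check: because face swaps are applied frame by frame, they can introduce abrupt jumps in simple face-region statistics that a real recording rarely shows. The sketch below is a hypothetical, simplified illustration of that idea—the function name, the use of mean brightness as the statistic, and the threshold are all assumptions, not any specific detector’s method.

```python
# Hypothetical illustration of a temporal-consistency check for video.
# In a real system, per-frame statistics would come from a face detector
# run over each frame; here they are plain floats for clarity.

def flag_inconsistent_frames(face_brightness, threshold=0.15):
    """Return indices of frames whose face-region brightness jumps
    sharply relative to the previous frame.

    face_brightness: per-frame mean brightness of the detected face
    region, normalized to [0, 1].
    threshold: maximum frame-to-frame change considered "natural"
    (an illustrative value, not an empirically tuned one).
    """
    flagged = []
    for i in range(1, len(face_brightness)):
        if abs(face_brightness[i] - face_brightness[i - 1]) > threshold:
            flagged.append(i)
    return flagged

# A stable clip vs. one with a sudden face-region jump at frame 3.
stable = [0.50, 0.51, 0.50, 0.52, 0.51]
jumpy = [0.50, 0.51, 0.50, 0.80, 0.51]
print(flag_inconsistent_frames(stable))  # []
print(flag_inconsistent_frames(jumpy))   # [3, 4]
```

Real forensic tools combine many such signals (blink rate, lip-sync error, compression artifacts) and feed them to trained classifiers; a single-statistic heuristic like this is only meant to show the shape of the approach.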

What steps can enterprises take to safeguard against emerging threats like deepfake videos and other synthetic media?

Deepfakes and other similar AI-enabled advancements will ultimately require businesses to adopt approaches to security that are both more agile and more holistic, protecting devices, applications, data, and cloud-service ecosystems. The massive distribution of synthetic content across the digital public sphere, especially in cases where it is weaponized by actors behaving with malicious intent, can have severe reputational consequences nearly instantaneously. Given the rapidly accelerating sophistication of deepfake technology, the most significant threat may be from sophisticated disinformation efforts that tailor ML models for targeted purposes and employ deepfakes as yet another building block in the arsenal of capabilities leveraged by malign actors in the digital sphere.

A comprehensive monitoring program that continuously analyzes the footprint of an organization and its key individuals across the multitude of data points, actors, and sources in the digital sphere is critical for the security of internal and external assets. Organizations can detect, remediate, and respond to deepfakes earlier as part of a broader security strategy that mitigates malign actors’ attempts to extract sensitive information, damage corporate reputation, or influence public opinion.
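The monitoring-and-alerting loop described above can be sketched in a few lines. This is a minimal, hypothetical outline—the term list, the stubbed manipulation score, and the alert threshold are illustrative assumptions, not a description of Constella’s actual pipeline.

```python
# Hypothetical sketch of a brand/executive monitoring loop: scan a stream
# of public posts for mentions of protected names, attach a (stubbed)
# manipulation-probability score, and raise an alert above a threshold.

PROTECTED_TERMS = {"examplecorp", "jane doe"}  # assumed brand and executive names


def manipulation_score(post):
    """Stub for a synthetic-media classifier; a real system would run
    image, video, and audio forensics models here."""
    return post.get("score", 0.0)


def monitor(posts, alert_threshold=0.8):
    """Yield alerts for posts that mention a protected term and appear
    likely to be manipulated."""
    for post in posts:
        text = post["text"].lower()
        if any(term in text for term in PROTECTED_TERMS):
            score = manipulation_score(post)
            if score >= alert_threshold:
                yield {"post": post["text"], "score": score}


feed = [
    {"text": "Video of Jane Doe announcing layoffs", "score": 0.92},
    {"text": "ExampleCorp quarterly earnings call", "score": 0.10},
    {"text": "Unrelated cat video", "score": 0.95},
]
alerts = list(monitor(feed))
print(alerts)  # only the high-scoring Jane Doe video triggers an alert
```

Note the design choice: mention detection and manipulation scoring are separate stages, so the expensive forensics models only run on content that actually concerns the protected brand or individuals.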