
The Emerging Threat of Deepfakes to Brands and Executives


In 2019, Moody’s published an official research announcement highlighting the new reality in which today’s organizations operate: a digital world characterized by sophisticated threats and malicious actors. The announcement stated two unsettling facts:

  • Artificial intelligence (AI) is making it easier to damage companies’ credit, reputation, and financial health by creating fake videos and images
  • These deepfakes could harm a company’s creditworthiness, and this risk will become harder to manage and mitigate as the AI that enables synthetic media advances

To delve deeper into the risk that deepfakes pose to companies’ financial stability and corporate reputation, Constella Intelligence has examined the emergence of this threat and tangible examples of the risks it presents, followed by recommendations on how companies can protect themselves and safeguard their reputations and assets in a digital ecosystem witnessing the proliferation of deepfake technology.

What Exactly Is a Deepfake?

A deepfake is content created using AI-based human image or audio synthesis: an image, video, or audio impersonation of someone, powered by AI, that is typically convincing and difficult to distinguish as false. This is achieved by merging or superimposing existing audio, image, and video content onto source content using an advanced machine learning technique known as a generative adversarial network (GAN), in which two neural networks are trained in competition: a generator that produces synthetic media and a discriminator that tries to tell it apart from the real thing. Given these characteristics, deepfakes have already been used in a wide array of contexts, including the production of “fake news” and manipulated content, malicious impersonations aimed at obtaining sensitive data for financial gain (known in this context as “social engineering”), and attempts to influence public opinion for corporate or political reputational damage.
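To make the adversarial training idea concrete, below is a minimal GAN sketch in Python, assuming PyTorch (the article names no framework); the tiny networks, toy dimensions, and random stand-in data are illustrative assumptions, not how production deepfake systems are built.

    import torch
    import torch.nn as nn

    LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32  # toy sizes; real systems use images/audio

    # Generator: maps random noise to synthetic samples.
    G = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))

    # Discriminator: scores samples as real (close to 1) or fake (close to 0).
    D = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    for step in range(1000):
        real = torch.randn(BATCH, DATA_DIM)       # stand-in for real training media
        fake = G(torch.randn(BATCH, LATENT_DIM))  # synthetic samples from the generator

        # Discriminator step: learn to separate real samples from generated ones.
        opt_d.zero_grad()
        d_loss = (bce(D(real), torch.ones(BATCH, 1))
                  + bce(D(fake.detach()), torch.zeros(BATCH, 1)))
        d_loss.backward()
        opt_d.step()

        # Generator step: learn to produce samples the discriminator accepts as real.
        opt_g.zero_grad()
        g_loss = bce(D(fake), torch.ones(BATCH, 1))
        g_loss.backward()
        opt_g.step()

The two networks improve in opposition: as the discriminator gets better at flagging fakes, the generator is pushed to produce ever more convincing output, which is precisely why mature deepfakes become difficult to distinguish by eye.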

As producing and deploying synthetic media becomes easier and more effective, one pressing scenario is a convincing fake crafted to move a company’s share price and exploit the market, damaging corporate reputation and inflicting severe financial harm in a mere digital instant. The possibility becomes vivid when you imagine a recognizable figure such as Mark Zuckerberg disclosing a massive corporate policy change, leak, or platform defect days before a merger, acquisition, or major product launch; the company’s share price could easily plummet.


Consider, for example, a widely circulated deepfake of Facebook’s Mark Zuckerberg, produced in this case by a creative agency for media publicity, thankfully not for a malicious or exploitative end.

“Imagine a fake but realistic-looking video of a CEO making racist comments or bragging about corrupt acts,” notes Leroy Terrelonge, AVP-Cyber Risk Analyst at Moody’s. “Advances in artificial intelligence will make it easier to create such deepfakes and harder to debunk them. Disinformation attacks could be a severe credit negative for victim companies.” The corporate and geopolitical contexts are similar, though not identical.

This risk is not limited to the corporate space; it is increasingly apparent that governments and public institutions must take the issue seriously as well. In a 2020 report, the Brookings Institution encapsulated the political and social risks presented by the advent of deepfakes: “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”

Without concerted mitigation, public confidence in the media we consume will be undermined, fundamentally hindering the way we engage with brands, society at large, and one another. Companies must understand deepfakes as another building block of the disinformation ecosystem: a new way to attack the truth by deceiving and misleading others into believing false information, or simply into doubting verifiable reality. Senior management therefore needs to recognize that cyber-intelligence encompassing continuous, comprehensive, and sophisticated analysis of the digital public space will be a critical business asset going forward, and it should be a top priority for both organizations and their executives.

Are You Equipped to Safeguard Against the Threat of Deepfakes? 5 Questions Your Organization Needs to Answer

  1. Is your organization, with particular reference to executives and key team leaders, familiar with the most likely threat scenarios posed by deepfakes, including phishing attacks, potential stock market manipulation, and blackmail or extortion?

  2. Can you accurately identify where, when, and how your brand is mentioned in the public digital sphere in real time, establishing a basis for real-time mitigation of incidents?

  3. Are your employees trained and prepared to spot and report deepfakes or suspicious synthetic or manipulated content that can enhance the effectiveness of attacks like business email compromise (BEC) or phishing?

  4. Do you have an incident response plan that clearly details steps for security and communications remediation once an incident occurs?

  5. Has your organization evaluated the feasibility of deepfake detection technology for mitigating threats against your brand, executives, and key individuals?

Experts across a wide range of fields increasingly agree that the battle against the malign use of deepfakes will necessitate the development of advanced solutions for continuous monitoring and analysis of the digital sphere, including threat monitoring and mitigation technologies integrated into existing security infrastructures.

Using Artificial Intelligence to Solve the Deepfake Problem

With the development of Constella’s Deepfake Detection Model, our team of data scientists is actively applying AI/ML techniques to identify fake content that is difficult for a human observer to detect. Amid the current and projected environment in which video deepfakes increasingly pose a tangible risk to companies and individuals, Constella’s team has piloted the development of several algorithms for early detection of video and audio content with a high probability of alteration or manipulation. The emphasis of these efforts has been not only on developing and training high-precision, accurate models but also on creating an architecture that allows for real-time integration and deployment in diverse existing analytics environments.

Constella’s Deepfake Detection Model uses AI/ML algorithms that analyze videos to evaluate, frame by frame, the probability that content has been manipulated or altered from the original. Analysis of the reference video indicates that several of the analyzed frames demonstrate a sufficient probability of alteration to be flagged as manipulated.
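To illustrate the general approach, the sketch below shows hypothetical frame-level scoring in Python, assuming OpenCV and PyTorch/torchvision; this is not Constella’s actual model. The ResNet-18 backbone, the sampling interval, the THRESHOLD cutoff, and the score_video helper are all illustrative assumptions.

    import cv2                       # OpenCV for frame extraction
    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    # Hypothetical binary classifier: ResNet-18 with a single "manipulation" logit.
    # A real detector would be trained on labeled authentic/manipulated frames.
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 1)
    model.eval()

    preprocess = T.Compose([
        T.ToPILImage(),
        T.Resize((224, 224)),
        T.ToTensor(),
    ])

    THRESHOLD = 0.5  # illustrative cutoff for flagging a frame as manipulated

    def score_video(path: str, every_nth: int = 30) -> list[tuple[int, float]]:
        """Return (frame_index, manipulation_probability) for flagged frames."""
        flagged = []
        cap = cv2.VideoCapture(path)
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % every_nth == 0:
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR
                batch = preprocess(rgb).unsqueeze(0)
                with torch.no_grad():
                    prob = torch.sigmoid(model(batch)).item()
                if prob >= THRESHOLD:
                    flagged.append((idx, prob))
            idx += 1
        cap.release()
        return flagged

Per-frame probabilities can then be aggregated, for example as the fraction of flagged frames, into a video-level manipulation score that mirrors the frame-by-frame analysis described above.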

At Constella Intelligence, we understand that deepfakes and synthetic media are adversarial challenges that present a constantly moving target, much as spam and identity theft, with their corresponding techniques, did for our clients in past eras. Active collaboration across academia, media, public institutions, and technology companies will be critical to developing the safety infrastructure needed to mitigate these and other emerging risks.