Deepfakes: When Stolen Identities Cost More Than Millions of Dollars
Mitigating the negative effects of deepfakes is crucial for preserving human civilization.
Deepfake technology is revolutionizing the film industry in spectacular ways. In recent years, filmmakers have harnessed deepfakes to achieve various objectives, unlocking unprecedented avenues of creativity and storytelling. These applications include resurrecting deceased actors, generating hyper-realistic visual effects, and seamlessly integrating digital doubles. For example, such techniques were used to digitally de-age Robert De Niro in The Irishman and to construct a virtual doppelganger of Paul Walker in Fast & Furious 7.
Nonetheless, the harmful effects of deepfakes on society extend far beyond the film industry's awe-inspiring uses of the technology. In a recent incident, a company in Hong Kong lost $25.6 million to a deepfake-enabled fraud. The fraudsters used AI to fabricate a multi-participant video call in which every participant except the unsuspecting victim was a digital imitation of a real employee. The spoofed chief financial officer then instructed the victim to transfer funds, leading to a substantial financial loss.
Deepfake technology has played a role in numerous scams aimed at corporations. In 2021, swindlers used deepfake audio to imitate the voice of an executive at a Japanese company, tricking a manager into transferring $35 million to them.
In another instance, a pair of fraudsters infiltrated China's tax system by fabricating identities with facial images procured on the black market. They set up a shell company that issued counterfeit tax invoices worth as much as $76.2 million. And in 2023, a deepfake image depicting an explosion near the Pentagon briefly wiped an estimated $500 billion off the stock market before prices recovered.
Deepfakes are synthetic media that use artificial intelligence (AI) to modify the appearance or voice of real individuals. They can be used for entertainment and education, but also for malevolent purposes. For example, deepfakes can create the illusion of someone saying or doing something they never did, or generate convincing imitations of celebrities or public figures.
The term deepfake is a blend of "deep learning", the type of AI used to create them, and "fake". Deepfakes are generated by employing machine learning models, such as generative adversarial networks (GANs), trained on a substantial volume of authentic video or audio data. The trained network is then used to produce counterfeit video or audio that looks or sounds authentic.
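To make the adversarial training idea concrete, here is a minimal sketch in PyTorch. The layer sizes are arbitrary and the "real" data is random noise standing in for the thousands of authentic images or audio clips a real deepfake pipeline would use; the point is only to show how a generator and a discriminator are trained against each other.

```python
# Minimal GAN training loop (illustrative sketch only; real deepfake models
# train convolutional networks on large face/voice datasets, not random vectors).
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64   # hypothetical sizes chosen for the example

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: estimates the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    # Stand-in "real" data; a deepfake system would load authentic frames or audio here.
    real = torch.randn(32, DATA_DIM)
    fake = generator(torch.randn(32, LATENT_DIM))

    # 1) Train the discriminator to tell real samples from generated ones.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into answering "real".
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

In an actual deepfake system, the generator is a face-swapping or voice-synthesis network and the discriminator judges rendered frames or audio, but the underlying adversarial loop is the same.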
Deepfakes pose significant challenges to security, privacy, and trust in the digital world. They can be exploited to disseminate misinformation, defame individuals, impersonate authorities, or sway public sentiment. One of the most detrimental applications of deepfakes is identity fraud, which can inflict harm on both individuals and organizations.
Identity fraud involves the illegal use of another person's personal details, such as their name, address, social security number, or credit card information, to gain financial or other advantages. Deepfakes can facilitate identity fraud and lend it credibility by producing convincing and seemingly trustworthy replicas of the victims or authorities.
Deepfakes and identity theft have been around for a while, but they have become more common and more sophisticated in recent years, owing to progress in AI and the accessibility of data and tools. Sumsub’s Identity Fraud Report 2023 shows that the global detection of deepfakes across all industries increased tenfold from 2022 to 2023, with significant regional variations: a 1,740% spike in North America, 1,530% in APAC, and 780% in Europe.
The data indicates that the security risk posed by AI-generated deepfakes is rising, with Onfido’s research revealing a 3,000% surge in deepfake fraud attempts in 2023. In 2022, 66% of cybersecurity professionals reported deepfake attacks within their organizations. Over the past year, 26% of smaller companies and 38% of larger ones suffered deepfake fraud, with losses of up to $480,000.
Deepfakes extend beyond merely being instruments for identity theft or business fraud. They pose a challenge to the trustworthiness and authenticity of the information we obtain and the individuals we interact with online, thereby jeopardizing the very bedrock of our society.
Deepfakes pose a serious threat to individuals and their rights. These counterfeit media can imitate anyone, making them appear to do or say anything. They can be exploited by malicious entities to invade people's privacy, steal their identity, tarnish their reputation, and expose them to sexual, financial, or legal harm. Deepfakes have the potential to shatter people's lives and dignity in an instant.
Deepfake technology presents a significant risk to financial institutions and businesses. These organizations are susceptible to deepfakes, which can be used to perpetrate fraud, distort market values, and spread fabricated content and information. Deepfakes can mislead customers, competitors, or regulators, disrupt operations, or help steal trade secrets, leading to lost trust, assets, and opportunities, and to reputational damage.
As the US gears up for an election year, the perils associated with deepfakes become increasingly alarming. Deepfakes have the potential to sway election results by spreading disinformation, propaganda, or manipulative content about political candidates or issues. They can instigate societal discord by undermining faith in the political system, provoking violence, or inciting disorder. This threat strikes at the pillars of democracy, security, and stability, with significant consequences for American society.
Neglecting to tackle the issue of deepfakes can have widespread consequences across many sectors. Hence, a comprehensive strategy is required to counteract them. This involves developing advanced detection tools that reveal inconsistencies, raising public awareness so people can spot potential dangers, and implementing verification systems that authenticate content.
Moreover, advocating for responsible use among creators, regulating platforms, and enforcing technical protective measures can help limit the spread of malicious deepfakes. Prompt content-removal processes, comprehensive fact-checking, and the promotion of digital literacy are essential elements of a successful counterstrategy.
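As one deliberately simplified illustration of what a content-verification system can look like, the sketch below uses the Python cryptography library to sign media bytes at publication time and check that signature later; the key handling and stand-in media bytes are hypothetical. Provenance standards such as C2PA rest on the same principle of cryptographically signed content.

```python
# Sketch: verifying content authenticity with a digital signature
# (requires the third-party "cryptography" package; keys and media are stand-ins).
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# At publication time, a trusted source signs the raw media bytes.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw video or audio bytes would go here..."  # stand-in content
signature = private_key.sign(media_bytes)

# Later, anyone with the publisher's public key can verify integrity:
# an edited or wholly fabricated copy will fail the check.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature))                 # True: untouched original
print(is_authentic(media_bytes + b" tampered", signature))  # False: content was altered
```

A signature only proves that content has not changed since a trusted party signed it; it does not, on its own, reveal whether the signed content is itself a deepfake, which is why provenance checks must be paired with detection tools and fact-checking.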