Deepfake is a combination of the terms “deep learning” and “fake”. It denotes a type of hyper-realistic image, video or audio file generated by an AI. These models are fed large amounts of data in the form of images or audio files. The algorithms behind them then learn to reproduce incredibly lifelike recordings, images and sounds of events that may never have happened.
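The core idea of “learning to reproduce from data” can be illustrated with a deliberately tiny sketch. This is not a deepfake model; it is a toy in which a simple generator adjusts its parameters until its output matches the statistics of the “real” data it is fed. All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the model is fed -- a 1-D stand-in for images or audio clips.
real = rng.normal(loc=5.0, scale=2.0, size=10_000)

# Toy generator: g(z) = mu + sigma * z, starting far from the real distribution.
mu, sigma = 0.0, 1.0

# Training loop: nudge the generator's parameters toward the data's statistics.
lr = 0.05
for step in range(500):
    batch = rng.choice(real, size=256)
    mu    += lr * (batch.mean() - mu)     # pull the mean toward the data
    sigma += lr * (batch.std()  - sigma)  # pull the spread toward the data

# Samples from the trained generator are now hard to tell from the real data.
fake = mu + sigma * rng.standard_normal(10_000)
print(mu, sigma)  # both end up close to the real values (5.0 and 2.0)
```

Real deepfake systems replace this moment-matching with deep neural networks (often trained adversarially), but the principle is the same: the output distribution is driven toward the training data until the two are difficult to distinguish.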
There is a growing recognition that disinformation campaigns, or “fake news,” could negatively impact businesses, made all the easier by technological advances in AI. Foremost among those concerns is the rapid advancement of deepfake technology. In a few years, deepfake technology may be indistinguishable from reality, according to Hao Li, founder and CEO of Pinscreen, Inc., associate professor of Computer Science at the University of Southern California, and one of the most prominent pioneers in computer graphics and vision.
“We are working together on an approach that assumes that deepfakes will be perfect,” Li said. “Our guess is that, in two to three years, it’s going to be perfect. There will be no way to tell if it’s real or not, so we have to take a different approach.”
Li has shown that deepfakes can present numerous benefits to the world of e-commerce, such as in the entertainment and fashion industries. However, the risk that visible or high-level employees could be extorted, blackmailed or otherwise held hostage is becoming a serious concern as the technology progresses. Beyond the potential harm to employees themselves, these types of cyberattacks circumvent current financial-security methods for minimizing the impact of spammers, scammers and con artists.
“If you want to be able to detect deepfakes, you have to also see what the limits are,” Li said, explaining that the issue isn’t the existence of the technology but the intentions of how that technology is used. “The real question is, how can we detect [media] where the intention is . . . to deceive people or . . . [have] a harmful consequence.”
Take a moment to consider the perceived security of certain biometric data, such as that used by voice and facial recognition software. In a few short years, deepfake technology used for nefarious means could allow individuals to bypass current security pathways with startling ease.
Concerns about deepfake technology are most prominent in discussions around fraud, extortion and market manipulation. However, the threats to brand reputation cannot be overlooked. These threats, far from being idle considerations for a tech-centric future, are part of the evolving reality of doing business. According to a report by New Knowledge, 78% of consumers surveyed agreed that “disinformation damages a brand’s reputation.”