Why smart neural networks are the next generation of fraud

As an accountant, you’re likely up to speed on the latest fraud tactics when it comes to finances and identity theft. So perhaps you’re also one of the seven million people who watched the video of actor and impersonator Bill Hader on David Letterman’s talk show,1 in which a ‘deepfake master’ deftly turned Hader into Tom Cruise, then Seth Rogen, then back to himself, toggling among the three for several minutes. Since then, deepfakes have popped up in videos of political speeches,2 Hollywood films,3 and some 15,000 pornographic videos.4

Unlike a ‘cheapfake’, which tampers with audio and video content to generate a new recording that most people would identify as fraudulent, or a ‘dumbfake’, where existing content is spliced to alter someone’s statement, a ‘deepfake’ uses advances in machine learning to manipulate the subject’s facial movements and speech, generating remarkably believable and utterly inauthentic content. The proliferation of deepfakes ahead of this year’s U.S. presidential election has spurred Facebook to ban the videos on its platform.2

Deepfakes began two years ago when someone whose Reddit handle was ‘deepfake’ used A.I. technology to superimpose the faces of female celebrities onto nude bodies in pornographic videos. This application rapidly took off within the siloed world of social media.5

Today, rapid advances in machine learning have made it possible for computers to generate deepfakes. Generative adversarial networks (GANs) pit two pieces of software against each other so that each learns from the other. Images and video of the actual person are fed into a neural network, which maps statistical connections between strategic areas, such as the mouth and eyes, and the features to be faked, then uses those connections to create new images. The network competes against a copy of itself, an ‘adversary’ whose job is to identify the fakes. The generator keeps producing images until its adversary can no longer distinguish the real from the fake. The accelerating pace of machine learning means that computers will soon be able to create fake content with few, if any, detectable glitches.3
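The adversarial feedback loop described above can be sketched in a few lines of code. The toy Python example below illustrates the principle only: a fixed, hand-coded discriminator and a one-parameter generator stand in for the two real neural networks, and all the names and numbers are invented for illustration. The generator repeatedly nudges its output in whatever direction raises the discriminator’s “looks real” score, stopping only when its fakes are scored as indistinguishable from the real thing.

```python
import math

REAL_MEAN = 4.0  # stand-in for the distribution of authentic content

def discriminator(x):
    # Fixed toy discriminator: returns 1.0 when a sample looks exactly
    # "real", and a score near 0 for an obvious fake.
    return math.exp(-(x - REAL_MEAN) ** 2)

def train_generator(g=0.0, lr=0.05, steps=500):
    # One-parameter "generator": its parameter g IS its output.
    # Like a real GAN generator, it climbs the gradient of
    # log(discriminator(fake)), adjusting itself so the discriminator
    # scores its output as real.
    for _ in range(steps):
        grad_log_score = -2.0 * (g - REAL_MEAN)  # d/dg of log(discriminator(g))
        g += lr * grad_log_score
    return g

fake = train_generator()
print(round(fake, 3), round(discriminator(fake), 3))  # ends near 4.0 with score near 1.0
```

In a real GAN the discriminator is also being retrained at every step to catch the improving fakes, which is what drives the arms race toward glitch-free output.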

Good Machine

Like any new technology, deepfakes have some positive uses. For example, female gamers are often the target of abuse online. Technology called ‘audio skins’, which lets them customize the voices of their avatars, may offer women better protection, although it doesn’t address the root cause of the abuse, which is misogyny.6 Hollywood makes ample use of deepfakes to de-age actors, although their creaky hips still give their true ages away.7

The trend of film sequels and prequels benefits from deepfakes, which can bring back original characters or save a production should a major character fall ill or die during filming. Outside of entertainment, deepfake technology has been used to create new MRI images for medical training, to recreate the voices and images of deceased loved ones, and to help people with ALS regain voice control.8 In a 2019 global educational ad campaign about malaria featuring David Beckham, deepfake technology allowed him to appear multilingual in order to reach more people.10

Bad Machine

Unfortunately, bad actors don’t just live in Hollywood. Criminals are using deepfake technology to extort, exploit, and steal. The growing adoption of voice-activated transactions and AI-based virtual assistants in financial services has also driven a sharp uptick in voice fraud, where a fraudster mimics a client’s voice to access their private information.6,11

According to a recent cybersecurity survey, one of the top risks is cybercriminals’ use of deepfake technology to manipulate the appearances and voices of corporate and government leaders.12 This could be used for brand sabotage, stock-price manipulation, false statements, and the leaking of confidential information.9

There have already been reported cases of CEOs being duped by deepfaked audio into transferring large sums into criminals’ accounts. Public figures are relatively easy to spoof because their voices and images appear frequently in the media.13

Detect/Protect: Where do we go from here?

One defensive approach is to strengthen our ‘herd immunity’ to deepfakes through education and training, especially for professional accountants. For example, learning to spot subtle ‘tells’, such as a face wobble or inconsistencies in lighting or shadows, can reveal that content has been faked.9 However, rapid advances in machine learning will eventually make these defenses obsolete.14

Just as luxury brands embed codes or other markers of authenticity into their goods, creators of legitimate content could employ solutions such as ‘watermarking’6 or integrate cryptographic techniques into the video or audio that could flag tampering.8
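As a sketch of what cryptographic tamper-flagging might look like, the snippet below uses Python’s standard hmac and hashlib libraries to sign a media file’s bytes with a keyed digest. The key, file contents, and function names are all hypothetical illustrations, not any particular watermarking product, but the mechanism is the real one: any later edit to the content, even a single byte, changes the digest and fails verification.

```python
import hmac
import hashlib

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the content creator

def sign(content: bytes) -> str:
    # Keyed digest (HMAC-SHA256) published alongside the video or audio file.
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    # Recompute the digest and compare in constant time.
    return hmac.compare_digest(sign(content), tag)

original = b"\x00\x01video-frames\xff"   # stand-in for real media bytes
tag = sign(original)

print(verify(original, tag))                   # True: untouched content checks out
tampered = original.replace(b"\x01", b"\x02")  # a single altered byte
print(verify(tampered, tag))                   # False: tampering is flagged
```

A scheme like this proves a file hasn’t changed since it was signed; it can’t, on its own, prove the original recording was authentic in the first place.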

In the financial sector, using voice biometrics would be another way to authenticate a human voice.

Legislation may be another avenue of defense against deepfakes if they are found to infringe privacy or copyright laws. Because deepfakes are not copies of existing content, this is a murky legal area and, in any event, it may be impossible to find, and then sue, the criminal organizations behind them.

Rita Silvan, CIM™️, is a personal finance and investment writer and editor. She is the former editor-in-chief of ELLE Canada magazine and an award-winning journalist and TV media personality. Rita is the editor-in-chief of Golden Girl Finance, an online magazine focused on women’s financial success. When not writing about all things financial, Rita explores Toronto’s parks with her standard poodle.

Rita Silvan is a paid spokesperson of Sonnet Insurance.
Accountants and other professionals can save even more with an exclusive Sonnet discount.

References

1. https://www.youtube.com/watch?v=VWrhRBb-1Ig
2. https://www.theglobeandmail.com/business/international-business/us-business/article-facebook-bans-deepfakes-in-fight-against-online-manipulation/
3. https://www.economist.com/the-economist-explains/2019/08/07/what-is-a-deepfake
4. https://www.ft.com/content/9df280dc-e9dd-11e9-a240-3b065ef5fc55
5. https://www.nytimes.com/2019/11/24/technology/tech-companies-deepfakes.html
6. https://qz.com/1620073/voice-skins-make-it-possible-to-change-your-voice-online-and-thats-scary/
7. https://www.theatlantic.com/entertainment/archive/2019/12/irishman-gemini-man-and-rise-de-aging/603130/
8. https://www.csis.org/analysis/trust-your-eyes-deepfakes-policy-brief
9. https://timreview.ca/article/1282
10. https://www.wsj.com/articles/jpmorgan-cio-says-ai-holds-promise-for-helping-people-save-11576097646
11. https://www.barrons.com/articles/PR-CO-20191202-903397?tesla=y&tesla=y
12. https://www.zdnet.com/article/forget-email-scammers-use-ceo-voice-deepfakes-to-con-workers-into-wiring-cash/
13. https://www.ft.com/content/4183b400-f960-11e9-98fd-4d6c20050229
14. https://www.torys.com/insights/publications/2019/05/a-short-take-on-deepfakes