Deepfakes are already big business
In the movies, hacking this kind of security is predictably dramatic – a hero might hold a hapless henchman up to a retinal scanner, while in Silence of the Lambs Hannibal Lecter drapes a victim’s severed face over his own to fool an ambulance crew. Today’s techniques rely less on blood and guts and more on AI-augmented tools. The bad news is that they’re getting better by the day – and the results could be grisly for your balance sheet.
Consumers have been hit by a wave of deepfake scams in the last 12 months. Criminals have used AI-generated voices to convince elderly couples that their children or grandchildren are in jail and urgently need money for legal fees. Voice-cloning technology is believed to have been responsible for $11 million worth of scams in the US last year, while the Australian Cyber Security Centre warns of “new technologies” and “growing sophistication in scam approaches”.
AI fraud and scams can hit harder than ransomware
Criminals have companies in their sights too – and deepfake fraud can hit even harder than ransomware. In 2020, the branch manager of a Hong Kong business received a call asking him to authorise a transaction for a major acquisition. The manager recognised his director’s voice, found identical requests in his email, and transferred $35 million – only to find the voice had been cloned and the emails spoofed.
In 2021, Chinese hackers pulled off an even more damaging scam, using images purchased on the dark web to create deepfake videos that fooled biometric security systems. They then used fake tax invoices to claim $76 million. And this year, a journalist found that the “voiceprint” used by Services Australia’s Centrelink service could be fooled by an AI clone built from just four minutes of audio, opening the door for cybercriminals to target personal accounts.
Attackers can now use videos, databases and AI to fool biometric security
Deepfakes are also increasingly dangerous because we’re sharing more of our lives online. A scammer looking for video or audio footage may be able to scrape it from LinkedIn, Twitter, TikTok, Instagram or Facebook.
Some tools need only a few seconds of audio, which they break down into syllables and reassemble into new words. Scammers using such tactics will probably use them sparingly – the ruse of having a real human (perhaps posing as a lawyer or PA) layer an artificially generated voice over their own allows criminals to get by with limited audio or video samples. But more sophisticated AI, powered by chatbots, can generate adaptive, real-time responses. Combined with high-end voice generation, such a deepfake can be dangerously convincing.
Biometric hacks can start in the same way, with hackers gathering material from publicly available photos or video, and even from physical identity markers. In 2021, researchers showed that a fingerprint left on anything from a wine glass to a car door can be lifted and used to pass authentication, while some testers have fooled facial recognition software with simple plastic masks. Data breaches offer further ammunition: facial biometrics from passports and fingerprint records have both been leaked in recent years.
Every business should be prepared for deepfakes
So how can businesses stay safe in an age when neither a familiar voice nor a cutting-edge biometric check can be trusted? First, don’t assume it won’t happen to you. Smaller companies may have less rigorous processes or structures and be more likely to jump when the boss asks for an urgent money transfer. At the other end of the scale, larger corporations are likely to attract more sophisticated attacks from criminals chasing a bigger payday. Every organisation should be prepared.
Whether a scammer is seeking to use deepfakes to commit fraud, or gather biometrics to access sensitive data, social engineering will almost always be involved at some point. Users should be trained in common scams and drilled on how to deal with suspicious incidents. CISOs should stay abreast of dangers by following threat news and keeping an eye on the latest research.
Where possible, you should limit what you post on social media, and manage your company’s online presence via formal social media guidelines. While you can’t stop fraudsters trying, you can make it much harder for them to gather material.
Set up controls and security tools
Work with your finance team to set and implement controls over major transactions. Faked audio combined with a spoofed email can be a persuasive combination. Requiring a call back as standard can help verify requests, and requiring sign-off from multiple executives on large sums can limit your exposure to risk. While biometrics are an important security tool offering low-friction verification, no organisation should rely on biometrics alone. They are far stronger as part of a multi-factor authentication (MFA) strategy.
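To make those controls concrete, here is a minimal sketch of how call-back verification and multi-executive sign-off might be enforced in code. The thresholds, field names and email addresses are all illustrative assumptions, not taken from the article or any real policy:

```python
# Hypothetical transaction controls: call-back verification plus
# dual sign-off on large transfers. Thresholds are made up for illustration.

from dataclasses import dataclass, field

CALLBACK_THRESHOLD = 10_000      # assumed: call-back required above this amount
DUAL_SIGNOFF_THRESHOLD = 50_000  # assumed: two independent approvers above this

@dataclass
class TransferRequest:
    amount: float
    requester: str
    callback_verified: bool = False       # set after a call back on a known-good number
    approvals: set[str] = field(default_factory=set)

def may_execute(req: TransferRequest) -> bool:
    """Return True only once every required control has been satisfied."""
    if req.amount > CALLBACK_THRESHOLD and not req.callback_verified:
        return False  # a familiar voice on the original call is not enough
    required = 2 if req.amount > DUAL_SIGNOFF_THRESHOLD else 1
    # Approvals must come from people other than the requester
    independent = req.approvals - {req.requester}
    return len(independent) >= required

# A faked "urgent" request from the boss stalls until the checks pass.
req = TransferRequest(amount=75_000, requester="cfo@example.com")
req.approvals.add("coo@example.com")
print(may_execute(req))   # False: no call-back verification yet
req.callback_verified = True
req.approvals.add("ceo@example.com")
print(may_execute(req))   # True: call back done, two independent approvers
```

The point of the design is that no single channel – a voice, an email, even both together – can release funds on its own; a cloned voice still runs into the call-back and the second approver.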
AI, machine learning and fraud protection software can bolster your defence. Sophisticated tools can spot suspicious behaviour and identify spoofed biometrics. Signals such as digital footprints and IP addresses can feed into a risk score, with high-risk cases then escalated for further verification. Eventually, an attack may still succeed, but you can limit the damage an incursion can cause via network segmentation or a zero-trust architecture.
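As a rough illustration of the risk-scoring idea, the sketch below adds up a handful of simple signals and escalates anything above a cut-off. The signal names, weights and threshold are all assumptions for the example; real fraud platforms use far richer models:

```python
# Hedged sketch of additive risk scoring from login/transaction signals.
# All weights and thresholds are invented for illustration.

KNOWN_IPS = {"203.0.113.10", "203.0.113.11"}  # hypothetical allow-list

def risk_score(ip: str, device_known: bool, hour: int, geo_matches: bool) -> int:
    score = 0
    if ip not in KNOWN_IPS:
        score += 30   # request from an unfamiliar network
    if not device_known:
        score += 25   # no prior digital footprint for this device
    if hour < 6 or hour > 22:
        score += 15   # outside normal business hours
    if not geo_matches:
        score += 30   # location inconsistent with the account holder
    return score

ESCALATION_THRESHOLD = 50  # assumed cut-off for manual review

score = risk_score("198.51.100.7", device_known=False, hour=3, geo_matches=True)
if score >= ESCALATION_THRESHOLD:
    print(f"score {score}: escalate for further verification")
else:
    print(f"score {score}: allow with standard checks")
```

Even a crude score like this pushes the suspicious cases – new device, odd hours, unknown network – towards a human check rather than straight through.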
Multi-pronged security can keep deepfakes at bay
AI-augmented tools are making impersonation scams and biometric hacks more widespread by the day. Organisations can no longer trust phone calls or emails that look or sound familiar, and that ‘ultra-secure’ retina scan could be bypassed by a criminal with a phone and a few pieces of DIY kit.
But companies can limit the damage by training staff to spot common scams and introducing robust verification measures. Modern tech has made it easier than ever for a scammer to fool you once, so the answer is to erect multiple lines of defence – requiring more than one sign-off on large transfers and MFA on authentication tools – and show criminals up for the fakes they are.