AI & Corporate Fraud: The New Face of Digital Threats

29 April 2025

Artificial intelligence (AI) is revolutionising the business world, radically transforming the way organisations operate. From automating repetitive tasks and reducing costs to boosting efficiency, AI is emerging as a key driver of digital evolution. According to Gartner estimates, in October 2023, 55% of enterprises worldwide were already piloting or running generative artificial intelligence (GenAI) in production, a figure that is estimated to have risen further since.

AI significantly improves the customer experience, supports software development and delivers deeper, more actionable insights for strategic decision-making. But businesses are not the only ones leveraging this progress.

Addressing this new threat landscape requires a multi-layered approach built on three pillars: people, processes and technology. Only then can a business stay ahead of evolving threats and realise the benefits of AI safely and responsibly.

What are the latest AI and deepfake threats?

Cybercriminals are leveraging AI and deepfakes in a variety of ways:

  • Fake employees: North Koreans posing as remote freelance IT professionals have reportedly infiltrated hundreds of companies. They use AI tools to create fake CVs and other documents, including AI-edited photos, to pass vetting checks. Their goals are to generate revenue for the North Korean regime, steal data, conduct espionage and even deploy ransomware.
  • Business Email Compromise (BEC) scams: deepfake audio and video clips are used to supercharge BEC scams, in which finance workers are tricked into transferring funds to accounts controlled by the scammers. Recently, an accounting employee was persuaded to transfer $25 million to scammers who used deepfakes to pose as the company's CFO and other executives on a conference call. Such scams are not new, however: in 2019, fraudsters used a deepfake to convince a UK energy company executive that he was on the phone with his boss, and tricked him into transferring £200,000.
  • Bypassing authentication services: fraudsters use sophisticated techniques to impersonate legitimate customers, create fake identities and defeat authentication checks when opening accounts or logging into services. One particularly advanced strain of malware, GoldPickaxe, is designed to harvest facial recognition data, which is then used to create deepfake videos. According to a recent report, 13.5% of new digital accounts opened worldwide last year were suspected of being fraudulent.
  • Deepfake scams: cybercriminals use deepfakes of corporate executives and high-ranking officials in social media posts to promote investment fraud and other malicious schemes, luring unsuspecting victims into a new type of investment scam known as Nomani.
  • "Cracking" passwords: AI algorithms can be used to decrypt customer and employee passwords, enabling data theft, ransomware attacks and identity fraud. PassGAN, for example, is said to be able to crack passwords in less than 30 seconds.
  • Document forgery: forged documents are another way to bypass know your customer (KYC) checks at banks and other companies, and can also be used for insurance fraud. According to surveys, 94% of claims managers suspect that at least 5% of claims have been manipulated using artificial intelligence.
  • Phishing and target identification: the UK's National Cyber Security Centre (NCSC) has warned of the ever-increasing use of AI by cybercriminals. In early 2024, the NCSC said that this technology "is almost certain to increase both the volume and impact of cyber attacks over the next two years". Of particular concern is the improving effectiveness of social engineering and target reconnaissance, which amplifies ransomware attacks, data theft and large-scale customer phishing.
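To make the password point concrete, here is a minimal defensive sketch: screening new passwords for guessability at registration, so that credentials an AI-assisted guesser would break quickly never enter the system. It assumes the open-source zxcvbn estimator purely for illustration; the acceptance threshold and example passwords are invented.

```python
# Minimal sketch: screening passwords for guessability at registration.
# Assumes the open-source zxcvbn estimator (pip install zxcvbn).
from zxcvbn import zxcvbn

def is_acceptable(password: str, min_score: int = 3) -> bool:
    """Reject passwords that a pattern-aware guesser would crack quickly."""
    result = zxcvbn(password)
    # score runs from 0 (trivially guessable) to 4 (very strong)
    return result["score"] >= min_score

for candidate in ["Summer2025!", "x7#Kp9$vLq2&Wm"]:
    verdict = "accepted" if is_acceptable(candidate) else "rejected"
    print(f"{candidate}: {verdict}")
```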

The impact of AI fraud translates mainly into financial and reputational damage. According to one report, 38% of revenue lost to fraud in the past year was attributed to AI-based techniques.

Consider the implications:

  • Bypassing KYC checks allows fraudsters to obtain credit fraudulently and drain money from legitimate customer accounts.
  • Fake employees can steal sensitive and regulated customer information, causing financial losses, reputational damage and compliance issues.
  • BEC (Business Email Compromise) scams can result in huge losses. In 2023, these types of attacks netted cybercriminals over $2.9 billion.
  • Impersonation scams threaten customer loyalty. Research shows that a third of consumers will walk away from a company after a single bad experience.

Curbing Fraud in the Age of Artificial Intelligence

Combating AI-enabled fraud requires a multi-layered approach focused on people, processes and technology.

This should include:

  • Frequent fraud risk assessments
  • Updating anti-fraud policies so they remain relevant to AI
  • Comprehensive staff training and awareness programmes (e.g. to detect phishing and deepfakes)
  • Training and awareness programmes for customers
  • Enabling multi-factor authentication (MFA) for all sensitive corporate and customer accounts (a minimal verification sketch follows this list)
  • Enhanced background checks for employees, including scanning CVs for career inconsistencies
  • Ensuring that all candidates are interviewed via video before hiring
  • Improving collaboration between human resources and cybersecurity teams
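On the MFA point, the sketch below shows what a minimal server-side check of a time-based one-time password (TOTP, RFC 6238) looks like. It assumes the pyotp library; the issuer and account names are placeholders, and a production deployment would store secrets securely and rate-limit verification attempts.

```python
# Minimal TOTP sketch, assuming the pyotp library (pip install pyotp).
import pyotp

# At enrolment, generate a per-user secret and store it server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user loads the same secret into an authenticator app via this URI.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# At login, verify the submitted six-digit code as the second factor.
# valid_window=1 tolerates one 30-second step of clock drift between devices.
submitted = input("Enter the 6-digit code: ")
print("accepted" if totp.verify(submitted, valid_window=1) else "rejected")
```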

AI can be a powerful ally in this battle. For example, it can be leveraged to:

  • Detect deepfakes with AI tools, particularly in authentication (KYC) processes
  • Analyse patterns of suspicious behaviour in employee and customer data using machine learning algorithms (a minimal sketch follows this list)
  • Generate synthetic data with GenAI to develop, test and train new fraud detection models
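To illustrate the second point, here is a minimal sketch of unsupervised anomaly detection, assuming scikit-learn and invented transaction features (amount, hour of day, recent transfer count): an Isolation Forest is trained on synthetic "normal" activity (itself a small instance of the third point) and then flags transactions that deviate from the learned pattern.

```python
# Minimal sketch: flagging suspicious transactions with an Isolation Forest.
# Assumes scikit-learn; the features and data here are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" behaviour: [amount, hour_of_day, transfers_last_24h]
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1000),  # modest amounts
    rng.normal(loc=13, scale=3, size=1000),         # business hours
    rng.poisson(lam=2, size=1000),                  # few transfers
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Two transactions that break the learned pattern: large, late, bursty.
suspicious = np.array([[50_000, 3, 15],
                       [42_000, 2, 12]])

# predict() returns 1 for inliers and -1 for anomalies.
print(model.predict(suspicious))  # expected: [-1 -1]
```

In practice such a model would be retrained regularly, combined with rule-based checks, and its flags routed to human reviewers rather than acted on automatically.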

As the contest between malicious and defensive uses of AI enters a new, more challenging phase, businesses need to review and evolve their cybersecurity and anti-fraud strategies. Adapting to the ever-changing threat landscape is no longer optional; it is imperative.

Failure to respond can erode customer trust, hurt brand equity and derail critical digital transformation initiatives.

AI has the power to change the game for cybercriminals. But it can do the same for enterprise cybersecurity and risk management teams.
