Excerpted from a HR Digest Blog by Anuradha Mukherjee

Has your business witnessed a rise in fraud attacks? With AI tools that can manufacture and manipulate content at scale, AI-based fraud attacks are expected to rise in 2026, and the issue is of particular concern in hiring. 

Fraud in the hiring process is not a new concept, and over the years, recruiters and hiring teams have leveled up their own tools and techniques for dealing with fraudsters. Unfortunately for employers, AI impersonation scams are on the rise, with new technology offering scammers additional tools and conveniences to make a more believable case for themselves. 

Fraud prevention firm Nametag recently released its 2026 Workforce Impersonation Report, warning businesses about the growing accessibility of deepfake technology and the threats that come with this change. AI attacks and scams don't just complicate hiring; they threaten the well-being of the business at large. 

AI Fraud Attacks Threaten the Hiring Process Across the Globe

Hiring may look like a routine, mechanical, everyday process, but its very structure is built on the critical element of trust. Job seekers trust the recruiter and employer with their careers, hoping to earn a living with the business for years to come. Employers trust candidates to be proficient at their jobs, capable of handling company secrets, and able to manage their duties with a degree of independence. Fraud in the hiring process threatens this entire structure with deceit. 

The use of AI in hiring is controversial for many reasons, but identifying and eliminating fraud in the hiring process is perhaps one of the biggest concerns for HR. Generative artificial intelligence tools are now so accessible and simple to use that anyone can develop a basic working knowledge of them, which is often enough to attempt an AI impersonation scam. 

The Nametag report identified six workforce impersonation trends that are expected to threaten businesses in 2026, but they all bring us back to the root cause: Artificial Intelligence.

Generative AI and Deepfakes

AI attacks in the hiring process are now considerably harder to identify. Not only can candidates fake their applications and personas entirely, but deepfake technology also makes it much easier to generate convincing audio and video of someone else. The easy availability of data online furthers this risk, as fraudsters can mine old LinkedIn and social media profiles to build a persona grounded in legitimate data.

Background checks and reference calls may help to verify the legitimacy of a profile, but these aspects are easier to fake now than ever before. The risk of fraud in the hiring process is significantly higher for remote workers, as candidates do not have to come into the office and risk being exposed. 

There is a growing threat from cybercrime fraud networks across Asia that are vying to gain access to U.S. businesses. The threat has reportedly been most evident from agents in North Korea, who have been accused of ramping up their attacks in 2025. Late last year, Amazon revealed that it had blocked over 1,800 fake applications from these suspected agents. Government regulatory crackdowns are expected in 2026, penalizing businesses that don't invest sufficiently in background checks to ensure they aren't employing workers from the DPRK and other sanctioned entities. 

Social Engineering, Phishing, and the Risks of AI

AI fraud attacks are certainly a growing threat to hiring in 2026, but the dangers don’t stop there. 

Nametag also warned against helpdesk social engineering attacks, in which scammers target the IT support desk to gain access to victim information or fraudulently request help with tasks like password resets. Most support-related conversations happen over a call or chat, where it can be nearly impossible for support staff to verify the caller's identity. Phishing threats are also growing, with previous reports showing that fake emails from HR impersonators put employees at risk.

Multi-factor authentication systems are a useful safeguard in situations like this, providing multiple independent checks on the identity of an employee or customer before sensitive actions are taken.

For the full story, please click here.