Automations empowered by artificial intelligence are reshaping the business landscape, enabling companies to connect with, guide, and support customers more efficiently through streamlined processes that cost less to maintain.
However, AI-powered automations also have a dark side. The same capabilities that improve legitimate operations can be exploited by criminals intent on identity theft. The rise of low-cost AI and its use in automations has empowered scammers to widen their nets and increase their effectiveness, driving a dramatic increase in identity theft.
Using AI to enhance phishing attacks
AI is being used increasingly in today’s business world for process automation. For example, AI can automate data collection and analysis to enhance marketing efforts.
Criminals running identity theft schemes can use the same type of business process automation to gather and analyze data on potential targets. With automated phishing, for example, AI can scour the web for details about a target and then use those details to construct more believable phishing messages. Because the content is more relevant and authentic-sounding, the messages are more effective.
AI automations also allow criminals to identify phishing targets and prepare messages more quickly, which means they can deploy more attacks. They even enable real-time targeting triggered by events that increase a target’s vulnerability: in the aftermath of a natural disaster, for example, criminals could use AI automations to target affected communities with fraudulent offers of aid.
AI can also automate the learning process that makes phishing and other attacks more effective over time. It can analyze data on past attacks, determine which were most successful, and steer future attacks along the path of least resistance.
Using deepfakes to support identity theft
Identity theft often succeeds when a criminal convincingly impersonates someone the target trusts, such as a representative from a financial institution, a law enforcement officer, or a loved one. In each case, the victim shares personal information once that trust is established.
Artificial intelligence provides criminals with powerful tools for assuming a false persona and earning a victim’s trust. By powering deepfake creation, AI allows criminals to produce more realistic audio and video for impersonating trusted individuals. It can also drive chatbot interactions, such as text message exchanges, that convincingly mimic a trusted person’s communication patterns.
In 2023, reports began to surface of scammers using AI-generated voice deepfakes to gain access to financial accounts. Rather than tricking individuals into revealing passwords, these scammers take voice deepfakes directly to organizations like banks, impersonating customers to gain access to their accounts. The vast amounts of audio and video that users post on social media make these AI-supported scams possible.
Voice deepfakes also allow criminals to expand the reach of their identity theft operations beyond regions where they understand the language. AI can translate schemes into different languages and use natural language processing to interpret targets’ responses.
Using automation to maximize scam thefts
AI also comes into play once scammers obtain personal data through an identity theft scheme. With AI and automation, criminals can exploit stolen information more quickly. If it includes Social Security numbers, they can rapidly apply for multiple credit cards; if it includes credit card numbers, they can rapidly max out the associated accounts.
Reports show that identity fraud cost Americans at least $43 billion in 2023. The amount resulting from account takeovers was nearly $13 billion, and new account fraud accounted for more than $5 billion.
Taking steps to prevent identity fraud
Preventing identity theft requires increased vigilance from individuals as well as stronger security practices. Password managers, two-factor authentication, and periodic manual audits of the services and accounts you use can all raise the bar for would-be identity thieves.
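For readers who want to see how one of these safeguards works under the hood, below is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator-app two-factor logins. It uses the open-source pyotp library; the variable names and flow are illustrative assumptions, not any particular bank’s or vendor’s implementation.

```python
# Minimal TOTP sketch using the third-party pyotp library (pip install pyotp).
# Illustrative only; real services add rate limiting, secure secret storage, etc.
import pyotp

# Enrollment: the service generates a shared secret, which the user loads
# into an authenticator app (typically by scanning a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user types in the six-digit code currently shown in the app.
submitted_code = totp.now()  # stands in here for the user's input

# Verification: the code is checked against the shared secret and the clock,
# so a stolen password alone is no longer enough to take over the account.
if totp.verify(submitted_code):
    print("Second factor accepted")
else:
    print("Second factor rejected")
```

Because each code expires within seconds, credentials harvested through a phishing message quickly lose their value, which is precisely what makes two-factor authentication effective against the automated attacks described above.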
However, personal vigilance alone is not enough to repel attacks. Effective defense requires comprehensive strategies that also involve the companies holding sensitive data: individuals must be better educated about identity theft, and companies must invest in more robust systems for keeping data secure.
Greater cooperation between regulators, law enforcement, and technology companies could also help prevent identity theft. Today, the lack of cooperation creates a void in which threat actors can leverage new tools to outpace defensive measures. The level and intensity of attacks will only increase until companies are held to a higher standard of accountability for their part in data breaches.
The use of AI has dramatically increased the volume of identity fraud cases, with experts suggesting a new case occurs every 22 seconds, and it has made those cases more difficult to detect. To stay safe, today’s companies and consumers must stay educated on the latest schemes and leverage every tool available to keep those schemes from succeeding.
Yashin Manraj, CEO of Pvotal Technologies, has served as a computational chemist in academia, an engineer working on novel challenges at the nanoscale, and a thought leader building more secure systems at the world’s best engineering firms. His deep technical knowledge spanning product development, design, business insight, and coding gives him a unique vantage point for identifying and closing gaps in the product pipeline. Pvotal’s mission is to build sophisticated enterprises without limits: built for rapid change, seamless communication, top-notch security, and infinite scalability.