Generative AI and Intensified Identity Fraud

While AI opens up new opportunities across industries, it also presents unprecedented challenges to businesses, particularly in the form of intensified deepfake threats. In this article, we reconstruct the process of AI-related identity fraud and outline preventative measures to tackle these challenges.

March 20, 2024

7 minutes

Keqiang Xu, Yuqi Chen

This February, OpenAI released its first text-to-video AI model, Sora, attracting huge public attention as a revolutionary advance in AI video production. While its ability to generate sophisticated movement and scenes opens up new opportunities across industries, it also presents unprecedented challenges to businesses, particularly in the form of intensified deepfake threats. The rapid development of AI has significantly complicated the verification of user identities, giving fraudsters ample opportunity to exploit vulnerabilities.

“The Hong Kong police recently revealed a major case of AI fraud where a finance employee of a multinational corporation was conned by a scammer using AI face-swapping technology to impersonate the company's CFO. Despite initial suspicion, the employee was reassured when other colleagues joined the video call, leading to the transfer of 200 million Hong Kong dollars to an unknown account.”

This case exemplifies how far the misuse of AI in the telecommunications fraud domain has advanced. In the sections below, we reconstruct the process of AI-related identity fraud and outline preventative measures to tackle these challenges.

The Initial Attack Process

The initial fraudulent attack can be broken down into the following steps: gaining the victim's trust through social engineering tactics, obtaining control of their mobile device, stealing accounts, and transferring funds or consuming credit limits. After capturing the user's identity, fraudsters launch impersonation attacks such as AI face-swapping and presentation or injection attacks that bypass liveness detection, then transfer funds, apply for loans, or consume credit limits.

💡 In the context of information security, social engineering is the tactic of manipulating, influencing, or deceiving a victim in order to gain control over a computer system, or to steal personal and financial information. It uses psychological manipulation to trick users into making security mistakes or giving away sensitive information.

Preparation

Gaining Trust

The goal of this step is to trick the victim into installing malware on their mobile device, giving the fraudster remote access and control.

Controlling Device

Because the fraudster now operates from the victim's own device, typical security measures such as device-change detection, IP checks, and two-factor authentication become ineffective.

Stealing Account

This critical step retrieves almost the entire set of personal information from the victim's device.

Profiting

At this stage, the fraudster has obtained full control of the device. The victim receives no relevant notifications because messages are intercepted.

The emergence of these tactics can be traced back to the update of the Google Play Store privacy policy at the end of 2023, which tightened permissions related to location retrieval, app listings, SMS/call logs, and cameras, and expanded the classification of apps requesting similar permissions or carrying malicious code as junk software.

Against this backdrop, cross-platform impersonation attacks are becoming increasingly popular. Fraudsters typically have a deep understanding of the functionalities and audit requirements within specific domains; they probe the security measures of similar platforms and then launch mass attacks.

The TrustDecision intelligence team has monitored such attacks plaguing countries including Thailand, the Philippines, Vietnam, Indonesia, and Peru.

The Derived Attack Process: Using AIGC

In this scenario, identity verification becomes tricky as all the 'applicants' submit authentic user information, posing a challenge for conventional KYC tools to detect identity fraud. Moreover, fraudsters are well-informed about the creditworthiness of the data owner, ensuring that it meets the platform's risk control requirements.

💡 Currently, most facial recognition solutions utilize Presentation Attack Detection (PAD) to determine whether the identity is authentic.

💡 A presentation attack is when an attacker uses fake or simulated biometric data, such as masks or photos, to deceive a biometric authentication system, like facial recognition. PAD aims to distinguish live human faces from such imitations and is primarily used to defend against presentation attacks. However, more and more fraudsters are now turning to deepfakes to carry out injection attacks, which bypass physical cameras entirely and use tools such as virtual cameras to feed synthetic images directly into the system's data flow.
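
To make this concrete, here is a minimal Python sketch, not TrustDecision's method, of one weak pre-check a platform might run before liveness detection: flagging capture devices whose reported labels match known virtual-camera software. The label list is an illustrative assumption, and determined attackers spoof device names, so this signal is only a first filter among many.

```python
# Illustrative sketch only: flag capture devices whose reported labels
# match known virtual-camera software. Attackers can spoof these labels,
# so treat this as one weak signal, not a standalone defense.

KNOWN_VIRTUAL_CAMERAS = {      # assumption: list is illustrative, not exhaustive
    "obs virtual camera",
    "manycam virtual webcam",
    "droidcam source",
}

def is_suspicious_camera(device_label: str) -> bool:
    """Return True if the capture device label looks like a virtual camera."""
    return device_label.strip().lower() in KNOWN_VIRTUAL_CAMERAS

print(is_suspicious_camera("OBS Virtual Camera"))   # True
print(is_suspicious_camera("FaceTime HD Camera"))   # False
```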

Cases

Considering the availability of data, the cost of attack, and the complexity of platform encryption, fraudsters may launch various forms of attack, including manipulated photos, 3D head models, printed pictures, screen filming, and injection attacks.

Case 1

Approach: Use Photoshop to replace the portrait on the ID card, film the screen, and hold up a 3D head model to pass liveness detection.

Data Characteristics: Clusters of similar or near-identical demographic data; clusters of highly similar facial features (see the aggregation sketch after this case)

Risk Labels: Photoshop manipulation, abnormal image edges, fake face

Target: To probe the baseline of the platform's risk control capabilities
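
As a minimal illustration of how aggregated near-identical demographic data can be surfaced (a sketch, not TrustDecision's method; the field names and the 0.9 threshold are assumptions), the following standard-library Python compares application records pairwise:

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicate_pairs(applications: list[dict], threshold: float = 0.9):
    """Return pairs of application IDs whose demographic fields are
    suspiciously similar -- the aggregation footprint described above."""
    def key(app: dict) -> str:
        # assumption: records carry these illustrative fields
        return "|".join(str(app.get(f, "")) for f in ("name", "dob", "address"))

    flagged = []
    for a, b in combinations(applications, 2):
        ratio = SequenceMatcher(None, key(a), key(b)).ratio()
        if ratio >= threshold:
            flagged.append((a["id"], b["id"], round(ratio, 3)))
    return flagged

apps = [
    {"id": 1, "name": "Ann Lee",  "dob": "1990-01-02", "address": "12 Hill Rd"},
    {"id": 2, "name": "Anne Lee", "dob": "1990-01-02", "address": "12 Hill Rd"},
    {"id": 3, "name": "Bo Chan",  "dob": "1985-07-19", "address": "9 Bay St"},
]
print(near_duplicate_pairs(apps))  # flags the near-identical pair (1, 2)
```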

Case 2

Approach: Change the name on the ID, then present printed materials or film a screen

Data Characteristics: Clusters of similar or near-identical demographic data; clusters of highly similar facial features

Risk Labels: Photoshop manipulation, reflection, moiré pattern (see the spectral sketch after this case)

Target: To fake the identity and bypass liveness detection and subsequent portrait comparison
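
Screen filming often leaves moiré: periodic interference between the display's pixel grid and the camera sensor, visible as excess high-frequency energy in the image spectrum. The NumPy sketch below is a rough heuristic, not a production detector; the 0.25 radius cutoff and any decision threshold are assumptions that would need tuning on labeled data.

```python
import numpy as np

def high_freq_energy_ratio(gray: np.ndarray) -> float:
    """Rough moiré cue: share of spectral energy outside a low-frequency
    disk. Screen recaptures tend to score higher than natural photos."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float64))))
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    cutoff = 0.25 * min(h, w)            # assumption: illustrative cutoff
    high = spectrum[radius > cutoff].sum()
    total = spectrum[radius > 0].sum()   # skip the DC component
    return float(high / total)

# Usage: compare scores between direct captures and screen recaptures of
# the same document type; the decision threshold must be learned per deployment.
```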

Case 3

Approach: Mass-produce videos after reverse-engineering the platform's liveness detection algorithm

Data Characteristics: Extremely high liveness detection pass rate; highly similar video backgrounds and applicant apparel (see the frame-hashing sketch after this case)

Risk Labels: AIGC, injection attack

Target: To fake the identity and bypass liveness detection and subsequent portrait comparison
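
Highly similar video backgrounds can be surfaced with perceptual hashing of sampled frames: near-identical frames produce hashes that differ in only a few bits. The Pillow-based sketch below is illustrative only; the 8×8 hash size and the ~5-bit distance threshold are assumptions, not TrustDecision's method.

```python
from PIL import Image

def average_hash(frame: Image.Image, size: int = 8) -> int:
    """Classic average hash: downscale, grayscale, threshold at the mean."""
    small = frame.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Usage sketch: sample one frame per applicant video, hash it, and flag
# groups of applicants whose frame hashes sit within ~5 bits of each other.
```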

The critical question arises:

How to prepare for such attacks?

The above risks stem from a combination of technologies and tactics, including social engineering, malware, remote control, and AIGC. To address them, platforms must educate users to stay vigilant against fraud, strengthen app security with firewalls and malware detection and removal tools, and build effective monitoring mechanisms. They should also improve their strategies for handling malicious AIGC applications and continuously update anti-fraud algorithms to detect and respond to risky behavior promptly.

In terms of implementation, this involves business logic design, tool and technique reconstruction, monitoring and analysis of the identity verification process and its results, verification behavior analysis, CV model counteraction, decision-making AI models, and supplementary offline retrieval.
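
As one concrete example of verification behavior analysis, the sketch below implements a sliding-window velocity rule that flags devices attempting verification unusually often. It is a minimal illustration, not TrustDecision's implementation; the one-hour window and five-attempt limit are assumptions a platform would tune.

```python
import time
from collections import defaultdict, deque

class VelocityMonitor:
    """Flag devices that attempt identity verification too often within a
    sliding window -- one simple behavioral signal among many."""

    def __init__(self, window_seconds: int = 3600, max_attempts: int = 5):
        # assumption: window and limit are illustrative, tuned per platform
        self.window = window_seconds
        self.max_attempts = max_attempts
        self._events: dict[str, deque] = defaultdict(deque)

    def record(self, device_id: str, now: float | None = None) -> bool:
        """Record one attempt; return True if the device exceeds the limit."""
        now = time.time() if now is None else now
        attempts = self._events[device_id]
        attempts.append(now)
        while attempts and now - attempts[0] > self.window:
            attempts.popleft()
        return len(attempts) > self.max_attempts

monitor = VelocityMonitor(window_seconds=3600, max_attempts=5)
for i in range(7):
    flagged = monitor.record("device-123", now=1000.0 + i)
print(flagged)  # True: 7 attempts in one window exceeds the limit of 5
```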

TrustDecision suggests:

To effectively mitigate the risks posed by AI-driven deepfake attacks, it is crucial to introduce effective and robust identity verification technologies, including preventative measures against account takeover (ATO) and other fraudulent applications.

About TrustDecision

TrustDecision offers comprehensive application fraud solutions by integrating endpoint risk recognition, liveness detection algorithms, and image anomaly detection capabilities. Our solution suite, including KYC++ and Application Fraud Detection, is designed to combat identity fraud risks derived from advanced generative AI techniques, such as presentation attacks and injection attacks. By leveraging these tools, businesses can mitigate the risk of intensified fraud losses, safeguarding their operations and assets.
