Traditional identity verification unusable by 2026: Gartner


Due to the rise of AI-generated deepfakes targeting face biometrics, Gartner predicts that by 2026 around 30 percent of businesses will no longer consider identity verification and authentication solutions to be reliable on their own.


In the Gartner report, Predicts 2024: AI & Cybersecurity — Turning Disruption into an Opportunity, the firm warns that the development of AI-generated images of real people's faces has led to traditional biometric authentication methods being deemed inadequate.

The report also found that digital injection attacks increased 200 percent in 2023, although presentation attacks remain the most common attack vector.

Akif Khan, VP analyst at Gartner, said: “In the past decade, several inflection points in fields of AI have occurred that allow for the creation of synthetic images.”

“These artificially generated images of real people’s faces, known as deepfakes, can be used by malicious actors to undermine biometric authentication or render it inefficient,” said Khan.

He said as a result, “organisations may begin to question the reliability of identity verification and authentication solutions, as they will not be able to tell whether the face of the person being verified is a live person or a deepfake.”

According to Gartner, identity verification and authentication processes using face biometrics today rely on presentation attack detection (PAD) to assess the user’s liveness.

“Current standards and testing processes to define and assess PAD mechanisms do not cover digital injection attacks using the AI-generated deepfakes that can be created today,” said Khan.

Moving forward, a combination of PAD, injection attack detection (IAD) and image inspection will be needed to prevent such attacks.
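The layered defence the report describes can be sketched as a simple gate: a verification attempt is accepted only if every detection layer clears it. This is a minimal illustration with hypothetical names, not a vendor API or anything specified in the Gartner report.

```python
from dataclasses import dataclass


@dataclass
class CaptureChecks:
    """Results of the three detection layers on one face capture (hypothetical)."""
    pad_passed: bool   # presentation attack detection: is this a live person?
    iad_passed: bool   # injection attack detection: no signs of a digitally injected feed
    image_ok: bool     # image inspection: no deepfake artefacts found in the frame


def verify_capture(checks: CaptureChecks) -> bool:
    """Accept the capture only when every detection layer passes."""
    return checks.pad_passed and checks.iad_passed and checks.image_ok


# A live user clears all three layers; an injected deepfake that fools
# PAD's liveness check is still rejected by the IAD layer.
assert verify_capture(CaptureChecks(True, True, True)) is True
assert verify_capture(CaptureChecks(True, False, True)) is False
```

The point of the AND-gate design is exactly the one Khan makes: PAD alone cannot catch injection attacks, so no single layer is sufficient.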

The report also recommends that chief information security officers (CISOs) and risk management leaders carefully select vendors whose capabilities go beyond today's standards.

“Organisations should start defining a minimum baseline of controls by working with vendors that have specifically invested in mitigating the latest deepfake-based threats using IAD coupled with image inspection,” said Khan.

CISOs and risk management leaders should seek to include additional risk and recognition signals, such as device identification and behavioural analytics, to boost the likelihood of catching attacks, the report stated.
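Fusing those additional signals with the biometric result might look like a weighted risk score. The weights and signal names below are purely illustrative assumptions, not figures from the Gartner report.

```python
def risk_score(known_device: bool,
               behaviour_anomaly: float,
               biometric_match: float) -> float:
    """Combine signals into a 0-1 risk score; higher means riskier.

    known_device:      device identification recognises this device
    behaviour_anomaly: 0.0 = normal typing/mouse patterns, 1.0 = highly anomalous
    biometric_match:   face-match confidence, 0.0 (none) to 1.0 (strong)
    Weights are illustrative only.
    """
    score = 0.0
    if not known_device:
        score += 0.3                       # unrecognised device raises risk
    score += 0.4 * behaviour_anomaly       # behavioural analytics signal
    score += 0.3 * (1.0 - biometric_match) # a weaker face match raises risk
    return min(score, 1.0)


# A known device, normal behaviour and a strong match scores zero risk;
# the opposite extreme maxes out the score.
assert risk_score(True, 0.0, 1.0) == 0.0
assert risk_score(False, 1.0, 0.0) == 1.0
```

A real deployment would tune such weights (or replace them with a trained model), but the sketch shows why extra signals help: a perfect deepfake can maximise `biometric_match` yet still be flagged by the device and behaviour terms.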

© Digital Nation