Getting identity wrong at the e-commerce checkout comes with a price attached. If it’s a fraudster, the fraudulent transaction goes through and you get a chargeback. But if you block a legitimate user, you lose the sale and suffer the reputational damage.

The acceleration of digitalisation in recent years has delivered fast and efficient channels, but it has also increased opportunities for fraudsters to exploit security weaknesses and steal data, products and money.
The use of bots to automate this activity allows fraudsters to attack companies cheaply, quickly and at scale: cracking passwords, setting up fake accounts and scraping valuable data from the websites they target, said Alasdair Rambaud, Head of Fraud at Ping Identity.
To address this problem, organisations increasingly rely on behavioural data to identify bad actors and protect against loss.
It's an issue that is covered in depth in a new whitepaper from Ping Identity.
Behavioural data can detect even the most sophisticated forms of fraud without relying on private user data. It can be deployed at any stage of the customer journey, allowing for early detection, while providing stronger, more intelligent forms of identification that do not create more work for users.
The advantage of behavioural biometrics technology is that it works invisibly behind the scenes, continuously protecting transactions and combating fraud without requiring direct user intervention.
What is behavioural data and why does it matter?
Behavioural data describes how users interact with websites, applications, devices and more. It is a powerful data source for rooting out fraud and the cyber thieves behind it.
Everything you do has a pattern to it, from your morning routine to your work schedule. The same is true of how we behave online. Every time you interact with a device or application, you generate data that quantifies your behaviour.
Whether it’s your location, how many times you click on a page, whether you swipe left (or right) on a person or product, how long you spend on a page, how far you scroll before moving on to the next thing, or even the type of device you’re using, every interaction generates data that tells us something about your personality, mood or intention. And that information can be represented in a data set.
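To make that concrete, here is a minimal sketch, with entirely hypothetical field names, of what one such interaction might look like once captured as a behavioural event:

```python
from dataclasses import dataclass, field
import time

# Hypothetical schema for a single behavioural event; real products
# capture far richer signals, but the principle is the same.
@dataclass
class BehaviouralEvent:
    session_id: str            # ties the event to one visit
    event_type: str            # "click", "swipe", "scroll", "keypress"
    timestamp: float = field(default_factory=time.time)
    x: float = 0.0             # screen coordinates of the interaction
    y: float = 0.0
    duration_ms: float = 0.0   # how long the gesture lasted
    scroll_depth: float = 0.0  # fraction of the page scrolled, 0-1
    device_type: str = "desktop"

# A stream of such events, in order, is the behavioural data set.
session = [
    BehaviouralEvent("abc123", "scroll", scroll_depth=0.4),
    BehaviouralEvent("abc123", "click", x=310, y=642, duration_ms=95),
]
```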
But the bots and emulators deployed in fraud operations also have their own distinct behavioural patterns, the kind that suggest someone, or something, is trying to mimic human behaviour.
What are bots?
A bot is simply an automated script that performs a repetitive task over and over again, at an incredibly large scale.
Many bots are legitimate software programs or applications used to imitate and automate human behaviour quickly, efficiently and at scale. These programs have become fairly sophisticated at mimicking human behaviours, evading even the most advanced detection systems, such as Google reCAPTCHA.
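To give a flavour of how simple such automation is, here is a minimal, benign sketch of a bot that repeatedly checks a product page, assuming the third-party requests library and a placeholder URL:

```python
import time
import requests  # third-party HTTP library: pip install requests

URL = "https://example.com/product/123"  # placeholder target

def check_stock(n_checks: int = 10, pause_s: float = 1.0) -> None:
    # A bot is just a loop: fetch, check, repeat, at machine speed.
    for _ in range(n_checks):
        resp = requests.get(URL, timeout=5)
        if "in stock" in resp.text.lower():
            print("Item available")
        time.sleep(pause_s)  # a real bot might not be this polite

if __name__ == "__main__":
    check_stock()
```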
But they are also tools for fraudsters.
There are three areas in particular where fraudsters commonly deploy bots, according to Rambaud.
The first is credential stuffing.
“This is something we see a lot of in commerce and banking. In this case, the bots are used to attempt tens or even hundreds of thousands of logins and passwords to an account to gain access. Once that's done, then humans will take over.”
Account creation is another typical target for fraud bots. “The account creation process is often very simple and straightforward, especially as merchants are trying to create the easiest customer experience by removing friction from the process.”
Fraudsters, on the other hand, look to create large numbers of accounts at the same time for use in future fraud.
“Or they may be trying to exploit a marketing campaign which offers cash or other prizes to customers who sign up for services,” he said.
A third typical use for bots is for content scraping. “In this case, the attacker wants to know everything you are selling on your site and the price you are selling it for. They might also be looking for a specific item with a high retail price that they can steal.”
How do bots mimic human behaviours?
The good news is that, despite their best efforts, even the most advanced bots can’t fully imitate the gestures and interactions that reflect the innate nuances of human behaviour.
Bots are built to complete very simple actions and trained to process logic.
A large-scale attack at speed, for example, requires only simple bots that hit APIs directly or perform credential stuffing. More sophisticated bots, though slightly slower and more expensive, can satisfy the logic-based challenges presented by the GUI of a higher-value target.
These attacks nonetheless expose themselves by exhibiting non-human behaviours.
A finger swipe on a mobile device, for example, has multiple dimensions: speed, angle, pressure, and changes in the device’s orientation. Bots can’t generate this type of sensory data, and when they try, it’s noticeable. Other key tells include a cursor that moves too quickly, rapid switching between keyboard and mouse inputs, or text that is entered too fast. These behaviours are inconsistent and vary across devices and programs, requiring a flexible solution that can identify and accommodate the disparities.
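As an illustration of one such tell, here is a minimal sketch of a heuristic that flags typing that is implausibly fast and uniform; the thresholds are invented for the example, and a real system would combine many such signals:

```python
import statistics

# Hypothetical heuristic: humans type with irregular gaps between
# keystrokes; bots often paste or emit keys at near-constant speed.
MIN_MEAN_GAP_MS = 40.0   # faster than ~25 keys/sec looks scripted
MIN_GAP_STDEV_MS = 5.0   # near-zero variance looks scripted

def looks_scripted(keypress_times_ms: list[float]) -> bool:
    # Gaps between consecutive keystrokes, in milliseconds.
    gaps = [b - a for a, b in zip(keypress_times_ms, keypress_times_ms[1:])]
    if len(gaps) < 2:
        return False  # not enough signal to judge
    return (statistics.mean(gaps) < MIN_MEAN_GAP_MS
            or statistics.stdev(gaps) < MIN_GAP_STDEV_MS)

print(looks_scripted([0, 10, 20, 30, 40]))  # True: too fast, too uniform
print(looks_scripted([0, 120, 215, 425]))   # False: human-like jitter
```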
But what if this data could also be used to detect fraud?
The ability to correctly identify the intention behind every activity is key to protecting our privacy and security.
Behavioural data and machine learning enable the deployment of adaptable solutions that can be trained to identify non-human behaviour and distinguish between thieves and legitimate users. Since this approach doesn’t rely on predefined rulesets or signatures, it can identify and learn the behaviours of new bots as they evolve.
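A minimal sketch of that idea, using scikit-learn’s IsolationForest as a stand-in for whatever model a production system would actually use, with invented behavioural features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 200 simulated "normal" sessions, each summarised as behavioural
# features: [mean cursor speed px/s, keystroke gap stdev ms, scroll depth].
# All values are invented for illustration.
human_sessions = np.column_stack([
    rng.normal(450, 60, 200),    # cursor speed
    rng.normal(55, 8, 200),      # typing jitter
    rng.uniform(0.3, 0.9, 200),  # scroll depth
])

# Train only on observed traffic; no rules or signatures are written.
model = IsolationForest(random_state=0).fit(human_sessions)

# A scripted session: implausibly fast cursor, no jitter, no scrolling.
print(model.predict([[5000.0, 0.5, 0.0]]))   # [-1] flags an outlier
print(model.predict([[430.0, 50.0, 0.65]]))  # [1] looks like normal traffic
```

Because the model learns what normal traffic looks like rather than matching known signatures, a bot with a new behavioural pattern can still surface as an outlier.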
Evolving detection
The way fraud is detected has changed over the years. In the early days of the internet, transaction data such as credit card numbers, mailing addresses or shipping addresses was checked to establish identity. But such data is static and easy to collect through activities such as social engineering or page scraping.
Later, particularly with the rise of mobile commerce, the emphasis shifted to device data.
In both cases, though, fraud mitigation didn’t kick in until the transaction itself.
“When you have your user on your site for a few minutes, you can see what they are doing, you can analyse their behaviour.”
According to Rambaud, “Fraudsters behave very differently to users. A legitimate user for instance is right almost all the time when they put data in, such as a login or password. You can almost assume a 100 per cent success rate. If you are trying to move money or buy something you are going to get it right nine times out of ten. For the fraudster, it’s the opposite. Bots operate with more than a 99 per cent chance of failure.”
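As a sketch of how that asymmetry can be turned into a signal, the success rate of a session’s submissions is itself a simple feature; the threshold below is invented for illustration:

```python
# Hypothetical signal: legitimate users get logins, payments and form
# fields right almost every time; credential-stuffing bots fail almost
# every attempt.
SUCCESS_RATE_FLOOR = 0.5  # invented threshold for illustration

def session_is_suspect(attempts: int, successes: int) -> bool:
    if attempts == 0:
        return False  # nothing submitted yet, nothing to judge
    return successes / attempts < SUCCESS_RATE_FLOOR

print(session_is_suspect(attempts=10, successes=9))       # False: human-like
print(session_is_suspect(attempts=50_000, successes=12))  # True: bot-like
```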
However, by studying behaviour on the site prior to the transaction, fraudulent activity can be identified before the shopping cart is ever reached, in the case of a merchant for instance.
“The way Ping works is that it provides the ability to catch fraudsters much sooner in the process. The more time and freedom fraudsters have on your website the more damage they can do,” said Rambaud.
“If your methodology is only to catch them at the point of the transaction when they are monetising their fraud, you have missed out on a lot of opportunities to stop them.
“Ping has multiple touchpoints with users throughout the journey and by injecting the Ping fraud signal very early in the session, you can stop the person logging in, ask for a password reset or trigger multifactor authentication.”
By creating this additional friction, you can have a higher level of confidence in the identity of the user by the time they arrive at the checkout, Rambaud said.
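A minimal sketch of how such an early fraud signal might gate a session, using entirely hypothetical names and thresholds rather than Ping’s actual API:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    STEP_UP_MFA = "step_up_mfa"            # trigger multifactor authentication
    FORCE_PASSWORD_RESET = "password_reset"
    BLOCK = "block"

# Hypothetical policy: consume a fraud risk score (0.0 = clean,
# 1.0 = certain fraud) injected early in the session, and add
# friction in proportion to the risk, well before checkout.
def gate_session(risk_score: float) -> Action:
    if risk_score >= 0.9:
        return Action.BLOCK
    if risk_score >= 0.7:
        return Action.FORCE_PASSWORD_RESET
    if risk_score >= 0.4:
        return Action.STEP_UP_MFA
    return Action.ALLOW

print(gate_session(0.2))   # Action.ALLOW: no added friction
print(gate_session(0.55))  # Action.STEP_UP_MFA: challenge the user early
```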