One of my favorite sessions at this year’s Gartner IT Security and Risk Summit was Tricia Phillips' presentation "Don't treat your customer like a criminal". It started out with a strategic planning assumption that,
“by 2022, businesses that make it easy for their customers to be seen as trustworthy will earn 20% more revenue from their websites than businesses that make their customers prove themselves trustworthy.”
To me, that was a new perspective on how to think about online fraud—not just how much online fraud costs a business in terms of chargebacks, account takeovers, and cashing out, but also the business lost to customer frustration and abandonment during the enrollment, sign-in, and purchase process.
Clearly there were fundamental problems with the way I was thinking about the risk of online fraud, and the costs of countermeasures.
Most customers are not criminals
The reality is that 98% of human traffic is legitimate and 2% is not. So why do we treat our online businesses as if it’s the other way around? Part of that is because we’re information security and fraud management professionals and it’s in our DNA to focus on the risk numbers. Part of it is that we know how many breaches there are and how many people reuse credentials on multiple accounts.
Then there’s the fact that 90% of sites with login pages and 80% of sites with sign-up pages have been victims of (non-human) bot attacks. And today’s bots are so sophisticated that it can be hard to isolate them from legitimate human traffic. That’s part of the reason why it’s so important to separate human from bot traffic automatically without imposing friction on human users.
Keeping the criminals out – and the customers?
If you make it more difficult for fraudsters to get past your defenses, are you also making your customers jump through so many hoops that they feel distrusted and go elsewhere? But if you make it simple for customers to transact on your site, are you also making it too easy for criminals to defraud you (and cause your customers to lose faith in you)? What we need is balance – an investment in what Tricia called “the tools of trust”.
Building a foundation of digital trust
Trust comes from an understanding of what a legitimate customer’s activity pattern looks like. Tricia described this as “context-based authentication”. Different actions act as indicators for different levels of risk. If a visitor is just looking around, that’s a pretty harmless activity. But add something transactional, and suspicion ramps up a notch. Every action that a visitor takes between looking around and starting a transaction helps to build a picture of trustworthy or not-trustworthy. This – extended across a wide swath of website visitors and compared with what you know to be acceptable visitor behavior – is how Distil Networks’ machine learning models help you to determine what’s normal human behavior and what’s not.
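The idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of context-based risk scoring – the action names, weights, and threshold are invented for the example, not Distil’s actual model:

```python
# Hypothetical sketch of context-based risk scoring: each action a visitor
# takes adjusts a running risk score, and only higher-risk sessions trigger
# extra verification. Action names and weights are illustrative only.

ACTION_RISK = {
    "browse_catalog": 0,      # just looking around: harmless
    "create_account": 2,
    "change_password": 3,
    "add_payment_method": 4,
    "start_checkout": 5,      # transactional: suspicion ramps up a notch
}

def session_risk(actions):
    """Sum the risk contributions of every action in a session."""
    return sum(ACTION_RISK.get(a, 1) for a in actions)  # unknown actions count as 1

def needs_step_up(actions, threshold=6):
    """Ask for extra verification only once accumulated risk crosses a threshold."""
    return session_risk(actions) >= threshold

# A browsing-only session stays frictionless...
print(needs_step_up(["browse_catalog", "browse_catalog"]))      # False
# ...while a session that jumps straight to payment draws scrutiny.
print(needs_step_up(["add_payment_method", "start_checkout"]))  # True
```

The point of the threshold is the balance discussed earlier: low-risk visitors never see a challenge, and friction is reserved for the sessions whose context warrants it.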
To identify that 2% of ill-intentioned humans, we need to apply some of those same comparative behavior techniques – what Tricia referred to as “passive behavior biometric and behavioral analysis” – to the known-human visitors. How closely does their behavior mimic that of the control group? How do they use their device? What route(s) do they take through the site? In this way, we can establish successive “gateways” that visitors can be allowed through as they prove their trustworthiness, rather than blocking them with an all-or-nothing wall.
Distil uses passive biometric and behavioral analysis to identify both bad humans and bad bots – for example, the way users type, move and click their mouse, scroll, and otherwise behave on each page and between pages.
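To make the comparison against a control group concrete, here is an illustrative sketch – emphatically not Distil’s implementation – of scoring a visitor’s passive signals against baseline statistics from known-good human traffic. The feature names and baseline numbers are invented for the example:

```python
# Illustrative sketch of passive behavioral analysis: compare a visitor's
# signals to a baseline of trusted human traffic using z-score distance.
# Features and baseline statistics are invented for this example.

from math import sqrt

# Per-signal (mean, standard deviation), learned from known-good humans.
BASELINE = {
    "keystroke_interval_ms": (180.0, 60.0),
    "mouse_speed_px_s":      (450.0, 150.0),
    "scroll_events_per_min": (12.0, 5.0),
}

def deviation_score(visitor):
    """Root-mean-square of per-feature z-scores: how far this visitor's
    behavior sits from the control group's typical behavior."""
    zs = []
    for feature, (mean, std) in BASELINE.items():
        value = visitor.get(feature, mean)  # missing signals count as typical
        zs.append((value - mean) / std)
    return sqrt(sum(z * z for z in zs) / len(zs))

def looks_human(visitor, cutoff=2.0):
    """Let visitors through the gateway while their behavior stays
    within roughly two standard deviations of the baseline."""
    return deviation_score(visitor) <= cutoff

# Steady, human-paced activity passes the gateway...
print(looks_human({"keystroke_interval_ms": 200, "mouse_speed_px_s": 400,
                   "scroll_events_per_min": 10}))
# ...while machine-fast, unnaturally regular input does not.
print(looks_human({"keystroke_interval_ms": 5, "mouse_speed_px_s": 3000,
                   "scroll_events_per_min": 0}))
```

Because the check is passive – it reads signals the visitor is already generating – a legitimate customer clears each successive gateway without ever being asked to prove anything.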
Yes, you’re likely to come into contact with the user experience teams during this process, but work with them – after all, their end goal is that 20% revenue uplift over the competition.
Tricia also produced a thought-provoking research paper on the same topic as her session. You can download it here for free.
By Elias Terman