
Mobile-First Fraud Prevention With Fine-Grained Behavioral Data


While it might seem obvious that companies can only thrive in today’s economic climate with an online-first/mobile-first approach to business operations and customer interactions, this trend has an important secondary effect, which continues to invite risks to revenue if not planned for and addressed.

According to a report by Statista analyzing the share of global internet traffic across mobile devices vs. desktops, mobile usage surpassed desktop usage at 51% in the final months of 2019 and had grown to 56% by the end of 2021. And in a 2021 report by Deloitte, the average number of devices per US household has more than doubled from 11 to 25 in just the past two years.

Companies that look to use their risk and fraud mitigation functions as competitive advantages to stem the tides of dollar loss would be wise to challenge their assumptions on what kinds of data and tooling position them to best counter cybercriminals attempting concerted attacks on a victim’s overall device ecosystem rather than singularly targeting one platform or online account.

Recent statistics corroborate this. In a report from 2020, the World Economic Forum estimated that cybercrime damages would cost the world $11.4 million each minute, or $6 trillion annually, a sum equivalent to the GDP of the world’s third-largest economy. In a study released by Juniper Research, eCommerce merchants stand to lose roughly $20 billion in 2021 due to criminal activity, an 18% increase over the $17.5 billion lost in 2020.

Fraud continues to grow in sophistication and personalization as dollar losses mount, with typologies perpetrated by actors across various device types, running the gamut from money laundering schemes, dummy or mule account creation, stolen credit card purchases, account takeovers, product feature abuse, and many others.

“One size fits all” has never worked at scale

By assessing statistics on fraud volume and the types of fraud businesses might experience, we see a clear gap in how effective common approaches to prevention actually are.

When viewed optimistically, this indicates plenty of room for improvement in how organizations (especially those who stand to lose the most financially) can bring their foundational risk-mitigation strategies to the forefront of what is possible today.

A common approach companies take is to adopt a “one-size-fits-all” anti-fraud or risk vendor as the centerpiece of their strategy. These vendors sell tools that often combine user behavioral profiling with data-feature ingestion via APIs, marry those signals with aggregate data pulled from other apps and sources, then use pre-built data models to assess specific online behaviors and users for “risk”.

Given that the lion’s share of consumers today are both using mobile devices over their desktops and increasing their overall device count, there is a clear detection gap with the many vendors that have not developed the sophistication to provide behavioral baselines for the same user on the same app across all of their devices, even though those baselines often differ substantially.
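As a concrete illustration of the gap described above, a baseline can be kept per (user, device) pair rather than one per user, so the same person’s phone and desktop habits are each compared against their own history. The structure and the metrics tracked here are illustrative assumptions, not any vendor’s implementation:

```python
# Illustrative sketch: behavioral baselines keyed by (user, device)
# instead of by user alone. Metric names are assumptions.
from collections import defaultdict

baselines = defaultdict(lambda: {"sessions": 0, "avg_txn": 0.0})

def update_baseline(user_id, device_id, txn_amount):
    """Fold a new transaction into this user's baseline on this device."""
    b = baselines[(user_id, device_id)]
    b["sessions"] += 1
    # Running mean of transaction amounts for this user on this device.
    b["avg_txn"] += (txn_amount - b["avg_txn"]) / b["sessions"]

# The same user's small mobile purchases and large desktop purchases
# accumulate into separate baselines rather than one blended average.
update_baseline("u1", "phone", 20.0)
update_baseline("u1", "desktop", 400.0)
```

A single per-user baseline would average these into a figure that matches neither device, which is exactly how legitimate desktop behavior gets flagged against a mobile-dominated profile (or vice versa).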

With these tools, customers can configure standardized “risk buckets” (numerical thresholds for how behaviors are flagged by the models) and then build the corresponding remediation flows per bucket. Examples of remediation include allowing a behavior to continue because of low perceived risk (as classified by the pre-built models), blocking a behavior because of high perceived risk, or requiring further action (from the monitoring analyst or even the user) when the perceived risk is questionable. With a long enough training period, pre-built risk models like these may, in theory, prevent some share of financial loss for a specific set of fraud typologies.
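The bucket-and-remediation pattern above can be sketched in a few lines. The thresholds and action names here are placeholder assumptions (each organization configures its own), not any particular vendor’s API:

```python
# Hypothetical sketch of "risk buckets": a model's risk score is routed
# to a remediation flow by configurable thresholds. All values are
# illustrative assumptions.

def remediate(risk_score: float) -> str:
    """Map a scored behavior to a remediation flow by bucket."""
    if risk_score < 0.3:
        return "allow"    # low perceived risk: let the behavior proceed
    if risk_score < 0.7:
        return "review"   # questionable: analyst or user step-up action
    return "block"        # high perceived risk: stop the behavior
```

For example, a login scored at 0.5 would land in the “review” bucket and wait on an analyst or a user challenge before proceeding.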

More sophisticated and effective risk teams demand from their tools the lowest possible false-positive rates (from both a dollar loss and frequency perspective) and the highest possible true-positive rates across all fraud typologies affecting their business. These typologies might be unique to their industry and at times are unique to the company and the product itself.
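The distinction above between dollar-loss and frequency perspectives matters because one blocked high-value legitimate order can hurt more than many small ones. A minimal sketch of measuring a false-positive rate both ways, on hypothetical case data:

```python
# Illustrative sketch (data and field layout are assumptions): compute
# false-positive rates by frequency and by dollars over decided cases.

def fp_rates(cases):
    """cases: list of (flagged: bool, fraudulent: bool, amount: float)."""
    legit = [c for c in cases if not c[1]]          # truly legitimate cases
    fp = [c for c in legit if c[0]]                  # legitimate but flagged
    freq_rate = len(fp) / len(legit) if legit else 0.0
    dollar_rate = (sum(c[2] for c in fp) / sum(c[2] for c in legit)
                   if legit else 0.0)
    return freq_rate, dollar_rate

cases = [
    (True,  False, 900.0),   # blocked a legitimate $900 purchase
    (False, False, 50.0),
    (False, False, 50.0),
    (True,  True,  500.0),   # correctly flagged fraud
]
freq, dollars = fp_rates(cases)
# One of three legitimate cases was flagged, but $900 of $1,000 in
# legitimate dollars was blocked: a modest frequency rate hiding a
# severe dollar-loss rate.
```

A tool evaluated only on frequency could look acceptable here while quietly impeding most legitimate revenue.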

In essence, a sophisticated risk mitigation strategy should center on achieving organization-wide acceptable levels of predictability for all criminally induced financial loss (currently and potentially) as the business pursues its broader goals.

And while this outcome may be possible with one-size-fits-all risk models and tooling, there are considerable time and dollar costs that come with a failed or even sub-par implementation, along with a tendency for those same costs to compound while simultaneously impeding revenue and customer growth.

Modern risk teams operate with thinner margins for losses than their predecessors. Taking pragmatic, even challenging, views of a few classic risk-mitigation concepts can help determine whether pre-designed risk models suit their business, or whether there is a more flexible and future-proofed alternative conducive to their long-term strategy. These teams understand that there is a reason global fraud remains a risk for businesses of all types while its volume and dollars lost continue to mount.

For risk teams that seek to develop this level of nuance, there are three key questions they can ask of their strategy to build effective, long-term contingency plans to address the high risks they face:

What type of foundation is required within the risk-data ecosystem?

If an organization’s goal is to solve common fraud typologies in the short term across a customer base with generally homogeneous behaviors (e.g., a small banking platform serving a unique or local demographic where most users are on desktops) that can be analyzed via a simple enough feature set, adopting a point product with pre-built risk models can plug that gap for the near term.

If the expectations from a business’s risk function are more ambitious and require in-house specialists, whether because of the fraud typology mix, the spread of devices across users, the sophistication of the perpetrators, or the sheer volume of loss dollars needing resolution, building specialized teams and internal risk-scoring models from data both within and outside their ecosystem can strengthen the business’s competitive advantage.

How clear is the equation denoting loss from false positives, training periods, and operational overhead for all fraud typologies that need solving with a pre-built risk model?

This math should be clear in discussions with any vendor proposing their models and product as the solution to every fraud typology that needs addressing. Anything unclear here invites risks of dragging an organization into unforeseen costs sustained before any fraud is actually detected.

For each fraud typology on each device type, training periods require continued loss on the part of the business before models adapt; legitimate behaviors (e.g., purchases) can be blocked, impeding revenue and growth; and many vendors charge on a volume basis, which can prolong the training period given the available budget. And if a vendor cannot clearly articulate how a risk model addresses and captures behavioral nuances across devices, this calculus is further complicated.

Clear distinctions should also be made between how a model classifies a case as a “false positive” and how it flags a case for “review”. Without demarcation between the two, cases can be wrongly attributed to the latter, muddying any assessment of efficacy. Organizations spend more on cases flagged for review because of the added lift of analysts approving or blocking cases until the training period “clears”, often growing costs and stalling user activity, new-user activation, or revenue realization.
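The cost calculus the question above asks vendors to make explicit can be framed as simple arithmetic over the loss components the two preceding paragraphs describe. Every figure below is a placeholder assumption to be replaced with an organization’s own data:

```python
# Back-of-the-envelope sketch of training-period cost. All inputs and
# the example figures are illustrative assumptions.

def training_period_cost(months, fraud_loss_per_month,
                         blocked_revenue_per_month,
                         review_hours_per_month, analyst_hourly_rate,
                         vendor_fee_per_month):
    """Cost sustained before a pre-built model adapts to a typology."""
    return months * (fraud_loss_per_month
                     + blocked_revenue_per_month
                     + review_hours_per_month * analyst_hourly_rate
                     + vendor_fee_per_month)

cost = training_period_cost(
    months=3,
    fraud_loss_per_month=40_000,       # losses the model has not yet caught
    blocked_revenue_per_month=15_000,  # legitimate purchases wrongly blocked
    review_hours_per_month=200,        # analyst lift on "review" cases
    analyst_hourly_rate=45,
    vendor_fee_per_month=8_000,        # volume-based vendor charges
)
```

If a vendor cannot fill in each of these terms per typology and per device type, the true cost of the implementation is unknowable before it starts.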

In an alternative approach, organizations can decide on the data types and features collected in the assessment of each typology, run regressions to build predictive models in-house with specialists familiar with internal risks and reasoning, and, if pulling the same data types from external sources and users of other apps, further augment their models to get a clear picture of costs before a model is even implemented. Tuning stays flexible and in-house, false positives are controlled to a higher degree, and any critical and urgent changes to models can be implemented immediately.
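At its simplest, the in-house route described above can start from a regression over hand-chosen behavioral features. The sketch below trains a tiny logistic regression by gradient descent on toy data; the features, labels, and hyperparameters are all illustrative assumptions, not a production model:

```python
# Minimal in-house modeling sketch: logistic regression via gradient
# descent on hand-chosen behavioral features. Toy data and
# hyperparameters are illustrative assumptions.
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias for P(fraud) = sigmoid(w . x + b)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi  # gradient of the log loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy features: (is_new_device, normalized_txn_amount); label 1 = fraud.
X = [(1, 0.9), (1, 0.8), (0, 0.1), (0, 0.2)]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
```

Because the model is owned end to end, a specialist can inspect each weight, swap features, or retune thresholds the same day a new attack pattern appears, rather than waiting on a vendor’s roadmap.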

What kinds of capabilities allow for a nimble risk-mitigation strategy as attackers revise their tactics in the future?

If an organization is only facing a singular type of fraud on a single type of device, adopting a point solution may suit if the vendor’s long-term roadmap, feature set expansion, execution, and plans align with the long-term risk-mitigation strategy of the adopter.

This question of efficacy becomes harder to answer when sustained losses are larger, and/or when the set of fraud typologies occurring now and expected in the future trends toward diversity. The answer depends on the calculable importance of the risk function to the business now and in the future, and on the tolerance for betting that importance on a separate entity’s ability to continually execute the outsourced function.

Profiling the user ecosystem risks as needed

There are many other points to consider when deciding how to build the foundation of data, tooling, and teams that will future-proof your business against losses. The decisions involved require nuanced thought and complex planning, with consideration of how customers behave across their overall device ecosystems.

Ultimately, though, they come down to a business’s specific needs for sophistication, and to whether an organization believes that taking a classical approach to handling fraud can prove it the exception to growing trends.