In a previous post, we explored how adopting vertically integrated fraud solutions that target common fraud incidents produces rigidity, which translates into financial risk for companies facing diverse fraud typologies.
Here, we’ll explore a few examples of how fraud typologies change across products depending on the features and user flows a fraudster must navigate to successfully execute an attack.
We’ll also explore strategies a data science or risk team can apply to close detection gaps before a loss occurs.
Exploiting Ridesharing Apps
Given their sheer volume of users and usage frequency, ridesharing apps like Uber and Lyft have faced fraud of many kinds, much of it involving the manipulation of trips for higher payouts or the creation of dummy accounts used in social-engineering scams.
These attacks are hard to detect, largely because it is difficult to build fraud models on data features robust enough both to uncover scammer behaviors that are hard to change and to show a risk team how to deter future scams by breaking the economic incentives behind them.
Gaming the Spotify Ranking System
Music streaming apps like Spotify have strong financial incentives to ensure that the most popular musicians grow their audiences with more repeat listeners.
Scammers on these platforms often pose as “promotional agencies”, reaching out to artists with promises of higher royalties, boosted audiences, and guaranteed song slots on promotional playlists.
In reality, they use automated programs to “listen” to streams, which may temporarily boost royalties and playlist rankings but violates Spotify’s terms of service.
The result is royalty disputes, lost revenue for legitimately popular artists (and for Spotify), and skewed country- and city-level streaming data.
PayPal Promotion Abuse
Financial institutions may be targeted more heavily than any other industry, given the utility they offer a potential scammer.
Fraud schemes not only pose substantial risks in terms of revenue and customer loss, but they can also have enormous negative impacts on shareholder perception and market cap if the company is publicly traded.
PayPal has been weathering exactly this kind of fallout since news broke that at least 4.5 million user accounts created under a recent $10 sign-up promotion were opened through automation, or “bot farms”.
Setting aside the roughly $45M lost to payouts alone (4.5 million accounts × $10 each), shares dropped 25%, wiping out roughly $50 billion of the company’s market cap overnight. Analysts attributed the decline to the downward adjustments PayPal made to its user-volume projections for 2025 in light of the automated sign-up campaign.
A lack of robustness and undetected fraud cases
Given these examples, it is hard to imagine how an off-the-shelf risk model, built for the lowest common denominator of fraud types, could provide the robustness required to detect hard-to-change attacker behaviors and break their economics in a way that moves the needle.
It is, however, easy to imagine how quickly attackers would evolve past these widely adopted approaches to generic fraud modeling.
What then is the alternative, more effective approach for organizations that require a nuanced strategy?
Advantages of capturing high-fidelity data before the user funnel starts
The key to detecting fraud typologies, however exceptional, is to minimize the assumptions made about a user as early as possible and to check those assumptions as frequently as possible without impeding the user flow.
Both can be done by capturing fine-grained data before the user journey begins, and again at various points further in-funnel, using low-friction means that do not disrupt the user experience.
Let’s look at PayPal’s registration flow as an example of how this concept could be applied:
- To sign up on desktop, the user first chooses between a personal account and a business account.
- They then enter details into input fields, starting with a phone number.
In just these two steps, there are troves of interaction data that can be gleaned to build an initial behavioral baseline and minimize guesswork about the user.
Text-field input patterns, mouse and trackpad movements, location data, and more can all be captured with the proper tooling. Applying the same principle to signals unique to mobile devices makes it possible to gather pointer data such as pressure, distance, and orientation, plus linear-accelerometer readings and more, from the same small set of simple interactions.
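As an illustrative sketch of what “troves of interaction data” can yield, the snippet below reduces raw keystroke events to simple timing features. Every name and field here is hypothetical (not any vendor’s API); it assumes only that timestamped input events have been captured client-side:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class InputEvent:
    kind: str      # hypothetical label, e.g. "keydown" or "pointermove"
    t_ms: float    # timestamp in milliseconds

def keystroke_features(events):
    """Reduce raw keydown timestamps to simple timing features.

    Inter-key intervals are a classic low-friction behavioral signal:
    humans type with variable, rhythm-like timing, while naive bots
    often fire keys at near-constant intervals.
    """
    times = sorted(e.t_ms for e in events if e.kind == "keydown")
    gaps = [b - a for a, b in zip(times, times[1:])]
    if len(gaps) < 2:
        return None  # too little data to say anything
    return {
        "mean_gap_ms": mean(gaps),
        "gap_stdev_ms": stdev(gaps),  # near zero is suspicious: metronomic typing
    }

# A metronomic "typist" pressing a key exactly every 50 ms:
bot_events = [InputEvent("keydown", 50.0 * i) for i in range(10)]
print(keystroke_features(bot_events))  # → {'mean_gap_ms': 50.0, 'gap_stdev_ms': 0.0}
```

Even two timing features extracted from a single input field carry signal; a production system would add pointer, location, and device-sensor features on top.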
By capturing high-fidelity, device-specific features before a user even completes the account-creation flow, in-house data science teams can draw on the richest context available to establish behavioral baselines across each user’s entire device ecosystem, then model against those baselines at every important behavior through the lifecycle: login, purchase, settings change, and so on.
It is this kind of data fidelity that allows a risk team to build models on the smallest possible set of assumptions about user behavior, and to follow a long enough breadcrumb trail of behaviors and signal contexts to profile fraudsters immediately before any kind of financial loss event, irrespective of typology.
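A minimal sketch of the baseline-then-recheck idea: build a per-user baseline from sign-up-time behavior, then score the same feature again at later lifecycle events. The class, thresholds, and feature values below are illustrative assumptions, not a production design:

```python
from statistics import mean, stdev

class BehavioralBaseline:
    """Toy per-user baseline over one numeric behavioral feature.

    A real system tracks many features across a user's devices; this
    sketch keeps a running sample of one feature and flags later
    observations that fall far outside the established baseline.
    """

    def __init__(self, threshold_z: float = 3.0):
        self.samples: list[float] = []
        self.threshold_z = threshold_z

    def observe(self, value: float) -> None:
        self.samples.append(value)

    def is_anomalous(self, value: float) -> bool:
        if len(self.samples) < 5:
            return False  # too little history: make no assumptions yet
        mu, sigma = mean(self.samples), stdev(self.samples)
        if sigma == 0:
            return value != mu
        return abs(value - mu) / sigma > self.threshold_z

# Baseline built during sign-up: this user's mean inter-keystroke gap
# hovers around 180 ms (hypothetical numbers).
baseline = BehavioralBaseline()
for gap_ms in [170, 185, 178, 192, 181, 175]:
    baseline.observe(gap_ms)

print(baseline.is_anomalous(183))  # → False (typical for this user)
print(baseline.is_anomalous(15))   # → True (scripted-input speed at login time)
```

The same check can run at login, purchase, or settings change without adding user-facing friction, since it only consumes data the product already observes.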
A robust data foundation allows mitigation at scale
Whether it is the product utility offered to customers, the sophistication of attackers, the industry a company operates in, or the scale it operates at, numerous variables dictate the mix of fraud typologies it stands to face.
When fine-grained behavioral baselines captured at the onset of the user lifecycle serve as the building blocks of a broader risk data ecosystem, it becomes extremely difficult for attackers to mask their tactics irrespective of the fraud they try to commit.
Companies that adopt this approach to mitigation early enough position themselves to counter potential fraud loss before it occurs, instead of reacting to the fallout of detection failures.