Payments On Fire

Payments Fraud

Episode 158 – The High Pain of Push Payment Scams – PJ Rohall, Featurespace

We all know that there are risks in payments. And that when the controls we put in place to manage risk fail, fraud is the result. Fraud is now an industry using many of the same tools to defeat our controls that we use to defend our payments systems.

A reality is that fraudsters aren’t discouraged when we erect a strong new defense around one weakness. They just move on to the next, more easily exploited vulnerability. We put EMV chips on cards and fraudsters moved to card not present transaction fraud. We have card data breaches so we tokenize payment credentials to make stolen data less valuable. So, fraudsters work to gain control of our accounts through ATO.

Account takeover (ATO) fraud has been made easier through data breaches, social networks, and all of the other ways our data is shared online. It is easy for fraudsters to capture our user IDs and passwords and even to intercept multi-factor authentication messages. (Note to self: SMS text as an authentication factor isn’t strong enough).

ATO is a problem but it can be defeated by biometrics, behavioral analytics, and other techniques like truly strong passwords. Use them.

The Latest Scourge: Authorized Push Payment Fraud and the Scams That Drive It

A fraud vector now favored by a good part of the Fraudster Industrial Complex is authorized push payment fraud. Enabled by social engineering, a fraudster uses what they know about an individual to convince the victim to send the fraudster money, often by masquerading as a representative of a trusted entity like a bank, a telco, or a government agency. Other times the scammer poses as a distant relative or potential love interest. Preying on vulnerable, often older, individuals, these frauds can be elaborate, taking years of grooming if the payoff in victim monies is big enough.

For the victim, the impact can be devastating and the pain enduring. Our work in developing markets has taught us that these frauds can take food off of the family table or derail plans for the family business. In developed markets, we hear of victims losing their life savings.

These are not rare cases. UK Finance has just reported that, for the first time, APP fraud exceeded card fraud during H1 2021. Total losses due to APP scams rose to £355.3 million in H1 2021, up 71 percent over the same period in 2020. The number of cases rose 60 percent to 106,164. Note that banks there are trying to restore lost funds to victims, but the percentage of losses recovered has been, at best, 45 percent. Better than nothing, but imagine the pain. (The report also contains an excellent description of the many types of scams employed by fraudsters.)

That is breathtaking. And a warning of what’s to come in the far larger US market.

To help raise awareness and protect individuals and businesses, UK Finance has produced an excellent, informative site, along with resources, called Take Five to Stop Fraud. It’s worth a long visit.

No Simple Solutions

This is a tough fraud to prevent because it is hard to detect. The victims properly authenticate themselves to their bank. From an authentication perspective at the bank, everything looks fine. It’s just that the accountholder sends the money to the fraudster or to a money mule hired by the fraudster to provide an intermediate, less suspicious bank account to receive the funds.
The good news, as this Payments on Fire® episode discusses, is that there are technologies and actions that can make a difference. As of now, however, we are vulnerable to these frauds.

For more on fraud types and how to sort them out, visit the Federal Reserve’s FraudClassifier℠ Model.

This is Going to Get Worse

Fraudsters are good at APP scams because they are rewarding. They are difficult to detect although there are techniques available. And it’s low risk because getting caught and then successfully prosecuted is rare. Therefore, we can expect a lot more APP fraud as instant push payments grow in popularity.

What we don’t know, in the US, is the extent of APP fraud in today’s environment. While individual firms—whether closed loop systems like Venmo or open loop ones like Zelle—no doubt track APP fraud internally, there is no legal or industry requirement to report those numbers. In admirable fashion, UK Finance has required fraud reporting for years in the knowledge that situational awareness leads to risk reduction.

We Need the Numbers to Improve

The great business management thinker of the last century, Peter Drucker, is often quoted as saying: “If you can’t measure it, you can’t improve it.” Applied to US fraud reporting statistics, a more accurate rendition would be: “If you won’t measure it, you won’t improve it.”

There has to be a way for the industry to develop and share solid metrics that inform everyone without embarrassing or exposing individual entities to undue litigation, never mind ridicule.

Painful Human Impact

While comprehensive fraud reporting in the US is unavailable, there are certainly plenty of stories to illustrate the pain involved and how many victims, out of shame or embarrassment, choose to hide what happened rather than report it to authorities. Ouch.

Scammers Love Push Payments

To shed light on APP fraud, its impact, and some approaches to detecting fraudster coercion and the misdirected payments it causes, join Glenbrook’s George Peabody and PJ Rohall, Fraud Subject Matter Expert at Featurespace, a fraud management software company. PJ is also the co-founder of About-Fraud.com, a community site for the fraud management industry.

In this episode George and PJ discuss the growth of APP fraud and techniques to detect and deter it. You’ll hear PJ describe examples of the impact APP fraud has had on individuals, many of them least able to weather this kind of financial damage. The psychological damage is real as well.

PJ outlines how scammers prey on people’s vulnerabilities – our fears, desires, greed, and worries. Scam types include romance scams, business email compromise (phishing), investment scams, purchase scams, and more.

What Can We Do?

Fighting back against scammers is an ecosystem-wide task. As PJ makes clear, every stakeholder has a role. Yes, regulators, financial institutions, the social media giants and the rest of Big Tech, mobile network operators, payment networks all have a role to play in educating us so we can raise higher barriers to the fraudsters.

But it’s not just up to those big entities. As individuals, we can contribute to the solution, to the work of prevention.

If you’re reading this, you’re broadly in the payments industry. You know how serious this is and will become over time. Here are some actions you can take:

  • Read through UK Finance’s Take Five to Stop Fraud site. Its toolkits are a great resource that can help guide your discussion with friends and family. Nothing could be more British than its Take Five Over Tea with Loved Ones PDF.
  • Tell your family, especially your elders and the innocent. Children need guidance and guardrails online. Here’s another reason for them.
  • Tell your friends what you know. Ask them what their experience has been. Tell them what to do.
  • Take a few minutes when you’re with your wider community, in the real world and online. Tell them how scammers operate. Let them know how sophisticated and patient the fraudster can be.
  • Tell everyone that if they ever feel pressured to send money, that’s a sure sign of a scam.
  • Offer to help. That can help buy time and calm emotions.

 

George Peabody:
Welcome to Payments on Fire, the podcast from Glenbrook Partners about the payments industry, how it works, and trends in its evolution. I’m George Peabody, partner at Glenbrook and host of Payments on Fire. We are in an exciting time for the payments industry. We all know that. We’ve got new systems, we’ve got new competitors and innovation in many places. And in this episode, we’re actually going to talk about and take a look at one of the … I’m going to have to say one of the unwelcome side effects of moving money on the internet, and of course, I’m talking about fraud. And in particular, we’re going to talk today about the fraud challenges that today’s fast payment systems face. So by fast payment systems, we mean the immediate funds transfer capabilities of both mobile money systems that we see in a lot of developing countries and, in the slightly more developed markets, the national systems that provide near-real-time bank-account-to-bank-account transfer capabilities. These allow a sender to push money to a receiver, essentially in real time.

George Peabody:
There’s a lot to like about these systems. They’re low cost, they operate 24/7, they’re immediate, many have the ability to represent the metadata about the payment to a level of utility and granularity that we haven’t seen before. They are also the first systems that support a new message type, the request to pay message type, that a biller or a small contractor could send to a customer and say, “Hey, please pay me. Here’s a request for payment”. So rather than pulling funds, the payer, the customer pushes the money using this request to pay message type. Pretty cool. But a system that pushes money in real time is also an attractive platform for fraudsters, so what’s not to like about that if you’re a fraudster? And in this age of data breaches, one of the fraud factors that we’re encountering of course is account takeover attacks, ATO.

George Peabody:
That’s gotten easier because, well, so much of our own personal data has either been given up voluntarily by us on various, well, social media sites among others, and then there’s the data breach problem. Hello, T-Mobile. So an ATO allows the fraudster to pretend they are the rightful account owner, and once they take control of the account, they’re going to be able to do nefarious things with it. But an even more pernicious form of fraud comes from social engineering. And this is a global plague, really, where social engineering fraudsters convince their victim to send money to the fraudster. And the problem with this is that the victim properly identifies themselves to their bank or the payments provider, and they just send the money to the fraudster, who might be masquerading as a good merchant, a potential lover, or a representative of a state lottery. And this fraud type is known as authorized push payment fraud, and it’s a big issue.

George Peabody:
Unlike most countries, and this drives me crazy, the UK requires banks to report their fraud losses. That doesn’t drive me crazy. What drives me crazy is that, well, the United States doesn’t. So as a result, in the UK’s case, we have a very good idea of how serious this problem is: a big number, exceeding 400 million pounds stolen in 2020. And unlike credit card payments, where if the merchant fails to deliver we can dispute those charges and get our money back, most of that 400 million pounds is really never returned to the account holder. For more on the UK fraud report, please visit the show notes; we’ll have a link there to it on paymentsonfire.com.

George Peabody:
As I said, what makes me crazy about this is that in the US we don’t have this kind of information, and there’s an old saw that says if you don’t measure something, you can’t improve it. We’re going to be talking about that, and actually not only about reporting, but about the broader impact of fraud on citizens and the challenges around authorized push payments in particular. I’m fortunate to have as our guest on Payments on Fire PJ Rohall, who is the fraud market expert, FME, at Featurespace, a fraud management software firm, and PJ’s also co-founder of a website called About-Fraud, which is a pretty cool resource. So welcome PJ, really glad to have you here.

PJ Rohall:
George, thank you so much for having me. Really interested in talking about this topic. It’s something I’m passionate about, not just from a fraud prevention standpoint, but from how it impacts the consumer, because it’s so much different than other types of fraud, so I’m happy to join and hopefully provide some insight and education around authorized push payment fraud.

George Peabody:
Cool. So let’s get right into it. Well, let’s start before we dive into… I always say that. Before we dive into the guts of this, I’m curious, PJ, about how you found your way to this work. What was it that drew you in?

PJ Rohall:
Yeah, so I kind of fell backwards into fraud prevention. I actually don’t know too many people who focused specifically on this in undergraduate; maybe they went and got a certification or majored in something risk-management related, but it’s a newer industry, so people are coming from all different walks of life. I studied finance and started out in that world, and then found myself more interested in something different, so I took a job as a fraud analyst with an e-commerce based fraud solution. So in operations, I learned about fraud investigations. I applied a lot of my business knowledge to how I grew my career in fraud prevention, in product, eventually becoming a fraud market expert, which basically means, at Featurespace, I work across a lot of our different teams, commercial and product, to take the fraud use cases I’ve studied and understand how we can use our technology to best solve them.

PJ Rohall:
So within Featurespace, there are so many great people who work across our different departments, but not a lot of them have spent their careers completely in fraud prevention. So that’s what our team does. We worked at either merchants or banks in different spaces, and we can kind of relate to the customers who will be using the solutions we’re selling. I’m also passionate about raising awareness in the industry, because there’s a big need for people who are looking to start or grow their career to jump into fraud prevention, or security, or AML. There’s just tons of opportunity, because it’s only headed in one direction, and there’s tons of technology out there being developed. So whether you’re on the solution provider side or the end user side at a bank, there’s lots of opportunity for interested, smart people who want to join the industry.

George Peabody:
Yep. That’s always the case, and we’re always looking for smart people at Glenbrook too. So if you’re interested in a career more on the consulting and education side, reach out; if you’re interested in the fraud management side, give PJ a shout. Why don’t you tell us a little bit about Featurespace before we get into APP?

PJ Rohall:
Yeah, sure. So Featurespace is a global solution provider that focuses on fraud and financial crime. We use our machine learning technology, adaptive behavioral analytics, to optimize how we detect fraud across card payments and application fraud for FIs, merchant acquirers, and payment processors. We also have an AML solution, which is tremendous, one of our newest solutions. And we use our technology, our subject matter experts, and honestly our great people to deliver world-class solutions to those different industries and institutions.

George Peabody:
Got it. So let’s get into the authorized push payment, and I got a lot of questions here. First of all, did I define it correctly as far as you’re concerned?

PJ Rohall:
Yeah, no, I think you hit on it. It is basically manipulating and/or social engineering somebody to make a payment to an account that the fraudster controls. So like you said, they log in, they go through all the identity verification to say this is them, which is why it’s very hard to detect, and they’re the ones who actually execute the payment. Now, the way they’re manipulated can be very different depending on the type of authorized push payment fraud. That’s probably what I would go into more detail about, because there are things like romance scams, business email compromise, investment scams, purchase scams. There are lots of different scams, because there are so many different ways to manipulate people in their day-to-day activities and prey on different emotions. So the types of scams will differ, but those are the basic core elements of it.

George Peabody:
Any sense of the scope of the problem? And we know what it is in the UK, it’s big. What about in the US, any clue?

PJ Rohall:
Yeah, in the US it is very hard to report on, like you said, because of the lack of reporting. I think we’re on a better track in the sense that the Fed recently put out some payment fraud definitions designed around splitting out unauthorized and authorized payment fraud. If we adopt something similar to that, then we can at least classify and tag these types of fraud and start reporting on them in a more consistent manner. I think that’s part of the reporting issue in the US. A lot of it right now is consumer reporting, and those are large numbers, but they’re missing big chunks of the fraud because a lot of victims don’t report it. They feel shame, they feel embarrassment about what they’ve done, so they’re not reporting this fraud.
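The definitions PJ mentions line up with the first question the Fed’s FraudClassifier℠ model asks: who executed the payment? A minimal sketch of that top-level authorized-versus-unauthorized split (the class, field, and function names here are hypothetical, invented for illustration, not the model’s actual schema):

```python
from dataclasses import dataclass

@dataclass
class PaymentFraudCase:
    # Hypothetical fields for illustration
    executed_by_accountholder: bool   # did the genuine customer make the payment?
    accountholder_was_deceived: bool  # was the customer manipulated into it?

def classify(case):
    """Top-level split in the spirit of the FraudClassifier model:
    who executed the payment decides authorized vs. unauthorized fraud."""
    if case.executed_by_accountholder:
        # The customer pressed "send" themselves: authorized fraud.
        # APP scams (romance, investment, purchase, BEC) land here.
        return "authorized"
    # A third party moved the money (e.g., account takeover): unauthorized.
    return "unauthorized"

# An APP scam victim is tricked but executes the payment personally.
app_scam = PaymentFraudCase(executed_by_accountholder=True, accountholder_was_deceived=True)
# In an ATO, the fraudster logs in and sends the money.
account_takeover = PaymentFraudCase(executed_by_accountholder=False, accountholder_was_deceived=True)
```

Tagging every case with this split at intake is what would make the consistent APP reporting PJ describes possible downstream.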

PJ Rohall:
So I don’t have specific numbers for the US. If you look at the UK market, which is much smaller, you’re looking at 479 million pounds of authorized push payment fraud in 2020, which is 38% of all fraud, which is astounding in itself, so you can imagine the numbers in the US are just much, much bigger. And this isn’t uniquely a UK problem compared to other regions. I will say the UK has had faster payments for a lot longer, over a decade.

George Peabody:
A decade, yeah.

PJ Rohall:
Yeah. So that’s going to drive more fraud there, but it still happens in slower payment environments. It happens with ACH and wires, and wires can be more instantaneous and larger in amount, so wires are vulnerable to it. So while we don’t have specific figures in the US that are as reliable, other than the consumer-reported ones, you can kind of deduce from all of that information that they’re probably quite large.

George Peabody:
We’ve been doing some survey work in Africa and South Asia on the impact and prevalence of fraud in a few countries, and right there, it’s mostly through social engineering, and the impact on victims is substantial. Sometimes we’ve heard of folks not having enough money to buy food, not having enough money to send their kids back to school, not being able to start or expand a business. And as you said, I guess we’re all human beings; many of the folks we’ve spoken to blame themselves for being tricked, and very few have any expectation that the police or the mobile service operator that provided the payment service is going to compensate them or make them whole in any way.

George Peabody:
Yeah, it’s heartbreaking. These are pitiless crimes. And actually, a lot of the folks in the developing markets we talked to said, “Yeah, COVID’s been bad; fraud has been a growth industry,” because the fraudsters need to get money and put food on their tables too. So through your work both at Featurespace and your site, what kind of impacts have you seen? And are there particular fraud types that have jumped out at you as most impactful?

PJ Rohall:
Yeah. APP fraud scams, for sure. And I just want to echo what you said: it is extremely sad what happens to some of these people. These are life-changing amounts of money some of these people are losing, and they’re not getting it back. And of course, you can say on the surface, “Well, they shouldn’t have done this, they were naive,” but that’s really unfair in a lot of cases. These are very sophisticated scams. They prey on everybody who has emotional and psychological vulnerabilities, or you’re moving very quickly throughout your day, and you’re doing things that, yes, you should always be cautious about, but it’s not quite that simple. They’re grooming these victims, they’re preying on vulnerable populations, the elderly. So it’s very unfortunate financially, but also psychologically. There’s lasting psychological damage that comes from this.

PJ Rohall:
And in addition to Featurespace and About-Fraud, I’m a passionate advocate for mental health. I’ve gone through my own challenges, and for people who go through this, we can’t underestimate that impact. I think there’s a human element that really draws me to this. But yes, I would say in the pandemic, social engineering has probably risen to the top with this type of fraud. Card fraud, third-party fraud, application fraud, there are spikes you’ll see in those different areas; it depends on the region, probably depends on the exact product. And it’s not just the pandemic. Anytime there’s uncertainty and fear out there, you can convince people to do things they maybe normally wouldn’t do: donate to a charity, do something quickly without thinking or being fearful about it first. So it could be any type of event that causes fear and uncertainty.

George Peabody:
I have a family member who’s a bit challenged, and the tenacity of a fraudster that’s been on his case has been stunning. Since we’re generally kind of tech-first in this industry, let me start by asking you some tech-first questions. One of the things that occurs to me is that these social engineering attacks strike me, from a technical point of view, as really difficult to detect. I understand that there are some capabilities, some signals, that can start to indicate that there is indeed some kind of social engineering activity going on that’s going to tee someone up for an APP fraud. What have you got in the kit?

PJ Rohall:
That’s exactly it. They’re extremely hard to detect. And I think what you need to do is look at it across a couple of different areas and then the lifecycle of the payment. The payment is where you can get more of your technology involved, but it starts with definitions and reporting: are you labeling your fraud as unauthorized fraud versus authorized fraud? Then you can consistently report on it. We don’t need to rehash the UK versus the US versus other regions, but then you can say there’s 479 million pounds, and reporting in general will always have its gaps, but you need to start somewhere and you need to have a general idea of what the problem is.

PJ Rohall:
And then from an education standpoint, for both consumers and the people working at the bank, there are a ton of educational initiatives, and we can talk more about that. It’s also tricky, because how do you reach all these consumers in an engaging manner where it’s going to resonate with them, where it’s not just an email they don’t open and don’t look at, and then when they’re being socially engineered, they totally forget? But there are different things that can be done, and probably more, from an education standpoint. Once you get into the payments lifecycle, you can look at things across a lot of different points in the journey: the login to the bank account, the session activity as you’re navigating your online banking, all the way through to when the payment’s being made. And there’s different technology that can process that data and come up with risk scores to help understand, is this person being manipulated or not?

PJ Rohall:
And some of those technologies could be things like behavioral biometrics, behavioral analytics, device monitoring, and we can get into the details of how they would do that. But really the core element, especially with behavioral analytics, is that you’re going to be looking at the different types of data that you see, not just the monetary data, the payments, but also the non-monetary data. Are they changing beneficiaries? Are they sending to a beneficiary they’ve sent to before? What is the amount of the payments they’re sending? Are they increasing in size over time? Are they decreasing in size? What is the beneficiary bank they’re sending to? Does it have a reputation for scams in the past or money muling activity? So there are lots of different signals you can generate from that and profile off of, to understand the deviations and anomalies.

PJ Rohall:
When you get into behavioral biometrics, that’s more how the individual interacts with the device: how they’re typing, how they’re scrolling, the pressure on different keys. So that can also offer some interesting insight. And then there’s device monitoring; you can use device monitoring for lots of different types of fraud. Those are different examples of technology that can help. Behavioral analytics will have different types of engines to drive it, machine learning, some more basic rules. But as much as it is about the types of technology I just named, because you can go and grab lots of different types of technology out there, it’s about how you’re using it. Do you understand the behavior that differs for scams versus unauthorized payments? Do you understand the data that is relevant? Do you understand how to profile the person sending the money and the person receiving the money? Once you get into the details there, that’s where it becomes more important and valuable in actually delivering a good risk score.
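The monetary and non-monetary signals PJ lists, new beneficiaries, amount patterns, beneficiary-bank reputation, can be combined into a per-payment risk score. A toy additive sketch with invented weights and field names; the machine-learned behavioral analytics engines discussed here profile far richer data than this:

```python
def score_payment(payment, profile):
    """Toy additive risk score over the signals discussed; weights are invented."""
    score = 0.0
    # Non-monetary signal: a beneficiary this customer has never paid before
    if payment["beneficiary"] not in profile["known_beneficiaries"]:
        score += 0.35
    # Monetary signal: amount well above the customer's typical payment
    if payment["amount"] > 3 * profile["average_amount"]:
        score += 0.30
    # Network signal: receiving bank flagged for scams or money muling
    if payment["beneficiary_bank"] in profile["flagged_banks"]:
        score += 0.35
    return round(score, 2)

profile = {
    "known_beneficiaries": {"landlord", "utility-co"},
    "average_amount": 120.0,
    "flagged_banks": {"bank-x"},
}
# A large payment to a brand-new payee at a flagged bank scores high...
risky = {"beneficiary": "new-payee", "amount": 2500.0, "beneficiary_bank": "bank-x"}
# ...while the usual rent payment scores zero.
routine = {"beneficiary": "landlord", "amount": 115.0, "beneficiary_bank": "bank-a"}
```

The point of the sketch is the shape, per-customer profiles feeding per-payment deviation checks, not the particular rules or weights.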

George Peabody:
I’m hearing lots of basic tools are available, machine learning and rules engines, and you’ve got a lot of data to look at, but you’re still going to have to have some awareness of what the fraudsters are up to, and also do something with respect to profiling the sender: understanding how they normally interact with their computers, determining whether they’re under any kind of duress. If they’re being pushed by telephone calls or by emails and text messages to hurry up and make a pay… “Send me the money right now, this is an emergency.” That data’s not available to you, so yikes, hard stuff.

PJ Rohall:
What’s going on in their brain is not available. What they’re doing in the session is available: how long they’re spending in the session versus how long they normally spend in the session, the patterns of the payments they’re sending and who they’re sending them to, changing beneficiaries. All of that information is available. The channels they’re using: are they going from online to mobile? And some of these will overlap with account takeover, because when you commit account takeover, the person is also probably changing the beneficiary being sent to. But that being said, there are signals in there, and there is technology, really good technology, out there that can understand these signals and understand the anomalies that exist even when they’re more subtle.

PJ Rohall:
So that’s what we try to do at Featurespace. We try to use our fraud knowledge, our expertise in understanding all the different scam types and behaviors. Like I said, they’re different as you go across purchase scams, investment scams, romance scams. The amounts are different: investment scams will be a bit larger, purchase scams smaller. For some of them, the patterns of how they scam, whether they ramp the amounts up over time as they’re grooming the victim, those things will change. So you kind of have to mix a combination of product expertise with analytics expertise to really get a good solution.
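One concrete version of the session-baseline profiling PJ describes, how long a customer is spending versus how long they normally spend, is a simple z-score check against that customer’s own history. All the numbers and thresholds below are invented for illustration; real behavioral analytics engines use far richer features than session duration alone:

```python
import statistics

def session_is_anomalous(past_durations, current_duration, z_threshold=3.0):
    """Flag a session whose length deviates sharply from this customer's
    own baseline (toy per-customer anomaly check)."""
    mean = statistics.mean(past_durations)
    stdev = statistics.stdev(past_durations)
    if stdev == 0:
        return current_duration != mean
    z = abs(current_duration - mean) / stdev
    return z > z_threshold

# This customer usually spends about three minutes in online banking...
past_durations = [170.0, 185.0, 160.0, 180.0, 175.0]  # seconds
# ...so a 25-minute session, perhaps coached by a scammer on the phone,
# stands out, while a typical-length one does not.
```

The same profile-then-deviate pattern applies to any of the other signals mentioned: payment amounts, beneficiary changes, channel switches.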

George Peabody:
I’ve got to ask you before I forget: it sounds like there are great tools, and we’ve got more ways to remediate and mitigate the problem, but if the bank, by the rules of the payment system, isn’t responsible, what’s in it for them in spending the money to put this kind of mitigation capability in place? They’re not in the liability chain here, so what’s in it for them?

PJ Rohall:
In the UK, there’s something called the Contingent Reimbursement Model, so some of those banks are in the liability chain, but that’s a whole longer conversation. So liability is one factor, but it’s also about how you’re taking care of your customers. Like it or not, these risks are out there. I don’t think the customer can be 100% negligent, but at the same time, there are things that can be put in place to help protect your customer, help retain your customer. And yes, understanding that it often takes losses to justify investment in that. But I think for banks and financial institutions that are investing in protecting their customers, that’ll help their brand, that’ll help top-line revenue, that’ll help show that they’re on the forefront of this type of detection.

PJ Rohall:
Are they expected to catch everything? No. Are they expected to catch it every time, when they have these checks in place and the person still goes forward and makes the payment? It’s unfair to say they need to get to that level. So I think it’s realistically looking at it and not attacking banks and saying, “How did you not help your customer out here,” but still asking, “Do you have something in place? What are you doing with the tools out there to try and protect your customers from fraud?” This unfortunately is evolving into a more challenging domain where it seems like it’s almost no one’s fault. It’s not the customer, it’s not the… because there’s this social engineering aspect. So, yeah. It’s not a simple one, for sure.

George Peabody:
Yeah. So even in the UK, as you say, things are still being sorted out about what the obligations are. Here in the US, we’ve got Regulation E, which allows a bank account holder to dispute a charge if it’s one of the “I didn’t do it” charges. So an account takeover fraud could be disputed and they could be made whole, but in this APP case, does Reg E apply?

PJ Rohall:
No, not as I see it right now. There were recent rumblings from a group that was… I forget the name of the group, I apologize, where they were interpreting a certain level of social engineering, where if you’re socially engineered into giving up your information and then the account is taken over, they believe that should fall under Reg E, and apologies if I didn’t get that exactly right.

PJ Rohall:
So there’s going to be continued evaluation of whether Reg E should cover some of the scams, more of the scams, less of the scams. I definitely don’t want to speak to where regulations will go in the US, because I honestly don’t know. But there’s that chance that there could be some liability there, because you just don’t know where regulations will go, and there’s also the case from a reputation standpoint, from brand management, from protecting your customers, being on the forefront. And you can use a lot of this technology to address the other types of fraud that you do have liability for. It’s the same types of technology, so just understand how to pivot it for this, because you should already have these types of technologies in house, and if you don’t, you can. It’s not like you can’t have access to them.

George Peabody:
What strikes me, from a technical point of view, is that you’re talking about profiling the sender. It strikes me there would be an opportunity for more effective fraud management if there was some profiling of the recipient as well, with the receiving bank communicating back to the sending bank in order to assess some risk. I’m talking specifically about mule accounts, for example: an account that frequently gets money in and frequently sends it right out to other accounts, quickly. Have you seen a system like that?

PJ Rohall:
Yeah. You can, and you should be profiling the beneficiary account. So when you’re looking at your fraud solution, don’t just look at the data you’re sending out and feed that into your platform. You need to get your hands on the beneficiary information to whatever degree you can. Obviously-

George Peabody:
I was going to say, how do you do that if the recipient is at a different bank than your customer, the account holder who's sending the money? What information do you get back?

PJ Rohall:
Yeah. There's still information that you should have on what account this is going to: the account number, and if you're in the UK the sort code, and so on. So it may not be as robust as the sender information, but you should be able to get some information that you can profile. And then from a muling perspective, you can look at that behavior and try to find insights into how that muling might be happening on the heels of these scams. But I think mules are interesting, because you can look at the point of application, where mules are just submitting applications to be mules, setting up DDA accounts to move money all around. So you can profile there, you can profile during the payments process, and you can try your best to have a holistic view of that.

George Peabody:
I was going to say, just to be clear, a mule is an individual who has basically been hired by fraudsters to open up accounts that the fraudsters can use in order to transit the money, eventually, to the fraudster.

PJ Rohall:
Yeah. And some know they're doing it, some kind of know but still do it, and some have no idea, they've just been tricked into it. Either way, it's very strategic in how they do it, and it disguises where that fraud came from, so it's really hard to trace.
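The mule pattern George describes above, an account that frequently receives money and pushes almost all of it right back out, lends itself to a simple velocity heuristic. The sketch below is purely illustrative: the thresholds, field names, and `Transfer` record are assumptions for the example, not Featurespace's or any bank's actual model.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from collections import defaultdict

@dataclass
class Transfer:
    account: str      # account being profiled
    direction: str    # "in" or "out"
    amount: float
    timestamp: datetime

def flag_possible_mules(transfers, window=timedelta(days=7),
                        min_inflows=3, passthrough_ratio=0.9,
                        max_dwell=timedelta(hours=24)):
    """Flag accounts that receive money often and quickly push almost
    all of it back out -- the pass-through pattern described above.
    Thresholds here are illustrative, not tuned production values."""
    by_account = defaultdict(list)
    for t in transfers:
        by_account[t.account].append(t)

    flagged = []
    now = max(t.timestamp for t in transfers)
    for account, txs in by_account.items():
        recent = [t for t in txs if now - t.timestamp <= window]
        inflows = [t for t in recent if t.direction == "in"]
        outflows = [t for t in recent if t.direction == "out"]
        if len(inflows) < min_inflows or not outflows:
            continue
        total_in = sum(t.amount for t in inflows)
        total_out = sum(t.amount for t in outflows)
        # dwell time: gap between first inflow and first subsequent outflow
        first_in = min(t.timestamp for t in inflows)
        first_out_after = min((t.timestamp for t in outflows
                               if t.timestamp >= first_in), default=None)
        quick = (first_out_after is not None and
                 first_out_after - first_in <= max_dwell)
        if total_out >= passthrough_ratio * total_in and quick:
            flagged.append(account)
    return flagged
```

A real system would, as PJ notes, combine this payment-time view with application-time signals rather than rely on velocity alone.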

George Peabody:
So PJ, we've talked about backend techniques. Let's talk about the ones that are more front end, whether they're education or maybe user experience design, some of the non-technical approaches that are being employed successfully to mitigate the problem.

PJ Rohall:
In-journey scam warning messages are something that's more common in the UK. Think about it kind of like step-up authentication: for an account takeover, you get an SMS message when you log in, and that basically protects the point of login. But if you're the genuine user, you've logged in, and this might be a scam, we send messages to people before they execute the payment to warn them that it could be a scam.

PJ Rohall:
So it's tricky, because you don't want every person to get a scam warning message every time they log into their online banking, because then it's just white noise. So the idea is to use the same analytics that you would use to stop a payment you think is a scam to push a message to the customer saying, "Hey, based on the signals we have, we're seeing this as high risk. Are you sure you want to make this payment? This is a really out-of-the-ordinary, massive dollar amount, it's really anomalous compared with your usual activity." And that could be really helpful. Will people listen to it every time? Absolutely not, but it's-
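The targeting logic PJ is describing, interrupting only the journeys the model scores as high risk so warnings don't become white noise, can be sketched as a simple threshold gate. The function name, thresholds, and anomaly rule below are hypothetical placeholders for illustration, not Featurespace's actual decisioning logic.

```python
def scam_warning_decision(payment_amount: float, typical_amount: float,
                          risk_score: float,
                          warn_threshold: float = 0.7,
                          block_threshold: float = 0.95) -> str:
    """Decide what the payment journey should do, given a model risk
    score in [0, 1]. Thresholds are illustrative assumptions only."""
    # a crude anomaly check: payment far above this customer's norm
    anomalous = typical_amount > 0 and payment_amount > 5 * typical_amount

    if risk_score >= block_threshold:
        return "hold_for_review"      # too risky to warn and release
    if risk_score >= warn_threshold or anomalous:
        # interrupt this journey with a targeted warning, not a blanket one
        return "show_scam_warning"
    return "proceed"                  # low risk: no added friction
```

For example, a payment of 9000 against a typical amount of 150 with a risk score of 0.8 would trigger the warning, while an ordinary low-risk payment would proceed untouched, which is the friction balance PJ goes on to discuss.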

George Peabody:
Do they ever say, “And if you do click go or submit, it’s on you if you’re defrauded?”

PJ Rohall:
Well, I think that's your case if you're doing things like that. And then you go back to, it's not even just a liability conversation, but are you taking care of your customers? "Look, this is what we're doing. Not only do we have analytics to stop it at the point of payment, we're also sending scam warning messages. We're doing the best we can." If FIs are doing that, that's probably where they want to shoot for. But you have to have good technology underlying it so you're interrupting the right journeys, right? You don't want to put too much friction in there, because it could upset the customer. They want to be protected, but they don't want to be interrupted. So I think those are good. Because, like I mentioned with email, I get emails that say, "Watch out for scams," and, great, you move on with your day, and when you're in the middle of a scam, you're not thinking about that email.

PJ Rohall:
So this is the time to have informed analytics say, "Hey, be careful of scams. Remember that they do exist." Then there's also the education piece, which happens well before and outside of when scams occur, and which is also not easy, because how many ways can you reach your consumers? You can reach them through email blasts, but the percentage that actually reach them, get opened, and get read is quite minimal. It's not very engaging, and younger generations especially are just not going to want to interact with this type of thing that way. So I think it's a tough one. I think about using social media for good, because people get scammed through social media, so run interesting campaigns around that on social media platforms, explore different channels, different innovative things that are going to get people's attention.

PJ Rohall:
One thing they do in the UK is called Take Five, which is basically: stop and think, and take five minutes to understand, is this something you would normally do? It's a very simple thing, but it's very helpful if you actually do it, because you give your emotions time to settle down. We have to find different ways, because clearly what we're doing now from an education standpoint is not connecting. There are nonprofits out there, one called The Knoble, which is doing really good work around education, scams, and protecting the vulnerable. Could the government get involved from a public standpoint and run consumer awareness campaigns? Sure. And the last thing I'll say is individuals: what can we do?

PJ Rohall:
We share things on About-Fraud, but it's geared more toward people who work in fraud prevention, so my grandma, your sister or brother, aren't checking those things out as much. And one idea that I think would be really good, though how you pull it off I don't exactly know: people who are influencers in social media, in pop culture, in the places where that's how a lot of people receive messages, those audiences will listen to that one person and won't listen to so many others. Could there be campaigns around that, if those influencers understood how much this is really hurting people?

George Peabody:
Great idea. I love what you're pointing at. I struggle with the education vector in the fast payment world, because unlike the card system, where there's a lot of margin in those transactions and the card networks do a ton of advertising, sometimes with messaging around security, these push payment transactions are not high margin by comparison. And I'm not expecting the answer, but who's the right party to put a program together? Individual financial institutions? They'll reach out to their own customers, but again, to your point, they have trouble even reaching them. Is it up to the system operator? You could make an argument that it is, but we know they're not getting paid big money for running these transactions.

George Peabody:
So yeah, this is really a challenging situation, one where the market itself doesn't have a cure. Actually, one of the things I respect about what's happening in the UK is that the regulators are willing to push on the industry to find a collaborative solution. They're not necessarily writing prescriptive rules, but they've got a knee in the back of the industry to move it forward, and I'll just personally say I think that's a role we lack here, and it would be beneficial to everyone. Because, as I said at the very beginning, there are a lot of advantages to using these tools, these systems, but we're a marketplace that's grown up on what I call capital-S Security: not only do we have good tools, but in terms of credit card transactions we've also got chargeback privileges. The combination of those two gives us a lot of security, and with these new systems, the rules of the system aren't as plain or familiar.

PJ Rohall:
Yeah. But you've got to be careful, because those can be abused too. Friendly fraud in chargebacks is so bad, with people just claiming things, so if you push the other way too much, you're going to have people saying they've been hit by scams that didn't really happen, and you get a moral hazard issue on the other side. So it's a tricky balance. Putting a knee into the back of the banks to motivate them, sure, there are things they can be doing, 100%, and those are some of the things we outlined with technology and education. But they can't be the only ones, either. The social media companies, where some of these scams are happening, they need to have a role in some of this educa-

George Peabody:
I’ll just say, talk about a high margin business that can afford it, yeah.

PJ Rohall:
Exactly. And there was a recent bill in the UK where I think they were lobbying to include social media as part of this. So I think it's fair, yes, to ask how FIs are doing their part, but let's not limit it to just them. And beyond the parties who are in the scam conversation because they happen to be involved when it's happening, what about governments, nonprofits, influencers, and people who are just living in the society where it's happening and seeing really, really bad things? How can we rally, as the right, humane thing to do?

PJ Rohall:
There are resources out there, there's money, there are people who have influence. I think the knowledge and awareness of this stuff hasn't grown beyond you, me, and the people in the industry who know it, or the people it's happened to, who then really know it. But beyond that, you hear about scams and you kind of move on with your life. It doesn't really hit your heart, and I think we need to get to that level.

George Peabody:
Well, PJ, we're going to have to leave it there. I really appreciate the conversation. I'll just encourage listeners to go to paymentsonfire.com for the show notes, where we'll have links to PJ's About-Fraud site, the UK fraud report, and more. So again, thank you very much, PJ. Really appreciate it.

PJ Rohall:
Yeah, George, it was my pleasure. Thank you for having me on.
