Transcript Of Enterprise Security Weekly with Rami Essaid

May 29, 2018 Katherine Oberhofer

On this episode of the Enterprise Security Weekly podcast, Rami Essaid, co-founder of Distil Networks, joins Paul and Doug to discuss Distil Networks, the security space, and new innovations.

Paul: Welcome everyone to this very special interview segment. I'm, of course, your host. I'm joined here in studio on this snowy Rhode Island day by Dr. Doug White, of course the Director of Cybersecurity for Roger Williams University and host of Secure Digital Life, and now, RSA speaker.

Doug: That's right, RSA speaker.

Paul: You gonna speak at RSA for the first time? Your first year?

Doug: I am. The first time. I'm gonna talk about the dangers of automated pen testing and tell some stories about Ethan Allen.

Paul: That's fantastic.

Paul: I'm very humbled that the RSA conference actually reached out to us and was like, "We need to fill the slot." I'm like, "Well, let me go out to the team." So I'm glad Doug that you could take that. It's exciting. So you'll have to say hi to our guest for this interview segment at RSA, assuming Rami will be there, Rami Essaid who is the Chief Product Officer and Founder at Distil Networks. Rami, welcome back to the show.

Rami: Hey, thanks so much for having me back, Paul.

Paul: Yes. It's always nice to have you, Rami. I love the story of how we met after we analyzed some of your marketing. We tried to figure out what you do. Rami jumped on the phone with us and I'm like, "So, Rami, like tell me what you guys do." And then he told me and I was like, "That's really awesome. We need to hang out more." And so here he is back on the show. I wanted to start, Rami, because we talk about bots and we talk about bad bots, and I think that, you know, in prep for this, we were talking about like where people are at in their understanding of what all that means. I think there's a pretty large percentage of people that are like, "Hold on, wait, what is a bad bot and like why do I care?" So I guess start with bad bots and the problem that you solve, Rami.

Rami: You know, the nice thing is that it's actually becoming mainstream now where more and more people are starting to understand some of the maliciousness behind bots. In fact, my parents for the first time I think somewhat understand what I do. So it's better than ever. But in the past, nobody knew what a bad bot was or what a bot was. And a bot I think most of us know is just any kind of script or program. To level-set there, it's just any tool of automation that interacts with something meant for human interaction.

Now there's a lot of good bots out there, Google, Facebook, all of those crawlers that go around the web and harvest information. There's a lot of good stuff there. But bots are the tool of choice for fraudsters and hackers. It's no longer sitting there writing one command after the other. You know, when you want to find some sort of hole in a system, when you wanna penetrate through, when you wanna brute force your way into an account, you write a script or a program and you automate that process because it gives you leverage, scale, and speed, and that's what bad bots are there for. It's, you know, to give the hackers an advantage and we come in at Distil and we segment out malicious bots, filter them out of your web traffic and let only legitimate traffic interact with your website.

Paul: I mean, that is the problem, right, malicious bots. But there's lots of subproblems off of that, because, I mean, obviously all of that is malicious behavior. So what are some of the things that people get as a benefit once they start filtering out that malicious traffic? Like what does that gain them?

Rami: Yeah. So, you know, the problems that bots are posing, brute force account takeovers. We've seen four billion usernames and passwords breached online. Bad guys are taking that list and then just trying them everywhere they can to see what accounts they can take over. They're running vulnerability scans across your system. They're finding that one unpatched server, just like they did with Equifax. They find that one unpatched server that has a vulnerability and then they don't have to look for a needle in a haystack manually, they have automation to find that needle in a haystack and then they can exploit their way through there.

But now we're seeing bots exploit business logic, scalping concert tickets, holding airline seats, or, you know, as far as social media interactions, we're hearing about, you know, Russian hackers and bots influencing social media that then translates to influencing political campaigns. So the bots are exploiting the real tangible business practices of these companies and then kind of subverting them to their own will.

Paul: Yeah, and that's a big problem. So I have a somewhat technical question, Rami. When we talk about bots and identifying the script, you know, typically, we'd look at user agent, right? And a lot of our automation tools that you can find today will set a specific user agent. Now you can change that. Obviously your technology doesn't just...it's like signature versus behavior, right? It doesn't just rely on that user agent string that we can so easily change.

Rami: Yeah. I think the reason web application firewalls failed at this is because they really looked at signatures. They looked at static fields like user agent or blocking by IPs to block malicious bots. And what we came in and said is that all of these things are easily spoofable. They're all client side information that's being sent back to us and you can't take that at face value. So, one, we inject code to challenge the browser. We don't just assume that it's telling the truth. You can find the user agent, for example, in five different places in a browser.

We ask through JavaScript and through CSS, we ask that browser five different times, "What is your user agent?" But then we also run some JavaScript tests to make sure that it has the right JavaScript engine. We make sure it is exactly what it says it is. But then there's a layer that goes even beyond that, and what we do is we profile what real human behavior looks like, and try to find behavioral anomalies, things like keyboard clicks, mouse movements, etc. to try to differentiate human behavior from bots.
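To make that concrete, here is a minimal, illustrative browser-side sketch in TypeScript. It is not Distil's actual injected code; the specific checks and the five-second observation window are assumptions. The point is only that a client that merely spoofs a user agent header usually fails to keep the rest of the browser environment consistent with it, and never produces human-looking input events.

```typescript
// Illustrative only: NOT Distil's injected probe. It shows the shape of the idea
// above: cross-check what the client claims against other browser surfaces, and
// watch for human input signals (mouse, keyboard, touch).

function fingerprintLooksInconsistent(): boolean {
  const ua = navigator.userAgent;       // what the client claims to be
  const platform = navigator.platform;  // e.g. "iPhone", "Win32", "Linux x86_64"

  const uaClaimsMobile = /iPhone|iPad|Android/i.test(ua);
  const platformLooksMobile = /iPhone|iPad|Android|arm/i.test(platform);

  // A script that only spoofs the User-Agent header usually forgets to keep
  // the other surfaces consistent with it.
  return uaClaimsMobile !== platformLooksMobile;
}

// Behavioral signal: did any human-looking input arrive at all?
let inputEvents = 0;
window.addEventListener("mousemove", () => { inputEvents++; });
window.addEventListener("keydown", () => { inputEvents++; });
window.addEventListener("touchstart", () => { inputEvents++; });

// After a short observation window, summarize the signals. A real deployment
// would report them to a server-side scorer rather than log them.
setTimeout(() => {
  console.log({
    inconsistentFingerprint: fingerprintLooksInconsistent(),
    humanInputSeen: inputEvents > 0,
  });
}, 5000);
```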

Doug: So this is almost turning into a kind of Turing test.

Doug: Well, because, you know, I always remember the first time I saw something I thought might pass a traditional kind of Turing test was a program called Whimsy. And Whimsy just took what you said and it wasn't like ELIZA where it just took the keywords and repeated them to you, but it actually took them, injected a lot of strange things. And it was really hard to start discerning which one was the human and which one was the bot in that. And it sounds like that's kind of what we're doing here is trying to now discern which ones are the humans and which ones are the bots. And the closer the bots get to being human, the more difficult that's gonna be to do.

Rami: Yeah. I mean, that's exactly what we came in to replace is the antiquated system of CAPTCHAs. One, because everybody hates CAPTCHAs. The squiggly lines that you have to fill out, just nobody can do it, and the problem is nobody but bots can do it. Bots have figured out a number of different ways to defeat CAPTCHAs. So we're coming in and saying we can protect not a point in time, not just your, you know, signup page, but we can protect your entire site from bots in a completely transparent way.

Paul: So when I describe your technology at Distil Networks, I don't think I do it like justice, as much justice as you could, Rami, especially when I describe what I think is your major differentiator, and that's how you're accomplishing what we're talking about. And that is the injection of the JavaScript. Can you just describe for our audience? Because every time I describe it, people always come and say, "Like how does that work? Is that secure? How does that impact performance?" And I feel like I don't have the greatest answer. So I'd like you to address that before we move on to some of the things that you've discovered in your surveys and research.

Rami: Yeah. So one of the things that is really important to us is that we integrate into a broader web application security story, right? Bots are just a small sliver of the overall web app sec profile that you're gonna have to put together to be able to secure your website. So what we wanna do is find a way to integrate into other things. We can give you an appliance that you put inline that does the injection automatically for you, acts similar to a web application firewall.

But we can give you the JavaScript itself and a couple of API calls and a little bit of code integrations, and you can actually put us into your app itself. You can put us into your WAF. You can put us on a load balancer or an ADC. You can inject us anywhere else so that we become part of a more holistic web application security strategy. The components that you need are code that runs on the client side.

And by the way, we don't just do this for websites, we do this for websites and mobile apps, so we can give you an SDK for mobile apps or JavaScript for a website. And then we have server side components that check the validity of that code that's running client side and then get updated information from our cloud that's doing all the machine learning and data crunching. So those two components, the server side that you integrate on your side and then you inject some sort of client side code, can be put in in any way, form, or fashion, at the edge in your CDN, locally in your ADC, in your web app, so that we can become more holistic in how you view your web app security policy.
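As a rough sketch of how those two halves might fit together, here is a hypothetical Express middleware in TypeScript. The cookie name `_bot_token` and the endpoint `https://verdict.example.com/check` are made up for illustration and are not Distil's real interfaces; the point is only the flow: injected client-side code sets a token, and a server-side component validates it against a cloud service before the request reaches the application.

```typescript
// Hypothetical server-side half: neither the cookie name nor the verdict URL is a
// real Distil interface; they stand in for "client-side token" and "cloud service
// doing the machine learning and data crunching" from the conversation.
import express, { Request, Response, NextFunction } from "express";

const app = express();

async function botFilter(req: Request, res: Response, next: NextFunction) {
  // Token that the injected client-side JavaScript (or mobile SDK) would set.
  const token = req.header("cookie")?.match(/_bot_token=([^;]+)/)?.[1];

  if (!token) {
    // No token: the client never ran the injected code, which is typical of a
    // bare script, so serve a challenge instead of the real content.
    res.status(403).send("Please enable JavaScript and retry.");
    return;
  }

  // Ask the (assumed) cloud service for a verdict on this token.
  const resp = await fetch("https://verdict.example.com/check", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ token, ip: req.ip, path: req.path }),
  });
  const verdict = (await resp.json()) as { allow: boolean };

  if (verdict.allow) {
    next();
  } else {
    res.status(403).send("Blocked");
  }
}

app.use(botFilter);
app.get("/", (_req, res) => res.send("Only verified traffic reaches this handler."));
app.listen(3000);
```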

Paul: That's fantastic. Now you mentioned mobile applications, and this was something that, in a recent study that you did, you found that there was an increasing trend in the bot problem as it exists on mobile. So I guess first describe the research study that we're gonna follow this interview up with a webcast about as well, and also how that ties into mobile.

Rami: Yeah. So every year we do this bad bot report. You know, just like we started off saying, four or five years ago when I started the company in 2011, nobody had any idea what bots were doing. And so we said, "Let's take all the data that we know, let's share this with the community at large to try to identify trends and try to help people better understand the impacts that bots have."

So every year we take all of our data across all of our customer base and try to glean some insights that are hopefully actionable for people to be able to better secure themselves or understand the problem better. And one of the pieces of insight that we've noticed is that mobile bots are getting more and more prolific. Now it's important to say that these aren't actually mobile bots; it's people spoofing mobile bots. They're saying that they're mobile, but they're not actually running on mobile devices.

But as we see more mobile devices now online than there are desktop devices, it stands to reason that people wanna make sure that they don't impact mobile traffic, that they whitelist that, so it's an easier place for bad guys to hide. They just pretend to be mobile Safari or mobile Chrome and then they spoof their way in. What's interesting is that 92% or 93% of the bad bots that we identify aren't really emulating or running on an actual device. They're just pretending, they're spoofing those user agents like we talked about and pretending to be a mobile device.
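For context on how little "spoofing a mobile user agent" actually takes, here is a trivial TypeScript sketch. The URL is a placeholder and the user agent string is just an example Mobile Safari value; nothing about the request runs on a phone, which is exactly the point.

```typescript
// A few lines are enough to "be" Mobile Safari: only a header changes.
// The URL below is a placeholder, not a real target.
const MOBILE_SAFARI_UA =
  "Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) " +
  "AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1";

async function fetchAsFakeIphone(url: string): Promise<string> {
  const response = await fetch(url, {
    // The only "mobile" thing about this request is the claimed user agent.
    headers: { "User-Agent": MOBILE_SAFARI_UA },
  });
  return response.text();
}

fetchAsFakeIphone("https://example.com/").then((html) =>
  console.log(`Fetched ${html.length} bytes while claiming to be an iPhone`)
);
```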

Doug: That's really interesting. And my next question was actually on industry. This is why I was getting my wires crossed. And so are they targeting a specific industry with these bad bots? Like do some people have this problem worse than others?

Rami: There's definitely bigger targets. And if you look at the Verizon annual data breach report, which talks about data breaches and security vulnerabilities, they rank the value of stolen data, and it really corresponds well to where bad bots are targeting. Healthcare and ecommerce, those are the top two places that we see bad bots, because there's a lot more money to be made off of that information, off of those websites, off of those transactions. And then you look at some places like, for example, travel aggregators. There's a lot less money to be made there and you see a lot less bots impacting travel aggregators.

Paul: Has the authentication problem or potential solutions helped this? I know that there are some companies that are working on basically authenticating you the way you hold your phone, what Wi-Fi networks you are connecting to. Can that also help this problem and how does like biometrics play into the equation?

Rami: Yeah. I mean, I think eventually we need to move away from passwords. We've been talking about it for so long and we're seeing some innovation in that space for authentication, but it really hasn't hit mainstream. In fact, most customers that we have, if they're using any kind of authentication like that, they're doing it passively, not actively. They're just putting their toe in the water on this stuff right now. It will help. Overall, once we get to a place where passwords aren't a thing and there's deeper authentication based on biometrics, that avenue of brute force login attack and account takeover, those credential stuffing attacks will go away as a vector for bot attacks. But that doesn't solve the overall bot problem.

Paul: And mostly these attacks occur on weekends when IT staff isn't there, which is all...I mean, you have data to back that up, right?

Rami: Yeah. You read the report. That's great. I mean, we find that, you know, bad guys tend to focus their attacks Friday through Monday morning because, you know, there's a lower staff on hand. A lot of people think of hacking as something that happens passively and then we react to. We have a managed solution. So not only do we have a software as a service platform that works and mitigates bots, but we actually have real people that react, too, because sometimes bots beat us. I'll be the first to say we're not 100%. Nobody can be 100% at this problem.

So we have real people that are kind of operating as a managed SOC that are reacting to this. And so what we find is that we're actively fighting back and forth sometimes with an active attack. We change something, we catch them, and then they iterate, and then they change something else, and then we have to continue to react to that. And we find that the bad guys prefer doing that on the weekends because there's less staff there to react to this problem, whether it's us directly or really when you think about enterprises, most of their staff is at home and they're just on call and then they see an even easier target to attack.

Paul: This problem is greatly magnified, in my opinion, especially what I've seen recently from YouTube and Facebook in...basically their ad-serving system or other ways, like we've gotten some YouTube comments that I'm like, "Well, that's obviously spam. And that must be something new and YouTube is identifying it." I think a lot of the identification methods are looking at the result of them posting a comment or posting an ad, whereas your technology can help them get ahead of that problem. How big is the problem of bots for big players like Google and Facebook and what can you share with us as to how they're addressing that problem? It always seems like a chicken and the egg kind of issue.

Rami: Yeah, so there's two places that we see really big problems in the publisher world, the social media world. One is advertising. And advertising has been estimated to be a multibillion-dollar fraud problem related to bots. Bots are attributed anywhere from $3 to $10 billion of digital ad fraud every year. So that's a huge number in itself. But then there's a bigger problem right now with social media influence. If you think about it, why buy an ad for Facebook when you can create bots that then amplify your message on their own and more organically and seem more legitimate?

We tend to ignore ads in scrolling, but we tend to read what our friends and our family post in their feeds. So that has been a more effective use of bots. And, you know, the problem is for a long time social media companies put their head in the sand because what do they report to the Street? They report the number of daily active users, right? So the number that they're being measured on means having more bots is a good thing. And so for the first time ever, Congress paraded the social media companies down and had them testify and they're feeling some pressure to clean up their act. But they're just starting on this journey right now.

Doug: Yeah. Do you think AI starts to leak into that world as well so that you can start seeing bots that are actually more intelligent that can evolve into even more powerful bots so that they say, "I created this thing, now it's gonna code itself into a higher state of being so that it can actually figure out what it wants to do and what you want it to do." So you just sort of set it in motion and turn it loose?

Rami: I mean, that's like a year before, you know, we see the D-day happen, right? I think it's gonna get more and more sophisticated. You know, whether we see, you know, machines actually self-realize and wake up, I don't know. But we are seeing bots get to a place where the sophistication continues to evolve every year. It's not to a place where it's AI, but what we're seeing is bots right now come from residential IPs. They manipulate. They come from real browsers.

It's still very purpose-built. It's not self-learning. They're not even using machine learning let's say to randomize their fields because they don't have to yet. You know, just a basic Selenium script can get by 70%, 80% of most website defenses out there, right? And that doesn't even involve, you know, much sophistication at all. A script kiddie can create a Selenium script. But as we put more and more pressure, because I'm not the only company that does this, there's a lot of others, as we put more and more pressure on these bad guys to evolve, I think they will continue to evolve and get smarter and smarter.

Paul: And so, along these lines, there was a sophisticated bot. It was called GiftGhostBot. Can you tell us about that?

Rami: Yeah. I mean, you're being a lot nicer this year than when we first met when you really ripped into marketing messaging versus actual data, right? This is the marketing name. We had to label it something. But, you know, what we did find was a really sophisticated operation in which they used a very distributed botnet that attacked a wide variety of retailers, over a thousand different retailers. And what they were doing is stealing gift cards. And they took essentially the known gift card formats and then they went in and built a list of, you know, over a thousand different retailers that they targeted.

One organized crime unit targeted them all at once in a three- to four-week time span. And we saw really, really massive implications. We worked with, you know, federal agencies to try to chase that down. We protected our customers from it. But we saw this impact a lot more than just our ecommerce and retail customers. Big brands that you know and have heard of took down their online gift card processing systems altogether, because they were losing millions of dollars to this kind of fraud. And this happened earlier this year. I think it was April that we saw this sophisticated attack happen. We estimate anywhere from $30 or $40 million all the way up to nine figures was stolen in that attack.

Paul: And is that largely...the way I understood that problem was the gift card numbers can be somewhat predictable. I mean, some of them are just sequential. I'm sure there's other tactics in there, but they were using bots to basically figure out like what the next gift card was?

Rami: Yeah. I mean, it's really simple. You have a gift card that has a four-digit PIN and a number that falls into one specific range, right? And essentially, they just went through and brute forced their way through all the different possible combinations. You know, the gift card number looks like a credit card number, and then it has a four-digit PIN. And so for every number, they just had to crunch through 10,000 different combinations of PIN until they got through, one number at a time, but for a script, that doesn't take long at all.
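As a rough sense of scale, here is the back-of-the-envelope arithmetic in TypeScript; the request rate is an illustrative assumption, not a figure from the episode.

```typescript
// Back-of-the-envelope arithmetic behind the GiftGhostBot description.
// The request rate is an assumption for illustration only.
const pinDigits = 4;
const pinsPerCardNumber = 10 ** pinDigits;   // 0000 through 9999 = 10,000 guesses

const assumedRequestsPerSecond = 50;         // illustrative bot request rate
const secondsPerCardNumber = pinsPerCardNumber / assumedRequestsPerSecond;

console.log(
  `Worst case per card number: ${pinsPerCardNumber} guesses, ` +
  `about ${(secondsPerCardNumber / 60).toFixed(1)} minutes at ` +
  `${assumedRequestsPerSecond} requests/second`
);
```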

Paul: Right. Right. That's pretty scary.

Doug: Yeah, I mean, that starts to sneak into big data ideas too of trying to predict behavior, trying to predict things out there. And so if you can write sophisticated bots based on big data ideas, you may be able to start leading bots down these roads to where they predict something you should be attacking which I think is pretty interesting stuff too.

Paul: A lot of folks that I speak with, Rami, I don't think that...even I don't fully understand how this helps an enterprise, right. I mean, we talked about big companies, Google and Facebook, certainly have this problem at ridiculous scale. But if I'm an enterprise and I've got some smaller web properties, you know, how can this help me in my web application strategy? People talk to me about the alphabet soup in web application protection, right? It's RASP and it's WAF, and it's all this fancy stuff. But I always pitch this as, well, this can help you. So how can this help the enterprise today that's dealing with that application security problem?

Rami: Well, a couple of things. What is the point of your website, right? If you have transactions that you have online, if you have some sort of login where, you know, you're protecting some asset from the public world, that's a target for bots, right? Bots are going to try to breach through your login page, and they're going to try to exploit the transactions that you have online. You know, 57%, 58% of all web application security breaches start with a brute force credential stuffing attack where they get through a login, because what we find is, you know, everything that you put out to the public web is pretty innocuous.

You know, we do a lot more testing on that, but we expose a lot more systems once we think that somebody has had to log in and authenticate to get in. We expose more databases, we expose more functionality. And so the bad guys know that and they know that step one to get access to more things is to brute force their way into your login. But beyond that, again, it's just about good hygiene against vulnerability scanners. If you were to talk to any pen tester, what do you start with? You start with a vulnerability scanner and you find where the holes are, and that's what the bad guys are doing. A bot mitigation solution in front makes sure that anything that they probe comes back with nothing, right? So they don't get to figure out where the vulnerabilities are, and they don't know where to get started. And that's just about good hygiene.
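A small, hedged sketch of that "probes come back with nothing" idea, as a TypeScript/Express middleware: the scanner paths and rate limits are illustrative assumptions, and this is nowhere near a full bot-mitigation product. It just shows the kind of basic hygiene rule being described.

```typescript
// Minimal "hygiene" sketch, not a bot-mitigation product: give common scanner
// probes an empty 404 and cap per-IP request rates. Paths and limits are
// illustrative assumptions.
import express, { Request, Response, NextFunction } from "express";

const SCANNER_PATHS = [/^\/wp-login\.php/, /^\/phpmyadmin/, /^\/\.env/];
const WINDOW_MS = 60_000;                 // one-minute window
const MAX_REQUESTS_PER_WINDOW = 120;      // assumed per-IP ceiling
const hits = new Map<string, { count: number; windowStart: number }>();

function quietProbes(req: Request, res: Response, next: NextFunction) {
  // Known scanner paths get a bare 404: no banner, no error page to fingerprint.
  if (SCANNER_PATHS.some((pattern) => pattern.test(req.path))) {
    res.status(404).end();
    return;
  }

  // Simple fixed-window rate limit per client IP.
  const ip = req.ip ?? "unknown";
  const now = Date.now();
  const entry = hits.get(ip) ?? { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  hits.set(ip, entry);

  if (entry.count > MAX_REQUESTS_PER_WINDOW) {
    res.status(429).end();
    return;
  }
  next();
}

const app = express();
app.use(quietProbes);
app.get("/", (_req, res) => res.send("hello"));
app.listen(3000);
```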

Paul: One of the resources that we talked about in prep for this interview, and that I wasn't aware of: OWASP has a lot of great resources. Specifically, they have a paper and some research on automated threats to web applications. Can you tell us about that research project and maybe how you've incorporated it?

Rami: Yeah. So OWASP, for a number of years, has been working on a handbook to help educate people about bots and what they call automated attacks. So they outline all the different types of attacks and why bots are doing this. It's, you know, a meaty handbook. It's, I think, 70, 80 pages in which they detail the types of attacks, you know, how vulnerable you are to them, and some of the things you can do to mitigate this. We've worked with them to both inform the types of attacks that we see and inform the mitigation strategies.

But we've also tried to incorporate their verbiage into how we communicate with our customers. I think it's important for us as an industry to all speak the same language. And if we do that then maybe we can all get on the same level playing field. So we've worked hand-in-hand with OWASP on contributing to that handbook. And it's a great first-step guide to understanding the problem. If you don't wanna read through 80 pages, our marketing team has taken those guides and turned them into little web video snippets so you can watch them in five minutes and get, you know, a higher-level digest of that as well.

Paul: That's awesome. Yeah, because if you look at the list, it's great. And I love all the names for the different attacks, right, they talk about footprinting, scalping, scraping, skewing, sniping, spamming, token cracking, it just sounds like a fantastic read to learn about all these different attacks. And the videos are on your site at distilnetworks.com?

Rami: Yeah. If you go to distilnetworks.com you can access the videos. It just helps, you know, indoctrinate people into understanding the problem a little bit easier.

Paul: That's awesome. And we also have a webcast coming up here on the Security Weekly Network. You can subscribe to our mailing list, securityweekly.com/insider, for the Security Weekly Insider mailing list. We'll be sending out the registration for that and publishing it, so stay tuned to future episodes for more information about that. That's gonna be a lot. In the meantime though, check out those OWASP resources and the videos on Distil Networks website, kind of in prep for that. We'll delve into a lot more details on the report that Distil is working on. Rami, anything else you wanna share with our audience today?

Rami: Yeah. I think that, you know, just talking about the webcast, you know, we've talked a lot about us as a company. I think the webcast, we're gonna focus more so on bots, the problem, the trends that we see, and also some things that you can do. You don't have to buy Distil's service to, you know, take some basic actions to protect yourself. So we'll talk about some of the things that you can actually do, some basic rules you can put in place at your firewall, at your WAF, some things you should look out for to understand whether or not you have that problem. I think it's important for people to just do some basic things even if they don't go as sophisticated as a full-blown solution to mitigate the problem. You should not say, "Hey, I'm gonna do all or nothing," you should do some fundamentals there and we'll talk about those on the webcast.

Paul: Fantastic. Rami, thank you so much for appearing on Enterprise Security Weekly.

Rami: Hey, thanks so much for having me and hope to catch your talk there, Doug.

Doug: Oh, yeah, I'll definitely look you up at RSA. So it will be nice to meet you in person.

Rami: Yeah. Bring me a cigar, I'll buy you a whisky.

Paul: There you go.

Doug: I will do just that. I'll steal it from Paul.

Paul: I got plenty. It's cool. Thanks so much, Rami.

Rami: Thanks, guys.

Paul: And with that, that concludes this interview. Thank you everyone for listening and watching.

 
