Based on a webinar collaboration with Derek Brink, Vice President and Research Fellow at Aberdeen Group.
There is good news and bad news for IT and security executives in the latest Bad Bot Report from Distil Networks. The good news is that there has been an overall decrease in the number of bad bots reported. The bad news is that advanced persistent bot (APB) activity is on the rise.
Nearly half of all web traffic (46%) now originates from bots, with 19% coming from bad bots. It’s this 19% that causes the majority of problems. Bad bots are quite diverse in nature and vary from website to website, depending on what industry you’re in and what type of data is of value to hackers or competitors.
Also, with the growing popularity of public APIs (i.e., APIs exposed to developers outside the company that owns them) come security risks, as their very popularity makes them a target for cybercriminals. The bad guys now use bots for API scraping, as well as web and mobile API hijacking. API abuse can be caused by bots, as well as by runaway scripts, integration bugs, and overzealous business partners.
In relation to online fraud, both bots and humans can generate transaction fraud and chargebacks.
The Rise of Advanced Persistent Bots
There is ever increasing sophistication in the bad bot threat landscape. So much so that the data from our annual Bad Bot Report warranted the creation of a new term to describe the phenomenon: Advanced Persistent Bots.
Bots Acting Human
Nearly 40% of bad bots can mimic human behavior, meaning tools such as web application firewalls (WAFs), web log analysis, and network firewalls—which perform less detailed analysis of clients and their behavior—will likely deliver a large number of false negatives.
APBs have one or more of the following abilities:
- Mimic human behavior
- Support cookies
- Automate browsers (Selenium, PhantomJS)
- Dynamically rotate IP addresses
- Distribute attacks across many IP addresses
- Hide behind anonymous and peer-to-peer proxies
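To make the first item concrete, here is a minimal, hypothetical sketch of the kind of naive server-side check that only catches unsophisticated automation. The token list and function names are illustrative, not from any particular product; an APB that mimics human behavior sails straight past string matching like this.

```python
# Hypothetical sketch: flag requests whose User-Agent names a known
# automation framework. Token list is illustrative only; an APB spoofing
# a real browser's User-Agent defeats this check entirely.
AUTOMATION_TOKENS = ("phantomjs", "headlesschrome", "selenium", "python-requests")

def looks_automated(user_agent: str) -> bool:
    """Return True if the User-Agent string names a known automation tool."""
    ua = user_agent.lower()
    return any(token in ua for token in AUTOMATION_TOKENS)

print(looks_automated("Mozilla/5.0 (compatible; PhantomJS/2.1.1)"))  # True
print(looks_automated("Mozilla/5.0 (Windows NT 10.0) Chrome/58.0"))  # False
```

The point of the sketch is its weakness: it only works against bots that announce themselves, which is precisely what APBs no longer do.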
APBs Change Their Identities and Distribute Their Attacks
The persistency aspect comes from their ability to evade detection using tactics such as dynamic IP rotation (from huge IP address pools), using Tor networks and peer-to-peer proxies to obfuscate their origin, and distributing attacks over hundreds of thousands of IP addresses.
For example, one bot might go through 1,000 IP addresses to make one request apiece, instead of a single IP address to make 1,000 requests.
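A quick illustrative sketch shows why this pattern defeats per-IP thresholds. The limit of 100 requests is an assumed value for illustration: the single-source bot trips it immediately, while the same 1,000 requests spread across 1,000 addresses at one request apiece trigger nothing.

```python
from collections import Counter

PER_IP_LIMIT = 100  # assumed per-IP threshold, for illustration only

def flagged_ips(requests, limit=PER_IP_LIMIT):
    """Return the set of IPs whose request count exceeds the limit."""
    counts = Counter(requests)
    return {ip for ip, n in counts.items() if n > limit}

# One IP making 1,000 requests vs. 1,000 IPs making one request apiece
single_source = ["203.0.113.1"] * 1000
distributed = [f"10.0.{i // 256}.{i % 256}" for i in range(1000)]

print(flagged_ips(single_source))  # {'203.0.113.1'}
print(flagged_ips(distributed))    # set() -- nothing exceeds the limit
```

Same traffic volume, same intent; only the distribution changes, and the per-IP defense goes blind.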
In fact, bad bots rotating IP addresses is now commonplace:
- 73% of bad bots rotate or distribute their attacks over multiple IP addresses
- 20% of the ones that distribute their attacks exceed 100 IP addresses during the course of their operations

Bad bots are also changing their identities en masse:
- 36% of bad bots disguise themselves using two or more user agents
- The worst APBs change their identities over 100 times

As shown in the table below, about three-quarters of bad bots rotate or distribute their attacks over multiple IP addresses, and of those, about 20% exceeded 100 IP addresses.
Bots that dynamically rotate IP addresses or distribute their attacks are significantly harder to detect and mitigate.
Why Homegrown Solutions are Ineffective Against Advanced Persistent Bots
Many of Distil’s customers have tried to tackle the bot problem on their own by using CAPTCHAs or through log analysis—after which they write custom scripts or rules to block IPs or rate-limit their traffic.
CAPTCHAs lower conversion rates and can be easily defeated by advanced bots. Log analysis is not much better: not only is it expensive to perform, but the most sophisticated bots masquerade as humans in logs—especially those using random IPs or distributing their attacks over hundreds of thousands of IPs.
Once you find and block an offender, it can simply change IPs and return unhindered. IP blocking and rate limiting are also reactive tactics, and both can be thwarted by dynamic IP rotation and distributed attacks.
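The homegrown approach described above often amounts to a script like the following hypothetical sketch: parse yesterday's access log offline, count hits per IP, and emit a blocklist. The log format and threshold are assumptions for illustration. Note that the whole pipeline is reactive—by the time the blocklist ships, a rotating bot has already moved on to fresh addresses.

```python
from collections import Counter

THRESHOLD = 500  # arbitrary cutoff for illustration

def blocklist_from_log(lines, threshold=THRESHOLD):
    """Count requests per IP (assumes the IP is the first whitespace-separated
    field, as in common log format) and return IPs over the threshold."""
    hits = Counter(line.split()[0] for line in lines if line.strip())
    return sorted(ip for ip, n in hits.items() if n > threshold)

# A toy log: one noisy IP repeated 600 times
sample = ['198.51.100.7 - - "GET /pricing HTTP/1.1" 200\n'] * 600
print(blocklist_from_log(sample))  # ['198.51.100.7']
```

Against the distributed APBs described earlier, the same script returns an empty list, because no single address crosses the threshold.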
WAFs Designed to Solve a Different Problem – Ineffective Against APBs
When discussing bot problems with prospective clients, Distil often hears, “I’ve got a web application firewall (WAF) to handle that.” However, WAFs were never designed to manage the volume, variety, and sophistication of today’s bots.
Instead, they identify and block application exploits looking to attack a coding vulnerability. They are IP-centric and use attack signatures. But in the world of bots, there are no signatures. Bots aren’t limited to perpetrating website attacks; rather, they programmatically abuse and misuse websites—resulting in a wide assortment of problems.
Bots are dynamic; they can attack anything. If a hacker can dream up a way to misuse a site or its data, then they can create a bot to do it. For this reason, WAFs and bot detection solutions solve different problems.
The evolution of bad bots over the years has been such that they can now bring serious damage to organizations that are not prepared for them. Whereas five years ago bad bots were mostly doing web scraping and form spam, over the past two years or so they have become capable of much more sophisticated actions.
Bad bots are now capable of activities such as transaction fraud, account takeovers, API hijacking, and uncovering vulnerabilities in IT infrastructures. They can have an impact on a company’s analytics as well as its infrastructure performance.
Bad bots, including APBs, are affecting companies in diverse sectors such as financial services, digital publishing, healthcare, ecommerce, real estate and transportation. The damage bad bots can do is only limited by the innovation of their operators.
Assessing the Risk and Economic Impact of Bad Bots
Organizations must make it a priority to assess the true risks of bad bots and determine the best technology investments they can make to address the challenge, according to Derek Brink, vice president and research fellow at research firm Aberdeen Group.
Aberdeen research shows the likely risk of bad bots ranges between 1.8% and 7.6% of the annual revenue that the site contributes to the organization. The median is about 4%, or roughly $4 million for a site generating $100 million in revenue; because the risk is expressed as a percentage of revenue, the dollar impact scales up or down with the size of the site.
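The arithmetic behind those figures can be worked through directly. The percentages below are the ones quoted from the Aberdeen research; the $100 million revenue figure is the article's own example.

```python
# Aberdeen's quoted risk range as fractions of site revenue
LOW, MEDIAN, HIGH = 0.018, 0.04, 0.076

def risk_range(site_revenue):
    """Return (low, median, high) annual revenue at risk, in dollars."""
    return tuple(round(site_revenue * r) for r in (LOW, MEDIAN, HIGH))

low, median, high = risk_range(100_000_000)
print(f"${low:,} - ${high:,} at risk, median ${median:,}")
# The median works out to $4,000,000, matching the "about four million" figure
```

Plugging in a different site's revenue scales all three dollar figures proportionally, which is the point of expressing the risk as a percentage.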
“You have to ask the business decision makers, ‘are you willing to accept the risk of between 1.8% and 7.6% of our annual revenue to bad bots, with the median being about 4%?’” Brink says. “If that's an acceptable risk, if you're willing to deal with that, then let's just keep doing what we're doing.” Enterprises can take steps to address the risk of bad bots.
“You can choose to accept it. You can choose to assign it to somebody else. You can choose to try to reduce it to a more acceptable level” by deploying technology such as Distil Networks’, Brink says. The security team is responsible for determining what the risk is, how it can be reduced to a lower level, and how much that will cost.