Brute Force Attacks And The Non Human Internet

August 30, 2017 Bob Tarzey

Activity on the internet is often measured in terms of the volume of data being accessed. Unsurprisingly, using this measure, video streaming of various forms of content has come to dominate. Another measure is user activity, based on the arrival of someone at a website to request a resource or to carry out a transaction.

You might think that measuring such activity would provide an indication of the number of humans actively using the web at any one time. You would be wrong; on average 46% of such activity is not human, but carried out by software robots (usually referred to simply as bots), doing automated tasks or mimicking human behavior.

Bots have been evolving as fast as the web itself. Early bots, like web crawlers that index web content for search engines, were implemented with good intent and the web would not work without them.

However, over the years, bots have been put to more nefarious uses; the activity of many falls at best into grey areas, such as web scraping, which is used, for example, by price comparison sites. Some bot activity is just plain bad, such as brute force attacks for account takeover.

Attackers also distribute their activity across many IP addresses, reducing the number of requests per IP and allowing attacks to “fly under the radar.” After all, 1,000 IP addresses making one request each achieves the same goal as 1 IP address making 1,000 requests.

Why take over accounts, and what is a brute force attack?

There are many reasons for wanting to take over accounts and many types of accounts to take over. That an account has been compromised may not be clear to the resource owner; in most cases the aim is for it to appear, at least in the short term, as if a valid user is going about a valid activity.

Furthermore, because of the way access details are traded on the dark web, the criminal use of an account may come some time after the initial takeover. In some cases, high value accounts are directly targeted. In the February 2016 attack on the Central Bank of Bangladesh, raiders gained access to a bank operative’s SWIFT credentials and moved $81M in funds.

Hacktivists seek direct access to the corporate social media accounts of organizations they object to; an unauthorized Tweet or Facebook post can cause embarrassment and undermine reputation. Unscrupulous businesses seek access rights to steal intellectual property from rivals, national security agencies do the same for cyber-spying.


Other account takeovers are opportunistic. Given a long list of usernames for a given resource, such as a social media site, each can be tested against likely passwords. Even if a taken-over account is not of direct value, the same credentials may work elsewhere, perhaps in conjunction with other data stolen during the initial compromise.

The initial pairing of a username and password has identified an individual that is a potential soft target worthy of further investigation. Once entry has been gained, details can be updated. Delivery addresses may be added or changed so that goods can be ordered and sent to an address the criminal has access to.

With crime, be it petty or organized, the ultimate aim is to monetize the account takeover. With some resources, such as tickets for events that are often supplied as PDF downloads, this is relatively easy. The operator of the targeted site has not been breached in the traditional sense, but has unwittingly become part of a chain of criminal activity through account takeover.

The business cost of account takeovers and brute force attacks

The direct costs of fraud are obvious, be it the movement of funds from a bank or the purchase of goods via a compromised account that are not wanted or received by the account owner. The way a site owner chooses to manage its relationship with its real customers will vary.

Doing so badly contributes to a second indirect cost, reputational damage. For the media, compromised users make a good story, regardless of how the fraud was perpetrated, and consumer rights advocates can be unforgiving.

Reputational damage undermines customer confidence and business will be lost. It is best to avoid such events in the first place, but only so much can be done to force the use of long, complex passwords and encourage users not to share access credentials across multiple accounts.

There are other indirect costs too: bot-based account takeover often uses a brute force approach. Given a list of one million usernames, checking them all against an obvious list of one thousand passwords would lead to one billion login attempts.

This is a drain on resources and can lead to performance degradation similar to that experienced during a distributed denial of service (DDoS) attack; more business will be lost due to transaction abandonment.

Bot activity can also skew metrics. An organization facing millions of failed login attempts would struggle to spot real user problems lurking in the background, and would have little idea what percentage of page views came from real users, especially as the developers of bad bots have become better and better at making them appear like real humans.

Account Takeover Bots

Twenty types of automated threats (bad bots) are described in a new taxonomy from OWASP (the Open Web Application Security Project). Each is given an OAT (OWASP Automated Threat) code.

Two are of interest when it comes to account takeovers and both are forms of brute force attacks:

1. OAT-007 Credential Cracking: Identifying valid login credentials by trying different values for usernames and/or passwords.

2. OAT-008 Credential Stuffing: Mass login attempts used to verify the validity of stolen username/password pairs.

Credential Cracking is sometimes referred to as dictionary brute force.

That is, if you know a username for a resource, cycle through every word in the dictionary to try and find the password. This is why we are all encouraged to add numbers and other characters into passwords.

But hackers are savvy about this and test obvious replacements, such as ‘0’ for ‘o’. Longer, more complex passwords will be harder to hack, but there is more and more low cost cloud-based computing resource available that can do the job. The symptoms of dictionary brute force include high numbers of failed login attempts and increased complaints regarding account lockouts.
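The substitution trick is cheap for an attacker to automate. As a minimal sketch (the substitution map and word list are illustrative assumptions, not from any real cracking tool), a dictionary attack can expand every word into its common “leetspeak” variants before trying them:

```python
from itertools import product

# A hypothetical, minimal map of the substitutions attackers try automatically.
SUBSTITUTIONS = {"o": ["o", "0"], "a": ["a", "@"], "e": ["e", "3"], "i": ["i", "1"]}

def variants(word):
    """Yield every combination of common character substitutions for a word."""
    choices = [SUBSTITUTIONS.get(ch, [ch]) for ch in word.lower()]
    for combo in product(*choices):
        yield "".join(combo)

# A dictionary word expands into only a handful of extra guesses, which is why
# simple substitution adds so little real strength to a password.
print(sorted(variants("pass")))      # ['p@ss', 'pass']
print(len(list(variants("password"))))  # 4 (two substitutable letters: 'a', 'o')
```

Each substitutable letter merely doubles the guess count, so a dictionary of a thousand words still yields only a few thousand candidates per account.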

It would not have taken much to brute-force Facebook supremo Mark Zuckerberg’s Twitter and Pinterest accounts, both of which shared the same password, ‘dadada’, when hackers broke in in June 2016.


Credential stuffing exploits our propensity to reuse passwords across multiple sites. If the infrastructure of a site owner is compromised and a list of usernames and passwords stolen, then the list can be checked against other resources. Of course, a responsible site owner should encrypt such information, but many do not, and compromised admin accounts may lead to encryption keys being compromised too.

The account details that were included in the data stolen from the Ashley Madison adulterous dating site in July 2015 were used in this way. It may not initially appear very useful to be able to access a stranger’s plans for infidelity (unless they are a celebrity) but the account details were run against other resources to look for cases where users had used the same credentials. In most cases they will not, but just a few out of millions is useful for direct use or re-sale on the dark web.

The ticketing site StubHub says some of its users were compromised in this way. Anyone with access to a compromised account can access as much personal data as the valid account owner has posted: name, address, phone number, email address etc. This can be used to deepen attacks on an individual, once an online presence has been compromised in one area it may spread to others, especially if the reason for compromise in the first place was due to poor practice.
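Because credential stuffing tries each stolen pair only once, it leaves a different footprint from credential cracking: one source touching unusually many distinct accounts rather than hammering one account. A rough detection sketch (the threshold and the idea of keying on a source fingerprint such as IP plus user agent are illustrative assumptions) might look like this:

```python
def stuffing_suspects(login_events, min_accounts=50):
    """Flag source fingerprints that attempt logins against unusually many
    distinct accounts -- the signature of credential stuffing, which tries
    each stolen username/password pair only once per account.

    login_events: iterable of (source_fingerprint, username) tuples.
    min_accounts: illustrative threshold, tuned per site in practice.
    """
    accounts_per_source = {}
    for source, username in login_events:
        accounts_per_source.setdefault(source, set()).add(username)
    return {s for s, users in accounts_per_source.items()
            if len(users) >= min_accounts}

# One source probing 60 different accounts stands out; a user fumbling
# two of their own accounts does not.
events = [("198.51.100.7/UA-x", f"user{i}") for i in range(60)]
events += [("192.0.2.1/UA-y", "alice"), ("192.0.2.1/UA-y", "bob")]
print(stuffing_suspects(events))  # {'198.51.100.7/UA-x'}
```

In practice attackers rotate source fingerprints too, which is why the article goes on to argue that per-IP controls alone are not enough.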

Brute Force Attack Mitigation

There are a number of approaches that online application and website owners can take to mitigate account takeover bots.

1. Limiting login attempts: It is quite common to set a limit on login attempts, for example three, which makes dictionary brute force all but impossible. The trouble is that valid users often hit this limit because they forget passwords, so sites with a low security threshold often don’t bother. Furthermore, limiting login attempts does nothing to stop credential stuffing, which uses only one attempt per account.
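A minimal version of this control tracks recent failures per account and locks out further attempts past a threshold. The threshold and window below are illustrative assumptions; real systems persist this state and add back-off or notification logic:

```python
import time
from collections import defaultdict

LOCKOUT_THRESHOLD = 3      # failed attempts allowed (illustrative)
LOCKOUT_SECONDS = 15 * 60  # how long failures count against the account

_failures = defaultdict(list)  # username -> timestamps of recent failures

def record_failure(username, now=None):
    """Record a failed login attempt for an account."""
    _failures[username].append(now if now is not None else time.time())

def is_locked(username, now=None):
    """True if the account has hit the failure threshold within the window."""
    now = now if now is not None else time.time()
    # Drop failures that have aged out of the lockout window.
    recent = [t for t in _failures[username] if now - t < LOCKOUT_SECONDS]
    _failures[username] = recent
    return len(recent) >= LOCKOUT_THRESHOLD
```

Note that a credential stuffing bot making a single attempt per account never trips this counter, which is exactly the weakness the article describes.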

2. Strong authentication: There are many types of strong authentication, often requesting additional information following the successful entry of a username and password pair. So, the first thing to be said is that this does not stop credentials being tested to identify accounts for further probing, once the effort is known to be worthwhile. Mobile phones used to send one-time codes can be spoofed, and so on; enough information may have been picked up from a low-security social media account to target a more valuable bank account.

3. Are you human? One obvious way to catch out bad bots is to ask the user to prove they are human. This is the aim of CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). They annoy some users who find them hard to interpret and, needless to say, workarounds have been developed. One of the bad bot activities described by OWASP is CAPTCHA Bypass (OAT-009). There are also CAPTCHA farms, where completing the test posed by the CAPTCHA is outsourced to teams of low cost humans via sites on the dark web.

4. IP blacklisting: Some bad bots use the same set of IP addresses over and over again, and these can be blocked using network defenses. However, most bad bot operators are savvy to this and will constantly switch IP addresses; in any case, the short-term cloud instances they buy to power their attacks will have variable IP addresses. It is hard to keep IP blacklists up to date, so most bots would not be blocked in this way.
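The mechanism itself is simple, which is part of why it ages so badly. A sketch using Python’s standard `ipaddress` module (the blacklisted ranges below are documentation-reserved test networks, used purely as placeholders):

```python
import ipaddress

# Hypothetical blacklist of networks associated with bad bot traffic.
# These are RFC 5737 documentation ranges, used here only as placeholders.
BLACKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blacklisted(addr: str) -> bool:
    """True if the address falls inside any blacklisted network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLACKLIST)

print(is_blacklisted("203.0.113.9"))  # True
print(is_blacklisted("192.0.2.1"))    # False
```

The check is cheap; the hard, losing part is keeping the list current as attackers rotate through short-lived cloud addresses.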

5. Rate limiting: Checking for peaks in network traffic volume can be used to identify unusual behavior, such as might be caused by a credential cracking bot. However, just monitoring transaction rates alone is not enough to recognize a particular behavior and anyway, what’s the hurry? Attacks can be run over a period of hours or days. This may be necessary anyway to get around login limits. Such ‘low and slow’ attacks cannot be rate limited.
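A sliding-window limiter makes the evasion concrete. In the sketch below (limits and timings are illustrative assumptions), a burst of requests is throttled, while an attacker spacing requests a few seconds apart stays permanently under the limit:

```python
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client (sketch)."""

    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.hits = {}  # client -> deque of recent request times

    def allow(self, client, now):
        q = self.hits.setdefault(client, deque())
        # Discard requests that have slid out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(limit=10, window=60)

# A burst of 12 requests in 12 seconds trips the limiter after 10...
burst = [limiter.allow("fast-bot", t) for t in range(12)]
# ...but a 'low and slow' attacker pacing one request every 7 seconds never does.
slow = [limiter.allow("slow-bot", t * 7) for t in range(100)]
print(sum(burst), all(slow))  # 10 True
```

This is why rate limiting on its own cannot distinguish a patient bot from a patient human: the limiter sees nothing abnormal in either.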

6. Web application firewalls (WAF): WAFs are designed to protect web applications from being exploited due to the presence of common software vulnerabilities. Most bots are not targeting vulnerabilities but aiming to mimic valid human behavior. This is certainly the case with account takeover bots. One exception is vulnerability scanning bots (OAT-014 in the OWASP handbook), but even these are mimicking the dynamic software scanning process that many organizations regularly carry out across their web-facing applications.

7. Direct bot detection and mitigation: With 46% of all website visits now made by non-human bots, the issue has escalated to the point where specific mitigation makes sense. The basis for this is to be able to distinguish bots from humans and then classify a bot in real time through techniques including behavioral analysis and digital fingerprinting. Specialist tools to achieve this have become more necessary with the increasing sophistication of bad bots.
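To give a flavor of behavioral analysis, one crude signal is the regularity of inter-request timing: humans click at irregular intervals, while a naive bot fires on a near-fixed schedule. The heuristic below is a toy illustration with invented thresholds, nowhere near the multi-signal analysis commercial tools perform:

```python
import statistics

def looks_automated(timestamps, min_requests=20, cv_threshold=0.1):
    """Toy behavioral heuristic: flag a session whose inter-request gaps are
    suspiciously regular (low coefficient of variation). Thresholds are
    illustrative assumptions, not production values."""
    if len(timestamps) < min_requests:
        return False  # not enough evidence either way
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return True  # all requests at the same instant
    cv = statistics.pstdev(gaps) / mean
    return cv < cv_threshold

# A metronomic session (one request every 2 seconds) is flagged.
bot_like = [i * 2.0 for i in range(30)]
print(looks_automated(bot_like))  # True
```

Advanced persistent bots deliberately randomize timing to defeat exactly this kind of check, which is why real detection layers many signals (fingerprints, JavaScript challenges, navigation patterns) rather than relying on one.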

We estimate that 88% of all bad bots are now advanced and persistent rather than simple. For example, advanced persistent bots (APBs) mimic human behavior, load JavaScript, support cookies and spoof IP addresses.

Account takeover is a necessary step for much online crime and espionage to succeed. Even if users can be persuaded to adopt better practices online, cleverer bots with ever more processing power behind them will be used to mimic login processes and either crack credentials or make use of stolen lists of users. The best way to prevent this is to identify bots at an early stage, understand their purpose and, if their intent is bad, stop them in their tracks.

About Quocirca

Quocirca is a research and analysis company with a primary focus on the European market. Quocirca produces free to market content aimed at IT decision makers and those that influence them in businesses of all sizes and public sector organizations.




About the Author

Bob Tarzey

Bob joined Quocirca in 2002. His main area of coverage is route to market for ITC vendors, but he also has an additional focus on IT security, network computing, systems management and managed services. Bob writes regular analytical columns for Computing, Computer Weekly, TechRepublic and Computer Reseller News (CRN), and has written for The Times, Financial Times and The Daily Telegraph. Bob blogs for Computing and Info Security Advisor, and also provides general comment for the European IT and business press. Bob has extensive knowledge of the IT industry. Prior to joining Quocirca, he spent 16 years working for US technology vendors including DEC (now HP), Sybase, Gupta, Merant (now Serena), eGain and webMethods (now Software AG). Bob has a BSc in Geology from Manchester University and a PhD in Geochemistry from Leicester University.
