Web applications are subject to click fraud, comment spam, content scraping, and more. Bot detection and mitigation can close these vulnerabilities.
Who's that coming to your website? Is it friend or foe? Is it a customer wanting to buy your products, or someone or something looking to steal your web content? Is it a community member who wants to post a relevant comment, or a spammer intent on planting junk links and content in your open comments section? Is it a real person clicking on an ad, or a web bot driving up fraudulent clicks?
Web applications are increasingly subjected to automated threats, including click fraud, comment spam, content scraping, and abusive account creation. These and other illicit or unwanted activities are described in detail in the OWASP Automated Threat Handbook for Web Applications.
This article is about one vendor's approach to defeating unwanted web traffic, whether it's automated or human-driven. I should point out that there are desirable and highly useful web bots too, such as the web crawlers that search engines use to find and index content, and chat bots that fetch information and bring it into chat rooms where humans meet. Any solution designed to defeat malicious bots must let the good ones through.
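As one illustration of telling good bots from bad, a spoofed User-Agent header alone should never be trusted: anyone can claim to be Googlebot. A common check (and the one Google itself recommends for its crawler) is reverse DNS on the client IP followed by a forward lookup to confirm the result. The sketch below is a hypothetical helper, not part of any vendor's product; the resolver parameters exist only so the logic can be exercised without live DNS.

```python
import socket

def is_verified_googlebot(ip, reverse_dns=None, forward_dns=None):
    """Hypothetical check: does `ip` really belong to Googlebot?

    1. Reverse-resolve the IP to a hostname.
    2. Require the hostname to be under googlebot.com or google.com.
    3. Forward-resolve that hostname and confirm it maps back to the
       same IP (defeats attacker-controlled reverse DNS records).

    `reverse_dns` / `forward_dns` are injectable for testing; by
    default they use the standard library resolver.
    """
    reverse_dns = reverse_dns or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward_dns = forward_dns or socket.gethostbyname
    try:
        hostname = reverse_dns(ip)
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False  # claims Googlebot, but DNS says otherwise
        return forward_dns(hostname) == ip
    except OSError:
        return False  # lookup failed; treat as unverified
```

A real bot-mitigation product layers many more signals on top of this (behavioral analysis, fingerprinting, rate patterns), but the reverse-then-forward DNS check shows the basic principle: verify a bot's claimed identity rather than taking its headers at face value.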