Web applications have become the mainstay of the business world. Whether it’s the backend of a mobile app that connects users to your product or your public-facing website, one thing remains the same. Web apps have become just as important in doing business as brick-and-mortar operations. Yet we sometimes overlook the need to secure our online applications.
To complicate matters, we’ve seen a huge increase in bots, which now make up 61 percent of all website traffic. Cheap cloud computing resources and open source software have enabled attackers to launch bot attacks faster and at a lower cost than ever before. Hackers use bots to uncover website security vulnerabilities – at scale – then spread their attack origins across hundreds of IPs. Bad bots are now the key culprits behind web scraping, online fraud, reconnaissance attacks, man-in-the-browser attacks, brute force attacks and application denial of service.
Securing web apps from the millions of bad bots that attempt to penetrate them each year can seem like a daunting task. John Stauffacher, a world-renowned expert in web application security, and the author of Web Application Firewalls: A Practical Approach, recently sat down with Rami Essaid, CEO of Distil Networks, to brainstorm actionable ways organizations can defend their web applications from malicious bots. The good news is that you can quickly shore up your defenses by following a few simple rules, as well as implementing controls within your application development lifecycle.
Defending Web Apps Against Malicious Bots
Below are eight steps organizations can take to shore up defenses surrounding web apps, as identified by Essaid and Stauffacher.
Profile Web Apps
“Profile” refers to the act of capturing a comprehensive dataset from the application that represents everything about it: URIs, the names and values of parameters, the names and values of cookies, and the types of uploads and libraries each application uses. Once you complete a profile for each application, you have a baseline. Consider anything outside that baseline a threat and block it.
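The profile-then-block idea can be sketched as a learned baseline of URIs and parameter names. The class and method names below are illustrative, not part of any real WAF API:

```python
# Minimal sketch of application profiling: learn a baseline of known
# URIs and parameter names from trusted traffic, then treat anything
# outside that baseline as a threat.

class AppProfile:
    def __init__(self):
        self.baseline = {}  # URI -> set of expected parameter names

    def learn(self, uri, params):
        """Record a trusted request while building the profile."""
        self.baseline.setdefault(uri, set()).update(params)

    def is_anomalous(self, uri, params):
        """Anything outside the learned baseline should be blocked."""
        if uri not in self.baseline:
            return True  # URI never seen during profiling
        return not set(params) <= self.baseline[uri]

profile = AppProfile()
profile.learn("/login", {"username", "password"})
profile.learn("/search", {"q", "page"})

print(profile.is_anomalous("/search", {"q"}))             # within baseline
print(profile.is_anomalous("/search", {"q", "debug"}))    # unexpected parameter
print(profile.is_anomalous("/admin/backup.php", {"id"}))  # unknown URI
```

A real profile would also cover cookie names, upload types, and value formats, but the mechanics are the same: learn what normal looks like, then default-deny everything else.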
Limit Your Exposure
Limit your exposure by shrinking your potential attack surface through measures such as GeoIP fencing and client interrogation. Simply block any traffic that originates from undesirable geographies or that displays client characteristics unlike those of your typical customer base.
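GeoIP fencing reduces to a lookup and a deny list. The table below is a hypothetical stand-in; a real deployment would query a GeoIP database (such as MaxMind's) at the network edge:

```python
# Sketch of GeoIP fencing with a hypothetical in-memory lookup table.

BLOCKED_COUNTRIES = {"XX", "YY"}  # placeholder codes for geographies you never serve

# Stand-in for a real GeoIP database lookup (IPs are documentation ranges).
GEOIP_TABLE = {
    "203.0.113.7": "XX",
    "198.51.100.23": "US",
}

def allow_request(ip):
    country = GEOIP_TABLE.get(ip, "UNKNOWN")
    # Block undesirable geographies; unknown origins pass here, though a
    # stricter policy could interrogate the client further.
    return country not in BLOCKED_COUNTRIES

print(allow_request("198.51.100.23"))  # permitted geography
print(allow_request("203.0.113.7"))    # fenced-off geography
```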
Enforce Application Routes
Each of your applications has its own workflow and discrete routes that ‘normal’ users follow. By enforcing defined routes and workflows, you can prevent automated bot attacks from testing numerous URLs and executing forceful browsing attacks into your applications.
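Enforcing routes amounts to defining which pages may legitimately precede each step and rejecting anything else. The transition map below is an illustrative checkout flow, not a prescribed structure:

```python
# Sketch of workflow enforcement: each sensitive step may only be
# reached from defined predecessor pages, blocking forceful browsing.

ALLOWED_TRANSITIONS = {
    "/cart": {"/catalog", "/cart"},
    "/checkout": {"/cart"},
    "/confirm": {"/checkout"},
}

def route_allowed(previous_page, requested_page):
    allowed_from = ALLOWED_TRANSITIONS.get(requested_page)
    if allowed_from is None:
        return True  # page has no route restrictions
    return previous_page in allowed_from

print(route_allowed("/cart", "/checkout"))    # normal workflow step
print(route_allowed("/catalog", "/confirm"))  # skipped checkout: blocked
```

Bots probing URLs directly never follow the defined path, so they fail this check even when they guess a valid URL.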
Scrub All Inputs
Any time an application receives data, from any source, you should assume the data is unclean and needs to be sanitized. Scrub incoming data to eliminate anything that looks like program logic or executable content, even if execution would occur elsewhere. The cleaning process is complex and requires searching for and removing character sequences that could enable vulnerabilities. When an application “scrubs” inputs this way, its exposure to attacks like XSS and SQLi drops considerably. Going one step further, defining an allowed character set (the set of characters a valid input may contain) brings that exposure to almost zero.
A strong web application firewall (WAF) input policy specifies exactly what characters your application expects across each of its inputs. If your application is expecting a product ID number consisting of 12 numbers, then a WAF input policy would at the very least remove control characters and punctuation. A strong WAF input policy would constrain the product ID to only accepting 12 characters as input, and those 12 would have to be numerals – anything else should throw an error. You should be scrubbing data any time you accept it from end users or external services.
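The 12-digit product ID policy described above can be expressed as a strict allow-list validator, a sketch of the WAF behavior rather than any particular product's configuration:

```python
import re

# Strict input policy for the example above: a product ID must be
# exactly 12 numerals; anything else is rejected outright rather
# than cleaned up.
PRODUCT_ID_RE = re.compile(r"[0-9]{12}")

def validate_product_id(value):
    if not PRODUCT_ID_RE.fullmatch(value):
        raise ValueError("invalid product ID")
    return value

print(validate_product_id("123456789012"))  # passes: 12 numerals
# validate_product_id("1234'; DROP TABLE--") would raise ValueError:
# control characters, punctuation, and wrong lengths never reach the app.
```

Note the approach is allow-listing (define what valid input looks like), not deny-listing known-bad sequences, which is why it generalizes to inputs beyond this one field.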
Encrypt All Cookies
This is so easy to do today, and there is simply no reason not to. An HTTP cookie is a piece of data sent from a website and stored in a user’s web browser while the user is browsing that website. Every time the user loads the website, the browser sends the cookie back to the server to notify the website of the user’s previous activity. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items in a shopping cart) or to record the user’s browsing activity (including clicking particular buttons, logging in, or recording which pages were visited by the user as far back as months or years ago). Most applications store this information in plain text, where anyone with access to the browser can read and tamper with it. Encrypting the contents of cookies ensures that your application is the only one that can read them. Generally, a symmetric cipher is used with a pre-shared key that can rotate, and the application facilitates key expiration and rotation.
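As a stdlib-only sketch of the rotating pre-shared key mechanics, the example below seals cookie values with HMAC so tampering is detected and old keys can be retired gracefully. This shows the key-rotation pattern, not full confidentiality: a production system would additionally encrypt the payload with an AEAD cipher such as AES-GCM (for instance via the `cryptography` package) so the client cannot read it at all:

```python
import base64
import hashlib
import hmac
import json

# Keys are tried newest-first; rotation means prepending a new key and
# eventually dropping the oldest once cookies sealed under it expire.
KEYS = [b"current-secret", b"previous-secret"]

def seal(data):
    """Encode and sign a cookie value under the newest key."""
    payload = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
    sig = hmac.new(KEYS[0], payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def unseal(cookie):
    """Verify against every active key, so rotation doesn't log users out."""
    payload, sig = cookie.rsplit(".", 1)
    for key in KEYS:
        expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, expected):
            return json.loads(base64.urlsafe_b64decode(payload))
    raise ValueError("cookie failed verification")

cookie = seal({"user": "alice", "cart": [42]})
print(unseal(cookie))
```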
Applying SSL/TLS adds another security measure that is simple and carries no downside.
Monitor Login Pages
Many bots are written to perform ‘brute force’ login attacks by throwing all kinds of username and password combinations at your login page. Block any traffic that makes rapid, repeated login attempts and/or appears to be the same user coming in from different networks or geographies.
Always Enforce Protocol Specifics
Surprisingly, many bots have poorly written code and don’t actually follow the HTTP protocol. This makes them easy to identify and block when you simply state that your apps will only speak the protocol as it’s written.
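Strict protocol enforcement can be as simple as validating the request line against the HTTP grammar; sloppily written bots routinely send malformed requests that a real browser never would. This is an illustrative check, not a full HTTP parser:

```python
import re

# Sketch of strict protocol enforcement: accept only request lines
# matching "METHOD target HTTP/1.x" with an uppercase standard method.
REQUEST_LINE_RE = re.compile(
    r"(GET|HEAD|POST|PUT|DELETE|OPTIONS|PATCH) \S+ HTTP/1\.[01]"
)

def request_line_valid(line):
    return REQUEST_LINE_RE.fullmatch(line) is not None

print(request_line_valid("GET /index.html HTTP/1.1"))  # well-formed
print(request_line_valid("GET /index.html"))           # missing version
print(request_line_valid("get /index.html HTTP/1.1"))  # lowercase method
```

The same idea extends to requiring a Host header, valid header syntax, and consistent Content-Length, each of which trips up another slice of poorly written bots.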
These best practices rely on solid web application security policies. So, make sure you have no wildcards in your policy, such as one that says, “let in all traffic.” Second, do not rely solely on signature sets, as you’ll be chasing new signatures on a continuous basis. In fact, it’s better to spend time upfront whitelisting the good in your WAF or bot detection and mitigation solution rather than continually updating all of the bad that could possibly be thrown at your application.
Finally, the best web application security policies are dynamic. Make policy review an integral part of QA testing every time you update application code. With a solid baseline in place from profiling your web applications, this should become as routine as brushing your teeth.