There has been a lot of news lately about high-profile attacks on Web applications. Hackers employ tactics like cross-site scripting (XSS) and SQL injection, which have been around for more than 20 years. Yet both remain among the most prevalent attack vectors today, which makes it all the more important for organizations to have a formalized application security policy for their developer teams.
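To make the SQL injection vector concrete, here is a minimal sketch (not from the article) showing why string-spliced queries are exploitable and how a parameterized query neutralizes the same payload. The table and payload are illustrative only:

```python
import sqlite3

# Minimal in-memory database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload is spliced into the SQL text, so the
# WHERE clause becomes always-true and returns every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query binds the payload as a literal value,
# so no row matches the bogus name.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)]
print(safe)        # []
```

The fix costs nothing at runtime, which is exactly the kind of defect a developer can find and repair without a security specialist in the room.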
John Jacott, security evangelist for Coverity, which offers a development testing platform, offers insight on nine questions that should be central to implementing an application security policy in any organization.
Businesses need to be able to test applications in a manner that is both scalable and extensible. The problem is that applications outnumber security professionals on a scale we can't even comprehend. How do you determine which one is going to be the "in" for the attacker? Any of them could be "the one." It's important to identify which applications pose the highest risk and present the most likely threat vectors to the business, so that you can focus your efforts accordingly.
Security professionals typically use a separate, security-centric process that sits outside of normal development. This does little more than stall development and saddle developers with missed deadlines, limited innovation and lost scalability. Security experts need to engage developers and get them excited about security. We have to test applications as they're developed; we need to adapt to developers' processes. We should not be using different languages or different methods, or going outside the proven processes that developers use, especially not six months (or more) after they've closed development on a project and moved on to the next application.
This is really the crux of the issue: Developers outnumber security professionals, yet we insist on differentiating ourselves with a "security culture" instead of getting to know the developers' culture. We may be excluding the biggest ally we could possibly have by putting testing at the end, in tools or services outside their normal environments, in a punitive exercise that is often duct-taped and baling-wired in place. We then wonder why we fail as a cost center when a profit center fights back, using the business unit against us. Not everyone can be a security expert, but everyone can test for security defects.
Why inhibit profitability? A proper application security professional understands the environment he or she operates in. Pragmatic by nature, we understand the risks and will be flexible with compensating controls when something is found, or when a process needs to be bypassed to capture new revenue.
Nobody wants to be "That Guy," the one who sits in on every meeting and insists that everything be "securely architected." In truth, most applications are rarely architected; they're grown and iterated via agile or waterfall methods. Instead, we need to show developers, early and often, the consequences of poor security, and let them start thinking about it themselves. Most "security architects" are ineffective because they do not deputize their constituents and empower them to create more secure applications on their own. Jacott is not saying don't perform architectural reviews; he is saying teach developers how, and become part of the development effort.
Create recognition programs for secure innovation and build cooperation between development groups (e.g., a scoreboard of flaws/defects fixed, with awards for the teams that fix the most, or even just a movie, beer or pizza night with peers). It's a great way to incentivize security and make it a priority across the board.
Simply put, a bug is a bug is a bug. Help the team understand security defects in the language they use, and with the accepted processes they already have in place for fixing the issues they know about.
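One way to put "a bug is a bug" into practice is to express a security defect as an ordinary unit test in the team's existing suite, so it fails and gets fixed like any other bug. A minimal sketch, assuming a hypothetical `render_comment` helper that embeds user-supplied text in HTML:

```python
import html

def render_comment(comment: str) -> str:
    # Escape user-supplied text before embedding it in markup,
    # so a script payload is rendered inert instead of executed.
    return "<p>" + html.escape(comment) + "</p>"

def test_comment_is_escaped():
    # The XSS defect expressed as a plain test: same language and
    # same process the team already uses for any other bug.
    payload = "<script>alert(1)</script>"
    assert "<script>" not in render_comment(payload)

test_comment_is_escaped()
```

A developer who would never read a penetration-test report will still fix a red test in the build.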
Every control should be well publicized, sponsored by the highest-level CXO, and well understood and accepted by the development community. If we don't have that acceptance, it's likely our own fault for not speaking the developers' language. Security testing without understanding and agreement is just punitive.
As with the previous point, it's important to have a pre-agreed set of standards in place. Build them into an existing information security management system that follows best practices appropriate for your business.