OWASP and Fortify

Content Copyright © 2007 Bloor. All Rights Reserved.

One of the useful and fairly altruistic things that vendors can do is to sponsor developer communities in the areas they support. We say “fairly altruistic” because, of course, the more active a developer community, the more software and software support it buys. Nevertheless, there is probably a continuum from cynically bribing developers to use your products and no-one else’s (thankfully rare, these days at least) to genuinely trying to promote an independent community, in the expectation that the size of your slice of the cake will increase as the whole cake gets bigger.

In our estimation, Fortify Software’s support of OWASP (the Open Web Application Security Project), a “worldwide free and open community focused on improving the security of application software”, comes at the altruistic end of that continuum. So, we accompanied Brian Chess (Founder and Chief Scientist at Fortify Software) and John Wood (Fortify’s District Manager for the UK) when they went to Edinburgh to help inaugurate a Scottish chapter of OWASP.

Chess was talking to the group in Scotland about what Fortify Software does: static analysis of code (that is, examining it for potential security bugs without compiling it and running it against test data), and dynamic analysis of code running in test or production, looking for behaviours that simply shouldn’t be happening.

Now, code inspection (which is a superset of what Fortify does) by people other than the programmer, or even by the programmer alone assisted by an appropriate tool (compare AgitarOne DIY unit testing by eXtreme programmers), is an immensely powerful way of removing defects, and not just security-related defects. However, we are just a little concerned that some managers might see automated code inspection or static analysis as a “silver bullet”: a substitute for the use of managed languages (like C# or Java), good practice and the employment of good developers. Far better, we think, to use a language or environment that doesn’t allow buffer overflows, say, than to find and eliminate them after coding; and far better to employ good programmers and process than to employ bad (but cheap) ones and attempt to clean up after them with clever technology.
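To make the buffer-overflow point concrete, here is a minimal illustration of our own (not an example of Chess’s) of why a managed language closes off that whole class of defect: the runtime bounds-checks every array access, so an overflow attempt fails loudly instead of silently corrupting memory as it can in C.

    public class BoundsDemo {
        public static void main(String[] args) {
            byte[] buffer = new byte[8];
            try {
                // In C, writing one byte past the end of a stack buffer quietly
                // corrupts adjacent memory; it is the classic exploit vector.
                // The JVM bounds-checks the access and throws instead.
                buffer[8] = 0x41; // one past the end
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Overflow attempt stopped by the runtime: " + e);
            }
        }
    }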

Not surprisingly, Chess agrees; he is particularly scathing about the unthinking use of scripting languages: “finding vulnerabilities in PHP is like shooting fish in a barrel”, he says, and he believes that using managed languages such as Java and C# leads to far fewer bugs in production. However, although static analysis isn’t a substitute for good programming practice, it is an extremely powerful complement to it. Even good programmers make mistakes, and static analysis is a cost-effective way of ensuring that those mistakes don’t reach production.
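By way of illustration (our sketch, not an example from Fortify), the sort of mistake even a good Java programmer makes under deadline, and which taint-tracking static analysis is built to catch, looks like this; the users table and connection details are assumptions for the sake of the example:

    import java.sql.*;

    public class LookupExample {
        // Vulnerable: user input is concatenated into the query, so input like
        //   ' OR '1'='1
        // changes the query's meaning. A taint-tracking analyser flags this
        // flow from untrusted input to the SQL statement.
        static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }

        // The conventional fix: a parameterised query keeps data out of the
        // SQL grammar entirely.
        static ResultSet findUserSafe(Connection conn, String name) throws SQLException {
            PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
            stmt.setString(1, name);
            return stmt.executeQuery();
        }
    }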

One issue Chess highlights is the inculcation of bad practice (and especially bad security practice) by introductory training courses and books. He mentions one Ajax author who told him: “we never intended these examples to be production-ready code”. Possibly, but people write production code from examples in books. There are valid reasons why examples are simplified (to fit on screen in an online article, perhaps), but authors should always flag anything that isn’t “production ready”, wherever it is displayed.
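To show the kind of shortcut involved (our own hypothetical example, not one from the book in question), here is a “hello world” servlet of the sort that fits neatly on a page yet should never ship, because it echoes untrusted input straight into HTML, a textbook reflected cross-site-scripting hole:

    import javax.servlet.http.*;
    import java.io.*;

    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            // Production code would HTML-encode the parameter before echoing it.
            out.println("Hello, " + req.getParameter("name") + "!");
        }
    }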

Talking with Chess on the way to Edinburgh, we discussed the “where next” for Fortify. Chess is contemplating extensions to dynamic analysis that look for the patterns associated with fraud: unusual usage patterns (a lot of late-night work “sorting out the books”, perhaps, or unusual trading patterns involving an unregulated part of the world, that sort of thing). Once you find these patterns, you’d have to be tactful, of course; but an enquiry about providing an assistant, training or recognition for someone loyally working far into the night will improve staff morale, and be thunderingly inconvenient if the person concerned is actually doing something they shouldn’t be (most fraudsters like to feel secure and unobserved).
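As a sketch of what such pattern-spotting might look like (purely our illustration; the class, the thresholds and the event feed are all assumptions, not anything Fortify ships), consider flagging accounts with an unusual share of out-of-hours activity:

    import java.util.List;
    import java.util.Map;

    public class LateNightFlagger {
        private static final int OPEN_HOUR = 7;       // inclusive
        private static final int CLOSE_HOUR = 20;     // exclusive
        private static final double THRESHOLD = 0.30; // over 30% out-of-hours is "unusual"

        static boolean outOfHours(int hourOfDay) {
            return hourOfDay < OPEN_HOUR || hourOfDay >= CLOSE_HOUR;
        }

        /** events maps an account id to the hour-of-day of each recorded action. */
        static void report(Map<String, List<Integer>> events) {
            for (Map.Entry<String, List<Integer>> entry : events.entrySet()) {
                int late = 0;
                for (int hour : entry.getValue()) {
                    if (outOfHours(hour)) late++;
                }
                double share = (double) late / entry.getValue().size();
                if (share > THRESHOLD) {
                    // Report the pattern rather than the person: see the point
                    // below about transparency without personal attribution.
                    System.out.printf("pattern: %.0f%% out-of-hours activity on one account%n",
                            share * 100);
                }
            }
        }

        public static void main(String[] args) {
            report(Map.of(
                    "acct-1", List.of(9, 11, 14, 16),
                    "acct-2", List.of(23, 2, 1, 10)));
        }
    }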

Developers are finally waking up to “secure coding”, but fraud is potentially far more destructive to a company than any simple hacking exploit. It is also far harder to address, because much fraud depends on insiders, and part of the fraudster’s toolkit is appearing to be a loyal, popular employee. In fact, fraudsters are often in fairly senior management positions, misusing legitimate authority (they may well be in a position to switch off fraud prevention controls). In some organisations, targeting fraud can be seriously career-limiting before you ever get to identify a fraudster; so automated fraud detection, which doesn’t come down to an individual asking embarrassing questions, can be very attractive.

We think that one of the biggest discouragements to fraud is transparency: reporting of unusual application usage patterns, without personal attribution, perhaps. Fortify’s products are probably in a position to detect fraud patterns cheaply, and machine-enforced governance avoids the political and human costs associated with manual governance. Chess is rather stepping on the toes of the big data warehouse fraud prevention systems here, but it might well be possible to detect basic fraud patterns (and the effects of simple errors, such as mis-entered prices an order of magnitude lower than they should be) more cheaply than the data warehouse can, if less flexibly, perhaps; and this could open up an SME market which can’t afford data warehousing.
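The error-detection half of that claim needs nothing exotic. A check for the mis-entered price, for instance, can be as simple as comparing against a reference price; in this sketch of ours, where the reference price comes from is an assumption:

    public class PriceCheck {
        /** Flags a price entered an order of magnitude (or more) below the reference. */
        static boolean suspiciouslyLow(double entered, double reference) {
            return entered > 0 && entered <= reference / 10.0;
        }

        public static void main(String[] args) {
            // A price keyed as 9.99 against a 99.99 reference trips the check.
            System.out.println(suspiciouslyLow(9.99, 99.99)); // true
        }
    }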

Anyway, developers should start thinking now about designing “fraud-proof” automated services, and perhaps Fortify’s dynamic analysis will help them, one day.

Now, back to OWASP, which says: “Our mission is to make application security ‘visible’, so that people and organizations can make informed decisions about application security risks.” That is a fine idea and must contribute to the reliability and resilience of the automated systems business depends on; but the OWASP people are, in general, security experts, and we need more than that. Chess quotes Petroski’s To Engineer Is Human: “success is foreseeing failure”, and, he points out, we need non-experts to get security right too. This is where we think automated static and dynamic analysis, of the sort Chess designed for Fortify, has an important part to play today.