Content Copyright © 2010 Bloor. All Rights Reserved.
It’s a software developer’s job to write application code that satisfies customer requirements and meets business objectives. This code needs to be functional, usable and reliable, with acceptable performance and supportability. As the modern world relies on software to function, teams of developers must do their best to churn out millions of lines of code under huge pressure to satisfy customer demand.
With looming deadlines and ever more work to do, developers in the past had little time to ensure their code was free from bugs or errors that opened security holes in the application. Fortunately, as many applications ran within a client-server network, relatively isolated from the outside world, they usually got away with it.
Then along came the Internet, the World Wide Web and the subsequent massive growth in handheld devices, which exposed what would normally be closed applications to millions of anonymous users. Combine this with the rise of organised cyber criminals continuously looking for new ways of committing crime, and the computer security ground rules have been rewritten forever.
Development teams realised very quickly that their approach to software development was insufficient to cope with the explosion of malware and hacking that was exploiting flaws in software code. The scale of this problem is immense; in 2009 alone over 7,000 new software security vulnerabilities were found, putting pressure on developers to rapidly improve their knowledge of security issues if they are to see their applications survive.
Against this background we have seen a huge move towards componentised code, and the reuse of code libraries and functions that have been developed in house, purchased or borrowed from other developers. As customers have looked to slim down their costs, the use of commercial and open source software has grown. Outsourced development has seen projects sent to the other side of the world, to be written by developers the customer has never met in a country they may never have visited. So developers need to worry about security defects not only in the code they write, but also in the code they reuse.
This perfect storm raises huge concerns in the minds of information security professionals who are trying to get a grip on the scale and diversity of software entering their organisations.
How can code be checked for security flaws? How can the executive be assured that the various components used in an application are free from potential security bear traps? What can be done to verify that software complies with internal and external governance, compliance and regulatory standards?
Conventional application code testing, whether by scanning the source code, undertaking manual penetration testing or using a web-based scanner, provides at best only partial coverage of an application. It is also expensive, relying on manual, time-consuming processes that are simply not scalable and are prone to missing security flaws.
Worst of all, this testing provides a false sense of security.
On the other hand, we need to consider the developers. The sheer volume of potential security flaws, and of new and emerging threats, can be overwhelming to a developer under pressure to roll out yet another new feature.
The only realistic solution is to take the pressure off the developer and automate comprehensive checking for security flaws as part of the software development lifecycle (SDLC). By integrating independent security scanning, using multiple techniques, into the SDLC, having code thoroughly checked becomes second nature to the developer, simply part of the regular development process. Customers and the business can be reassured that a trusted third party has reviewed the code and passed it fit for purpose.
Software that is bought in from a third party, often without available source code, also needs to be part of this informed review of code security. Traditionally, the lack of source code has thwarted any investigation of such black box software, forcing customers to take it on trust that the code is secure and unlikely to present a security problem. Fewer organisations are now prepared to accept such blind trust, and they will expect an independent assessment of the code’s security profile.
Software development managers and information security professionals need to act now to address the security of the software they write, purchase or co-opt into their solutions.
Failing to act due to lack of a pragmatic and cost-effective solution is no longer excusable.
If you are interested in finding out more about application code security, I will be running a webinar on 16th September in conjunction with Veracode. Details here.