How to get security prioritized in enterprise programming


Given all of the intensified attention to security issues these days, it’s surprising how often application security is still neglected.

To be clear, in-house app dev is a top priority for companies and app testing has never been neglected. But that testing overwhelmingly focuses on functionality—does the app crash? Does a right-click on the blue icon deliver the desired action?—rather than security. Why is that? Like almost everything else in security, the blame falls mostly on C-level executive priorities.

The CEO and other executives are relentless in pushing delivery dates. That is reflected in general instructions as well as in manager bonus incentives. By focusing solely on time to market, they all but force a situation where app vulnerability concerns take a backseat to, well, just about everything else.

This problem also builds on itself. The emphasis on speed pushes developers to rely on as much open source code as possible. Although there is no question that leveraging pre-built open source software makes coding faster, it also sharply increases the risk of absorbing vulnerabilities—both known and unknown, intentional and unintentional—into your homegrown app.

There are efforts to minimize these open source security holes, including the payment industry's PCI standards, the financial sector's FS-ISAC and the Open Web Application Security Project (OWASP), but developers don't check for the latest patches nearly as often as they should. Don't forget that such checking has to include historical checking, where open source code that your team included in an app nine months ago is re-examined.

A big problem with such efforts is that many programmers aren't strict about logging every single piece of open source code they use. That makes checking historical usage difficult, if not impossible. With new holes being discovered every day, code that checked out as clean on Monday may not be so clean on Wednesday.
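One practical mitigation is to keep a running manifest of every open source component an app pulls in and to re-check that list against a public vulnerability database on a schedule, not just at the moment the code is added. The sketch below is illustrative only: it assumes a hypothetical manifest.json your team maintains and uses the free OSV query service (api.osv.dev); swap in whatever inventory format and vulnerability feed your organization actually relies on.

```python
# Minimal sketch: re-check previously recorded open source components
# against the public OSV vulnerability database (https://osv.dev).
# Assumes a hypothetical manifest.json your team maintains, e.g.:
#   [{"name": "requests", "version": "2.19.1", "ecosystem": "PyPI"}, ...]
import json
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def check_component(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return any vulnerabilities OSV currently reports for this exact version."""
    payload = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    resp = requests.post(OSV_QUERY_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulns", [])

def main() -> None:
    with open("manifest.json") as fh:
        components = json.load(fh)

    for comp in components:
        vulns = check_component(comp["name"], comp["version"], comp.get("ecosystem", "PyPI"))
        if vulns:
            ids = ", ".join(v["id"] for v in vulns)
            print(f"{comp['name']} {comp['version']}: KNOWN ISSUES -> {ids}")
        else:
            print(f"{comp['name']} {comp['version']}: no known issues today")

if __name__ == "__main__":
    main()
```

Run on a schedule, a check like this catches the "clean on Monday, not on Wednesday" problem: the component list doesn't change, but the verdict can.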

This brings us back to time-to-market and corporate priorities. Open source is just one issue. Apps also have to be tested constantly for unintentional security holes that your own team introduced, especially when developers were rushed to meet a deadline.

What can be done to minimize this security problem?

Developer security education

In programming courses at almost all universities today, security is mentioned as an afterthought. Courses must stress that security has to be examined rigorously before code is finalized. Developers must understand that security testing is just as mission-critical as functionality testing.

Would a corporate coder send up code before checking to see if it actually works? Of course not. And yet that same coder may think nothing of sending up code without thorough security testing, whether in-house or through a trusted third party. Part of the blame here goes to an educational community that allows this thinking to continue.
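One way to make that expectation concrete is to put security assertions in the same test suite as the functional ones, so a build cannot pass on functionality alone. The example below is only a sketch: the lookup_user function, its in-memory database and the hostile input are hypothetical stand-ins for whatever data-access code your app actually uses.

```python
# Illustrative sketch: a security check living alongside a functional test.
# The lookup_user function and its in-memory database are hypothetical,
# standing in for your app's real data-access code.
import sqlite3
import unittest

def make_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")
    return conn

def lookup_user(conn: sqlite3.Connection, name: str):
    # Parameterized query: user input is never spliced into the SQL string.
    return conn.execute("SELECT name, role FROM users WHERE name = ?", (name,)).fetchall()

class UserLookupTests(unittest.TestCase):
    def setUp(self):
        self.conn = make_db()

    def test_functionality(self):
        # The usual question: does it work?
        self.assertEqual(lookup_user(self.conn, "alice"), [("alice", "admin")])

    def test_sql_injection_is_inert(self):
        # The equally important question: does hostile input stay harmless?
        hostile = "alice' OR '1'='1"
        self.assertEqual(lookup_user(self.conn, hostile), [])

if __name__ == "__main__":
    unittest.main()
```

The specific payload matters less than the habit: the question "does hostile input stay harmless?" gets asked automatically, every time the suite runs.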

Penetration testing during the development process

The beauty of pen testing is that the tester isn't asked to look for anything in particular. Instead, the tester plays with the app and continually checks what data is saved or transmitted. A good, creative pen tester will try to do things that few users would ever do, finding security holes that beta testing would likely miss.

Far too often, when corporate does bring in a pen tester, it's either after a problem has been discovered or, at best, so late in development that fixes cause far more problems than they should. Do pen testing often and, critically, at the earliest practical point. Then do it again after fixing the app to see whether the patch opened any new vulnerability. (By the way, if you don't want to spend the time and money on pen testing, that's fine. Once the app is released, cyberthieves will be more than happy to do it for you.)
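Teams that want to build that mindset in early can also run a rough, automated abuse-case probe against every staging build, long before a professional tester arrives. The sketch below is exactly that kind of rough probe: the staging URL and endpoint are hypothetical placeholders for your own environment, and it is no substitute for a real pen test, but it costs almost nothing to run.

```python
# Rough sketch of the "try things no normal user would" mindset, not a
# replacement for a professional pen test. The base URL and endpoint are
# hypothetical placeholders for your own staging environment.
import requests

BASE_URL = "https://staging.example.com"  # hypothetical target

ODD_INPUTS = [
    "A" * 10_000,                 # oversized field
    "../../etc/passwd",           # path traversal attempt
    "<script>alert(1)</script>",  # script injection attempt
    "' OR '1'='1",                # SQL injection attempt
]

def probe_search_endpoint() -> None:
    for payload in ODD_INPUTS:
        resp = requests.get(f"{BASE_URL}/search", params={"q": payload}, timeout=10)
        # A hardened app should reject or sanitize these, and should never
        # echo a stack trace or internal error details back to the client.
        suspicious = resp.status_code >= 500 or "Traceback" in resp.text
        print(f"{payload[:30]!r:35} -> {resp.status_code} {'SUSPICIOUS' if suspicious else 'ok'}")

if __name__ == "__main__":
    probe_search_endpoint()
```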

Clean code incentives

A key problem with programming incentives today is how much they ignore security. A developer will be in serious trouble for sending up code that doesn't work the way it's supposed to. But code that is later found to contain open source security holes? If a decent excuse is offered (something like "No one knew about that hole when we re-used that code"), the developer will likely escape any consequences.

Incentives for programmers must be changed to reflect an emphasis on security testing. Incentives today tend to reward coders for how many glitches they fix, which provides a bizarre incentive to create holes that can later be fixed. Companies need to reverse that and send apps out for extensive testing—functionality and security—and incentivize coders whose code has the fewest problems.

That will encourage developers to take security testing seriously. Where memos fail, bonus structures rarely do.
