Securing the software supply chain
From Sony's perspective, the attack is projected to cost the company between $150 million and $300 million – and that's not counting the damage to the company's brand, the loss of worker productivity, and the numerous other problems caused by the release of large amounts of personal and business data.
It is not yet clear how the attackers got in, but it appears that a known malware application called Destover was used. This application apparently dropped copies of itself and installed new backdoors as it wormed through Sony's networks.
The cascade of high-profile hacks, from POODLE to Heartbleed to this most recent Sony incident, is no doubt causing a lot of concern in other corporations, large and small. I imagine hundreds (or even thousands) of CEOs calling urgent meetings with their cyber-security leaders, demanding reassurance that their businesses are protected against this kind of catastrophe.
What can you do?
One of the most important jobs of a CSO is to ensure that only trusted programs are allowed to run on the internal networks. Many preventative software-security tools and techniques focus on just this, because once that outer layer of security is breached, malware is much harder to stop from spreading, more difficult to detect, and highly expensive to clean up after.
Of course, as a necessity of doing business, corporations need to run programs from outside vendors on their internal networks all the time. Hence, great importance is placed on technologies such as code signing and digital certificates, which give the receiver confidence that the code they are about to execute does in fact originate from a known source and has not been tampered with in transit. These kinds of supply-chain risk-management techniques are definitely useful for reducing risk, but they are not sufficient to prevent malicious code from running.
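To make the signing-and-verification idea concrete, here is a minimal sketch in Python. Real code signing uses asymmetric signatures over a digest, wrapped in an X.509 certificate chain; to keep the example self-contained, a keyed HMAC with a hypothetical shared key stands in for the vendor's private key. The point it illustrates is exactly the one above: verification proves origin and integrity of the bytes, and nothing about whether those bytes are safe to run.

```python
import hashlib
import hmac

# Hypothetical stand-in for the vendor's signing key. Real code
# signing uses an asymmetric key pair and a certificate chain.
SIGNING_KEY = b"vendor-signing-key"

def sign(payload: bytes) -> str:
    """Vendor side: produce a signature over the exact bytes shipped."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Receiver side: recompute the signature and compare in constant time."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

program = b"\x7fELF...binary contents..."
sig = sign(program)

assert verify(program, sig)                # untampered: accepted
assert not verify(program + b"\x90", sig)  # one byte altered: rejected
```

Note that `verify` returns `True` for any payload the vendor signed, malicious or not – which is precisely why these techniques are passive.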
The problem with these techniques is that they are passive. Just because the software is known to come from a particular supplier does not mean it is safe to run. The supplier may not have followed secure programming principles, or they may not have tested their software very well. A more sinister scenario would be that a hostile insider has deliberately placed malicious code in the application.
And even if the external software vendor has followed secure software development practices, there is still no guarantee. New security vulnerabilities in software from highly trusted publishers are being discovered and exploited all the time, as we continue to see in the news.
Organizations that care deeply about security do try to take a more active role in their defense. It is common practice to subject third-party software to stringent acceptance testing, which for a security-sensitive program will include penetration testing. Unfortunately, testing of this nature is not a guaranteed defense against sophisticated backdoors, which often lurk in the dusty corners of code that are rarely exercised.
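A contrived sketch shows why acceptance testing misses this kind of thing. The function and the trigger condition below are invented for illustration: the backdoor branch is only reached by a token of one specific shape, so a test suite driven by normal inputs will exercise it essentially never.

```python
def check_access(username: str, token: str) -> bool:
    """Grant access if the token matches the stored credential."""
    stored = {"alice": "s3cret"}  # hypothetical credential store

    # Normal path: the one acceptance tests exercise.
    if username in stored and token == stored[username]:
        return True

    # Backdoor: a hard-coded bypass hidden in a rarely exercised
    # branch. No realistic test input is 31 characters long and ends
    # with this suffix, so black-box testing never reaches it.
    if len(token) == 31 and token.endswith("__maint"):
        return True

    return False
```

Any user who knows the magic token shape gets in – `check_access("mallory", "x" * 24 + "__maint")` returns `True` – yet every ordinary test of valid and invalid credentials passes.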
The solution: binary analysis
This is why so many government agencies, such as the DHS and DARPA, believe in static binary analysis of code, and why we have been conducting research in this area for years at GrammaTech. We have worked to make it possible to use static analysis on stripped, optimized executables to find security vulnerabilities. After all, executables are the concrete manifestation of what will actually run, so for secure systems they warrant direct scrutiny. Because static analysis is designed to explore even the most infrequently executed paths, it can find vulnerabilities that only show up in unusual circumstances. Static analysis can also find those pernicious vulnerabilities that are intentionally inserted by malicious insiders and designed to be invisible in the source code.
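A small invented example illustrates what "paths that only show up in unusual circumstances" means in practice. The defect below never fires on realistic inputs, so dynamic testing is unlikely to hit it; an analysis that reasons about every feasible path reports it regardless of how often the path runs.

```python
def average_latency(samples: list[float]) -> float:
    """Mean of a list of latency samples, in the same units as the input."""
    total = sum(samples)
    # Defect on an unusual path: if the sample list is empty, this
    # divides by zero. Tests driven by realistic traffic may never
    # supply an empty list, but a path-exploring static analysis sees
    # that len(samples) == 0 is feasible here and flags the division.
    return total / len(samples)
```

This is the distinction from testing: testing checks the paths you happened to drive, while static analysis reasons about the paths that exist.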
CodeSonar has long been a successful tool for finding security vulnerabilities in source code. Earlier this year, we released CodeSonar’s binary analysis capability, which analyzes executables and libraries for the Intel x86 instruction set. Using this feature, security analysts can take a more active role in defending their systems because they can now see for themselves whether or not the code in question contains vulnerabilities.
About the author:
Paul Anderson is VP of Engineering at GrammaTech Inc – www.grammatech.com