Devices Rule--The Internet of Things
Common Weakness Enumeration—Tracing the Path to Industrial Security
Many industrial control systems have evolved from being stand-alone to Internet-connected without giving due consideration to the security issues. The CWE software assurance initiative can be the basis for a development and testing discipline to build in security from the start.
CHRIS TAPP, LDRA
Cost reduction is the primary driver for connecting critical infrastructure components to the Internet. Utility meter reading and the control and monitoring of remote plants traditionally required significant numbers of feet on the ground. Many companies and municipalities now put systems in place to reduce these costs by allowing these tasks to be carried out remotely. Robust cyber security is required to ensure that the systems are adequately protected from those who may wish to cause harm.
Many organizations, including the Department of Homeland Security, have raised concerns that criminal and terror organizations are attempting to take control of critical infrastructure. Numerous incidents show that these concerns are more than theoretical. Recent reports on the Internet claim that on November 8, 2011, a pump used by a U.S. water utility was destroyed after hackers gained access to the SCADA system using authentication credentials previously obtained when the system manufacturer's security was compromised. The hackers appear to have commanded the pump to start and stop repeatedly, causing it to overheat.
A second incident is also under investigation after screen shots were posted to the Internet on November 18th showing what purports to be the user interface for a control system used within the Water and Sewer Department of the City of South Houston, Texas. The poster claims to have proof that other industrial systems have also been compromised.
As security-related issues are now being given greater importance, contracts increasingly include clauses related to security. Many require proof that the vulnerabilities identified by the Common Weakness Enumeration (CWE) are not present at system delivery.
Notably, the security of industrial control systems was often not considered when they were initially designed because the first versions were stand-alone replacements for traditional analog processes. They then evolved through locally networked SCADA devices to become fully connected to the Internet. The assumption that no one would be interested in gaining access to such systems led to very little being done to enhance the security of the devices.
Even though newer systems are designed with connectivity in mind, it is common even today for industrial control systems to use hard-coded user names and passwords as the only attempt to provide security. Worse still, many of these credentials are shared across all the installations for a particular range of devices, making it very easy for a generic piece of malware to spread and gain access to a wide range of plants.
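The weakness pattern is easy to illustrate in C. The sketch below is hypothetical (the credential, function names and secret handling are all invented for illustration): the first function bakes a shared literal into the firmware (CWE-798, use of hard-coded credentials), which anyone who extracts a firmware image recovers for every installed unit, while the alternative compares a per-device secret provisioned at commissioning time, in constant time to resist timing probes.

```c
#include <stddef.h>
#include <string.h>

/* CWE-798: a hard-coded password shared across every shipped unit.
   Extracting one firmware image compromises all installations. */
int login_hardcoded(const char *pw) {
    return strcmp(pw, "admin123") == 0;   /* weakness: literal credential */
}

/* Hypothetical alternative: the secret is provisioned per device at
   commissioning, not compiled in, and compared in constant time so the
   comparison does not leak how many leading bytes matched. */
int consttime_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}
```

A static analysis tool targeting CWE-798 would flag the string literal in the first function; no tool is needed to see why sharing it across a product range is dangerous.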
However, securing the passwords of the devices only gives a minor improvement in overall security, as the underlying communications channels of the devices expose a significant attack surface. It is often assumed that the data passed over these channels would be produced by a trusted system and that per-device data validation is either unnecessary or could be simplified.
This assumption fails when a third party (e.g., a hacker or malware) is able to manipulate or generate the data passed over a channel. For example, the signals of a Polish tram system were designed so that drivers could control them using a remote control. The system was considered secure because the hardware was not commercially available, and no thought was given to the injection of commands from any other external source, resulting in a decision to use encrypted data without any validation. In 2008, a teenager modified a different remote control unit so that he could change the signals. His actions led to the derailment of at least four trams, injuring twelve people.
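A defensive design treats every received frame as untrusted, regardless of where it is believed to originate. The C sketch below shows the idea (the frame layout, opcodes and value ranges are all hypothetical): each command is checked against an allow-list of opcodes and a plausible value range before it is acted on, which is the mitigation CWE catalogs under CWE-20, Improper Input Validation.

```c
#include <stdint.h>

/* Hypothetical command frame for a remotely controlled actuator. */
typedef struct {
    uint8_t  opcode;
    uint16_t value;
} cmd_t;

enum { CMD_STOP = 0, CMD_START = 1, CMD_SET_SPEED = 2 };

/* Validate before acting (CWE-20): unknown opcodes are rejected and
   payloads must fall within a plausible operating range. */
int validate_cmd(const cmd_t *c) {
    switch (c->opcode) {
    case CMD_STOP:
    case CMD_START:
        return c->value == 0;        /* these commands carry no payload */
    case CMD_SET_SPEED:
        return c->value <= 3000;     /* assumed plausible RPM limit */
    default:
        return 0;                    /* unknown opcode: reject */
    }
}
```

With a check like this in place, a forged "start the pump at an impossible speed" frame is dropped at the device rather than executed, even if the channel itself has been compromised.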
According to research by the National Institute of Standards and Technology (NIST), 64% of software vulnerabilities stem from programming errors.
The Common Weakness Enumeration (CWE) is a strategic software assurance initiative run by the MITRE Corporation under a U.S. Federal grant, co-sponsored by the National Cyber Security Division of the U.S. Department of Homeland Security. It lists those programming errors that lead to security failures within systems with the aim of improving the software assurance and review processes used to ensure connected devices are secure. Enumeration of the vulnerabilities in this way allows coding standards to be defined to target them so that they can be eliminated during development.
The CWE database contains information on security weaknesses that have been proven to lead to exploitable vulnerabilities. These weaknesses could be at the infrastructure level (e.g., a poorly configured network and/or security appliance), policy and procedure level (e.g., sharing usernames and/or passwords) or coding level (e.g., failing to validate data). The CWE database holds information on actual, not theoretical, exploits, and so captures only those coding weaknesses that have been exploited in the field.
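One of the most frequently exploited coding-level entries is CWE-120, buffer copy without checking size of input, the classic buffer overflow. The sketch below (buffer size and function names invented for illustration) contrasts the weakness with a bounded alternative:

```c
#include <stdio.h>
#include <string.h>

#define ID_LEN 8

/* CWE-120: strcpy() copies until it finds a NUL, so any source string of
   ID_LEN characters or more overruns dst and corrupts adjacent memory. */
void copy_id_unsafe(char dst[ID_LEN], const char *src) {
    strcpy(dst, src);
}

/* Bounded alternative: snprintf() never writes more than ID_LEN bytes
   and always NUL-terminates, truncating over-long input instead of
   overflowing the buffer. */
void copy_id_safe(char dst[ID_LEN], const char *src) {
    snprintf(dst, ID_LEN, "%s", src);
}
```

A CWE-aware static analyzer flags the unbounded strcpy() call directly; the bounded version turns a memory-corruption vulnerability into a benign truncation.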
CWE should be used within the development environment to ensure that known vulnerabilities are not introduced into the software. Many of the issues that have been identified are amenable to automatic detection by static and/or dynamic checking tools. To obtain maximum benefit, such tools should be used as early as possible in the development process, as trying to add security in at the last minute is very unlikely to succeed. The adoption of other tool-enforced security standards, such as the CERT-C Secure Coding Standard, complements this objective and enhances the security characteristics even further.
Ensuring System Security
Many security vulnerabilities can be traced to coding errors or architectural flaws, and they are generally hard and/or expensive to fix once a system has been deployed. Unfortunately, many developers are interested only in the development and testing of core application functionality, and security is rarely tested with the same rigor. Security needs to be treated as one of a system's most important attributes: security requirements must be included up front in the system design and implemented during normal development if the final system is to be secure.
Figure 1 illustrates the attributes associated with system quality. By focusing on these measures at all phases of the software development lifecycle, developers can help eliminate known weaknesses.
System quality is determined by many attributes, including those relating to security.
To prevent the introduction of security vulnerabilities, a development team needs to have a common understanding of the security goals and approaches to be taken during development. This should include an assessment of the security risks and the establishment of the secure coding practices that are to be used.
The risk assessment determines the quantitative and qualitative security risk for the various system components in relation to a concrete situation and recognized threat. This information is used to reduce security vulnerabilities in the areas that will have a high impact if their security is breached. The assessment results in the development of a set of security control and mitigation strategies that will form the core of the system security requirements.
These security requirements become part of the same development process used for all other requirements. Detailed at the outset, the security requirements are then traced through the design, coding and testing stages in order to ensure fulfillment of the initial requirements. These linkages form documentation that demonstrates how the final system meets the security objectives laid down at the beginning.
CWE is a “do not get caught by” list and is not an actual coding standard. However, coding standards can be used to ensure that the CWE issues are not present in a project. Compliance with these standards helps to ensure that project security goals are achieved, especially as many security issues result directly from the coding errors that they target. Additionally, compliance with a recognized standard helps to demonstrate that contractual security obligations have been met.
Compliance with the chosen coding standard, or standards, should be a formal process (ideally tool-assisted, but manual is also possible) as it is virtually impossible for a programming team to follow all the rules and guidelines throughout the entire code-base. Adherence to the standards is a useful metric to apply when determining code quality.
Static and dynamic testing should be considered essential practices. CWE-compatible static analysis tools systematically enforce the standard across all code, while dynamic analysis confirms that the code does not contain run-time errors, including those that could be exploited to compromise security.
If a claim is to be made that a system complies with a security standard like CWE, then evidence must be provided to support that claim. Traceability makes it possible to show which test result(s) prove that a particular security requirement has been met. Tracing from requirements to the design, verification plan and the resulting test artifacts can be used to support such a claim.
Requirements can thus be mapped to the design specification, verification plan, source code and verification reports. Such a graphical representation makes it easy for developers to immediately spot unnecessary functionality (code with no requirement), unimplemented requirements, and failed or missing test cases.
Adoption of a security standard that targets the CWE vulnerabilities allows security quality attributes to be specified for a project. Incorporation of security attributes into the system requirements means that they can then be measured and verified before a product is put into service, significantly reducing the potential for in-the-field exploitation of latent security vulnerabilities and eliminating the associated mitigation costs.
The use of a qualified and well-integrated application lifecycle management (ALM) tool to automate testing, collation of process artifacts and requirement traceability dramatically reduces the resources needed to produce the documentation required by certification bodies. It minimizes the workload for developers and allows managers to efficiently track progress.
With the increase in attacks on industrial systems, it's clear that application developers need to rethink their assumptions. Leveraging the knowledge contained within CWE and choosing to develop and test software with the aid of CWE-aware tools represents a significant step forward. Companies that incorporate CWE and embark on a process of continual improvement help ensure that only dependable, trustworthy, extensible and secure systems are released for production.
+44 0151 649 9300