Monthly Archives: November 2013

Steps in a Secure Software Development Lifecycle Model (1)

As discussed earlier [1, 2], the Secure Software Development Lifecycle (SSDLC) process that I commonly use has an inner core that is built around policies, standards, and best practices, and an outer shell of ongoing activities around security training and education.

The middle circle groups the activities that need to be performed for every release of the product. It does not matter whether the product team uses, for instance, a waterfall model or an agile model; the basic activities are always the same. Obviously, in an agile model, where release cycles are much shorter, some of the activities take considerably less time. This allows the agile team to keep their short release cycle, while the respective SSDLC activities benefit from early “customer” feedback, which is an integral part of the agile philosophy. Depending on the project, the “customer” can vary, sometimes even between cycles: the role can be filled by company-internal customers, operations teams, integration teams, external customers, and many more.

[Figure: The Core of a Secure Software Development Lifecycle Model]

There are eight activities in the SSDLC, and each of them can be its own more or less complex process. The content of the “activity boxes” generally depends on the development model and the type of project, but I have found the following definitions to be pretty universal and a good starting point:

1. System Concept Development

This activity answers important questions at a comparatively high level for executives, but the answers also make a good elevator pitch. Questions that need to be answered here include: What should the system do? Does it integrate with existing solutions? What is the value add (both intrinsic and extrinsic)? Did anyone else already build this? Why should we build this? Is the solution worth funding? The last question in particular is very interesting for everyone involved. If there are specific security implications (e.g. from a system managing PII), they should have come up in the discussion by this point.

2. Planning

This is project management 101. At this point, a core team has usually been appointed for the project, and roles have been assigned within the core team (remember, this holds for both waterfall and agile models, where roles may change once a Potentially Shippable Increment [PSI] has been completed). Questions answered at this stage include: What needs to be built? Do we have all the resources we need to complete this iteration? What are the timeframes? Are there dependencies on other groups, or are other groups depending on this project release? Toward the end of this stage, the epics implemented in this phase will be known with high certainty, which allows the security architect to start thinking about their security implications.

3. Requirements Analysis

The requirements analysis is somewhat intertwined with the planning activities. Particularly in agile development models, it is not uncommon for teams to jump back and forth between planning and requirements analysis, although this happens less frequently the further the project progresses. A specific part of the requirements analysis is the security requirements analysis. As in a regular requirements analysis, much of the work is driven by the product vision and system concept, as well as the relevant standards, policies, and industry best practices. Based on a security and privacy risk assessment, the team should establish a solid set of security and privacy requirements, as well as quality requirements that will later help establish acceptance criteria for implemented features.

4. Design Analysis

Once the requirements analysis is complete, the team should have a pretty solid understanding of “what” they want to build. The design analysis answers the questions around “how” it should be built. The first step in the design analysis requires the architects to create design specifications that include the major system components, with information about the type of data these components process, the users that access them, and the trust zones in which they operate. Part of the general design analysis is the threat analysis, which produces a set of design requirements based on an attack surface analysis. The threat modeling process is probably the most complex part of a Secure Software Development Lifecycle process, and while there are tools and methodologies available that help structure this process and make it repeatable, it usually requires a skilled security architect.
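A lightweight way to capture the inputs to such an analysis is a simple machine-readable inventory of components, data, and trust zones. The following Python sketch merely illustrates the idea; the component names, zone labels, and the crude prioritization heuristic are my own assumptions, not part of any particular threat modeling methodology:

    from dataclasses import dataclass

    @dataclass
    class Component:
        """One system component as captured in the design specification."""
        name: str
        trust_zone: str    # e.g. "internet", "dmz", "internal"
        data: list         # data classifications the component processes
        accessed_by: list  # user roles with access to the component

    # Invented example inventory; real entries come from the design specs.
    components = [
        Component("web frontend", "dmz", ["credentials", "PII"], ["customer"]),
        Component("billing service", "internal", ["credit card data"], ["operator"]),
    ]

    # Crude heuristic: components outside the internal zone that handle
    # sensitive data deserve the closest look during threat modeling.
    for c in components:
        if c.trust_zone != "internal" and set(c.data) & {"credentials", "PII"}:
            print("review first:", c.name, "(" + c.trust_zone + ")")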

User-provided input, phishing, and stolen SSL certificates

It is a well-known security best practice to validate user-provided input before accepting it for further processing. There are many resources, in particular for web application developers, on how to sanitize user-provided input, how injection attacks like SQL injection or LDAP injection work, and what a cross-site scripting attack is. A very good resource is the OWASP project, with their overview on data validation and their input validation cheat sheet.
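To make the injection risk concrete: the classic mistake is to splice user input directly into a query string, while parameterized queries keep data and code separate. A minimal sketch using Python's built-in sqlite3 module (the table and the payload are invented for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "' OR '1'='1"  # a classic injection payload

    # Vulnerable: the payload becomes part of the SQL statement itself.
    vulnerable = "SELECT * FROM users WHERE name = '%s'" % user_input
    print(conn.execute(vulnerable).fetchall())  # returns every row

    # Safe: the driver passes the input as data, never as SQL.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing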

Most guides around input validation focus on how to properly validate the syntax of the submitted data. While this is a very important step, it is equally important to keep the underlying business logic in mind and to consider how the semantics of the user-provided input could compromise it.

Today, I came across a security hole that existed even though the application was correctly validating the syntax of the user-provided data. However, the data sanitization mechanism did not take into account how that data would be used in the downstream components, and what an attacker could do with a seemingly harmless email address.

The application in question provides services to a user, and as an extra gimmick allows the user to register a customized or “vanity” email address in the form of user_chosen_part@service-name.com. The team made sure that the email address always has the correct syntax, and even used a standard validation library with a well-tested regular expression instead of crafting their own.
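The post does not name the validation library the team used; as a stand-in, here is a minimal Python sketch of a purely syntactic check (the regular expression is deliberately simplified, and a production system should rely on a well-tested library instead):

    import re

    # Simplified pattern for illustration only; real-world email
    # validation is considerably more involved.
    EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

    def is_syntactically_valid(address):
        return EMAIL_RE.match(address) is not None

    print(is_syntactically_valid("alice@service-name.com"))       # True
    print(is_syntactically_valid("hostmaster@service-name.com"))  # True --
    # and that is exactly the problem: syntactically fine, but
    # semantically dangerous for this application.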

However, even though the validation ensured that the submitted addresses were syntactically correct, this seemingly harmless input can open up a significant security hole and enable a major phishing attack on the application.

In this application, an attacker could simply request a seemingly “trusted” email address such as hostmaster@service-name.com, and then use that email address to exploit other users on the system. As an example of how this attack could work, an attacker could request an anonymous SSL certificate from a third party (such as “Comodo Free SSL”) using such a “trusted” email address. In this example, the third party (Comodo) would validate ownership of the domain by sending an email to one of several “trusted” email addresses, and then grant the attacker an SSL certificate if he can prove that he has access to such a “trusted” email address.

The attacker can then use that certificate to create a spoofed website (in this case: www.service-name.com) and phish users’ data – and users would find it very hard to even notice the attack, because the SSL certificate from Comodo (“trusted by 99.9% of browsers”) would be valid and actually be made out to the original server name!

To prevent this form of attack, the service in question has to ensure that specific, usually “trusted” email addresses such as hostmaster@service-name.com are not available for users to register.

In the case of comodo, the list of “trusted” email addresses available to choose from for domain ownership validation includes:

  • admin@service-name.com
  • administrator@service-name.com
  • hostmaster@service-name.com
  • postmaster@service-name.com
  • webmaster@service-name.com

It is of course a really good idea to extend this list of prohibited addresses in the application with other addresses that are commonly abused for phishing (a combined check is sketched after the list), including (but not limited to):

  • support@service-name.com
  • help@service-name.com
  • sales@service-name.com
  • security@service-name.com
  • order@service-name.com
  • info@service-name.com
  • etc.
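Putting syntax validation and the denylist together, here is a minimal self-contained Python sketch (the exact set of reserved local parts should of course be tailored to the application):

    import re

    EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

    # Local parts that must never be handed out as vanity addresses: the
    # domain-validation addresses plus commonly abused role accounts.
    RESERVED_LOCAL_PARTS = {
        "admin", "administrator", "hostmaster", "postmaster", "webmaster",
        "support", "help", "sales", "security", "order", "info",
    }

    def may_register(address):
        """Reject syntactically invalid addresses and reserved local parts."""
        if EMAIL_RE.match(address) is None:
            return False
        local_part = address.rsplit("@", 1)[0].lower()
        return local_part not in RESERVED_LOCAL_PARTS

    print(may_register("alice@service-name.com"))       # True
    print(may_register("hostmaster@service-name.com"))  # False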

In this specific example, the connection between the potentially malicious user input and the usage of this input in the business logic is fairly straightforward. However, this is often not the case, which makes it all the more necessary to keep downstream components and their business logic in mind when doing input validation.

The Core of a Secure Software Development Lifecycle Model (2)

The Secure Software Development Lifecycle (SSDLC) process discussed earlier is built around a custom security policy, security standards, and security best practices, and is complemented by extensive security training and education. While the security policy is an important factor, security standards, best practices, and education are crucial to make an SSDLC program successful.

The security standards and security best practices include security-relevant government standards and regulations (e.g. NIST, HIPAA, PII regulations, …), but also established industry best practices (e.g. OWASP best practices for web application development, PCI-DSS compliance requirements for credit card payments, etc.). Some standards and best practices are fairly universal, while others may only be relevant for specific projects. As an example, a web application that processes credit card information will have to follow PCI-DSS regulations, be compliant with the relevant privacy standards, and implement a good deal of the OWASP-recommended best practices. A smartphone application without its own billing system and without any credit card payment processing, on the other hand, can skip the PCI-specific requirements.

An ongoing activity in the SSDLC is continuous security training and education, which is fundamental for a successful SSDLC program. Training and education must include all project members: developers, QA, architects, legal, project management, etc. Everyone needs tailored training to understand both how the SSDLC works and foundational concepts like secure design, threat modeling, secure coding, and security testing. Depending on the project, the training can also cover relevant standards and best practices.

Security training can come in many forms, such as instructor-led training, recorded video training, books, and on-the-job training. Once everyone in the team has reached a minimum baseline, I have found on-the-job training to be the most effective and efficient, in particular for technical staff. When someone in the team (good case) or in the public (customers, white hats, black hats – bad case) has found a security vulnerability, I recommend getting at least the entire technical team (architect, dev, QA, operations) together for a post-mortem analysis. The person who found the issue explains the problem and asks the team to create a fix and a regression test to prevent the problem from recurring. Also, the team must come up with a mitigation that operations can use to protect deployments when the latest update with the security fix cannot yet be installed.
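To make the “fix plus regression test” step concrete, here is a minimal sketch of such a regression test, reusing the vanity-address issue from the earlier post as the example (the function and its denylist are stand-ins for the patched production code):

    import unittest

    # Stand-in for the fixed registration check; in a real post-mortem
    # this would be imported from the patched code base.
    RESERVED = {"admin", "administrator", "hostmaster", "postmaster", "webmaster"}

    def may_register(address):
        local_part = address.rsplit("@", 1)[0].lower()
        return local_part not in RESERVED

    class VanityAddressRegression(unittest.TestCase):
        """Pins the fix for the "trusted address" vulnerability so the
        issue cannot silently reappear in a later release."""

        def test_reserved_addresses_stay_blocked(self):
            for local in sorted(RESERVED):
                self.assertFalse(may_register(local + "@service-name.com"))

        def test_ordinary_addresses_still_work(self):
            self.assertTrue(may_register("alice@service-name.com"))

    if __name__ == "__main__":
        unittest.main()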