OWASP SAMM 2.0 Assessment
The release of SAMM 2.0 is widely anticipated and promises to provide a method for measuring and evolving Application Security Programs with a focus on Agile and DevOps methodologies. It’s natural for many organizations to want to dive right in and start assessing to see how they would score against the latest SAMM. While the assessment process is simple and self-explanatory, there are several factors and approaches that should be considered, such as:
- Size of the organization
- Consistency of the SDLC practices
- Prior SAMM experience
While SAMM itself predominantly focuses on the assessment workflow, I wanted to offer some guidance here to help organizations avoid common pitfalls, based on my experience performing these and other assessments.
Assessment Methodology
How should you perform the assessment? Do you complete the assessment on behalf of the organization, representing all development teams and efforts? Do you complete it separately for various divisions and business units? How do you demonstrate continuity between SAMM 1.x and 2.0? These and other questions are very common, and while there is no right or wrong answer, I would like to share the approach and thought process for making these decisions.
Size of the Organization
Regardless of the size of the organization, SAMM can be used to create a global scorecard showing the maturity level of the organization’s Application Security program. However, as the size of the organization increases, creating a single SAMM scorecard requires taking increasingly greater liberties in summarizing the details. While the end result will meet the overall objectives of the assessment, the loss of detail may make it hard to move from “What?” to “Why?”. Therefore, larger organizations should consider performing the assessment at more granular levels and then using the collected data to create aggregate summaries.
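As an illustration, here is a minimal Python sketch of rolling per-team scorecards up into an organization-level summary. The team names, practices, and scores are invented; the only SAMM-specific assumption is that each security practice is scored on a 0–3 scale.

```python
# A minimal sketch of rolling granular SAMM scorecards up into an
# organization-level view. Team names, practice names, and scores are
# hypothetical; SAMM scores each security practice on a 0-3 scale.
from statistics import mean

team_scorecards = {
    "payments": {"Threat Assessment": 1.5, "Security Testing": 2.0},
    "mobile":   {"Threat Assessment": 0.5, "Security Testing": 1.0},
    "platform": {"Threat Assessment": 1.0, "Security Testing": 1.5},
}

def aggregate(scorecards):
    """Average each practice across teams while the per-team detail
    stays available for answering the 'Why?' questions."""
    practices = sorted({p for card in scorecards.values() for p in card})
    return {p: round(mean(card[p] for card in scorecards.values() if p in card), 2)
            for p in practices}

print(aggregate(team_scorecards))
# {'Security Testing': 1.5, 'Threat Assessment': 1.0}
```

The point of keeping the per-team detail alongside the aggregate is precisely to preserve the ability to answer the “Why?” questions later.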
Small organizations with a few products may safely skip the rest of this writing and simply proceed with the assessment, since they are unlikely to encounter the challenges faced by organizations with hundreds or thousands of individual applications and development teams. Larger organizations, however, should spend a little time considering the levels at which the assessment is best performed. I have performed assessments anywhere from the division level down to an individual team or application level and learned only one universal truth: it depends. The answer should be driven by the questions the SAMM assessment is meant to answer and by how remediation can best be structured and monitored.
One question that generally helps guide this consideration is: “If you were to gamify application security, which groups would be competing?”. Using metrics to create “natural forces” that encourage positive change can be a powerful tool and can reduce the need for Application Security to be singularly responsible for pushing the program forward. Representing application security with numbers highlights teams or applications that may be trailing behind, and the visible gap can push those teams to catch up to the others.
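As a trivial sketch of such a scoreboard, assuming each team’s assessment has already been reduced to a single overall score (all names and numbers below are invented):

```python
# A hypothetical gamification scoreboard: rank teams by overall SAMM score
# and show each team's gap to the leader. All names and numbers are invented.
team_totals = {"payments": 1.75, "platform": 1.25, "mobile": 0.75}

ranked = sorted(team_totals.items(), key=lambda kv: kv[1], reverse=True)
leader_score = ranked[0][1]
for rank, (team, score) in enumerate(ranked, start=1):
    print(f"{rank}. {team}: {score:.2f} (gap to leader: {leader_score - score:.2f})")
```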
SDLC Consistency
Large organizations often grow by acquisition, which significantly increases the likelihood of teams using a wide range of SDLC practices. Differences between technology stacks are another common source of inconsistency. Inconsistent SDLCs should drive the need for a more granular assessment that includes every team, while organizations with more consistent technology stacks and practices may be able to use sampling to substantiate their scores. In my practice, I have often observed that even smaller organizations, whose practices are expected to be consistent between teams, benefit from polling multiple teams.
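Where sampling is appropriate, stratifying it by technology stack helps ensure every stack is represented. A minimal sketch, with hypothetical stack and team names:

```python
# A minimal sketch of stratified sampling of teams for assessment,
# assuming each team is tagged with its technology stack (names invented).
import random

teams_by_stack = {
    "java":   ["team-a", "team-b", "team-c", "team-d"],
    "dotnet": ["team-e", "team-f"],
    "node":   ["team-g", "team-h", "team-i"],
}

def stratified_sample(groups, per_group=2):
    """Pick up to per_group teams from each stack so every stack is sampled."""
    return {stack: random.sample(members, k=min(per_group, len(members)))
            for stack, members in groups.items()}

print(stratified_sample(teams_by_stack))
```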
SAMM Familiarity
SAMM is a qualitative assessment and will always be subject to the interpretation of the person filling out the questionnaire. Therefore, an organization should consider ways to normalize how the different questions are interpreted. Reading the SAMM guidance and the guidelines provided with each question will certainly improve consistency of interpretation, but it may not be feasible to expect all responders to invest that much time voluntarily.
Prior to beginning the assessment, Application Security teams should review the questionnaire with a few Software Architects to identify opportunities to improve the interpretation of questions. Replacing generic names of committees, teams, and policies with their organization-specific equivalents is a great place to start, and further analysis may reveal other opportunities for making the questionnaire easier to answer. Some care should be exercised to ensure the modifications remain consistent with the description of the associated Activity and Maturity Level.
In all cases, some training might be appropriate prior to beginning the assessment, even if limited to a recorded webinar introducing the assessment and briefly covering the Business Functions and Security Practices. Including SAMM in existing security training will further improve the quality of the responses and help with the acceptance and adoption of the Application Security roadmap.
Interviews
Once the data has been gathered, it is advisable to conduct a series of interviews to validate responses and, more importantly, gather the context behind them. While meeting with every responding team may not be practical in larger organizations, sampling the teams will add considerable depth to aggregate SAMM scores and help identify practical steps for implementing SAMM guidance.
Qualitative assessments are prone to inaccuracies due to differences in interpretation of the guidance, prior experience, and, frankly, the time of day. However, the purpose of the interviews should not be limited to correcting these inaccuracies; they are equally valuable for understanding the reasoning behind each response.
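One practical way to choose interview topics is to look at where responses diverge the most across teams. A minimal sketch, assuming each practice has one score per responding team (all data invented):

```python
# A minimal sketch of prioritizing interview topics by response spread,
# assuming each practice has one score per responding team (data invented).
from statistics import pstdev

responses = {
    "Threat Assessment":     [0.5, 2.5, 1.0, 3.0],
    "Secure Build":          [2.0, 2.0, 2.0, 2.0],
    "Security Requirements": [1.0, 1.5, 1.0, 1.0],
}

# Practices where teams disagree most are the best candidates for follow-up.
for practice, scores in sorted(responses.items(),
                               key=lambda kv: pstdev(kv[1]), reverse=True):
    print(f"{practice}: spread={pstdev(scores):.2f}")
# Threat Assessment: spread=1.03
# Security Requirements: spread=0.22
# Secure Build: spread=0.00
```

A wide spread may indicate genuinely uneven maturity, but it may also flag a question that teams interpreted differently, which is exactly what an interview can untangle.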
Summary
Prior to beginning the assessment, organizations should develop a set of questions they would like the assessment to answer. The answers to these questions should be at the center of considerations both for implementing SAMM in the organization and for structuring the assessment.
Future posts will cover individual business functions and conclude with the various ways data can be analyzed and aggregated for executive briefings.