CSOL-560-Secure Software Design and Development, Fall 2018


I was introduced to the fundamentals of the Secure Software Development Life Cycle during the Fall 2018 semester, in the course "CSOL-560: Secure Software Design and Development" taught by Professor Cameron Carter. The focus was on developing a software system to address the existing gaps in managing supply chain risk, an important part of the equation for addressing national security and safety needs. We used two books, one of them optional:


Axelrod, C. (2012). Engineering safe and secure software systems (1st ed.). Norwood, MA: Artech House. ISBN-13: 978-1608074723
Sommerville, I. (2015). Software engineering. (10th ed.). Essex, United Kingdom: Pearson. ISBN-13: 978-0133943030

To understand this fundamental part of systems engineering practice, requirements engineering needed to be defined, and in this course we worked on a project in groups of five to six students, each with a specific role to play. The project was to develop a software system for a customer that addresses the pertinent question: Do we really know what our supply chain looks like at any given point in time?

Leveraging open source, we had to define requirements engineering and create requirements artifacts for an extensible, highly scalable, and interoperable framework. This SCRM framework, as it is called, would comprise four main components: Taxonomy Builder, Data Fusion Engine, Tracker Analytics, and Reporting & Alerts Engine. Each team had to choose one component; our group chose the Reporting & Alerts Engine.

SCRM Background

According to Wieland and Wallenburg (2012), Supply Chain Risk Management (SCRM) is "the implementation of strategies to manage both everyday and exceptional risks along the supply chain based on continuous risk assessment with the objective of reducing vulnerabilities and ensuring continuity."

A key concept in SCRM is Supply Chain Interdiction, which, according to Bell, Autry, and Griffis (2015), "refers to activities that constrain or otherwise negatively impact the resource base of a target entity such as a corporation, an entire industry, or a whole nation-state. Interdiction activities can cause degradation, disruption, destruction, or denial of access to supplies, and include counterfeit and intentionally tampered products being inserted into the supply chain."

The following are the results of our seven weeks of work, which culminated in a Test Plan with six main areas: Functional Tests, Unit Tests, Regression Tests, Verification, Validation, and Mitigation. We first put together a document describing the functional requirements.

Test Plan Outcome

Unit Tests: For each set of functions that implement our functional requirements, we wrote unit tests to ensure the function is properly implemented.

Functional Tests: Describe the functional testing that will be performed on the application.

Regression Tests: Explain how we implement regression testing to ensure that changes by one developer to one portion of the code do not introduce errors elsewhere.

Verification: Describe our process to verify the software application.

Validation: Describe our process to validate the software application.

Mitigation: Explain mitigation strategies to address any vulnerabilities or software errors that were not caught by the testing mechanisms implemented as part of the test plan.

Table 1: A partial list of Test Cases

Unit Tests


Unit testing, as applied to the reporting and alerting component of the Supply Chain Risk Management (SCRM) system, focuses on whether each action spelled out in the functional requirements artifact document is completed correctly.


Below is a proposed list of test cases with, for each one, the steps to execute, its priority, whether it is manual or automated, and its expected outcome. The picture below shows a partial list of the unit tests we developed.
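As an illustration of this style of testing, here is a minimal sketch of what one such unit test could look like in Python. The `create_alert` function, its field names, and its severity values are assumptions made for illustration, not the actual implementation:

```python
def create_alert(severity, message):
    """Hypothetical SCRM alert-creation function (assumed for illustration)."""
    if severity not in ("low", "medium", "high"):
        raise ValueError("unknown severity: %s" % severity)
    # New alerts start out unacknowledged.
    return {"severity": severity, "message": message, "acknowledged": False}


def test_alert_has_expected_fields():
    """The function should return an alert with the fields the requirement names."""
    alert = create_alert("high", "Shipment tampering detected")
    assert alert["severity"] == "high"
    assert alert["acknowledged"] is False


def test_unknown_severity_is_rejected():
    """Inputs outside the severity taxonomy should be rejected, not stored."""
    try:
        create_alert("catastrophic", "bad input")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Tests written in this plain assert-based style can be collected and run automatically by a runner such as pytest.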


Functional Tests

The purpose of SCRM Functional Testing is to validate that the software meets the functional requirements specified within the design documents. The testing will be dynamic in nature, exercising the application in a running state as opposed to reviewing the source code. Each requirement will be tested to verify that the expected output is returned for the appropriate input. For requirements that are not testable in that manner, such as usability requirements, User Acceptance Testing (UAT) will be performed. Any error conditions will be documented and reviewed for suitable error messages.

Test Environment

The test environment will reflect the current standard end-user technology portfolio. Test users will represent each department.
Platform 1: Windows 7 desktop PC, Internet Explorer 11/Firefox Browsers, Wired network
Platform 2: Windows 10 Laptop, Microsoft Edge/Google Chrome Browsers, Wireless network

Test Plan
End users and designated testers will perform the test cases defined in the Partial List of Test Cases section below. The results will be documented. Any deviations will be recorded and referred to the developers for review. For tests that involve other systems, such as API integrations, coordination will be established so that testing can be completed.

Partial List of Test Cases

Requirement: Report and Alert Interaction

Users should be able to get started interacting with reports and alerts quickly, with minimal training required.
Expected Results:
Validate that users can get started interacting with reports and alerts with minimal training.


Requirement: Help Capabilities
The system should contain “help” or documentation capabilities for end-users to supplement training and provide self-service usage assistance.
Expected Results:

End users can easily locate the help feature, navigate through the options, and search for common issues and usage instructions.

Requirement: 180-Day Retention

Alerts and reports must be available within the system for at least 180 days.
Expected Results:

While difficult to simulate directly, review the system configuration and retention parameters to establish, with a high degree of certainty, that data will be available for at least 180 days.
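A configuration review like this can be partially automated. The sketch below assumes a hypothetical configuration dictionary with `report_retention_days` and `alert_retention_days` keys; the real system's configuration format is not specified here:

```python
# The 180-day figure comes from the requirement above; the config keys are
# assumptions for illustration.
RETENTION_REQUIREMENT_DAYS = 180


def check_retention(config):
    """Return True only if every retention setting covers at least 180 days."""
    report_days = config.get("report_retention_days", 0)
    alert_days = config.get("alert_retention_days", 0)
    # A missing setting defaults to 0 days, which fails the check.
    return min(report_days, alert_days) >= RETENTION_REQUIREMENT_DAYS


sample_config = {"report_retention_days": 365, "alert_retention_days": 180}
assert check_retention(sample_config)
```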

Regression Tests

The purpose of the SCRM system regression test is to validate that the functional and non-functional tests still complete successfully after any newly developed code is introduced into the Reporting and Alerting Engine and its subsequent microservices.

System Overview
The SCRM Reporting and Alerting Engine provides a robust solution for supply chain management by coupling closely with the existing Taxonomy Builder, Fusion Engine, and Tracker Analytics systems.


The engine's capabilities leverage RESTful APIs that can be extended by third parties and end users, so that reports can be provided in any format and consumed by any existing framework used for reporting and generating alerts.
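As a sketch of how a third party might consume such an API: the base URL, the `/reports/{id}` route, and the payload fields below are all assumptions for illustration, since the actual routes are not specified here. The sketch only builds the request URL and validates a JSON payload, deliberately leaving out the network call:

```python
import json

# Hypothetical endpoint; the real deployment would supply its own base URL.
BASE_URL = "https://scrm.example.com/api/v1"


def report_url(report_id, fmt="json"):
    """Build the URL for fetching a report in the requested format."""
    return "%s/reports/%s?format=%s" % (BASE_URL, report_id, fmt)


def parse_report(payload):
    """Decode a JSON report payload, requiring the assumed 'alerts' field."""
    report = json.loads(payload)
    if "alerts" not in report:
        raise ValueError("payload missing 'alerts' field")
    return report


url = report_url("weekly-supply-risk", fmt="csv")
# -> https://scrm.example.com/api/v1/reports/weekly-supply-risk?format=csv
```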

Fig 1: SCRM Alerts Component Architecture


Golden File Concept

It is difficult to automate the comparison of test results with expected results when the test data is complex, such as the format of a report or the contents of a region of memory. To simplify the comparison, we declare the expected test results to be a "Golden" file. A "Golden" file is considered truth after it has been analyzed for adherence to the system requirements and controlled documents, and executed multiple times with the same repeatable results. Formal test procedure results are then compared to the "Golden" file, and the two must match. Golden files used to check test output are included as test inputs.
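A minimal sketch of such a golden-file comparison is shown below; a throwaway temporary file stands in for a real controlled golden file, which would live under version control alongside the test procedures:

```python
import tempfile
from pathlib import Path


def matches_golden(actual_output: str, golden_path: Path) -> bool:
    """Compare generated output against the approved "Golden" file, line by line.

    Comparing split lines tolerates a trailing-newline difference while still
    flagging any change in the report's content or layout.
    """
    golden = golden_path.read_text()
    return actual_output.splitlines() == golden.splitlines()


# Usage sketch with a throwaway golden file (contents are illustrative).
golden = Path(tempfile.mkdtemp()) / "weekly_report.golden"
golden.write_text("shipment 42: ON TIME\nshipment 43: DELAYED\n")
assert matches_golden("shipment 42: ON TIME\nshipment 43: DELAYED", golden)
```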



  • Requirements traceability: If the documentation for the Supply Chain Risk Management (SCRM) system includes requirements that in some way pertain to a particular method, then that method's test series will be tailored, as much as possible, to test for the method's compatibility with those requirements.

  • Items or components tested: It is impossible to list every method that will be tested, because not every non-trivial method will be designed or brainstormed beforehand.  Suffice it to say that a test series will be written for every non-trivial method that does not require user input, and that we will design our software in such a way that non-trivial functions are separated from user input as much as possible.

  • Testing schedule and resources: Since this form of testing will be performed as we go, we will not set up a schedule for it.  It should, however, take about as much time (and quite possibly more) to write these test series as writing the methods that they test.  The idea is that even though most of these tests will probably succeed most of the time, by automating the testing process, we can easily avoid doing a great deal more work in the long run.

  • Hardware and software requirements: COTS/FOSS software may be required to run the test cases in series.

  • Constraints: The one important exception to this form of testing is the User Interface.  It is nearly impossible to test basic user interface functionality without user interaction.  Although the GUI methods are to be kept separate from as much of the non-trivial functionality as possible, the GUI developer will need to perform some manual checking to ensure that each button performs the desired function.


Database Software

  • Features to be tested

Although this is an important component of our project, it is COTS/FOSS software, and so our testing will only involve making sure that our software can use it to perform the appropriate database functions.
Testing approach:
Since this is third-party database software, we will test our own software's integration with it, rather than the database software itself.
Pass/Fail criteria: This component will automatically "pass" if the Server, Server Software Proxy, Reporting Application, and Class Library all pass their tests that relate to the database.

Mobile Client

  • Features to be tested

At the component testing stage, the User Interface will only receive basic testing (make sure buttons work and bring up correct windows, etc.) and most of the testing will be based on the back-end methods that call the Server Procedure Proxy's methods to populate the GUI with the intended information.
Testing approach:

  • Test series will be used to test the functionality of each non-trivial back-end function.

  • Human testing will be used to test the GUI functions.

  • Because the Client's actual functionality is so dependent on all of the other components working correctly, the Client application cannot be tested as a whole until the entire system can be tested.

  • Pass/Fail criteria: If the test series all pass, and if the basic windows appear as they should when the user taps the appropriate buttons, this test is considered to pass.  If not, it fails.


  • The SDS provides more information on the Mobile Client's intended functionality.

Desktop Client

  • Features to be tested


At the component testing stage, the User Interface will only receive basic testing (make sure buttons work and bring up correct windows, etc.) and most of the testing will be based on the back-end methods that call the Server Procedure Proxy's methods to populate the GUI with the intended information.

Testing approach:

  • Test series will be used to test the functionality of each non-trivial back-end function.

  • Human testing will be used to test the GUI functions.

  • Because the Client's actual functionality is so dependent on all of the other components working correctly, the Client application cannot be tested as a whole until the entire system can be tested.

  • Pass/Fail criteria: If the test series all pass, and if the basic windows appear as they should when the user clicks the appropriate buttons, this test is considered to pass.  If not, it fails.


In the software validation process, we will evaluate the final product, i.e., the Reporting & Alerting software, to check whether it meets customer expectations and requirements.

Software Design Validation


  • Validate the number of servers required to support the volume of logs

  • Validate the software architecture design with the government agencies to avoid later problems relating to scalability or performance. Ask the agencies to provide a capacity plan that can be used as a scalability roadmap.

  • Validate the design of log aggregation points in the architecture.

  • Allow for a development manager/DB in your architecture. It is possible to crash or lag a system in the process of creating reporting & alerting content (rules, reports, etc.). Having a non-production system on which to build and test content will pay big dividends the first time something being written fails and forces a manager to restart.

  • Validate reporting & alerting engine network connectivity

  • Validate the reporting & alerting engine database

  • Determine the disk space requirements for your reporting & alerting engine database(s)

  • Train the implementation team to test the reporting and alerting engine's dashboard

Functionality Validation

  • Configure the Reporting and Alerting software engine so it can start accepting logs and alerts from third parties

  • Configure it to transmit events

  • Validate events are being received at the manager from the agents

  • Check to see all expected events are being received

  • Validate that events are being parsed and classified properly

  • Validate the Reporting and Alerting software is processing events properly

  • Validate data normalization

  • Validate correlation function

  • Validate database archiving capability

  • Validate database restore functionality
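Several of these checks, such as validating that events are parsed and normalized properly, can be scripted. The sketch below assumes a hypothetical set of required normalized fields and a hypothetical severity taxonomy; the real schema is not specified here:

```python
# Assumed normalized-event schema, for illustration only.
REQUIRED_FIELDS = ("timestamp", "source", "event_type", "severity")
SEVERITY_TAXONOMY = ("low", "medium", "high")


def validate_normalized_event(event):
    """Return a list of problems found in a normalized event (empty = valid)."""
    problems = [field for field in REQUIRED_FIELDS if field not in event]
    severity = event.get("severity")
    # Only check the taxonomy when the field is present; a missing field is
    # already reported above.
    if severity is not None and severity not in SEVERITY_TAXONOMY:
        problems.append("severity not in taxonomy")
    return problems
```

Running a validator like this over a sample of events received from the agents gives a concrete pass/fail signal for the parsing and normalization checks above.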

Deployment Validation

  • Provide access to test user community

  • Have these users validate content suitability for their assigned roles

  • Build production accounts for user community

  • Build accounts with appropriate rights

  • Disseminate user accounts and software

  • Train end users and SOC personnel in Security Operations Centers on the various functions.

  • Migrate business processes to this new Reporting & Alert environment

  •  Integrate into the CSIRT/Incident Handling process

  • Educate internal groups on capabilities and limitations of the Reporting & Alerting software product; this can include Audit, Management, and especially Legal.

  • If possible (based on regulatory and legal requirements), develop whitelist-based filtering to prevent your database from filling up with useless events. While you would like to have every log at your fingertips, the cost in storage and bandwidth can be exorbitant. Determine some local trade-offs and filter at the log collection point to reduce overhead.
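A whitelist filter at the collection point can be as simple as the following sketch; the event types and the `event_type` field are assumptions for illustration, not the system's actual taxonomy:

```python
# Hypothetical whitelist of event types worth storing; everything else is
# dropped at the collection point before it reaches the database.
ALLOWED_EVENT_TYPES = {"shipment_update", "customs_hold", "tamper_alert"}


def filter_events(events):
    """Keep only events whose type is on the whitelist."""
    return [e for e in events if e.get("event_type") in ALLOWED_EVENT_TYPES]


incoming = [
    {"event_type": "tamper_alert", "source": "port-7"},
    {"event_type": "heartbeat", "source": "agent-3"},  # noise, filtered out
]
kept = filter_events(incoming)  # only the tamper_alert survives
```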

User Experience Validation

  • Provide test accounts to partners who will be using the system, then give them a playbook on how to use the dashboard for viewing threat information

  • Let them write rules and observe logs and alerts

  • Collect feedback from partners and government agency end users to see if there is room for improvement in terms of user experience




Though not every vulnerability or software error will be identified through our testing processes, we aim to mitigate the risk of these issues through proper planning and execution. By utilizing a risk management process and life-cycle testing of our software, the overall risk of major vulnerabilities or code errors will be significantly reduced.

Compared to a traditional waterfall software project, scope on our agile project is not rigidly managed but is left flexible.  The key determinant of success on our project is customer satisfaction. 


Our software development teams embrace change and understand that requirements will evolve throughout a project, which is why agile methodologies allow requirements to be defined iteratively in the product backlog.  We utilize the Scrum methodology, where the product backlog is an ordered list of requirements that the Scrum team maintains for each product.


The backlog changes as business conditions change, technology evolves, or new requirements are defined.  Continuous customer involvement is necessary on agile projects since the customer must prioritize the requirements and make the final decision about which ones will be addressed in each new iteration.

Fig. 2: Risk Management Process


Agile Strategy for Managing Bugs


There are two general strategies for managing software bugs on an agile project.  When a bug is detected, the first order of business is to try to determine how critical it is and what impact it will have on the functionality of the application or the entire system. Our generally accepted taxonomy has the following severity levels:


Severity 1:  An error that prevents the accomplishment of an operational or mission-essential function, prevents the operator/user from performing a mission-essential function, or jeopardizes personnel safety.


Severity 2:  An error that adversely affects the accomplishment of an operational or mission-essential function and for which no acceptable alternative workarounds are available.


Severity 3:  An error that adversely affects the accomplishment of an operational or mission-essential function for which acceptable alternative workarounds are available.


Severity 4:  An error that is an operator/user inconvenience but does not affect operational or mission-essential functions.

Severity 5:  All other errors.

Critical bugs, or showstoppers as they are often called, are so severe that they prevent us from further testing.  But a critical bug that, for example, causes an application to crash might be a low priority if it happens very rarely.  Priority for fixing bugs should be based on the risk potential of the bug.


On fast-paced agile projects, bug fixes for low-severity bugs often get low priority and are usually only scheduled when time is available.  Risk-based software testing looks at two factors: the probability of the bug occurring and the impact of the bug when it occurs.


High-impact/high-probability bug fixes should be scheduled first: for example, a bug in a particular module of code for an online shopping cart algorithm that keeps your business from processing transactions.  On the other hand, a bug that introduces a very slight rounding error in that same transaction should have a lower priority.



As shown in figure 3, we utilize a risk matrix for measuring the severity of the bug or vulnerability and address each one as applicable.

Fig 3: Risk Matrix
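One way to turn such a risk matrix into a repeatable triage rule is sketched below. The 1-5 rating scale and the priority bands are assumptions for illustration, not the project's actual thresholds:

```python
def risk_score(probability, impact):
    """Combine probability and impact ratings (1-5 each) into a risk score."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be between 1 and 5")
    return probability * impact


def priority(probability, impact):
    """Map a risk score onto a fix-priority band (bands are illustrative)."""
    score = risk_score(probability, impact)
    if score >= 15:
        return "fix first"
    if score >= 8:
        return "schedule soon"
    return "fix when time is available"
```

Under this rule the shopping-cart transaction bug above (high impact, high probability) is fixed first, while the rare rounding error falls into the lowest band.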

A Second Agile Strategy for Managing Bugs


The second general strategy for managing software bugs on agile projects is to avoid them in the first place.  There is a school of thought that says that a problem caught in development is not a bug since the software is still in the development phase.


Fig. 4: Agile Projects

Agile is all about short, flexible development cycles that respond quickly to customer demand.  Continuous Delivery (CD) is a software development strategy that optimizes our delivery process to get high-quality software into the hands of our customers as quickly as possible. The notion of releasing a prototype, or minimum viable product (MVP), is crucial for getting early feedback.


Once the MVP is released, we’re then able to get feedback by tracking usage patterns, which is a way to test a product hypothesis with minimal resources right away. Every release going forward can then be measured for how well it converts into the user behaviors we want the release to achieve.


The concept of a baseline MVP product that contains just enough features to solve a specific business problem will also reduce wasted engineering hours and the tendency toward feature creep that often leads to buggy software.


Agile project management accommodates change as the need arises, and scope, schedule, and/or cost vary as required.  This flexibility comes because agile teams engage with customers and other stakeholders throughout the project (to do things like prioritizing bug fixes and enhancements in the team's backlog), not just at the end of the project.

On agile projects, quality comes through collaboration.  Effective collaboration is vital for prioritizing bugs in the risk-based software testing approach described above, as well as throughout the entire Continuous Delivery process. An agile team that uses a test management tool that allows them to work collaboratively in real-time will also be able to recognize defects in their products and mitigate them more quickly.  This shortens development and testing cycles, improves team efficiency, and reduces the time it takes the team to bring high-quality software to market.

Reference Library

  • Wieland, A., & Wallenburg, C. M. (2012). "Dealing with supply-chain risks: Linking risk management practices and strategies to performance." International Journal of Physical Distribution & Logistics Management, 42(10).

  • Bell, E. J., Autry, C. W., & Griffis, S. E. (2015). "Supply Chain Interdiction as a Competitive Weapon." Transportation Journal, 54(1).



I did have some background in software design and development, acquired during my undergraduate studies. But I did not realize, until after this course, that supply chain risk management is an area of great concern in the context of national security and business survival. This is reinforced all the more in today's global economy. Moving goods from point A to point B globally is a huge undertaking, as it carries risks of all sorts since it may involve one to several suppliers. Besides risks of disruption caused by world events, natural or human-made incidents, and political or trade tensions, bad actors are never far away, ready to exploit any vulnerabilities they can unearth.

With this course, I realized how professionally and ethically involved people must be to do their due diligence. The professionalism and ethical character expected of cybersecurity professionals cannot be overstated when facing the supply chain risk management equation. It is good that this course has provided me with that awareness. If a distribution channel is taken over by hackers, or if a cyber professional betrays the trust given to them to ensure safety, the consequences are enormous. Anything can be injected into the supply chain by bad actors. If the application used to track goods as they transition from airplanes to ships, from ships to land, and from land back to airplanes lacks the right security mechanisms (a failing alert system; confidentiality, integrity, and availability not guaranteed), the whole business operations ecosystem is in jeopardy.
The stakes of Supply Chain Risk Management have reminded me of the professionalism its actors must bring to ensure proper operations. Cybersecurity actors must not only do the job as required but must constantly ask themselves whether the job is being done ethically, especially when faced with dilemmas that arise outside established security norms. While organizations may have made every effort to put in place security policies they believe will address the most common risks, what about situations involving risks not spelled out in the actual processes? This is where the professionalism of those charged with protecting the supply chain, and their ethical values, come into play. So, cyber professionals in the context of supply chain risk management should go beyond the routine and constantly adapt to change as they embark on the journey of risk identification, risk assessment, and risk mitigation.
How can a business, for example, keep trusting that a supplier's technology is current and will not fall victim to a cyber attack? And if it does, what are the consequences for business operations, and how can those risks be mitigated? If the business policy does not have alternative backup solutions documented somewhere, should that not be suggested as part of risk mitigation? Should the business not inquire about the software the supplier is using, and check it for robustness, or for bugs that can be easily exploited?

Through this course, coupled with the need to do more research to respond to the project prompts, I have learned that Splunk, too, can be used as a tool to reach business operations monitoring goals. Splunk is a solution organizations can leverage to meet monitoring objectives as suggested in the NIST Cybersecurity Framework. With Splunk, an organization can gain visibility into its supply chain at different phases as goods move through it; events can be monitored continuously and metrics established in real time, which helps in making fast and informed decisions. Splunk has many other applications for an organization in the reporting aspect of supply-chain-related business operations.
