
SDLC – Managing risk in Software through the compounding effect of control gates

November 6, 2025 by redpointsec

By Cameron White

[Image: “Civil service police exam hurdle test, 1942” by Seattle Municipal Archives, licensed under CC BY 2.0.]

If you’ve ever watched someone run the hurdles at a track meet, you may share my amazement at how consistently they clear each barrier at speed while the pressure to perform is on. The compounding exertion of clearing hurdle after hurdle is easy to imagine, and if you ever try it yourself, the effort can be surprising.

Maybe that is one of the reasons I’ve always liked the hurdles analogy when discussing quality gates or security controls. It is easy to imagine a quality defect or security bug sailing over the first one or two hurdles, but it is far rarer for a bug to successfully navigate every control gate when the gates are properly aligned in support of the final product. Many industries take advantage of the compounding effect of control gates; borrowing from my own experience, I’ve seen them heavily utilized in both the food manufacturing and software development industries.

Path to Controlled Product Release

In the food manufacturing world, there is no tolerance for safety defects. Minor quality issues do crop up from time to time, but for the most part manufacturers aim to maximize the amount of high-quality product produced with the resources at hand. There can be tremendous pressure on a production team to produce the maximum amount of product with the least downtime. Profit margins on food products can be incredibly narrow (sometimes pennies on the dollar), and a business often stays afloat only because of the sheer volume of product it produces. In that context, it is interesting that food manufacturers speak of microbial load and critical control points in much the same way that security and software development teams talk about tech/security debt and stop gates.

Because of these similarities, I think it’s worth looking at the theory behind some of the tolerances food product manufacturers build into their processes, and at how these controls are designed to support the flow toward eventual product release.

In food manufacturing, controls are built out starting with a description of the end product. Yes, there are common controls to expect, but they are tweaked and designed to align with the process that produces that end product. Can someone say Threat Modeling?1 These controls are then strategically inserted into the production flow so they occur at the most efficient junctures. Of note, no single control is designed to cover all risks; they are meant to overlap and back one another up where possible. It is the cumulative effect of these controls that gives the production team assurance the finished product won’t make anyone sick and that it merits the customer’s trust.

For food companies, the risk of a recall is also significant. As mentioned, the margins on some products are so tight that having to recall everything produced during a specific shift or from a given location can be enough to put a company deep in the red. In software development, the risks are less likely to affect anyone’s health directly and are usually limited to the ramifications of important services going down or the loss or exposure of sensitive data. That said, the financial implications of services going down or sensitive information ending up in the wrong hands can be very steep.

And similar to the pressure the food industry navigates when scaling safety and quality controls across large quantities of product, there is so much code being written these days that the resources for testing and securing each line of code must be thoughtfully scaled. Accordingly, it is through a systematic approach to the “defense in depth” mantra championed by security teams that information security teams can gain similar confidence that their systems are resilient enough.

Compounding Effect of Security Gates

Going back to the sprinter/hurdler we mentioned at the beginning, this cumulative yet logarithmic effect can be physically experienced if you have ever tried to clear a series of hurdles while running down a stretch of track. The first hurdle or two can feel like quite an adjustment compared to the stride you had while simply running, but, notably, once you’ve begun to master the technique, ten hurdles don’t feel much harder than seven.

In manufacturing, this effect is expressed as a logarithmic curve (think of the inverse of an exponential curve).

This is because the initial controls have the highest impact on the overall outcome. It is the additional controls, however, that help ensure there are no gaps even when a single control fails; the added coverage reduces the likelihood of a single point of failure producing an unsafe or unfit product.
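To make that compounding effect concrete, here is a minimal sketch in Python. The per-gate detection rates are illustrative assumptions, not measured data; the point is how the residual chance of a defect escaping shrinks with each gate, and how the marginal gain of each additional gate falls off.

    def escape_probability(detection_rates):
        """Probability that a single defect slips past every gate,
        assuming each gate catches defects independently."""
        p = 1.0
        for rate in detection_rates:
            p *= (1.0 - rate)
        return p

    # Illustrative detection rates only, e.g. design review, code review,
    # SAST, DAST, and penetration testing.
    gates = [0.60, 0.50, 0.40, 0.30, 0.20]

    for n in range(1, len(gates) + 1):
        escaped = escape_probability(gates[:n])
        print(f"{n} gate(s): {1 - escaped:.1%} of defects caught, {escaped:.1%} escape")

Running this shows the first gate removing the largest share of risk and each later gate adding a smaller, but still meaningful, slice of coverage, which is exactly the logarithmic shape just described.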

These diminishing returns mean there is almost universal agreement on the value of implementing a few carefully thought-out controls for any process, but as the number of controls increases, each additional one becomes harder to justify. New controls are not all equally impactful in preventing the bug, defect, or contaminant. The risk management side of information security becomes especially prominent when discussing the ROI of various security controls with the company’s CFO. How then do we know whether we have enough controls in place to cover the gaps? This is how we get to, as Kelly Handerhan is fond of saying, “secure enough”.

Which Security Gates to Choose?

Which controls offer the highest impact? Which controls can most easily be validated? The easier it is to validate that a control is working properly, the less need there is to devote additional resources to preparing for the moment that control fails. What if the risk commonly associated with missing a vulnerability through a false negative could be made entirely tolerable by focused effort on ensuring the controls you already have in place are working? I would suggest that, beyond the obvious compliance check a penetration test can provide, the intrinsic value of security assessments largely comes from the assurance they offer on whether the controls in place are working as desired.

Identifying which controls to implement for a code-delivery pipeline can present quite an array of options, but building with the end in mind can narrow the scope considerably. Consider the following questions as a starting point; a rough sketch of how the resulting gates might be strung together follows the list:

  • What are my software assets?
  • Which team owns or is responsible for that asset?
  • How does code for that asset get promoted to the production environment?
  • Who will be using this solution? And what are their security concerns/requirements?
  • What data types and corresponding regulatory requirements do I need to consider?
  • What steps does every code change go through before reaching the production environment?
  • Where are the natural pauses or handoffs during the code-promotion flow?
  • What security/quality controls do I have in my toolbox?
  • Which controls are so critical that if they failed it would destroy trust in the product? 
  • How are those critical controls validated and regularly re-validated?
  • Which controls support or provide similar/redundant coverage to those critical controls? 
[Image: SDLC process which foregrounds feedback loops and iterative processes]
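As a rough illustration of how the answers to those questions translate into a pipeline, the sketch below strings a handful of gates into an ordered promotion check. The gate functions are placeholders of my own invention, not any particular tool’s API; in a real pipeline each one would wrap your actual scanner, audit tool, or test suite, and the whole flow would more likely live in your CI system than in a standalone script.

    import sys

    # Placeholder checks (assumptions): each would wrap a real tool in practice.
    def secret_scan() -> bool:        # e.g. your secrets scanner
        return True

    def static_analysis() -> bool:    # e.g. your SAST tool
        return True

    def dependency_audit() -> bool:   # e.g. your SCA / dependency checker
        return True

    def abuse_case_tests() -> bool:   # e.g. security-focused unit/integration tests
        return True

    GATES = [
        ("secret scan", secret_scan),
        ("static analysis", static_analysis),
        ("dependency audit", dependency_audit),
        ("abuse-case tests", abuse_case_tests),
    ]

    def run_gates() -> None:
        """Run each gate in order; any single failure blocks promotion."""
        for name, check in GATES:
            if not check():
                print(f"Gate failed: {name} -- promotion blocked")
                sys.exit(1)
            print(f"Gate passed: {name}")
        print("All gates passed -- change may be promoted")

    if __name__ == "__main__":
        run_gates()

Note that no single gate is expected to catch everything; the value comes from the ordered, overlapping coverage, just as with the controls on a production line.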

The Case for Process Validation

With the answers to these questions in mind, it becomes possible to draft a minimum security checklist and map where these controls, or “checks,” fit in the software deployment workflow. And logically, if we want assurance that this process is working, we need to test it: send well-defined security bugs through the process and confirm whether our checks catch anything that falls outside the tolerances we have specified.

[Image: Example of an SDLC in Waterfall with locations for stop gates]

I hope you’ll bear with one last anecdote from food-manufacturing workflows. At large facilities, it is not uncommon for food manufacturers to set up metal detectors or even x-ray machines near the end of the production line as a final safety precaution, ensuring no stray bolts, metal filings from the machinery, or any other artifacts from the production facility have inadvertently wound up in the product. That sounds very robust, but it is worth noting that, because the demand on these machines is intense day in and day out, quality assurance teams perform tests periodically throughout each production shift to validate and confirm the machines are working properly. They do this by passing samples of steel, iron, or plastic pellets of specific, varying sizes through the machine and documenting whether the machine detected or failed to detect each artifact.

They may have three to five sample sizes, and as long as the machine successfully detects all of them, the machine is considered operational. However, if the machine ever fails one of these checks, the time of the failed check is noted, and the tester looks back through the documentation to find when the last successful test occurred. Any product checked by that machine since that last successful test is quarantined until it can be re-scanned by a machine that is properly calibrated and performing within specifications. This level of validation, although mostly unknown to the general public, has led to such an absence of these contaminants in finished products that the public feels comfortable consuming their next frozen meal from the local grocery store.
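That lookback rule translates directly to a code pipeline: if a security scanner’s own validation check fails, every change it cleared since its last successful check is suspect. Here is a minimal sketch of the rule in Python; the timestamps and build names are hypothetical.

    from datetime import datetime

    # Scanner self-checks (e.g. a known-bad canary sample): (time, passed?)
    validation_checks = [
        (datetime(2025, 11, 6, 8, 0),  True),
        (datetime(2025, 11, 6, 12, 0), True),
        (datetime(2025, 11, 6, 16, 0), False),   # the scanner missed the canary
    ]

    # Builds the scanner waved through, with the time each was scanned.
    builds_cleared = [
        ("build-101", datetime(2025, 11, 6, 9, 30)),
        ("build-102", datetime(2025, 11, 6, 13, 15)),
        ("build-103", datetime(2025, 11, 6, 15, 45)),
    ]

    def builds_to_rescan(checks, builds):
        """Return the builds cleared after the last successful self-check
        preceding a failure; those results can no longer be trusted."""
        failures = [t for t, ok in checks if not ok]
        if not failures:
            return []
        first_failure = min(failures)
        last_good = max((t for t, ok in checks if ok and t < first_failure),
                        default=datetime.min)
        return [name for name, scanned_at in builds if scanned_at > last_good]

    print(builds_to_rescan(validation_checks, builds_cleared))
    # ['build-102', 'build-103'] -- everything since the last good check gets another pass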

This same way of thinking (recognizing that not all security controls are equally impactful) must be applied to our most critical software security controls. If validations are synonymous with annual audits at your organization, I would suggest that means your organization is not performing validations but may have unintentionally conflated its compliance program with its validation process.  

What does this look like in practice? It depends on the control, but, similar to the use of an EICAR anti-virus test file, it may be as simple as having the development and security teams partner up to insert a well-defined vulnerable chunk of code and track it through the deployment flow. You may be surprised by what you learn from such an exercise.
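As one possible shape for that exercise (a sketch only; the canary file path and the findings format are assumptions rather than any specific tool’s output), the pipeline can keep a deliberately vulnerable file in a known location and fail loudly whenever the static analysis gate stops reporting it:

    import json
    import sys

    # Hypothetical location of the deliberately vulnerable canary snippet.
    CANARY_FILE = "tests/canary/deliberately_vulnerable.py"

    def scanner_still_catches_canary(findings_path: str) -> bool:
        """Return True if the scanner's report contains at least one finding
        in the canary file. The report is assumed to be a JSON list of
        records with a 'path' field; adapt this to your tool's format."""
        with open(findings_path) as f:
            findings = json.load(f)
        return any(item.get("path") == CANARY_FILE for item in findings)

    if __name__ == "__main__":
        report = sys.argv[1] if len(sys.argv) > 1 else "scan-results.json"
        if not scanner_still_catches_canary(report):
            print("Canary vulnerability NOT detected -- the SAST gate cannot be trusted")
            sys.exit(1)
        print("Canary vulnerability detected -- the SAST gate is working")

If the canary ever stops being flagged, the lookback rule from the earlier sketch tells you which changes were cleared while the gate was broken and deserve another look.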

Scaling Code Security

The amount of code produced each year has steadily increased, and with the advent of AI agent coding it is expected to shoot into the stratosphere. At this scale, a coordinated, synergistic set of security controls that can give customers assurance becomes a business imperative.

Historically, food manufacturing became ubiquitous following the industrial revolution. With increased urbanization, people became disconnected from their food sources, which increased the need for non-perishable, mass-produced food. Safety problems and the general public’s loss of visibility into how food was produced meant the public lost trust in mass-produced food. That lack of trust in food-production processes led to the formation of the FDA, and Title 21 of the CFR was written to legally mandate many quality and safety checks.

Common Minimum Security Requirements for Software

There are a number of security frameworks you can lean on or refer to when identifying the most appropriate security controls for your organization’s or team’s code-delivery pipeline. Some highlights include:

  • https://mvsp.dev/
  • https://owaspsamm.org/ 
  • https://owasp.org/www-project-application-security-verification-standard/ 
  • https://owasp.org/www-project-top-ten/ 
  • https://www.redhat.com/en/topics/devops/what-is-devsecops 
  • https://www.iso.org/standard/27001
  • https://www.cisecurity.org/controls
  • https://www.nist.gov/cyberframework

From these you could distill a short list of must-have controls, but we also want to open the question up to our consultant friends, appsec program managers, and you, our learned readers:

  • What are your tests? 
  • When juggling the necessity to ship code at high velocity and meet customer/client’s expectations of quality & security, what series of tests have you found helpful in your experience? 
  • Do you have a short list of must haves that get you by in a pinch?
  • Have you discovered a way to prevent metaphorical steel pieces getting into your code pipelines?

We’d love to hear your suggestions! Or, if you’d like us to help implement a more secure process for your organization, use the contact form below to reach someone at Redpoint.

  1. Software Threat Modeling is a structured approach to identify potential threats and vulnerabilities in a software system. It involves gathering context by analyzing the system’s design and functionality to understand how attackers might compromise it and then developing strategies to mitigate those risks. ↩︎

Filed Under: Appsec, Code Security, Ransomware, SDLC, Secure by Default
