AppSec isn’t easy. So many different perspectives. In a microservice architecture, systems become polyglot: Go, Rust, Java, Kotlin, JavaScript… data, data, data...


In compliance-focused security programs, handling so much data is a problem. What is good enough, and how do you set a baseline?

When I was a developer, I didn’t like to touch security fixes… but it’s necessary. Especially these days.

Minimum Viable Secure Product baseline

As an initial baseline, I suggest MVSP for small services, unless you have selected different requirements. As a starter:

...

As a starting point for supply-chain security / Software Bill of Materials / Dependabot, the relevant MVSP control reads:

2.6 Dependency Patching

What this control measures: Processes are in place to identify and maintain up-to-date components within your product and/or service. Vulnerabilities that are known to be exploited are appropriately prioritized.


Why this control is important: Applying security patches in common applications and libraries is an important step to securing your infrastructure and application. Processes to deploy these fixes within a reasonable timeframe ensure targeted attacks exploiting these vulnerabilities do not affect the security of your product or data.

In cases where an application relies on a library with known vulnerabilities, ensuring the library is regularly patched also guarantees the application keeps pace with changes in the library. This reduces the chance of an urgent patch breaking application functionality due to a large jump in version.

Libraries or application versions marked as end-of-life should be considered as unpatched as they are no longer receiving security fixes.

Regular vulnerability scanning allows you to easily identify new vulnerabilities, as well as monitor where existing patches have not yet been fully implemented.

It’s a short guideline, and it’s easy to read.

Implementing a score for Dependabot and CodeQL

In the implementation, it’s possible to pass a severity_df Series / pivot table (alert counts per severity) to the scoring function set_score.

Dependabot

CodeQL

Both CodeQL and Dependabot results can be Extracted, Transformed and Loaded (ETL). You can use Pandas, Excel, Elasticsearch, etc. Once you export the data as CSV, JSON, XML… it’s your own workflow.

The scoring uses the severity keywords from “critical” down to “low”, depending on the classification. GitHub assigns this classification to every report, both via the API and in the security alert that resides with the repository.
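
To get the severity counts into such a Series in the first place, a minimal ETL sketch could query the Dependabot alerts REST endpoint and count open alerts per severity with Pandas. The helper name fetch_severity_counts, the single-page request (no pagination) and the omitted error handling are assumptions for illustration, not the original implementation:

Code Block (Python)
import pandas as pd
import requests


def fetch_severity_counts(owner: str, repo: str, token: str) -> pd.Series:
    """Extract open Dependabot alerts and count them per advisory severity (hypothetical helper)."""
    url = f"https://api.github.com/repos/{owner}/{repo}/dependabot/alerts"
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {token}",
    }
    # Extract: one page of open alerts (pagination omitted for brevity)
    resp = requests.get(url, headers=headers, params={"state": "open", "per_page": 100})
    resp.raise_for_status()

    # Transform: flatten the nested JSON and count alerts per severity keyword
    alerts = pd.json_normalize(resp.json())
    if alerts.empty:
        return pd.Series(dtype=int)
    return alerts["security_advisory.severity"].value_counts()

The resulting Series is then handed to set_score: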

Code Block (Python)
import sys

import pandas as pd


def set_score(severity_df: pd.Series):
    """
    Helper function to calculate the score metrics
    :param severity_df: Series with alert counts per severity based on GH Sec issues
    :return: malus (how much to deduct from the optimal start value)
    """

    malus = 0

    # iterate over severity -> count pairs (Series.items(); iteritems() was removed in pandas 2.x)
    for key, value in severity_df.items():
        key = str(key).lower()

        # structural pattern matching requires Python 3.10+
        match key:
            case "critical":
                malus = malus + 4 * value
            case "high":
                malus = malus + 3 * value
            case "moderate" | "medium":
                malus = malus + 2 * value
            case "low":
                malus = malus + 1 * value
            case _:
                sys.exit("Unknown key: " + key)

    return malus

The report then shows, for example, 100 - malus as the score.
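
As a usage sketch (with made-up alert counts), the severity Series goes into set_score and the resulting malus is deducted from the optimal start value of 100:

Code Block (Python)
import pandas as pd

# hypothetical alert counts per severity
severity_counts = pd.Series({"critical": 1, "high": 2, "medium": 3, "low": 5})

score = 100 - set_score(severity_counts)
print(score)  # 100 - (4*1 + 3*2 + 2*3 + 1*5) = 79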

Limitations of the approach and of GH Sec

  • There are more sophisticated scoring approaches.

    • Veracode has a specific application profile setting based on the use case of the product, risks, architecture etc.

    • GitHub Security doesn’t even distinguish between a NodeJS frontend (Angular) and a backend project. At a certain size and complexity of the service platform, you will need different approaches.

      • But you can still start this way and expand later.

  • With GitHub Security, application profiles for PCI DSS or other standards cannot be defined (unless you’re at SAQ-A level, where this can suffice).

    • you can read the standards and document your decisions (in the tickets)

      • GH Sec is too generic and QA-focused for PSP or critical infrastructure security (imho)

  • It’s too difficult (cumbersome) to define CI/CD build gates, for example if you want to block deployments whose score violates the baseline AppSec score (a rudimentary gate sketch follows after this list).

    • the AppSec control here is metrics-based, not preventive.

    • it’s focused on reporting
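
If you still want a rudimentary gate on top of the reporting, a minimal sketch (the baseline value and the helper name enforce_gate are assumptions, not a GitHub feature) could simply compare the score against a baseline and fail the pipeline:

Code Block (Python)
import sys

BASELINE = 80  # assumed minimum acceptable AppSec score


def enforce_gate(score: int, baseline: int = BASELINE) -> None:
    """Fail the CI/CD job with a non-zero exit code when the score drops below the baseline."""
    if score < baseline:
        sys.exit(f"AppSec score {score} is below the baseline of {baseline}")
    print(f"AppSec score {score} meets the baseline of {baseline}")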