Why Advanced Security?

Introduction:

It has been seven months since I first joined GitHub (wow, time goes quickly), specifically the advanced security team. Those who have worked with me before likely know I'm incredibly passionate about what I do. I strive to work in an environment, and within a team, that contributes meaningful work that makes a difference. That is one of the reasons I jumped at joining GitHub when I had the chance; I am a massive believer in developer experience and DevSecOps, so why not join the home of developers, where I can hopefully help make that difference at a broader level?

Over these past seven months, I have seen first-hand some of the decision-making criteria and processes that influence which security tool a company moves forward with. There are no right or wrong approaches to picking a security tool. However, there are vital considerations every company should keep in mind, especially if you are choosing a tool that will change a developer's experience and may impact their productivity.

As a member of the advanced security team, I wanted to note six thoughts on how advanced security can strategically bring value to companies from a slightly different angle than expected.

Diversifying your toolset, centralizing the experience:

Companies commonly want to diversify their DevOps toolset. Doing so provides advantages: no vendor lock-in, and the freedom to pick best-in-class tools. In the world of security, this is especially important. Nowadays, you likely need security coverage for SCA, SAST, IaC, containers and possibly DAST. Realistically, you are not going to find one tool that does all of these, and if you do, will it provide the depth and accuracy you would expect? So, let's say you use one tool for SCA, one for SAST and one for IaC. A developer now has to check three different tools to get the data they need to make good security decisions. Yes, you may surface some basic results in CI and maybe back in the GitHub pull request. Still, they need to context switch between GitHub and the three security tools to see the details (to determine whether a result is a false positive, or to get further information about the vulnerability). For a developer, this is incredibly frustrating and hurts productivity. Developers live within their IDE and GitHub (the two places I commonly live when writing code); this is where I get the most work done and where I am happiest. So, how do you provide the best experience for your developers whilst still using the tools you want to use?

GitHub Code Scanning (a feature of advanced security) provides the experience around fixing vulnerabilities: status checks in pull requests, in-line annotations in pull requests, a description of the vulnerability, data flow, and much more. The value of Code Scanning is that it's 100% language and tool agnostic. When you upload data, the only requirement is that it must be in SARIF (the Static Analysis Results Interchange Format, a standardized JSON format for static analysis results). This means, for example, you could use CodeQL for SAST, Twistlock for container scanning and Snyk for SCA, run them all as part of your CI process and upload the results to Code Scanning. A developer would now see results from all tools and get a consistent experience! No more context switching between various tools to get the details. Everything a developer needs is now directly in GitHub, even though you aren't using GitHub tools to produce all the data.
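
To make the "SARIF in, alerts out" flow concrete, here is a small Python sketch of preparing an upload for GitHub's SARIF endpoint, which expects the report gzip-compressed and base64-encoded. The SARIF document, commit placeholder and tool name below are invented for illustration; in CI, the CodeQL action or your own script would do this for you.

```python
import base64
import gzip
import json

def encode_sarif(sarif_dict: dict) -> str:
    """The SARIF upload endpoint expects the report gzip-compressed
    and then base64-encoded."""
    raw = json.dumps(sarif_dict).encode("utf-8")
    return base64.b64encode(gzip.compress(raw)).decode("ascii")

# A minimal, hypothetical SARIF report -- any tool that emits SARIF works.
sarif = {
    "version": "2.1.0",
    "runs": [{"tool": {"driver": {"name": "example-scanner"}}, "results": []}],
}

payload = {
    "commit_sha": "<sha of the analysed commit>",
    "ref": "refs/heads/main",
    "sarif": encode_sarif(sarif),
}
# POST this payload (with an auth token) to
#   /repos/{owner}/{repo}/code-scanning/sarifs
# and the results appear in Code Scanning alongside every other tool's.
```

Because every tool funnels through the same format and endpoint, the developer-facing experience stays identical no matter which scanner produced the data.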

There are a few other benefits beyond an improved developer experience and a diversified toolset. One is flexibility: let's say in eight months a new IaC security tool comes out that you want to start using; you just plug it into your CI process and upload the results to Code Scanning, as you would for any other tool. To a developer, 1) they wouldn't even really know a new tool has been added, they just see alerts that need reviewing, and 2) the alerts look the same as the other alerts they have been fixing for a while now. Because they are used to fixing previous alerts, they will be "used" to fixing alerts from this new IaC tool, which leads to remarkably high adoption rates. This is why it's so important to provide a consistent experience across tools.

Another benefit is that because the data is now all in GitHub, where the developers live, they will be more likely to properly review the data coming from these tools rather than just skimming through it. There is less friction in seeing results in Code Scanning, which means developers are more likely to take them seriously. With Code Scanning, you are truly making security a first-class citizen of the developer workflow, where developers unconsciously fix alerts as they write code.

Regaining confidence in the developer community with SAST tooling:

I don't think it's a secret that developers traditionally don't like SAST tools. There are multiple reasons for this. Two main ones are that 1) developers are told about SAST results just before going to production, and 2) the results they get arrive as a 23-page PDF or a web page with 500+ alerts that THEY are told THEY need to review and fix. For the sake of this section, I will focus on the latter: the number of results.

Developers lose trust in SAST tools because they produce so many results, with a large proportion realistically being false positives or quality-focused rather than security-focused. Out of 500 alerts, let's say only 20 may be worth addressing, meaning a developer has likely wasted two hours reviewing 480 unnecessary results. After about five iterations of this over a month, a developer will utterly lose confidence in the tool. Instead of properly looking through the results, they will skim them and most likely miss something important that they wouldn't have missed if they had more confidence in the data.

This is where CodeQL comes into play. CodeQL is a semantic code analysis engine that treats your code as data. Vulnerabilities are modelled as queries and executed against that data (built as a database during a CI run). Because CodeQL has this "built" version of your code represented as structured data, the queries run against it (when well written, of course) can be incredibly accurate and precise in the results they return. For more details on how CodeQL builds the database, check out this blog post by a colleague, Nick Rolfe: Code scanning and Ruby: turning source code into a queryable database. Although Ruby is in the name, the process is similar for other languages. All of this means developers are less likely to see false positives; therefore, when a result (or results) is found, developers will trust it more and review it appropriately, hopefully leading to meaningful action.

Combine the above with the three query suites you get out of the box with CodeQL per language. Running the same SAST scan on high-risk and low-risk applications is a legacy approach; they have two completely different risk profiles, so why would a developer want to be notified about alerts they would only tolerate for higher-risk applications? With CodeQL, you can configure which query suite to run at the repository level. On a low to medium-risk application, where your acceptance of false positives is close to 0%, you can run the standard security query suite. For medium to high-risk applications, where you will have a slightly higher tolerance for possible false positives and want a broader set of results (like use of MD5 hashing), you can run the security-extended query suite. The accuracy and precision of its queries may be slightly lower than the standard security suite, so expect to see a few more results. There is even a security-and-quality query suite that runs everything in security-extended plus some bonus quality queries! This is also advantageous to security personnel: most companies likely have an internal risk rating per application, so it would be easy to map an internal risk rating to a CodeQL query suite.
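
That risk-rating-to-suite mapping could be as simple as a lookup table in whatever script provisions your repositories. A sketch, where the rating names are hypothetical and "default" stands in for the stock query suite you get without extra configuration:

```python
# Hypothetical mapping from an internal application risk rating to the
# CodeQL query suite configured for that repository. The two named
# suites match the ones described above; "default" means the standard
# out-of-the-box security queries.
SUITE_BY_RISK = {
    "low": "default",
    "medium": "security-extended",
    "high": "security-and-quality",
}

def suite_for(risk_rating: str) -> str:
    # Unknown or unrated applications fall back to the
    # highest-precision default suite.
    return SUITE_BY_RISK.get(risk_rating, "default")
```

A provisioning script could then write the chosen suite into each repository's code scanning workflow configuration.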

On top of all that, you can write your own CodeQL queries and even your own query packs! Think about the possibilities here. You can take our queries, add some of your own, and maybe create a specific query pack for JavaScript SPAs. Or maybe Python APIs? Dial the accuracy, precision and number of results to the level that suits your company.

Helping upskill and embrace educational collaboration around security:

It has already been established that security has shifted to being a developer-first process over the past five years. Security may be involved as a consultant or advisor, but developers will likely see the data about vulnerabilities before security ever does. This means security tooling needs to adapt to ensure the data returned is primarily aimed at the developer. This is a cultural shift for security tooling. Traditionally, these tools have focused on the security persona, for understandable reasons: the industry has, up until now, operated with a security-first mindset, so data aimed at the security persona made sense. This is no longer a suitable approach in the modern age.

With CodeQL, every query comes with information about the query, but most importantly to a developer, it comes with a recommendation on how to fix the alert, along with language-specific code examples, good and bad! Code examples are what developers want to see. It's great for developers to get information about the vulnerability, but if there are no recommendations on fixing the alert and no language-specific code examples, you introduce friction into the developer's fixing process. A good security tool makes it easy to remediate. The easier it is for a developer to remediate the vulnerability, the more likely they are to do it quickly, there and then. That's what any developer and security persona wants: high remediation rates!

There will be times (I have been there myself, multiple times) when a developer reads the description of a security alert, reads the code examples, and still has no idea how to fix the vulnerability. At this point, traditionally, they would just give up and move on. Security can be seen as such a taboo topic in the developer world. Developers think it looks bad on them if they can't fix a vulnerability they caused, which leads to low remediation rates. This is a culture every security (and developer) company needs to try and change. Every tool needs to enable and foster collaboration, so that if developers are unsure how to fix something, they can ask and learn for the next time they see a similar vulnerability.

In Code Scanning, within each alert, a developer can click one button, which will open a GitHub issue, automatically linking that code scanning alert to the issue. In that issue, a developer can mention a tech lead or another developer, maybe a security advisor/engineer, and open a conversation about what they can do to fix the alert. You may be reading this thinking: how does one button really promote a discussion about learning? It's the fact that it's so simple and easy to do. A developer doesn't need to copy and paste a bunch of content into Slack/Teams/Jira. They simply click one button, and it automatically opens an issue with all the required information. Going back to a previous point, streamlining and reducing friction in the developer process encourages them to use native features like these.

Quick rollout, and therefore quick time to value:

An essential part of adding any tool to your DevOps toolchain is the speed at which you can roll it out and achieve high adoption rates. It isn't worthwhile purchasing a tool that takes months to adopt and get good uptake. You want a tool where the value begins on day one, not day 90.

To enable Secret Scanning, you simply click one button at the organization level. This enables Secret Scanning on every repository within that organization. Have 100 repositories? 10,000 repositories? It's one button click, and secret scanning is enabled. Custom patterns are the same: you simply add your custom pattern at the organization level, and it's applied to every repository automatically. We tend to find most companies adopt secret scanning within an hour or two of getting Advanced Security applied.

Dependency Review is automatically enabled! There is nothing for you to do. No button click, no configuration, you just get advanced security turned on, and dependency review is automatically ready for use.

Code Scanning is the one product within advanced security that isn't automatically consumable via a button click or pre-enabled. This is because you have to update your CI pipelines/workflows to upload data into Code Scanning. Now, you may be thinking, "getting this ready for use across hundreds or thousands of repositories is going to take so much time". However, this doesn't have to be the case. Many customers have enabled Code Scanning (CodeQL) across thousands of repositories within days. If you use GitHub Actions, an open-source tool called GHAS Enabler has been built specifically to get CodeQL enabled and set up across multiple repositories quickly and automatically. After initially enabling CodeQL on current repositories, you can even use a GitHub Action that ensures any new repositories automatically get CodeQL set up. Don't use GitHub Actions? Not a problem. Use the APIs provided by GitHub to enable Code Scanning, then update your Jenkins/ADO pipelines programmatically with the required CodeQL commands (or with any other tool you want to upload data from).
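
The scripted rollout amounts to committing a workflow file to each repository via the contents API. A minimal Python sketch of building that commit request; the workflow here is deliberately bare-bones and hypothetical, and a real rollout (or the GHAS Enabler tool) would tailor languages, triggers and build steps per repository:

```python
import base64

# Minimal CodeQL workflow to commit to each repository (sketch only).
CODEQL_WORKFLOW = """\
name: CodeQL
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: github/codeql-action/init@v2
      - uses: github/codeql-action/autobuild@v2
      - uses: github/codeql-action/analyze@v2
"""

def workflow_commit_body(workflow: str) -> dict:
    """Request body for the contents API
    (PUT /repos/{owner}/{repo}/contents/.github/workflows/codeql.yml),
    which expects the file content base64-encoded."""
    return {
        "message": "Enable CodeQL code scanning",
        "content": base64.b64encode(workflow.encode("utf-8")).decode("ascii"),
    }
```

Loop this over every repository in the organization, and a weeks-long manual task becomes a scripted one.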

Security Overview automatically starts showing data as more repositories enable GitHub Advanced Security. No configuration needed!

Finally, rolling out a tool is more than just enabling it and getting teams using it. It's great that people may use advanced security, but are they really using it the way you want? Are developers revoking secrets? Are developers remediating vulnerabilities found by CodeQL? GitHub has provided a whitepaper on rolling out advanced security in a structured way that helps you see value quickly and efficiently, hopefully ensuring people are using it as expected.

Let's find and revoke those secrets:

When people think about application security, two standard responses are SCA and SAST. Both of these tools are critical in every good DevSecOps process. However, a newer capability is starting to become just as important: Secret Scanning! I have seen some of the best DevSecOps processes complemented by CI/CD tooling, complete with automation and standardization, yet one capability is usually missing: a tool that detects secrets. Let's walk through why this is so important. Say a developer accidentally pushes a private key and an Azure Cosmos DB credential to a repository, and no one is aware. Another developer on that project (maybe a contractor who is new to the team?) finds these credentials and stores them for later use. Maybe that contractor is then let go at short notice? That contractor now has ALL the permissions they need to access data in that database and delete EVERYTHING that Azure Cosmos DB credential has access to. Scary, right? This is just one example, but it highlights the importance of a tool that finds secrets. You could have the best SAST, DAST, SCA, etc., but secrets can simply bypass all of them.

GitHub Secret Scanning can detect not just new secrets, but secrets leaked throughout the entire git history of a repository. GitHub adopts only high-confidence patterns in its secret scanning service to ensure low false-positive rates, leading to higher confidence from developers. This means that when secrets are found, developers actually action them. We don't want secret scanning to become a problem similar to SAST, where too many false positives lead to low confidence, which means no one uses it and secrets aren't revoked. There is no point in having a tool that finds secrets if no one does anything about them. Remediation rates matter more than the number of alerts found! And if we don't have a pattern for a secret you would like to find, not a problem: you can simply create a custom pattern for that secret type, and our secret scanning engine will find any values that match it.
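
A custom pattern is, at its core, just a regular expression describing your token format. Here is an illustration in Python, with an entirely invented token format ("acme_" followed by 32 hex characters), of the kind of pattern you might register:

```python
import re

# Hypothetical internal token format: "acme_" followed by 32 hex
# characters. Both the prefix and the shape are invented for
# illustration -- use your real credential format here.
ACME_TOKEN = re.compile(r"\bacme_[0-9a-f]{32}\b")

committed_code = 'client = connect(token="acme_0123456789abcdef0123456789abcdef")'
matches = ACME_TOKEN.findall(committed_code)
```

Registered as a custom pattern at the organization level, the same expression would be matched against every repository automatically.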

Simply finding the secrets may not be enough. It's great that a tool finds them, but is that adequate? It still requires people's time to revoke and remediate these secrets, which may take serious time and effort. That's why within GitHub Secret Scanning, whenever a secret is found, a webhook can be fired and ingested by you. This opens up endless possibilities around automatically revoking certain secrets, and even custom update scripts. These webhooks allow you to react to secrets being detected within seconds! No more manual work.
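
As a sketch of what that automation could look like, here is a minimal Python handler for the secret scanning webhook event. Assumptions are flagged in the comments: a production receiver must also verify the delivery signature, and `revoke()` is a hypothetical hook into your own tooling:

```python
import json

def handle_delivery(event: str, body: bytes) -> str:
    """React to GitHub webhook deliveries for newly detected secrets.
    Sketch only: a real receiver must also verify the
    X-Hub-Signature-256 header before trusting the payload."""
    if event != "secret_scanning_alert":
        return "ignored"
    payload = json.loads(body)
    if payload.get("action") != "created":
        return "ignored"
    alert = payload["alert"]
    revoke(alert["secret_type"], alert["html_url"])
    return "revoked"

def revoke(secret_type: str, alert_url: str) -> None:
    # Hypothetical: call the issuing provider's API to revoke the
    # credential, then record the action against the alert URL.
    print(f"revoking {secret_type} flagged at {alert_url}")
```

From leaked credential to revoked credential in seconds, with no human in the loop for the secret types you choose to automate.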

Including security in the developer workflow and embracing data-driven conversations:

The final aspect to discuss is likely one of the most important. When security personnel hear about "shifting left" and "giving more responsibilities to the developer", the pushback is always, "How can I verify what the developers are doing is correct?" and "What will my role be in this new process?". One of the key elements of driving a developer-first security mindset is bringing developers and security closer together, working in tandem, versus being two separate entities involved at different stages of the software lifecycle. Being developer-first absolutely doesn't mean security sits aside and watches along. It's about giving data to developers first, in a meaningful and purposeful way, so they can take action quickly and efficiently, and then providing data to security personnel so they can have data-driven conversations with developers, ensuring what developers are doing is in the company's best interests.

Security Overview is the beginning of that data-driven journey between security and developers. Security overview allows you to answer questions such as:

  • Show me the top ten repositories which leak the most secrets
  • Show me the top ten repositories with the most code scanning alerts, with a focus on repositories at critical risk
  • Show me the total number of Azure secrets which have been leaked and the repositories they have been leaked in
  • Show me the total number of JavaScript SQL Injections and the repositories they have been found in
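
Questions like the first two above can also be answered programmatically against the alert data GitHub exposes. A Python sketch, assuming alerts shaped like the org-level code scanning alerts API response (each alert carrying a `state` and a nested `repository` object):

```python
from collections import Counter

def top_repositories(alerts, n=10):
    """Given alerts shaped like the org-level code scanning alerts API
    response, return the n repositories with the most open alerts."""
    counts = Counter(
        alert["repository"]["full_name"]
        for alert in alerts
        if alert["state"] == "open"
    )
    return counts.most_common(n)
```

Feed the same aggregation secret scanning alerts instead, and you have the "top ten leakers" list for your next targeted campaign.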

The value of the above is that you can now create targeted educational campaigns for the repositories that need the most guidance. You can even create communication plans aimed only at the specific repositories that need to be contacted. You create a much more personalized feel.

You may even have your own SIEM tool, like Splunk, Datadog or Sentinel, which means Security Overview may be useful for specific use cases, but these SIEM tools may offer more data points than Security Overview can right now. This is not a problem. Use the APIs and webhooks (or even native third-party integrations) to integrate with the tool of your choice! Security Overview continues to mature, and it has been so exciting to see the cultural and collaboration changes it has promoted.

Conclusion:

To conclude, the above does not take away from what you would expect of a good security tool, e.g. high-quality results. But there is more to a security tool than the number of results found. Security has shifted from a security-first industry to a security- and developer-first industry. Therefore, we need to provide tools and processes that complement both personas, not just one. Developers are the people you expect to fix these results, so let's make sure the experiences provided to them are aimed at them, whilst ensuring security has the data to verify developers are doing what's in the company's best interest.

Reflecting on one of my first phrases in this article, "There are no right or wrong approaches to picking a security tool". I stick by that phrase. Every company has its own beliefs and criteria on what's most important to them. Still, the next time you think about changing/adding/updating developer tooling, especially security, consider more than just the results and data. Think about the experiences you want to create and the outcomes you want to foster.