What am I most excited about going into 2026 when it comes to AI in Software Development?
9th January 2026
Introduction
2025 was another year of change for those of us who work in software development. New tools, new ideas, a lot of opinions, and no shortage of skepticism along the way! (It felt like everyone had a take, which personally I love. It shows people care and have thoughts they want to share.) 2026 will probably look similar in that sense. But I also think it’s shaping up to be a year of opportunity. Not just to move faster, but to do things better.
Now, looking ahead 6 or 12 months in this space is always risky (and a bit of fun!). Things move quickly, new capabilities come to light, and by the end of the year some of this might look massively optimistic, or completely wrong. That’s part of the fun.
Even with that, I wanted to put a few thoughts down on what I’m most excited about as we head into 2026. These aren’t predictions or a grand vision of the future. Just five areas where I think things could get interesting, and where I’m personally excited to see how AI continues to shape how we build software :)
And to tease you into reading all of them, number five is the one that I am most looking forward to this year :)
AI moves upstream. Planning and brainstorming become first-class parts of writing software
One of the areas I’m most excited about is AI moving earlier in the SDLC, specifically around brainstorming (aka researching) solutions, and then turning that thinking into a plan (or a spec). I see these as two problems to solve, not one. They need different experiences, but when they’re done well, they improve everything downstream, especially coding.
In 2025, we started to see some of this. VS Code introduced plan mode, Claude added support for planning, and we saw the rise of spec driven development. All of that feels like the right direction. But in 2026, I think this goes further.
One of the reasons agent mode in the IDE and coding agents still struggle to land is that we often interact with them as if the plan already exists, or as if it’s already obvious to the agent. We assume it understands the right approach and that the solution we’ve hinted at is the correct one. The reality is, we’re just not quite there yet from a technology standpoint.
A prompt to an agent might look something like this:
“I’ve been given a task to add a search experience to this application. This is the file where search needs to live. Update the right files to include search.”
The agent goes off and does its thing. It usually gets you 40% of the way there. Then comes the back and forth to get it to 60%, 80%, and beyond. What’s missing here isn’t better code generation. It’s everything before that. Framing up the right context upfront. And with that, from what I see, there are two gaps that will improve in 2026:
First, in the example above, what’s actually the right way to implement search in this application? Has search already been solved somewhere else in the organisation? Is there existing functionality in this codebase we should be building on? Developers should be able to research and brainstorm together, with AI as part of that conversation. That means pulling in context from other repositories across the organisation, best practices from the web, and any custom instructions or constraints that matter. All of that combined helps arrive at a solution that’s more data driven and far more likely to be the right one. This isn’t about jumping straight to a spec. It’s about having a shared space to think, explore options, and pressure test ideas, so the team can land on the right solution the first time around.
Second, once you’ve landed on the right solution, you need to break it down. Turn it into stages, steps, and a plan. A spec, effectively. Something explicit enough that an agent can actually follow. That could mean a parent issue that lays out the full approach, with a set of smaller sub issues underneath it. Each sub issue focuses on one clear piece of work, written in a very single purpose way. In a realistic example, that spec might result in seven issues, all rolled up under one parent. That then kicks off seven separate coding agent runs. Each agent understands the bigger picture, but is hyper focused on solving one specific part of the plan. At the same time, it can take context from the other issues, so it’s aware of what else is happening around it. We already know agents are better when they can obsess over one thing and do it well. This approach lets them do exactly that, without losing sight of the wider system they’re contributing to.
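To make that concrete, here’s a minimal sketch of what that breakdown could look like mechanically, using the GitHub issues API (the repository name, titles, and spec text are all placeholder examples, and real tooling would likely do this for you):

```python
import os

import requests

# Placeholders for illustration: your repo, plan text, and step titles.
OWNER, REPO = "my-org", "my-service"
API = f"https://api.github.com/repos/{OWNER}/{REPO}/issues"
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# The parent issue carries the full approach from the agreed spec.
parent = requests.post(API, headers=HEADERS, json={
    "title": "Add search to the application",
    "body": "Spec: reuse the org's existing search service, expose a /search "
            "endpoint, and add a search entry point to the UI.",
}, timeout=30).json()

# Each sub issue is one single purpose piece of work an agent can focus on,
# while the reference back to the parent keeps the bigger picture attached.
steps = [
    "Wire up a client for the existing search service",
    "Add the /search API route",
    "Build the search input component in the UI",
]
for step in steps:
    requests.post(API, headers=HEADERS, json={
        "title": step,
        "body": f"Part of #{parent['number']}. See the parent issue for the full plan.",
    }, timeout=30)
```

The shape matters more than the tooling here: one issue holding the whole plan, and a set of narrow, single purpose issues each kicking off its own agent run.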
Yes, this adds time upfront. But that time pays for itself. It helps ensure the problem is being solved in the right way, and it gives agents the structure they need to do better work. Instead of getting to 40% on the first pass, you’re starting closer to 80% or 90%. Will it ever be 100%? Probably not in 2026. But it’s an improvement over where we are today.
Because of this, I expect to see more use of AI upstream in 2026, especially around brainstorming and planning. And I’m excited to see how much that improves everything that comes after it in the SDLC :)
A quick side note. Do I think planning and brainstorming stay this important forever? Hard to say. Looking ahead to 2027 or 2028 feels a bit wild. But as coding agents get better, gain more context, and handle more complexity, some of this might become less important. That said, I don’t think planning and brainstorming ever go away completely. At least for the next few years, they feel like the key to unlocking much better outcomes from agents.
Agents operate at repo, service, and system level
Leading on from that, I think 2026 is the year coding agents start to work at scale, especially in codebases that already exist.
2025 was the year coding agents came to market!! We saw some product market fit emerge, along with some useful workflows. Agents could take on larger problems than agent mode in the IDE, and in the right scenarios, they worked well. That “right scenario” today is mostly greenfield work. New applications, new services, or relatively small and clean repositories. That’s still valuable, and we shouldn’t shy away from it, but it’s not how most software actually looks, right? Most codebases are brownfield. They’ve been around for years. They’ve evolved. They carry decisions, trade offs, and more than a little historical knowledge. This is where coding agents today can struggle. They work, but not consistently. They’re great at solving parts of the problem, but lack the ability to go deep on a complex issue.
I think that starts to change in 2026. Over the next year, I expect coding agents to get better at working inside long lived, complex projects. Better at understanding structure, better at respecting existing patterns, and better at knowing what not to touch. I also expect them to lean more heavily on quality signals in real time, using tests, checks, and feedback loops to course correct as they go, rather than after the fact. Yes, better integration layers will help. Yes, access to more context and richer signals will matter. When coding agents can operate confidently at repo, service, and even system level in brownfield codebases, that’s when they stop feeling like an experiment and start feeling genuinely valuable. 2026 is going to take a big step in that direction. I don’t think agents will be perfect, but I expect a real improvement.
Reusable expertise through custom agents & skills
2025 was very much the year of agents. Or maybe more accurately, the year of agentic experiences. In 2026, what I’m most excited about is what happens when those agents become easier to extend, customise, and reuse.
It’s great that first party agents exist and cover a lot of common use cases out of the box. But where things get really interesting is when teams can create smaller, more focused agents or skills that solve one specific problem, but really really well. Things like a migration helper, a refactor specialist, or a release checklist assistant. Narrow in scope, opinionated by design, and built to be reused easily.
What excites me most here is the idea of building expertise once, and then benefiting from it repeatedly. Anyone who’s spent time writing prompts, detailed instructions, wiring up integrations with MCP, or figuring out the right way to guide an agent knows this takes effort. Doing it well isn’t easy. But once that work is done, and done well, the value is there.
That upfront investment shouldn’t be lost every time someone starts from scratch. Being able to package that knowledge into a reusable skill or custom agent, and then share it across a team or an organisation, feels like a big value add. On top of that, add a marketplace style experience, where people can discover, reuse, and build on each other’s work, and it starts to scale in a powerful way.
For me, 2026 feels like the year where we shift away from giant, do everything agents, and toward smaller building blocks that can be composed together. When it becomes easier to get started with these reusable components, and easier to extend them over time, I think we’ll see a lot more meaningful adoption. Less friction, more leverage, and far more consistency in how teams apply their expertise.
This is especially true inside larger organisations and enterprises.
AI becomes more of a continuous presence across the SDLC
Building on the first theme above, one of the things I’m most excited about in 2026 is AI becoming more native across the rest of the SDLC, not just in the “write code” part.
AI has already been a force multiplier for coding. Tools like Copilot, Claude, and Cursor make it easier to produce code faster, and for many developers that’s been a real step change in productivity. But there’s a knock on effect, right? More code doesn’t just mean more output. It also means more planning, more to review, more to test, more to secure, and more to operate. You don’t get to skip the rest of the SDLC just because the first bit got faster.
This is where the enterprise angle gets real. For an individual developer working on a small project, “more code, faster” can be the whole story. But inside larger teams, getting code to production still means following the SDLC. Reviews still matter. Security still matters. Release processes still matter. Ops still matters. If anything, AI accelerating code creation makes the later steps feel even more important, because they’re the part that can become the bottleneck.
So with all that said, what I’m excited about in 2026 is AI starting to show up more naturally in those downstream areas. Helping with review, security, and operations in a way that feels integrated. We got early signals of this in 2025 with AI code review experiences starting to hit the market. I expect and hope that expands, with AI becoming more useful in other areas like validating changes, surfacing risk earlier, explaining trade offs, and helping teams move changes through the pipeline with less friction.
For me, the big value add isn’t just “AI helps me write code.” It’s “AI helps me ship software.” AI working across the SDLC end to end, that’s when we start to see the real value for developers and teams.
Orchestration and experiences around agents/skills become a core developer driver
With all of that said, if more agents show up, skills become more common, and AI is present across the SDLC, what’s left?
For me, this is where 2026 gets really interesting: the experience that enables developers to actually thrive with all of this.
Humour me for a second and assume the previous sections come true. AI helps with planning and brainstorming. Coding agents get better and start working confidently in larger, more complex codebases. Developers are using more skills, and AI shows up across every stage of the SDLC. On paper, that all sounds great.
In practice, it also sounds... chaotic.
Developers could be brainstorming three different solutions in one place, planning multiple approaches in another, kicking off dozens of coding agent runs, writing code in their IDE for the parts they care most about, reviewing a pile of pull requests, fixing vulnerabilities, and still trying to ship features and bug fixes on time. That’s a lot to juggle, and it’s very easy for it to turn messy.
Where is a task actually in its journey to production? Is it still being planned? Is it in review? Is an agent working on it asynchronously, or does it need a developer decision right now? And out of everything in flight, what actually needs the developer’s attention now?
This is why I think (again, just personal thoughts) 2026 becomes the year we focus much more on the orchestration of, and experiences around, these agents/skills. 2026 won’t just be about releasing more agents or more skills, but about building the glue that makes them work together and makes them more usable. Experiences that help developers and teams coordinate work across humans and agents, without losing track of what’s happening or who’s responsible for what.
What excites me is the idea of experiences where a developer can spin up a piece of work, pull in teammates, bring in the right agents or skills, and then move smoothly through brainstorming, planning, and execution. Brainstorming feeds naturally into a plan. A plan kicks off multiple coding tasks. Some run asynchronously with agents, others are handled directly by developers. From there it goes into reviewing, securing, and quality checks, then naturally into production. Everything stays connected, visible, and understandable.
Right now, I think the industry does a decent job of giving developers choice. Lots of tools, lots of agents, and lots of capabilities. What we don’t do well yet is coordination. Moving from one stage to the next isn’t easy. Knowing when you’re needed isn’t always obvious. Understanding what’s happening across all your in flight work can be harder than it should be.
Is the industry going to get this perfect in 2026? Definitely not. But I do think we’ll see a shift toward lowering friction and improving the experiences that sit around agents and skills. And when that starts to come together, that’s when all of this really begins to feel powerful, not overwhelming.
That’s the part I’m most excited about :)
Conclusion
Looking 12 months ahead in this space is always a bit of a shot in the dark :) Things move fast, ideas evolve, and what feels obvious today can look very different tomorrow. That said, 2025 felt like a step forward, and 2026 has the potential to be even more interesting. What excites me most is not any single tool or capability, but the direction we’re heading as an industry. AI is starting to show up in more thoughtful, more useful ways, and we’re beginning to focus on experiences that actually help developers do their best work. Some of this will land, some of it won’t, and that’s okay. But if even a few of these areas move in the direction I’m hoping for, 2026 is going to be a fun year to be building software :)
Leadership Principles: Guiding the Way to Success
4th July 2023
Prequel
I've been searching for the right topic for my next blog post, which would offer relevance and valuable insights. As my role has evolved in recent months, I've gained a deeper understanding of leadership and what it truly means to be a leader. While I've held various leadership positions in the past, it's only now, with the opportunity to reflect, that I've crystallized my beliefs about what constitutes strong leadership traits. Working at GitHub has exposed me to exceptional technical and people leaders whose collaboration has helped shape my leadership principles. This blog post aims to share these principles and provide my perspective on their significance.
Before we begin, it's essential to acknowledge that leadership takes various forms, and the principles I present may not be universally applicable. My objective is not to impose my views on others but to share what leadership means to me.
Introduction
Leadership goes beyond a mere title or position; it embodies a mindset and a set of principles that guide individuals to unlock their potential and that of others. Whether leading a small team or an entire organization, understanding and embodying practical leadership principles can make all the difference in achieving professional and personal success.
In this blog post, I will explore the six leadership principles I hold myself accountable for. My aim with these principles is to cultivate a positive and productive work environment that lays a solid foundation for driving teams towards shared goals.
Lead by Example
As a leader, actions speak louder than words. Leading by example entails embodying the behaviours and values you expect from others. Whether demonstrating integrity, embracing a strong work ethic, or fostering a culture of continuous learning, your actions set the standard for others to follow.
Communicate Effectively
Clear, open, and transparent communication forms the backbone of effective leadership. It involves listening, providing feedback, and ensuring everyone is aligned on goals and expectations. Effective communication builds trust, resolves conflicts, and strengthens collaboration within the team. Whether the message is positive or negative, clarity in communication fosters a shared understanding.
Empower and Delegate
A remarkable leader recognizes the strengths and potential of their team members and empowers them to take ownership and make decisions. Delegating tasks and responsibilities lightens the load and allows your team to grow and develop their skills. Trusting your team and granting autonomy instil confidence and a sense of ownership.
Inspire and Motivate
A leader manages tasks, but, more importantly, inspires and motivates their team members. Be a source of inspiration by setting a compelling vision, articulating goals, and sharing the bigger picture. Recognize and celebrate achievements, creating a positive and supportive atmosphere where individuals feel motivated to give their best.
Humans, not robots
A great leader acknowledges and appreciates each team member's unique qualities and needs. Treating people as individuals rather than mere cogs in a machine demonstrates empathy, fosters a positive work environment, and builds strong relationships.
Stay relatable
Staying relatable as a leader means remaining open to questions and staying credible in your expertise. It may not always be necessary, but I have personally recognized and valued the importance of this approach, especially when working as an individual contributor within a team.
Conclusion
I've learned that leadership is an ongoing journey that I have not mastered and may never (will likely never :) ) fully conquer. Each leader holds their own perspective on which behaviours are important to them. By sharing my leadership principles, I hope to spark insightful discussions and encourage fellow leaders to reflect on their guiding principles. Together, we can continue to grow and refine our leadership approaches, driving positive impact and empowering those around us.
Why Advanced Security?
6th February 2022
Introduction
It has been seven months since I first joined GitHub (wow, that time goes quickly), specifically the advanced security team. For the people who have worked with me before, you likely know I'm incredibly passionate about what I do and work on. I strive to work in an environment and within a team that contributes meaningful work that makes a difference. That is one of the reasons I jumped at joining GitHub when I had the chance; I am a massive believer in developer experience and DevSecOps, so why not join the home of developers where I can hopefully contribute to making that difference to developers at a broader level.
Over these past seven months, I have seen first-hand some of the decision-making criteria and processes that determine which security tool a company moves forward with. There are no right or wrong approaches to picking a security tool. However, there are vital considerations that every company should keep in mind, especially if you are choosing a tool that will change a developer's experience and may impact their productivity.
As a member of the advanced security team, I wanted to note six thoughts on how advanced security can strategically bring value to companies from a slightly different angle than expected.
Diversifying your toolset, centralizing the experience
Commonly, companies like to diversify their DevOps toolset. Doing this provides the advantages of "no vendor lock-in" and being able to pick best-in-class tools. In the world of security, this is especially important. Nowadays, you likely need security coverage for SCA, SAST, IaC, containers and possibly DAST. Realistically, you are not going to find one tool that does all of these, and if you do, is it going to provide the depth and accuracy you would expect? So, let's say you use one tool for SCA, one tool for SAST and one tool for IaC. This means a developer has to check three different tools to get the data they need to make good security decisions. Yes, you may surface some basic results in CI and maybe back in the GitHub pull request. Still, they need to context switch between GitHub and the three security tools to see the details (to determine if a result is a false positive, or to find further information about the vulnerability). For a developer, this is incredibly frustrating and hurts productivity. Developers live within their IDE and GitHub (these are the two places I commonly live when writing code); this is where I get the most work done and am at my happiest. So, how do you provide the best experience for your developers whilst still using the tools you want to use?
GitHub Code Scanning (a feature of advanced security) provides the experience around fixing vulnerabilities: status checks in pull requests, in-line annotations in pull requests, a description of the vulnerability, data flow, and much more. The value of code scanning is that it's 100% language and tool agnostic. When you upload data, the only requirement is that it must be in SARIF (for people who don't know, SARIF is a structured JSON format). This means, for example, you could use CodeQL for SAST, Twistlock for container scanning and Snyk for SCA, run them all as part of your CI process and upload the results to Code Scanning. A developer would now see results from all tools and get a consistent experience! No longer needing to context switch between various tools to get the details. Everything a developer needs is now directly in GitHub, even though you aren't using all GitHub tools to get the data.
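To make that concrete, here is a minimal sketch of uploading a SARIF file to Code Scanning through the REST API. The repository name and file path are placeholders; the endpoint expects the SARIF to be gzipped and base64 encoded:

```python
import base64
import gzip
import os

import requests

# Placeholders for illustration: swap in your own repository and ref.
OWNER, REPO = "my-org", "my-service"
TOKEN = os.environ["GITHUB_TOKEN"]  # needs the security_events scope

# Read the SARIF produced by any scanner (CodeQL, Twistlock, Snyk, ...).
with open("results.sarif", "rb") as f:
    sarif_payload = base64.b64encode(gzip.compress(f.read())).decode()

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/code-scanning/sarifs",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "commit_sha": os.environ["GITHUB_SHA"],  # commit the scan ran against
        "ref": "refs/heads/main",                # branch (or PR ref) scanned
        "sarif": sarif_payload,                  # gzipped, base64-encoded SARIF
    },
    timeout=30,
)
resp.raise_for_status()
print("SARIF accepted for processing:", resp.json())
```

Because every tool funnels through the same endpoint, the developer-facing experience stays identical no matter which scanner produced the results.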
There are a few other benefits beyond improving the developer experience across a diversified toolset. One is that, let's say in eight months a new IaC security tool comes out which you want to start using; you just need to plug it into your CI process and upload the results to Code Scanning, as you would any other tool. To a developer, 1) they wouldn't even really know a new tool has been added, they just see alerts that need reviewing, and 2) the alerts look the same as the other alerts they have been fixing for a while now. As they are used to fixing previous alerts, they will be "used" to fixing alerts from this new IaC tool, which leads to remarkably high adoption rates. This is why it's so important to provide a consistent experience across tools.
Another value is that, as the data is now all in GitHub, where the developers live, they will be more likely to look at and review the data coming from these tools versus just skimming through it. There is less friction in seeing these results in Code Scanning, which means developers are more likely to take them seriously. Using Code Scanning, you are truly making security a first-class citizen of the developer workflow. This is where you want security to get to: a place where developers unconsciously fix alerts as they write code.
Regaining confidence in the developer community with SAST tooling
I don't think it's unknown that developers traditionally don't like SAST tools. There are multiple reasons for this. Two main ones are that 1) developers are told about SAST results just before going to production, and 2) the results they get are either in a 23 page PDF or a web page with 500+ alerts that THEY are told THEY need to review and fix. For the sake of this section, I will focus on the latter: the number of results.
Developers lose trust in SAST tools because they produce so many results, with a large proportion realistically being false positives or quality focused rather than security focused. This means that out of 500 alerts, let's say only 20 may be worth addressing, meaning a developer has likely wasted two hours reviewing 480 unnecessary results. After about five iterations of this over a month, a developer will utterly lose confidence in the tool. Instead of properly looking through the results, they will skim through them and most likely miss something important they wouldn't have missed if they had more confidence in the data.
This is where CodeQL comes into play. CodeQL is a semantic code analysis engine that treats your code as data. Vulnerabilities are then modelled as queries and executed against that data (built as a database during a CI run). Because CodeQL has this "built" version of your code represented as structured data, the queries run against it (when well written, of course) can be incredibly accurate and precise in the results they return. For more details on how CodeQL builds the database, check out this blog post by a colleague, Nick Rolfe: Code scanning and Ruby: turning source code into a queryable database. Although Ruby is in the name, the process is similar for other languages. All of the above means developers are less likely to see false positives. Therefore, when a result (or results) are found, developers will trust them more and review them appropriately, hopefully leading to meaningful action.
Combine the above with the three query suites you get out of the box with CodeQL per language. Running the same SAST scan on high-risk and low-risk applications is a legacy approach. They have two completely different risk profiles, so why would a developer want to get notified about alerts that they may only tolerate for the higher risk applications? With CodeQL, you can configure which query suite you want to run on a repository level. This means that on a lower to medium-risk application, where your acceptance of false positives is close to 0%, you can run the standard security query suite. However, for medium to high-risk applications, where you will have a slightly higher tolerance for possible false positives and want a broader set of results (like flagging weak hashing such as MD5), you can run the security-extended query suite. The accuracy and precision of its queries may be slightly lower than the standard security suite, so expect to see a few more results. There is even a security-and-quality query suite that runs everything in security-extended with some bonus quality queries! This is also advantageous to security personnel: most companies likely have an internal risk rating per application, so it would be easy to map an internal risk rating to a CodeQL query suite.
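As a rough illustration, here's how a hypothetical internal risk rating could be mapped to a suite in code (the risk labels are invented; the suite names are CodeQL's):

```python
# Hypothetical internal risk ratings mapped to CodeQL query suites.
# "default" stands for the out-of-the-box security suite (no extra config).
RISK_TO_SUITE = {
    "low": "default",
    "medium": "security-extended",    # broader coverage, a few more FPs
    "high": "security-and-quality",   # everything, plus quality queries
}

def suite_for(risk_rating: str) -> str:
    """Pick the CodeQL query suite to configure for a repository."""
    return RISK_TO_SUITE.get(risk_rating, "default")

print(suite_for("high"))  # -> security-and-quality
```

The exact mapping is a policy decision for each company; the point is that it can be encoded once and applied consistently across every repository.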
To conclude this section, you can write your own CodeQL queries and even your own query packs! Think about the possibilities here. You can take our queries, add some of your own, and maybe create a specific query pack for JavaScript SPAs. Or maybe Python APIs? Dial the accuracy, precision and number of results to the level that suits your company.
Helping upskill and embrace educational collaboration around security
It has already been established that security has shifted to being a developer-first process over the past five years. Security may be involved as a consultant or advisor, but developers will likely see the data about vulnerabilities before security ever does. This means security tooling needs to adapt to ensure the data returned is primarily aimed at the developer. That is a cultural shift for security tooling. Traditionally, these tools have been focused on the security persona, for good reasons to be fair: the industry has, up until now, had a security-first mindset, so data aimed at the security persona made sense. This is no longer a suitable approach in this modern age.
With CodeQL, every query comes with information about the query, but most importantly to a developer, it comes with a recommendation on how to fix the alert, along with language-specific code examples, good and bad! Code examples are what developers want to see. It's great for developers to get information about the vulnerability, but if there are no recommendations on fixing the alert and no language-specific code examples, you introduce friction into the developer's fixing process. A good security tool makes it easy to remediate. The easier it is for a developer to remediate the vulnerability, the more likely they are to do it quickly, there and then. That's what any developer and security persona wants: high remediation rates!
There will be cases (I have been there multiple times) where I read the description of a security alert, read the code examples, and still have no idea how to fix the vulnerability. At this point, traditionally, I would just give up and move on. Security can be seen as such a taboo topic in the developer world. Developers think it looks bad on them if they can't fix a vulnerability they caused, which leads to low remediation rates. This is a cultural problem every security (and developer) company needs to try and change. Every tool needs to enable and foster collaboration, so that if developers are unsure how to fix something, they can ask and learn for the next time they see a similar vulnerability.
In Code Scanning, within each alert, a developer can click one button, which will open a GitHub Issue, automatically linking that code scanning alert to the issue. In that issue, a developer can mention a tech lead or another developer, maybe a security advisor/engineer, and open a conversation about what they can do to fix the alert. You may be reading this thinking: how does one button really promote a discussion about learning? It's the fact that it's so simple and easy to do. A developer doesn't need to copy and paste a bunch of content into Slack/Teams/Jira. They simply click one button, and it automatically opens an issue with all the required information. Going back to a previous point, streamlining and reducing friction in the developer process will encourage them to use native features like these.
Quick rollout, and therefore quick time to value
An essential consideration when adding any tool to your DevOps toolchain is the speed at which you can roll it out and get high adoption rates. It isn't worthwhile purchasing a tool that takes months to adopt and see good uptake. You want a tool where the value begins on day one, not day 90.
To enable Secret Scanning, you simply need to click one button at the organization level. This will enable Secret Scanning on every repository within that organization. Have 100 repositories? 10,000 repositories? It's a button click, and secret scanning is enabled. Custom patterns are the same, you simply add your custom pattern to the organization, and it's applied to every repository automatically. We tend to find most companies adopt secret scanning within an hour or two of getting Advanced Security applied.
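If you'd rather script that rollout than click through the UI, here's a minimal sketch using the REST API (the organisation name is a placeholder, and in practice you'd want error handling and rate limit awareness):

```python
import os

import requests

ORG = "my-org"  # placeholder organisation
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# Page through every repository in the organisation ...
page = 1
while True:
    repos = requests.get(
        f"https://api.github.com/orgs/{ORG}/repos",
        headers=HEADERS,
        params={"per_page": 100, "page": page},
        timeout=30,
    ).json()
    if not repos:
        break
    # ... and switch on Advanced Security + secret scanning per repository.
    for repo in repos:
        requests.patch(
            f"https://api.github.com/repos/{ORG}/{repo['name']}",
            headers=HEADERS,
            json={
                "security_and_analysis": {
                    "advanced_security": {"status": "enabled"},
                    "secret_scanning": {"status": "enabled"},
                }
            },
            timeout=30,
        )
    page += 1
```

Either way, the effort is minutes, not months, which is exactly the day-one value this section is about.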
Dependency Review is automatically enabled! There is nothing for you to do. No button click, no configuration, you just get advanced security turned on, and dependency review is automatically ready for use.
Code Scanning is the one product within advanced security that isn't automatically consumable via a button click or pre-enabled. This is because you have to update your CI pipelines/workflows to upload data into Code Scanning. Now, you may be thinking, "getting this ready for use across 100s or 1000s of repositories is going to take so much time". However, this doesn't have to be the case. Many customers have enabled Code Scanning (CodeQL) across thousands of repositories within days. If you use GitHub Actions, an open-source tool called GHAS Enabler has been built that is fully dedicated to getting CodeQL enabled and set up across multiple repositories quickly and automatically. You can even use a GitHub Action after initially enabling CodeQL on current repositories, which ensures any new repositories automatically get CodeQL set up. Don't use GitHub Actions? Not a problem. Use the APIs provided by GitHub to enable Code Scanning, then update your Jenkins/ADO pipelines programmatically with the required CodeQL commands (or any other tool you want to upload data from).
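To give a feel for what that automation looks like, here is a sketch of committing a CodeQL workflow file to a repository through the contents API, the same idea tools like GHAS Enabler automate at scale (the repository is a placeholder, and in practice you'd push to a branch and open a pull request; loop this over your repository list):

```python
import base64
import os

import requests

# Placeholders for illustration: swap in your own organisation and repository.
OWNER, REPO = "my-org", "my-service"
WORKFLOW_PATH = ".github/workflows/codeql.yml"

# A minimal CodeQL workflow; real rollouts would tune languages and triggers.
workflow = """\
name: CodeQL
on:
  push:
    branches: [main]
  pull_request:
jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
      - uses: github/codeql-action/analyze@v3
"""

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/contents/{WORKFLOW_PATH}",
    headers={
        "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "message": "Enable CodeQL code scanning",
        # The contents API expects the file body base64 encoded.
        "content": base64.b64encode(workflow.encode()).decode(),
        "branch": "main",
    },
    timeout=30,
)
resp.raise_for_status()
```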
Security Overview will automatically start showing data the more repositories activate GitHub Advanced Security. No configuration is needed!
Finally, rolling out a tool is more than just enabling it and getting teams using it. It's great that people may use advanced security, but are they really using it in the way you want? Are developers revoking secrets? Are developers remediating vulnerabilities found by CodeQL? GitHub has provided a whitepaper on rolling out advanced security in a structured way that helps you see the value quickly and efficiently, hopefully ensuring people are using it as expected.
Let's find and revoke those secrets ...
When people think about application security, two standard responses are SCA and SAST. Both of these tools are critical in every good DevSecOps process. However, a newer capability on the scene is becoming just as important: Secret Scanning! I have seen some of the best DevSecOps processes complemented by CI/CD tooling, complete with automation and standardization. However, one capability is usually missing: a tool that detects secrets. Let's walk through why this is so important. Let's say a developer accidentally pushes a private key and an Azure Cosmos DB credential to a repository, and no one is aware. Another developer on that project (maybe a contractor who is new to the team?) finds these credentials and stores them for later use. Maybe that contractor is then let go quickly? That contractor then has ALL the permissions they need to access data in that database and delete EVERYTHING that Azure Cosmos DB credential has access to. Scary, right? This is just one example, but it highlights the importance of a tool that finds secrets. You could have the best SAST, DAST, SCA, etc., but secrets can simply bypass all of these.
GitHub Secret Scanning can detect not just new secrets, but secrets leaked throughout the entire git history of a repository. GitHub only adopts high confidence patterns in its secret scanning service to ensure low false-positive rates, leading to higher confidence from developers. Meaning that when secrets are found, developers actually action them. We don't want secret scanning to become a problem similar to SAST, where too many false positives lead to low confidence, which means no one uses it and secrets aren't revoked. There is no point in having a tool that finds secrets if no one does anything about it. Remediation rates matter more than the number of alerts found! And if we don't have a pattern for a secret you would like to find, not a problem: you can simply create a custom pattern for that secret type, and our secret scanning engine will find any values that match it.
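As a quick illustration of what a custom pattern is at its core, here's a hypothetical regex for an invented internal token format (the "acme_" prefix and shape are entirely made up); the same kind of expression is what you'd register as a custom pattern:

```python
import re

# Hypothetical internal token format: "acme_" followed by 32 hex characters.
ACME_TOKEN = re.compile(r"\bacme_[0-9a-f]{32}\b")

sample = (
    "db_url = 'postgres://...'\n"
    "api_key = 'acme_9f86d081884c7d659a2feaa0c55ad015'"
)
for match in ACME_TOKEN.finditer(sample):
    print("Possible secret:", match.group())
```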
Strategically, finding the secrets may not be enough. It's great that a tool finds the secrets, but is that adequate? It still requires people's time to revoke and remediate these secrets, which may take some serious time and effort. That's why within GitHub Secret Scanning, whenever a secret is found, a webhook can be fired and ingested by you. This opens up endless possibilities around automatically revoking certain secrets, and even custom update scripts. These webhooks allow you to react to secrets being detected within seconds! No more manual work.
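Here's a minimal sketch of what such a receiver could look like, using Flask (the revocation step is left as a placeholder since it depends entirely on the secret type and your provider):

```python
import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"].encode()

@app.post("/github/webhook")
def handle_webhook():
    # Verify the payload signature so we only act on genuine GitHub events.
    signature = request.headers.get("X-Hub-Signature-256", "")
    expected = "sha256=" + hmac.new(
        WEBHOOK_SECRET, request.data, hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(signature, expected):
        abort(401)

    # React to new secret scanning alerts as they are created.
    if request.headers.get("X-GitHub-Event") == "secret_scanning_alert":
        payload = request.get_json()
        if payload.get("action") == "created":
            alert = payload["alert"]
            # revoke_secret(alert) is a placeholder: this is where you'd call
            # the provider's API to rotate or revoke the leaked credential.
            print("Secret alert created:", alert.get("secret_type"))
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```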
Including security in the developer workflow and embracing data-driven conversations
The final aspect to discuss is likely one of the most important. When security personnel hear about "shifting left" and "giving more responsibilities to the developer", the push back is always, "How can I verify what the developers are doing is correct?" and "What will my role be in this new process?". One of the key elements of driving a developer-first security mindset is bringing developers and security closer together, working in tandem, versus being two separate entities involved at different software lifecycle stages. Being developer-first absolutely doesn't mean security sitting aside and watching along. It's about giving data to developers first, in a meaningful and purposeful way where they can take action quickly and efficiently, and then giving data to security personnel so they can have more data-driven conversations with developers, ensuring what's being done is in the company's best interests.
Security Overview is the beginning of that data-driven journey between security and developers. Security overview allows you to answer questions such as:
- Show me the top ten repositories which leak the most secrets
- Show me the top ten repositories with the most code scanning alerts, with a focus on repositories at critical risk
- Show me the total number of Azure secrets which have been leaked and the repositories they have been leaked in
- Show me the total number of JavaScript SQL Injections and the repositories they have been found in
The value of the above is that you can now create targeted educational campaigns for the repositories that need the most guidance. You can even make communication plans aimed only at the specific repositories which require contact. You create a much more personalized feel.
You may even have your own SIEM tool like Splunk, Datadog, or Sentinel, which means Security Overview may be useful for specific use cases, but these SIEM tools may provide more data points than Security Overview can right now. This is not a problem. Use the APIs and webhooks (or even native third-party integrations) to integrate with the tool of your choice! Security Overview continues to mature, but it has been so exciting to see the cultural and collaboration changes that Security Overview has promoted.
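For example, here's a sketch of pulling open code scanning alerts for a whole organisation via the REST API, ready to forward into whichever SIEM you use (the forwarding call itself is a placeholder):

```python
import os

import requests

ORG = "my-org"  # placeholder organisation
HEADERS = {
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# Pull open code scanning alerts for the whole organisation, page by page.
page = 1
while True:
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/code-scanning/alerts",
        headers=HEADERS,
        params={"state": "open", "per_page": 100, "page": page},
        timeout=30,
    )
    resp.raise_for_status()
    alerts = resp.json()
    if not alerts:
        break
    for alert in alerts:
        # ship_to_siem(alert) would be your own forwarding logic.
        print(alert["rule"]["id"], alert["repository"]["full_name"])
    page += 1
```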
Conclusion
To conclude, none of the above takes away from what you would expect from a good security tool, e.g. good, high-quality results. But there is more to a security tool than the number of results found. Security has shifted from a security-first persona industry to a security and developer-first industry. Therefore, we need to provide tools and processes that complement both personas, not just one. Developers are the people you expect to fix these results, so let's make sure the experiences provided to them are aimed at them, whilst ensuring security has the data to verify the developers are doing what's in the company's best interest.
Reflecting on one of my first phrases in this article, "There are no right or wrong approaches to picking a security tool". I stick by that phrase. Every company has its own beliefs and criteria on what's most important to them. Still, the next time you think about changing/adding/updating developer tooling, especially security, consider more than just the results and data. Think about the experiences you want to create and the outcomes you want to foster.