This is Part 2 in our four-part mini-series on the software supply chain. This week we are focusing on key terms, players, and projects you need to know about when it comes to software supply chain security. When we stop to think about the software running in our production environments, a large proportion of it is very likely open source. Are there effective mechanisms to truly understand and have visibility into all of these libraries? How do you ensure that these libraries are secure? To answer these questions, we feature input from Guy Podjarny, Lena Smart, Brian Behlendorf, Aeva Black, Emily Fox, Jim Zemlin, David Wheeler and Simon Maple as we dissect some key terms and promising projects in the software supply chain security space. Tuning in, you’ll learn what the term SBOM means, why the problem of securing the open-source pipeline is such a complex one, and what organizations like the Open Source Security Foundation (OpenSSF) and Open Source Initiative (OSI) are doing to address it. We also introduce some key players that can provide you with assistance as you work to improve your own open-source security or software supply chain security posture. For all this and more, you won’t want to miss part two of The Secure Developer’s software supply chain security series!
About this episode:
Guy Podjarny: “Fundamentally, the most important thing you can do to improve your own open source security or supply chain security posture is to start by just making sure that you know which components you’re using, what the problems in them are, and how you address them. And in conveying to the upstream maintainers that this is important, you convey to the world that you have chosen library X over library Y because you believe it to be more secure.
Open Source maintainers are amazing individuals that are producing free software that makes the world better and more productive, and they’re really great. Fundamentally, they cater to their communities, especially the top ones. So, the more their communities express a desire and importance for security over functionality, the more weight they get from their consumers to say, ‘I am willing to forego this enhancement.’”
[00:00:55] ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.
This podcast is sponsored by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers, and infrastructure as code. To learn more, visit snyk.io/tsd.
[00:01:43] Simon Maple: Thank you for joining us for part two of the Software Supply Chain series. In part one of this series, we took a deep dive into what software supply chain security is, its components, and why it’s become so important to pay attention to it.
This week, we’re going to be dissecting some key terms, like SBOM, and introduce some key players and promising projects in this space. So, let’s get into it.
When we think about the prevalence of open source libraries in our applications, what is actually running in our production environment, a large proportion of it is open source, third-party libraries written by people that we don’t employ, the open source maintainers, and any kind of open source contributors. If you’re really unlucky, even I might have contributed some code to your application.
Are there effective mechanisms to truly understand and have visibility into all of these libraries and this entire estate? Do we have some kind of log or artefact that we can really state what we’re using under the covers? One term that has become quite synonymous with supply chain security is SBOM. And we have Guypo and Lena Smart, CISO of MongoDB, to share a bit more about this topic with us.
[00:02:49] Guy Podjarny: So, I think the object in question, and kind of the key word that, if you know anything about software supply chain security, you must have heard, is the SBOM, the software bill of materials, which is actually a term that dates a fair bit back, but has only now made its way to prominence. The SBOM is a bit of a love/hate term for experts in the field. On one hand, it is a very important artefact. On the other hand, it is not the be-all and end-all. It’s not like, if you’ve produced the software bill of materials, then you’ve solved everything.
I think, fundamentally, in software supply chain security as it pertains to open source components, there are probably two primary streams on how you secure it. One is SBOM related and the other is more workflows. On the SBOM front, it starts from: you need to know what you’re using. So, just start by sheer curation of knowing what you are using; forget formats and what that file looks like. Just understand which libraries are there, tracking it in your source code, tracking it in your build system, tracking it in your deployed systems, to just know which components are out there. If you don’t know that, you can’t do anything about it. You can’t manage it. You can’t report it.
On the internal side, once you know which components you’re using, you need to know whether they are any good. Are there known problems in them today? Are there known vulnerabilities? Are there known licence issues? And then, set yourself up to act on those. How do you fix them? How do you triage them? How do you prioritise which ones to address? Because invariably, you’re going to have many, many components with many issues in them. So, more often than not, you can’t really address all the known problems. But you should build a system where you’re trying to do your best at choosing the ones to address, and there are a variety of solutions out there to help simplify that.
The second aspect of it is how much do you trust a component, and how much do you anticipate that it may or may not actually turn out to have a vulnerability, or turn out to have been malicious and run by someone with malicious intent? This notion of trustworthiness is a bit of a level up. It’s a bit of a 202, maybe, versus the start. But it is trying to anticipate. You can look for signals like: is the maintainer from a country that you might be uncomfortable with, especially if this is a government system? You can look at things like: what are the security practices that may or may not be running as part of this component’s ethos? There’s a bunch of projects in the OpenSSF that we’ll talk about down the road, like Scorecard, that touch on that.
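As a toy illustration of the trustworthiness idea Guypo describes, the sketch below scores a component on a handful of signals. The signal names, weights, and the scoring formula are all invented for illustration; the real OpenSSF Scorecard runs many automated checks against an actual repository and has its own scoring model.

```python
# Toy stand-in for Scorecard-style trustworthiness signals. The signal
# names and weights below are invented for illustration; the real
# OpenSSF Scorecard runs automated checks against a live repository.
SIGNAL_WEIGHTS = {
    "has_security_policy": 2,
    "signed_releases": 3,
    "multiple_maintainers": 2,
    "recent_commits": 3,
}

def trust_score(signals):
    # Score is the weighted share of present signals, scaled to 0-10.
    total = sum(SIGNAL_WEIGHTS.values())
    earned = sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
    return round(10 * earned / total, 1)

score = trust_score({"has_security_policy": True, "recent_commits": True,
                     "signed_releases": False, "multiple_maintainers": False})
print(score)
```

The point of even a crude score like this is to make trust comparable across the many components you consume, so you can decide where to spend scrutiny first.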
So, those two are sort of on the internal side of SBOM. On the external side of SBOM, this is really where it’s the newest to the industry, and where the executive order has been really rapidly promoting things: now that you know your software bill of materials, and hopefully you’re doing your job managing it internally, you are required, if you’re shipping software to a customer, to provide them with your software bill of materials. With the SBOM, there are a couple of new standard formats that try to help streamline that type of exchange. So, you provide them with that information.
Similarly, on the consumer side, now you’re going to get SBOMs from all these different vendors that are all these different providers, you need to decide what you do with them. This reporting and managing of the SBOMs is the newest area in terms of like an industry practice, and so, it’s probably the most formative.
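To make the exchange artefact Guypo mentions concrete, here is a minimal sketch that assembles an SBOM-shaped JSON document loosely following the general structure of the CycloneDX format. The application and component names are made up, and a real SBOM would be generated by tooling and carry far more detail (hashes, licences, a full dependency graph).

```python
import json

# Hand-rolled document following the rough shape of a CycloneDX JSON
# SBOM. Component names/versions here are illustrative only; real SBOMs
# are produced by build tooling, not written by hand.
def make_sbom(app_name, app_version, components):
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "version": 1,
        "metadata": {
            "component": {"type": "application",
                          "name": app_name, "version": app_version},
        },
        "components": [
            {"type": "library", "name": name, "version": version,
             "purl": f"pkg:npm/{name}@{version}"}
            for name, version in components
        ],
    }

sbom = make_sbom("example-app", "1.0.0",
                 [("lodash", "4.17.21"), ("express", "4.18.2")])
print(json.dumps(sbom, indent=2))
```

Even this skeleton shows why the format matters: a consumer receiving it can machine-read exactly which libraries, at which versions, are inside the product.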
[00:06:09] Lena Smart: Also, to my mind, the perfect SBOM would be one that takes an almost real-time snapshot of your shopping list of software, which effectively is what the SBOM is. It’s a shopping list of all of the software that’s in your product. The flow is constant. If there’s a change, someone has to be notified somewhere. So, what is that flow? What does that look like? And I think that’s where these frameworks are going to become very helpful.
We’re also looking at utilising OSCAL, which is from NIST as well, and is an automation language. I think we’ve got so many ideas floating around just now. We are working with the product folks at Snyk to see where the best collaboration would be. I think the first thing is going to be building and showing what we as two companies are doing to create our own SBOMs, storing them, and sharing that data. But then, I think further on, there’s going to be a need and a request, I believe from the government, for continuous monitoring of SBOMs, because companies, and government agencies, want to know immediately when something has changed that could make their security posture less strong.
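The continuous-monitoring idea Lena describes, where someone has to be notified when the shopping list changes, can be sketched as a simple diff between two SBOM snapshots. Here the SBOMs are reduced to hypothetical name-to-version maps for illustration:

```python
# Diff two SBOM snapshots (reduced to {name: version} maps) and report
# what was added, removed, or changed between them. In practice the
# inputs would be parsed out of full CycloneDX/SPDX documents.
def diff_sboms(old, new):
    added = {n: v for n, v in new.items() if n not in old}
    removed = {n: v for n, v in old.items() if n not in new}
    changed = {n: (old[n], new[n])
               for n in old.keys() & new.keys() if old[n] != new[n]}
    return added, removed, changed

old = {"lodash": "4.17.20", "express": "4.18.2"}
new = {"lodash": "4.17.21", "express": "4.18.2", "left-pad": "1.3.0"}
added, removed, changed = diff_sboms(old, new)
print(added, removed, changed)
```

Any non-empty diff is the trigger: something in the product’s composition moved, and someone downstream may need to re-evaluate their security posture.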
[00:07:22] Simon Maple: On the one hand, we have a large amount of open source software in our applications across the globe. Now, SBOMs, they could lend themselves to how we may be able to secure open source. But as Guypo mentioned, there’s a significant component related to the workflow and consumption of these open source libraries and components.
[00:07:41] Guy Podjarny: Just to complete the picture, the other strand of open source supply chain security is around workflows and assumptions. That one starts from scrutinising your dependency ingestion. There have been all sorts of examples of cases in which you think you are consuming library X, but you’re actually consuming library Y, or maybe it’s been manipulated in the process.
The Codecov incident is a good example, in which the compromise was in a file that was being downloaded into customers’ build systems. If you were using the hash verification that Codecov did have, then you would have found out that the file you’re downloading does not match the hash that was provided on the site, and in fact, that’s how the compromise was eventually detected. But there are a lot of examples of this on different registries.
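The mitigation Guypo refers to, comparing what you downloaded against the hash the vendor publishes before running it, can be sketched as follows. The file name and contents here are simulated for illustration:

```python
import hashlib

# Lesson from the Codecov incident: before executing anything downloaded
# into your build, compare its digest against the vendor's published hash.
def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, published_hash):
    return sha256_of(path) == published_hash

# Simulate a downloaded build script (contents are illustrative):
contents = b"#!/bin/sh\necho upload\n"
with open("uploader.sh", "wb") as f:
    f.write(contents)

expected = hashlib.sha256(contents).hexdigest()
print(verify_download("uploader.sh", expected))  # intact file
print(verify_download("uploader.sh", "0" * 64))  # hash mismatch: do not run
```

The check only helps if the build fails hard on a mismatch; downloading the hash from the same place an attacker could tamper with also weakens it, which is why signed releases are the stronger version of this idea.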
So, dependency ingestion is where it starts, and then you look at what happens after. The build process: has nobody compromised your build system? SolarWinds is the canonical example of a compromised build system. SolarWinds themselves, not their customers, were compromised through their build system.
And then lastly, it’s a workflow around response. So, you do need to acknowledge that you’re not going to do all of these things perfectly, that new vulnerabilities will be disclosed, and that components will be deemed to be malicious. So, you need to build the ability to have some form of real-time response to this. And just like any type of incident that you might practice, you might rehearse, to say, “Okay, fine. If it turns out the component I’m using is malicious, or has a Log4Shell-scale vulnerability, am I ready for it? Do I know where this component is? Do I understand the blast radius? Do I know who to wake up and who to talk to, to engage about this?”
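The “do I know where this component is?” question can be sketched as a lookup across per-service SBOMs. The service inventory, package names, and affected version set below are invented for illustration:

```python
# Given SBOMs for each deployed service (reduced to {name: version}
# maps), find every service that ships an affected package version.
# This is the "blast radius" question asked during incident response.
def blast_radius(sboms_by_service, package, affected_versions):
    return sorted(
        service
        for service, components in sboms_by_service.items()
        if components.get(package) in affected_versions
    )

inventory = {
    "payments": {"log4j-core": "2.14.1", "guava": "31.0"},
    "search":   {"log4j-core": "2.17.0"},
    "frontend": {"lodash": "4.17.21"},
}
print(blast_radius(inventory, "log4j-core", {"2.14.1", "2.15.0"}))
```

The rehearsal Guypo describes is exactly this query, run under pressure: if you cannot answer it from your SBOM inventory in minutes, the response plan has a gap.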
So, that strand is a bit more complicated because it involves people and processes, but it’s a different strand around open source-oriented software supply chain security.
[00:09:29] Simon Maple: That really covers a lot of the main risks that the software supply chain can bring. We’ve talked about the vulnerabilities in open source projects, as well as whether an open source project can be compromised or not. And, also, whether a maintainer or a project can be malicious.
There’s also, I guess, the overall health of a project or a library, and whether it might be close to being abandoned by its maintainer. We’ve seen project responsibilities handed over to other maintainers who later turn out to be malicious, or attackers taking over an existing open source project used by hundreds of thousands or more applications and, all of a sudden, adding malicious code into it. That’s really where it’s dangerous: there are applications that are using components they don’t even know about, and all of a sudden, one small brick in the wall can become malicious and infect the whole project.
Again, the pipeline is a really interesting one, because things like Codecov or SolarWinds are essentially attacks on the actual pipeline, compared to an application living in production. If you look back at Log4Shell, that kind of exploit was happening against applications, rather than the pipeline.
So, that was more a known vulnerability in open source. But the pipeline example feels more complex. When we think about trying to defend against this, it feels harder than a lot of the application security that we’ve typically tended to in the past with the usual scanners that we’ve used for many years. So, why is this more complex? What is different that makes this a more challenging problem to approach?
[00:11:04] Guy Podjarny: I think it’s a variety of components. From a consumer perspective, you just have less control. You’re basically joining midway. You don’t really have any influence over how the software you’re consuming is produced. Especially when it comes to open source, you’re getting it for free, and you really have – there’s no relationship. In fact, the vast majority of open source you consume, you don’t even consume directly. You’re using some open source component that in turn uses others, that in turn use others. This indirectness means that you’re working with something that’s in practice a bit of a black box. Technically, the source is available to you, so you can sort of see the code.
But in practice, because of the sheer volume involved, it’s just very hard to understand, let alone influence how well the code is built. That’s the first problem. The first problem is complexity and just, there’s this huge amount of unknown around what you’re consuming, and it’s just a lot that you’re consuming, even finding out what you’re consuming is already a challenge and you continue from there.
The second is, who is making the decisions: it’s a decentralised problem. When you buy a piece of software, oftentimes organisations will have some form of process, and that process will include some form of security assessment of the component, and a decision of whether that assessment shows it to be sufficiently secure and free to use. Sometimes compliance requirements even kick in for that vendor. And so, at least someone stopped and asked, “Is this solution I’m about to consume secure enough?”
When you’re talking about consuming open source, oftentimes those processes do not exist. It is some developer in some team choosing to use something for its functionality. Because there is no purchasing process, because there is no contract with the provider of the open source component, with the maintainer, there’s oftentimes also an omission of any security evaluation or security assessment. The software just gets consumed. And this further happens when it comes to updates and changes; they’re just taken in. So, this is the second piece: there is no consumption process. There’s oftentimes very little scrutiny around consumption, and fixing that requires a change to a decentralised ecosystem, a place where many, many different people need to make those decisions, versus maybe a security purchase from a vendor.
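The missing consumption process Guypo describes could, in a very simplified form, look like a CI gate that checks requested dependencies against an approved list before they are merged. The package names and the allowlist policy below are invented for illustration:

```python
# Minimal sketch of a dependency-ingestion gate: compare the packages a
# change wants to introduce against an approved (name, version) set and
# surface anything that hasn't been reviewed. Real gates would also
# check licences, known vulnerabilities, and transitive dependencies.
def check_ingestion(requested, approved):
    return [(name, version) for name, version in requested
            if (name, version) not in approved]

approved = {("requests", "2.31.0"), ("flask", "2.3.2")}
requested = [("requests", "2.31.0"), ("totally-legit-pkg", "0.0.1")]
violations = check_ingestion(requested, approved)
print(violations)
```

A non-empty violations list would fail the build, forcing the review that, in the purchasing world, a procurement process would have triggered.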
And then, I guess, the last key problem here is that fixing this and actually levelling up security requires the whole supply chain to eventually change something. Probably, the best manifestation of it now is indeed this provision of the SBOM from vendor to vendor. So now, suddenly, everybody upstream needs to provide you with their description, and that requires everybody to start shuffling. The big sledgehammer purchasing power of the US Federal Government has really mobilised us into action faster than most. But even then, it’s hard to get everybody to change their behaviours. But if you don’t get everybody to change their behaviours, then you’re just limited.
If someone down the line has given you a flawed software bill of materials, it really affects the validity of anything that comes downstream. Even further, the ability to assess how well they’ve done their job, how well they’ve done the security, is just very hard.
If you think about physical supply chains, and how much effort and complexity there is in validating that they are good, it’s vast. There are international standards about what good looks like and what is legitimate. There are auditing services and components to it. There are government import inspections. And at the end of the day, there’s somebody to sue if it doesn’t work. Open source is, first of all, far earlier in the process, far more decentralised, because there are no physical goods crossing any border, and far less susceptible to government scrutiny or any centralised scrutiny for that matter. In some cases, it’s even resistant to this type of scrutiny.
Open source is very inclusive and welcoming, and tries to be borderless. So, it could be that some consumer says, “I don’t want maintainers from China.” But in open source, it’s not always clear if someone is from China or not. And community-wise, it is not acceptable to discriminate against someone from one country or another. So, a lot of conflicts arise.
I think the best analogy for the challenge of software supply chain security is fake news. To me, the analogy to physical supply chains is actually almost less apt, in a way that downplays the complexity of this problem. Just like in social media, where everybody is a broadcaster now, everybody’s a content creator who can just upload their content, in open source, anybody can publish some piece of software, and then it’s the community that decides whether they want to propagate that or not. And downstream, you really lose sight of where this piece of software originated. Just like in news, you lose sight of where a rumour or a statement or some piece of news originated. And it’s crowd dynamics.
So, I think we need to acknowledge that it is a very complex problem, and probably a problem that we will never truly entirely solve, not to the degree that we do for physical supply chains. But I do think we’re in the very early stages of it. And so, there’s still a lot better than we can be at, and there are a lot of mitigating controls we can put in place to address it.
[00:16:33] Simon Maple: This is a truly complex problem, and not necessarily one that has a trivial solution. In fact, it’s not even trivial to know what we already have in our estate, what we’re using, particularly with the increasing number of dependency graph levels and so forth. With the acceleration of concern that we’ve seen in the industry, which has led to new standards being created, such as CycloneDX and SPDX, and even to new executive orders and attention from governments, it’s very interesting to know what actions we can actually take. Who would you say is most notable in the arena today? Who should be listened to? And who should we follow in order to help us with this problem?
[00:17:12] Guy Podjarny: I think the industry is rallying to try and tackle this problem. There are commercial opportunities here, and there are community motions that come into play. Probably the most notable is the Open Source Security Foundation, or OpenSSF for short. OpenSSF is a child foundation of the Linux Foundation. As you may or may not know, the Linux Foundation and the OpenSSF are not really your standard foundations.
[00:17:33] Brian Behlendorf: It started in early 2020, when a set of companies who’ve been leaders in the open source space started to realise, well, maybe there are some things we could do systematically with the tooling and infrastructure that the open source communities use, to improve the default level of security that people should be able to come to expect from open source.
So basically, this idea of how do we systematically improve things was launched, and a whole lot of volunteers showed up. Volunteers from all sorts of different companies, lots of vibrant conversation about, well, it’s an education problem. People don’t know how to write defensive code. Well, let’s create some training materials and guides for what to do when somebody shows up at your open source project and says, “You’ve got a vulnerability. What should you do?” Others looked at, what are the most critical packages? There’s that famous xkcd comic of the one little box in the lower right that was holding up everything else, maintained by two people in their spare time. Well, how do we look for those kinds of packages and try to identify them before they become a surprise?
All right. So, there’s a whole group focused on identifying critical projects, really trying to understand, if we’re going to focus some resources and attention, which are the ones whose work really deserves the most attention. I think my biggest hope is, we can work with any of the companies out there building security tools, as well as those with a need for highly secure environments, and get folks to combine efforts on different things, rather than coming up with 20 solutions to the same problem.
[00:18:52] Aeva Black: Yeah, I’m recently elected to the Technical Advisory Council of the OpenSSF, the Open Source Security Foundation, and also elected to the Board of Directors of the OSI, the Open Source Initiative. That’s the long-standing neutral body that reviews licences and gives them a stamp of approval to say what is an open source software licence and what isn’t.
Most open source bodies or foundations produce software; some don’t. Some are more advisory, some work on policy. But the ones a lot of us are familiar with also produce code. The OpenSSF is sort of both. It produces lots of guidance and written documentation, so it has working groups that do those things, and it also has some software projects, and is beginning to collect a couple of projects that run a service.
So, based on the charter of the foundation, the governing board handles the big decisions around funding and sort of governance of the whole thing, and delegates technical authority to the TAC. And then the TAC has this role of not micromanaging projects, not paying attention to every little detail. There are only seven of us. We can’t do that. But setting the policies and having check-ins to ensure that all of the working groups and the projects in the OpenSSF are healthy. They’re well-functioning. Things are transparent. People know how to find each other. And it’s a healthy community producing good software and good recommendations.
[00:20:13] Emily Fox: The Security Technical Advisory Group within the CNCF is the group that is designed to support the technical oversight committee in reviewing and kind of driving the direction of security for cloud native projects. It’s a little bit different than the existing technical advisory groups within the foundation who focus on key technical domain areas, such as networking and storage, and really get involved in those projects of those domains to allow them to reach a higher level of maturity and adoption and integration with the rest of the community.
I will say, because open source is an ecosystem itself, it’s so massive, and it has such a rich heritage of different processes around how we do software development. I’d say about 50%, and I say that as we are transitioning into more microservice and containerisation within the landscape. So, while cloud native is, in itself, its own unique ecosystem for the architectural design decisions that are in place with it, promoting distribution and immutability and ephemerality of our workloads, there are still a lot of core security primitives that apply within normal open source projects. And that universally, anybody can go to our documentation or recommendations and be able to apply them at some level to whatever their situation is.
[00:21:29] Simon Maple: The OpenSSF, as Guypo mentioned, is really one of the main community initiatives tackling the challenges with open source and its implications for supply chain security. You heard from Brian Behlendorf and Jim Zemlin in part one of this series. Let’s hear a bit more from Jim about how the OpenSSF came to be, and where things are at today.
[00:21:48] Jim Zemlin: And then, in December, Log4j happened, and this was this incredibly high profile open source compromise. And then in January, we’re all sitting in the United States White House talking about open source security and the software supply chain. And we sort of are looking each other saying, “Well, at least we were a few months ahead of needing to convene to really find a more comprehensive way to deal with this.”
That’s really how the Open Source Security Foundation started, and I would say we’re at a pretty early stage of what’s truly needed to work on our collective software security, which is, bluntly, millions of dollars. Let’s go train a million developers in how to write secure software. If we provide free training at, let’s say, 50 bucks a developer to stream that kind of course on some learning system, that’s 50 million bucks to train a million developers over five years. It’s not zero. We need to go and work with the major package managers to provide them resources, knowledge, and engineering support to improve their security baseline, build better systems for cryptographic signing, and other components that would make those package managers more secure.
We need to continue to understand what the most critical open source projects out there are, and audit them. On average, an audit of a mid-sized software project costs somewhere between $50,000 and maybe $80,000. When I say audit, in this case, I’m not talking about a static analysis tool or anything. I’m talking eyes on code, reading it, going through and really taking a deep dive into its security. That’s something that needs to be resourced, and should be focused on, let’s say, the top 200 most important open source projects out there, and then we work with those open source maintainers and developers to take the things found in that audit and fix them.
We need to build and provide additional tooling. This is where Snyk has been just this amazing partner to the Open Source Security Foundation, allowing open source projects that are critical to all of us to scan their code, understand out-of-date dependencies and vulnerabilities within their code base, and do static analysis on that software in order to find and remediate bugs. Provide a set of development resources to not just find the bugs, but have folks who could go in and fix those bugs. And then, continue research on our never-ending search for the most critical open source components out there that we need to really focus on. All of that combined is a multi-hundred-million-dollar collective effort. But if successful, it has a massive force-multiplying effect in terms of raising our collective security, and would greatly reduce the risk of another Log4j, which cost a lot more than a few hundred million dollars to remediate. The damage of these compromises is in the billions. We’ve just got to get ahead of this, and that’s what the Open Source Security Foundation’s role is: to coordinate with industry and government.
[00:25:16] Simon Maple: While the OpenSSF and its members are playing a pivotal role in this battle, the second critical player is the US government, as Guypo and Jim explain. Lena Smart from MongoDB also shared some information about this topic.
[00:25:28] Guy Podjarny: A second notable player is the US government. The US government is actually doing a lot of work over here. The executive order, we’ve mentioned a few times, is an example of something they’ve done using the purchasing power of the US federal government to drive change. It wasn’t a bill. It didn’t go through Congress. It was a statement that said, sort of simply put, “Here’s a bunch of security requirements that we have of you, including this demand for the SBOM. We demand that you provide us with an SBOM to sell into the federal government and that you cascade that down your chain. And we’re not really going to audit you.” They didn’t require all of that auditing. “But if you don’t do this, right, and we find out, you’re never going to be able to sell to the federal government again.”
So, it’s such a big scare, such a big fear, that it really mobilised everybody into action. But there are more specific guidance points that they provide in the executive order and in subsequent mentions, including more guidance coming after the Russia-Ukraine war broke out, out of fear of more activity from Russian hackers. And more recently, there are actually bills starting to try and pass through Congress, like the Securing Open Source Software Act, that try to really codify into law practices and requirements around security as good industry practice, almost like anti-fraud-style protections that say, “Well, you have to protect your consumers. And therefore, you have to do these things.”
On one hand, they talk about that. And on the other hand, they acknowledge open source as critical infrastructure for the government, for the world, and therefore they justify governmental spend in helping secure it. So, the US government is actually a pretty big factor. Clearly, government involvement comes with all the pros and cons, and maybe the technical accuracy is not always quite as high. But they are another player to watch.
[00:27:18] Jim Zemlin: One of the things that government can help with is just using their influence and convening function to get industry to collectively come to consensus about tackling this problem. So, that’s really what we did in the White House meeting in January, sort of agreed upon about a half a dozen things that we think would really improve the baseline for security. Let’s go audit those critical projects. Let’s provide critical open source project tooling. Let’s train developers on how to write secure code. Let’s shore up those package managers. Let’s build tooling to allow software bill of materials in metadata to more smoothly flow across the supply chain so the end users can have a better manifest of the software, the components that they’re using in order to remediate vulnerabilities that will inevitably happen over time.
That alone is a super useful role for government. The difficult part is that, even if you have a great specification, even if you have the will, in the case of government, to require this, there is a lot of stuff that you need to create to make supplying software bill of materials manifests, I think some people would say easy, others would say even possible. There’s a ton of things you need to build. How do I create tools that allow a developer to easily create a software bill of materials descriptor for their particular component? How do I make sure that package managers can have an ingress and egress of software bill of materials metadata as they’re distributing software components? How do I make sure that the build systems, the place where software all comes together, allow for the ingress and egress of software bill of materials metadata?
This is actually a problem. But what’s great about this need for building tools to automate all of those different ingresses and egresses of software bill of materials data is that I think it’s a relatively definable, doable software development effort to build all that stuff. So, what I think we’ll end up seeing, as government starts demanding software bills of materials as a requirement for the software that they consume, or if they pass regulation, or federal agencies require software bill of materials data, is a demand for software that automates that flow of software bill of materials data across the supply chain. This is something where we literally have a team right now working on – think of it almost like a PRD for the tools that will automate this stuff – and we’ve got to get out there and build it. I think it can be done in fairly short order.
[00:30:08] Lena Smart: The IT-ISAC – they describe themselves as a force multiplier. They are this amazing group of people. It’s been around for 21 years now, and it actually enables collaboration between public and private industry. There are 16 critical infrastructure groups defined by the federal government, and the IT-ISAC is the Information Sharing and Analysis Centre – that’s what ISAC stands for – for IT.
So, I used to be a member of the E-ISAC, which is for the electric sector. And again, it has to do with information sharing. It just brings people together who maybe normally wouldn’t sit at the same table. It’s recognised by DHS. So, we get some of the most amazing speakers. I mean, we’ve had Jen Easterly, the head of CISA, come speak to us. Jen Easterly has set up so many things, but she set up the security commission basically to – not oversee, but I think just be part of – some of the policy and fundamental decision-making processes when it comes to cybersecurity for the USA.
There are some members of the IT-ISAC who have joined that commission. I think it’s just really important that private industry and the public sector can work together in a trusted environment. Obviously, before the pandemic, we were meeting in person, and I’ve just made so many contacts through the IT-ISAC. It’s an amazing resource. They have little working groups – I’m a member of the supply chain risk management working group, no surprise there. And then, we’re also working on some other stuff as well, in some special interest groups, or SIGs, as they’re called.
[00:31:50] Guy Podjarny: And then the third key one that I would say is just the commercial cohort of vendors that are in the space. There are some existing players that were already helping you wrangle your use of open source. Snyk is one of them. There are other software composition analysis players that help you understand your SBOM and address what’s in it. These players are generally providing new capabilities related to, indeed, that reporting of the SBOM, or maybe the trustworthiness of components.
So, that’s one cohort of players to watch. This is a bit of a boost in terms of the business, because more companies need to adopt this security practice. To be frank, it’s a somewhat secondary set of new capabilities that need to be built, because these players, to begin with, were mandated to help you know which components you are using and where they are. And so, the technicality of being able to export that into a file in some standard format is very much a minor feature.
There’s another cohort of newer commercial players that try to tackle the pipeline security journey of an open source component, or a piece of software in general – to tackle problems around secure consumption, around anti-tampering within your build process, or even, to an extent, response to malicious incidents. They’re coming up as supply chain security vendors. But really, they’re mostly focusing on tackling that, and maybe slipping a bit into the SCA world as well.
I’d say those are the primary cohorts. There’s a fourth small stream, which is basically SCA and supply chain security related features that are coming up across the ecosystem. Because as we think about this as a supply chain, any piece of software that manages, for instance, your asset management in your system may now add a capability to know what the SBOM inside of it is. Any IT workflow tool that helps you interact with your vendors might want to do that. Any security assessment tool, or even protection tool, might want to be aware of your components. So, there’s a wave of supply chain security features that get added to existing players.
[00:33:49] Simon Maple: There are a lot of things that we, as organisations, can do better ourselves in reducing the risk in our overall supply chain, in our open source, in our pipelines and so forth. But a lot of people will probably feel quite overwhelmed by all this and think, there’s a lot here, there are some pretty big lifts that we need to take. I don’t know where to start. I don’t know what to pull in, or who to pull in, to help me out with this. So, in terms of some of the initiatives or projects that are out there, who would I, as an organisation, want to look to for assistance with some of these solutions?
[00:34:25] Guy Podjarny: I think the top projects to look at from an ecosystem perspective – to monitor, to collaborate with, to join – are primarily in the OpenSSF world. A top one to be mindful of, and maybe to start considering adopting, is the SLSA framework, which was donated by Google, and which provides some structure and some conformity in how to think about the problem and how to break it up. There is some debate about whether it is attuned to larger organisations, so smaller organisations may or may not be driven by it, but it’s a good question to raise.
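At the heart of SLSA is build provenance: a signed statement about who built an artifact and from what. As a rough illustration of the shape of such a statement, here is a sketch following the published in-toto/SLSA v0.2 field names; the builder ID, build type, and artifact below are invented placeholders, and a real system would also sign the statement (for example, via Sigstore).

```python
import hashlib

def slsa_provenance_statement(artifact_name: str, artifact_bytes: bytes,
                              builder_id: str) -> dict:
    """Sketch the shape of an in-toto statement carrying a SLSA v0.2
    provenance predicate. Field names follow the published spec; the
    values here are illustrative placeholders."""
    return {
        "_type": "https://in-toto.io/Statement/v0.1",
        "subject": [
            {
                "name": artifact_name,
                "digest": {"sha256": hashlib.sha256(artifact_bytes).hexdigest()},
            }
        ],
        "predicateType": "https://slsa.dev/provenance/v0.2",
        "predicate": {
            "builder": {"id": builder_id},  # who ran the build
            "buildType": "https://example.com/build-types/ci@v1",  # hypothetical
            "materials": [],  # inputs to the build: sources, base images, etc.
        },
    }

# Hypothetical artifact and builder, for illustration only.
stmt = slsa_provenance_statement("app.tar.gz", b"fake-artifact-bytes",
                                 "https://example.com/builders/ci@v1")
print(stmt["predicateType"])
```

A downstream consumer can then check that the digest in the statement matches the artifact it actually received, which is what ties the provenance to the bits.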
The second key one is really the world of SBOM formats. On top of that, there are two primary standard formats out there, SPDX and CycloneDX – both good formats, with some deltas between them, and probably both have a good shelf life. So, be aware of those formats. And then there’s the world of Sigstore, and its related projects, that help you practically apply attestations, and know whether something has actually occurred and verify that downstream. And so, there’s a whole ecosystem there.
So, looking at Sigstore is very, very important. Another key project to track is Scorecard, as you try to evolve your evaluation of whether a component is trustworthy or not. Scorecard is an initial but good start to how we score an open source project. There’s also a lot of effort in LFX, the Linux Foundation’s platform that curates a lot of open source projects – focusing on the foundation’s projects, but I think growing to more over time – and that tries to give you all sorts of information about those open source components. It includes some vulnerability information, it includes findings, and it includes maybe some information about the maintainers, to the extent that it is legitimate to do so. And it will start including things like Scorecard. So, understanding Scorecard, seeing if those parameters apply to you, seeing if you want to contribute to them, is all very helpful. I’d encourage using LFX as a means of asking that question about the projects you might be consuming.
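Scorecard reports an aggregate score plus per-check scores in JSON, which makes it easy to wire into your own consumption policy. A minimal sketch of that idea, assuming the general shape of Scorecard’s JSON output (the sample result below is invented, not real data for any project):

```python
def flag_low_scores(result: dict, threshold: float = 5.0) -> list:
    """Return the names of Scorecard checks scoring below a threshold.

    The field names mirror the general shape of Scorecard's JSON
    output (aggregate "score" plus per-check entries); a score of -1
    conventionally means the check was inconclusive, so it's skipped.
    """
    return [c["name"] for c in result.get("checks", [])
            if c.get("score", -1) >= 0 and c["score"] < threshold]

# Hypothetical Scorecard result, for illustration only.
sample = {
    "repo": {"name": "github.com/example/project"},
    "score": 6.1,
    "checks": [
        {"name": "Maintained", "score": 10},
        {"name": "Pinned-Dependencies", "score": 2},
        {"name": "Signed-Releases", "score": -1},  # inconclusive
    ],
}
print(flag_low_scores(sample))  # → ['Pinned-Dependencies']
```

A policy like this is a starting point for the “is this component trustworthy?” question Guypo raises, not a verdict; the thresholds and which checks matter will vary per organisation.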
[00:36:28] David Wheeler: LFX is a suite of tools, primarily intended for Linux Foundation projects, to help them do stuff. Now, there’s a whole bunch. You mentioned LF Security. LF Security is intended to be a suite of tools to improve security. The one in there right now is basically looking for dependencies that have known vulnerabilities. It’s using Snyk’s technology, and we do intend to add others to that tool chain. And actually, just in general, the goal is, as people say, “Hey, I need this,” we start noticing, “Oh, wait, all our LF projects need that.” Sometimes they work with organisations outside, and sometimes it makes sense to try to provide them with tools. But coming back to the whole security thing, it’s important to have tools in the toolbox, and that’s one of them: looking for dependencies which have known vulnerabilities.
[00:37:16] Guy Podjarny: But fundamentally, the most important thing you can do to improve your own open source security or supply chain security posture is to start by just making sure that you know which components you are using, what the problems in them are, and how to address them. And then convey to the upstream maintainers that this is important to you; convey to the world that you have chosen library X over library Y because you believe it to be more secure. Open source maintainers are amazing individuals who are producing free software that makes the world better and more productive, and they’re really great. Fundamentally, they cater to their communities, especially the top ones.
So, the more their communities express a desire for, and the importance of, security over functionality – the more weight they get from their consumers saying, “I’m willing to forego this enhancement, this performance boost, in favour of you prioritising this security practice” – the more they will lean that way. In some cases, if you yourself have the security expertise, the security cycles, this is also a great place to contribute back to those projects. Go to those projects, find missing security controls – maybe it’s a security assessment, maybe it’s a security review, maybe it’s just integrating some tool into a pipeline, or fixing a security bug – and apply your resources, your cycles, and your contributions. Because oftentimes, enterprises and corporations as a whole have more security competency inside than those open source maintainers necessarily do.
So, I’d say, more than anything, from a broad perspective, it’s less about having more people join these open source projects – they get a lot of attention, and they could use support, but they don’t need a thousand more people joining them. Mostly, it’s around driving the importance of security across open source projects, wherever they are.
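The first step Guypo describes – knowing which components you’re using – can be started with nothing more than the standard library, at least for a Python environment. A bare-bones sketch of that inventory step (real SCA tooling also resolves transitive dependencies across ecosystems and matches them against vulnerability databases, which this does not attempt):

```python
from importlib import metadata

def installed_components() -> dict:
    """Map each installed Python distribution to its version.

    A minimal starting inventory of the components in the current
    environment; a first step toward an SBOM, not a substitute for
    real software composition analysis.
    """
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
            if dist.metadata["Name"]}  # skip entries with unreadable metadata

inventory = installed_components()
# Print a small sample of the inventory in requirements-style form.
for name, version in sorted(inventory.items())[:5]:
    print(f"{name}=={version}")
```

Once you have an inventory like this, feeding it into a vulnerability scanner or exporting it into an SBOM format is, as Guypo puts it, largely a technicality.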
[00:39:10] Simon Maple: So, Guypo really beautifully wrapped that up there, and I actually totally agree. This is such an important area for us to be able to give that education and feedback right into the maintainer spaces. When we consider our own organisations internally, there are probably a lot of folks who are equally concerned about supply chain risk, equally concerned about their organisation becoming the next one to suffer a breach due to a supply chain exploit or an exposure. It’s not just down to those folks, whether they’re in a security team or whether they lead the dev org. You need that buy-in, and you equally need that same high priority, that same concern, among your development organisation – I mean the grassroots developers, the thousands of developers that people will have in an organisation.
So, I guess there are two pieces here. First of all, what advice would you give to any security individual or security team that is trying to raise that awareness in a development org? We want as many developers as possible recognising that this is a potentially high-risk concern. And secondly, what are the best strategies to actually give actionable, good security hygiene practices for developers to use?
We’re going to tackle these questions and delve into recommendations and suggestions from experts that we’ve spoken to in part three of the Supply Chain Security series on The Secure Developer.
[00:40:32] ANNOUNCER: Thanks for listening to The Secure Developer. That’s all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you’d like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don’t forget to leave us a review on iTunes if you enjoyed today’s episode.
Bye for now.
Founder & President at Snyk
About Guy Podjarny
Guy is Snyk’s Founder and President, focusing on using open source and staying secure. Guy was previously CTO at Akamai following their acquisition of his startup, Blaze.io, and worked on the first web app firewall & security code analyzer. Guy is a frequent conference speaker & the author of O’Reilly “Securing Open Source Libraries”, “Responsive & Fast” and “High Performance Images”.
Field CTO at Snyk
About Simon Maple
Simon Maple is the Field CTO at Snyk, a Java Champion since 2014, JavaOne Rockstar speaker in 2014 and 2017, Duke’s Choice award winner, Virtual JUG founder and organiser, and London Java Community co-leader. He is an experienced speaker, having presented at JavaOne, DevoxxBE, UK, & FR, DevSecCon, SnykCon, JavaZone, Jfokus, JavaLand, JMaghreb and many more including many JUG tours. His passion is around user groups and communities. When not traveling, Simon enjoys spending quality time with his family, cooking and eating great food.
About Lena Smart
Lena has more than 20 years of cyber security experience. Before joining MongoDB, she was the Global Chief Information Security Officer for the international fintech company, Tradeweb, where she was responsible for all aspects of cybersecurity. She also served as CIO and Chief Security Officer for the New York Power Authority, the largest state power organization in the country. Lena is a founding member of Cybersecurity at MIT Sloan, formerly the Interdisciplinary Consortium for Improving Critical Infrastructure Cybersecurity, which allows security leaders in academia and the private sector to collaborate on tackling the most challenging security issues.
About Emily Fox
Emily is a DevOps enthusiast, security unicorn, and advocate for Women in Technology. She promotes the cross-pollination of development and security practices. She has worked in security for over 12 years to drive a cultural change where security is unobstructive, natural, and accessible to everyone. Her technical interests include containerization, automation, and promoting women in technology. She holds a BS in Information Systems and an MS in cybersecurity. She is also a member of the Cloud Native Computing Foundation’s (CNCF) Technical Oversight Committee (TOC).
About Aeva Black
Aeva Black is passionate about privacy, ethics, and open source. They currently work in Azure’s Office of the CTO and hold seats on the Board of the Open Source Initiative, on the OpenSSF’s Technical Advisory Council, and a Shadow seat on the Board of the CNCF. Aeva previously served on the Board of the Consent Academy, on OpenStack’s Technical Committee, founded the OpenStack Ironic project, wrote a lot of Python, and even managed a few small MySQL databases.
Aeva is a lifelong student of the Buddha Dharma, an incurably queer geek, and a frequent keynote speaker at conferences around the world. They are also an aspiring, yet time-starved, writer whose recent works include contributing to “Transcending: An Anthology Of Trans Buddhist Voices” (2019), and being the technical editor for “Trust In Computer Systems And The Cloud” (2021).
About Brian Behlendorf
Brian Behlendorf is the General Manager for the Open Source Security Foundation (OpenSSF), an initiative of the Linux Foundation, focused on securing the open source ecosystem. Brian has founded and led open source software communities and initiatives for more than 30 years, first as a co-founder of the Apache Software Foundation and then later as a founding board member of both the Open Source Initiative and the Mozilla Foundation. In parallel, Brian co-founded or was CTO for a series of startups (Wired Magazine, Organic Online, CollabNet) before pivoting towards public service serving the White House CTO office in the Obama Administration and then serving as CTO for the World Economic Forum. Brian joined the Linux Foundation in 2016 to lead Hyperledger, the distributed ledger initiative now core to supply chain traceability and central bank digital currency efforts worldwide, and has led the OpenSSF since September 2021.
About Jim Zemlin
Jim Zemlin is Executive Director of the Linux Foundation. In this role, he has helped industries navigate transitions from proprietary software to a world in which almost all infrastructure technology is built collaboratively with open source. Zemlin’s work has helped create industry support for open source software, from the Core Infrastructure Initiative, which supports projects critical to cybersecurity, and Let’s Encrypt to transformative projects such as Hyperledger, the open source blockchain project working to provide distributed trust online. Home to the most influential open source project in history, Linux, and its creator, Linus Torvalds, the Linux Foundation is focused on sustaining the greatest shared resources of our time and protecting them for generations to come.
About Dr. David A. Wheeler
Dr. David A. Wheeler is an expert on open source software (OSS) and on developing secure software. He is the Director of Open Source Supply Chain Security at the Linux Foundation and also teaches a graduate course in developing secure software at George Mason University (GMU). He has a PhD in Information Technology, a Master’s in Computer Science, and a certificate in Information Security from GMU; he is also a Certified Information Systems Security Professional (CISSP) and a Senior Member of the Institute of Electrical and Electronics Engineers (IEEE).