EP. #89

Containers and Developer Experience in the Cloud Native World

WITH Justin Cormack

In episode 88 of The Secure Developer, Guy Podjarny speaks to Justin Cormack, CTO at Docker, who is passionate about security, software development, and the open source community. In this conversation, we hear more about what Justin’s position as CTO involves and how Docker is getting back to its roots as a developer-focused company that concentrates on developers’ needs. We also discuss what Justin has seen in terms of how companies use containers. Given that containers are still relatively new, their problems require unique solutions, and Justin unpacks some of the security-related concerns that Docker clients face.

About our Guest
Justin Cormack

Justin is the CTO at Docker and is passionate about software development, security, and the open source community. He really enjoys serving on the CNCF Technical Oversight Committee, helping projects and communities grow.

Transcript

[INTERVIEW]

[00:01:24] Guy Podjarny: Hello, everyone. Thanks for tuning back in to The Secure Developer. Today, we have a guest who really comes from more of the DevOps side of the world, and we’re going to have some interesting conversations about containers and developer experience in this sort of cloud native world. It’s great to have Justin Cormack, the Chief Technology Officer at Docker, here to discuss all these things. Justin, thanks for coming onto the show.

[00:01:46] Justin Cormack: Thanks for inviting me. Looking forward to this.

[00:01:49] Guy Podjarny: So, Justin, before we dig in, tell us a little bit about what you do. What does it mean to be CTO of Docker? And maybe a bit of the journey that got you here?

[00:01:57] Justin Cormack: Yeah, so I’ve been at Docker for five and a half years now. I started off in engineering at Docker. The first project I worked on was Docker for Mac, and that was a really fun project — it was really focused on developer experience and what developers are actually trying to do. The desktop experience before that was a bit cobbled together, and this was like, let’s make a unified product that developers really love. It has been a really successful product. Obviously, we’ve had Docker for Windows since then as well, and we’re thinking about how to do it for Linux, too.

So, that was the first thing I worked on, and I’ve really worked across everything in Docker since then. I started working on our open source projects — I was a Docker Engine maintainer — and worked on the internals of the Docker runtime, particularly security things. I worked on seccomp when we first shipped that; that’s system call filtering to stop exploits in the container runtime.

I then started working pretty much full-time on security across the whole of Docker, including the enterprise products. But since the end of 2019, when we sold off the enterprise business to Mirantis, we’ve been really focusing on getting back to our roots as a developer-focused company — a company that really focuses on the needs of the developer on an everyday basis, and the things that developers are asking for directly. Rather than enterprise products with kind of top-down sales, we’re kind of bottom-up, usage focused.

So, I became CTO recently. I’d been doing a lot of the bits of that role already, and this was formalized in me becoming CTO a few months back. So, I’m really working with our team on strategic direction, on helping us build products that developers really love. But also, we’ve always had this thing at Docker that, if you look at containers in the first place, we took a really deep, complex technology and made it really simple and magic. That’s kind of my role: to try and help us do that more. The things that really add value are when we take something that’s really complex and make it actually really usable and part of your everyday routine. That’s really the heart of what Docker is about and what we’re trying to do more of.

[00:04:21] Guy Podjarny: Yeah, that’s definitely a worthy goal and we’re probably going to circle back to that and talk about developer experience. Just out of curiosity, during that journey you described, for a bunch of years you actually wore the security engineer hat — you went from more of a software engineer hat to more of a security engineer hat. How big of a transition did you feel that was personally? Was it a very different day to day? Was it just software engineering on a different set of problems? Did you feel it was different in the organization to have that different formal title?

[00:04:52] Justin Cormack: It was an interesting transition. Actually, I’ve always been involved in security to some extent, even before I was at Docker. I’ve had quite a miscellaneous career in different areas — I’ve worked in financial services and publishing and all sorts of businesses. So, I’ve always had that experience, and I was an ops person before I was a developer as well. So, I had experience from that point of view, but it was definitely an interesting transition.

I recommend that people move into security. I think it’s a really valuable thing for engineers to do, to shift into security, because it’s something you bring a lot of background experience from your previous roles into, along with the way of thinking about it. I think security is something that everyone should do some of in their career, because it’s really important. And that experience of moving into it full time really helps you think about how security relates to other developers, their day-to-day work, and what things are important.

I’ve always kind of seen security as a kind of risk management thing — I guess that’s from my finance background as well. It’s not an absolute, where you’re trying to fix everything; people sometimes think of it like that, which is always a mistake. It’s always about trying to handle risks: trying to work out what things are problematic, what really applies in your situation, what bad things could happen, what are the easy mitigations, what are the hard mitigations that you need to take longer over, and so on. I think it was interesting because, obviously, Docker is a really varied company that does a lot of different things. We have products like Docker Desktop that we ship onto people’s machines. We have Docker Hub, which is a SaaS service. We were shipping enterprise products. We were shipping security as part of our products.

So, it’s really a mixed and interesting area to kind of get involved in security as well, because it was just really, really varied. I think it was quite an interesting way of getting a kind of broad exposure to different aspects of security as well.

[00:07:10] Guy Podjarny: Yeah, sounds like it. That was my sense as well — the forcing function of digging into it and really embedding security is insightful. It’s good to hear that it felt that way, making the actual change.

Let’s focus on Docker. You mentioned Hub, and I think people are probably well familiar with Docker, both as a context and as the dominant technology for handling containers these days. You work with probably a lot of companies that consume Docker, and a lot of the complaints, or a lot of the concern in the ecosystem, is about how, even if you provide more up-to-date containers — ones that have resolved their vulnerabilities and the like — companies might just not consume those. People might not move along and take the new upstream. There are interesting ownership and paradigm challenges there. I guess from your experience, how do you see companies use containers today? Are most of the images public or private? Do they update often? What do you see as the lay of the land?

[00:08:15] Justin Cormack: It’s really interesting, and it’s changing, and it’s also very diverse as well. So, the public/private thing is really interesting. One of the things we’ve been seeing is that people are consuming a lot more public containers. There’s a bunch of data — we have a lot of data from Hub, but as I’ll say in a minute, it’s perhaps not totally typical of the whole industry. There was a Sysdig survey recently that said that consumption of public images had risen last year from 40% the year before to 47%, which is way higher than anyone, I think, would expect.

So, from Docker Hub, we actually see way more than that, because Docker Hub is probably the primary source of public container images. So, this is something we had obviously seen — that people are consuming public images a lot more — because we’ve been probably the primary source of these.

But that figure is for images as a whole, so it gives a better spread across people’s private repos and their use of things like private ECR, and on-premise things like Artifactory and so on, mixed in with things like Docker Hub. So, I think that’s probably a reasonably accurate view. The interesting thing about the increase — I remember when we started things like the Docker official images, which are the most widely used public images, a long time ago; I think it was in 2014. Back then, the assumption was that people would use these in their Dockerfile — FROM ubuntu — and for Ubuntu and the programming-language type containers that’s still the case. But I think it’s the use of things as whole applications that’s really changed: people don’t do FROM nginx and add in their content and config so much. They add config and content as volumes and bind mounts, and leave the actual Nginx image as it comes from upstream.
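The pattern Justin describes might look something like this — a minimal, hypothetical docker-compose.yml sketch (the file paths and service name are made up for illustration) where the upstream nginx image is used as-is, never rebuilt, and the site’s content and config come in as bind mounts:

```yaml
services:
  web:
    image: nginx:1.21          # upstream image used as-is, never rebuilt
    ports:
      - "8080:80"
    volumes:
      # content and config are bind-mounted in, so picking up a newer
      # Nginx is just a matter of pulling a newer upstream tag
      - ./site:/usr/share/nginx/html:ro
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
```

Because nothing is baked into a custom image, updating Nginx doesn’t require rebuilding anything downstream.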

So, this idea of actually using whole image components as part of your application is something that’s really been growing in use over the last few years. It’s the model that Docker Compose was kind of built around, and that’s a lot of people’s mental model — it’s the most popular development tool for containers. I think Kubernetes, and things like Helm charts, also very much reflect this kind of model: you just use these components and then construct an application with your pieces in it.

Most of it is composed of existing components, and it reflects the whole way software is constructed now, where every piece of software is built out of open source components — whether they’re libraries, a base operating system, or actual whole applications. And the more compositional these things are, where you just take a component as-is, the easier they are to use. The less you have to glue and program these things together, the simpler it actually becomes, and people are building more things like that. People are thinking of software like that.

I think SaaS, the consumer services model also reflects the way people are like, “Let’s just consume an API rather than let’s build this into my software.” Because it’s just so much simpler, and more flexible to just consume an API. It’s more uniform. It’s easier to understand. It’s better documented like that, and then those things. So, I think that consuming public images is part of that process of let’s make software more compositional and people are better at making units that are easier to consume like that as well.

So, on Docker Hub, I think 20% of traffic is the official images, which we build to make it easier to consume open source software. Because really, right from the beginning, we saw that people were wanting to consume software like this, and I think a lot of the models we created at the beginning, or early on in Docker, have stuck as “this is how you use containers.”

One of the things the official images do is update the software as the upstream software updates. So, when Python releases a new version, it’ll go into the Python official image the next day. People expect that, rather than the kind of model people had before with VMs and so on, where you install a long-term support version of Ubuntu and then you stick on the same major version of Python for a really long time.

And then after two or four years, you might update to a new version and make a drastic change. So there’s a much easier path to keeping up to date. But there’s also an easier path to not updating, because all the versions of Python are available from the official images and you can choose — and that’s also very different from the [inaudible 00:13:36], which gives you one choice, or maybe two choices if there were two major versions in support at the same time. You might get Python 2 and Python 3 for a long time before Python 2 expired, but after that it goes away. Whereas with containers, you have all the tags — pick any tag you like, use it in any way.
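As a sketch of what that tag choice looks like in practice (the specific versions below are illustrative), a Dockerfile can pick any point on the spectrum from "moves constantly" to "frozen":

```dockerfile
# All of these tags coexist on Docker Hub; the one you pick decides how
# much you drift as upstream publishes new releases.

FROM python:3.9.7    # a specific release: moves least, goes stale fastest
# FROM python:3.9    # newest 3.9.x patch release
# FROM python:3      # newest Python 3 minor release
# FROM python:latest # newest stable release, moves the most
```

Note that even a fully specific tag like 3.9.7 can be rebuilt (for base-OS updates, say), so a tag alone is still not byte-for-byte deterministic — a point that comes up later in the conversation.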

[00:13:57] Guy Podjarny: How do you see that in companies? It’s really interesting to see the growth in public image usage. I guess it’s maybe just open source really permeating, and people getting better at taking advantage of the ecosystem that’s available to them. And, as you point out, maybe indeed this sort of compositional usage — or whatever you’d call it — of being able to use the image as a whole and just swap the volume.

In terms of updates, though, are you seeing any patterns? Do you track that? Do you track how often organizations continue to pull down older images and previous tags, or just kind of fail to update what they have deployed, despite the fact it’s even easier now? Because indeed, they have that Helm chart and they can just pull in the new version.

[00:14:46] Justin Cormack: Yeah, we have a lot of data about this. We’re actually working right now on how to surface it to users effectively, so we’re iterating on this right now. We did a bunch of collaboration with you at Snyk on making vulnerabilities visible with docker scan, and then showing the vulnerabilities via Snyk in Docker Hub and so on. We started with shipping scanning — you can scan something, and it tells you a little bit about what you could do. But we want to make those recommendations much clearer for people, to really guide them: this is what you should do now, and why you should do that. There’s a lot of work we’re doing around showing the scan results as things like, “Well, if you move from here to here, this would actually fix this major vulnerability.” But also just surfacing stuff that we haven’t really surfaced before, like: Ubuntu 16.04 is going out of support in a month; you probably want to stop using it.

So, think about updating — Python 2 is something that came up very particularly recently. Python 2 has been out of support for a while now, but we still see people using it. I looked at the numbers, and maybe 30,000 people pulled Python 2 images in the last month. What they probably don’t realize is that there was a recent serious CVE on Python 3 which actually also applied to Python 2. But this was not made very clear to people, because upstream doesn’t support Python 2 anymore. So, upstream didn’t say this CVE actually also applies to Python 2; they just fixed Python 3.

Python 2 is still supported in some places, but not others, and it’s not very clear. The Python 2 official image is not being updated anymore, because we ship what upstream ships. Debian and Ubuntu are still supporting Python 2, but last time I checked, Debian hadn’t actually shipped a patch yet, probably because it’s not a focus for them. So, you can be pulling images that are not actually being updated at all, and might not even be marked yet as having a serious CVE.

There are backports, and we’re wondering if we should ship these in the official image, or whether we should tell people how they can get a supported version of Python 2. Because I think it’s more helpful to say: at this point, this image isn’t being updated; there is a CVE here, and waiting around for it to be fixed like you normally would isn’t going to work, because it’s not being fixed. You need to take a positive action to go to a version that is being fixed, if you’re not going to switch to Python 3 at this point. So, it’s really about trying to provide more guidance on top of just “here’s the CVE.” It’s really: these are your options at this point, and you should probably be doing one of them.

[00:18:01] Guy Podjarny: Yeah, so it sounds like there are two different problems we’re talking about here. One problem is: all the tags are there all the time, and people might not appreciate that the one they’re using has kind of outlived its applicability and they really should be upgrading. In that context, you need to be highlighting that information — and I’m excited to have Snyk help contribute there, with more intel to inform it. A big part of it is really about how you surface that to developers. And that’s a separate problem from the problem of, “Hey, I have a perfectly valid image — it’s the latest version, or it was the latest version of, say, Python 3 — but then a new version came about, and you just need to pull again.”

I think in that problem space — the latter one — there’s a fair bit of conversation about this notion of deterministic deliverables versus staying up to date. Tell me if I’m misrepresenting here, but on one hand, a lot of people want to say, “Well, when I run the build, I want to get the exact thing that I got yesterday. So, I might want to pin myself to a specific base image or a specific container, because I want to know that it’s the thing that I deployed yesterday. It’s deterministic, and I get no surprises.”

On the flip side, if you do that, you might get an outdated, vulnerable version, and you suddenly require all of your security updates to be explicit. What are your thoughts on this? What’s the best practice you advocate at Docker, and what do you see people actually gravitating towards here?

[00:19:36] Justin Cormack: It’s a difficult thing. And I think it’s something that, again, we’re looking at how to address. There’s a lot of confusion about the latest problem — whether latest was a huge mistake, and how to really manage it. I think what we see from the data is that mostly people are pulling fairly generic tags. It varies a little from image to image, depending on the policy of that image, because for things like Ubuntu, where there are multiple streams, people might be on 16.04 or 18.04 or 20.04 — we see quite a few people on those streams rather than latest, which is 20.04. It’s not pre-releases or anything else.

[00:20:27] Guy Podjarny: But even those are not deterministic, no? If you have 16.04 today and you pulled 16.04 in a month, you will probably get a different set of bytes, right?

[00:20:37] Justin Cormack: Yeah. There’s an even more complicated problem with something like Ubuntu, where you’re probably using Ubuntu to pull a bunch of additional packages, and those are not deterministic either, because when you pull Python from Debian, that’s whatever version of Python Debian happens to be shipping on that day. It’s really difficult to go back.

I mean, Debian does have an archive of which package was current on which date, but it’s not designed for production use — it’s designed as an archival reference, really, and they would not be happy if you pointed your builds at it. So, there’s a whole bunch of issues around repeatability there, and they vary from image to image. For something where you’re just using the image as-is — say you’re using Nginx — it’s actually much simpler, because you don’t have to worry about what’s inside the image and what else you’ve done with it, and you can address images by their full hash to get the exact version.
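Addressing an image by hash is something a Dockerfile supports directly. A minimal sketch — the digest below is a made-up placeholder, not a real one:

```dockerfile
# Pinning by digest: the digest names the exact image bytes, so this line
# resolves to the identical image forever, even if the 1.21 tag is later
# repointed at a newer build.
FROM nginx:1.21@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

The tag part (1.21) is then only documentation for humans; the digest is what the engine actually resolves.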

To make that easier, we’re going to be exposing the historical and current mapping between hashes and tags in Docker Hub, for example. So, you’ll be able to see how things were updated — if you have an image by hash, what tag it used to be, and those kinds of things, which are not very visible now. We’re working on that at the moment.

But the problem that people have with really pinning things is that the workflows are still quite difficult. In general, people are doing a lot of commits to commit the hashes back to their repos, and the workflows for those are kind of messy, I think. Some of the GitOps tooling helps — Flux has an image workflow that manages image updates and commits them back to your repo for you. People do use that, and it helps them manage it a bit. But we haven’t made that pattern as easy as it could be. I think we need to work on improving the usability of actually pinning things, so that people can do it more easily.

[00:23:04] Guy Podjarny: Yeah. And if I challenge that a bit — do we want to? I can definitely see from an operability perspective that this determinism is compelling. But at the same time, we’re talking about that concern around dealing with updates — there are a lot of upstream updates, and we talked about how official images, for instance, can make it easier for you to just consume something you know to be good, or blessed. Do we want, as an industry, to instill that as best practice? Aren’t we basically prioritizing reducing operational risk at the expense of the security risk of using stale images? Because if you’ve pinned the versions, it means you need to work harder to get the latest updates, does it not?

[00:23:55] Justin Cormack: Yeah. So, I think what we actually want is to pin operationally, because you really want to know what was in production, so you can work out why something failed in production, or why there was an increased error rate on this version that shipped to prod. And if you don’t know exactly how to rebuild that image, that makes life really difficult if you want to play around and see what happens. So, from that point of view, obviously you want to know what the image hash was, so that you can actually replicate it.

But ideally, you want to be able to rebuild it, tweak it, maybe add some debug and things like that, which is actually difficult if you don’t know what every component was. Stage one is like, you should be able to know the hash and prod so you can replicate it. Stage two is you could replicate the build of that if you needed to.

But from the developer’s point of view, you don’t want to over-specify the (inaudible) or you will never update it, because it’s too difficult. I think the intermediate setup we’ve got — where some automated process does pull requests into your repo to update tags — is not ideal either, because the developer doesn’t want to see all those pull requests and keep rebasing over them and thinking about the exact versions.

I think we want that to be part of an automated test stage that happens without actually touching your code. It’s like the kind of lockfile and module configs that we have — we’ve worked out some reasonable compromises there. The developer wants to say, “Don’t use a version of this before this version, because I know it wouldn’t work.” But you don’t want to specify exact versions and pin everything, because then it will be impossible to update. You want to mostly have automated tooling manage the updates, and when things work, it’s okay to ship that. But you want to be able to go back and say, “Why exactly did that automated update break?”

We shipped this security fix into production because the tooling told us to, which 90% of the time works fine. But what about the time when it turns out we were relying on some feature that got fixed with this bug fix, and it broke our code? Then we need to go back and figure out exactly how to work around the fix, and things like that. We need to know: why did this version on Thursday afternoon actually change something, when we didn’t think there was a change?

Mostly, security updates are pretty good and don’t break anything, and they can mostly just go in. But someone always relies on every undocumented feature of every piece of code — whether it’s performance, or undocumented APIs, or whatever — and some of those things will break sometimes with updates of one sort or another. Not necessarily security updates, but with security updates, sometimes there’s something you just can’t support anymore, because it turns out it’s not securely supportable, and you have to fix it.

[00:27:21] Guy Podjarny: And it’s effort, right? Sometimes it’s also, the maintainers are not infinitely funded, if funded at all. And sometimes it’s just hard.

I find this whole space quite fascinating in terms of the balance, because it ends up just being an exercise in risk management, right? It’s actually true for lockfiles, for NPM dependencies and the like, as well. NPM ranges are the equivalent, if you will, of a latest or a 16.04 — I want roughly this area, I want this type of combination — and you acknowledge that each time you run a build, you may get something different. And then that causes problems, because you get stuff you didn’t really want and suddenly something breaks.

On the flip side, if you use lockfiles, you’re now stuck. So, now you do need to get these repetitive pull requests that tell you, “You need to upgrade; you need to update the lockfile.” Again, and again, and again. You do and you don’t want that; you find yourself automating the merges of those, and then you come back to, “Well, so why did I even lock the files in the first place? Why didn’t I just run it again?” It probably doesn’t have a single correct answer. Do you feel there’s something we should push the industry towards? Or does it really just boil down to, as an organization, you need to figure out whether you prioritize ease of reproducibility over ease of update?
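To make the npm analogy concrete — a hypothetical package.json fragment (the package and versions are illustrative) declaring a semver range, which plays the role of a floating image tag:

```json
{
  "dependencies": {
    "express": "^4.17.0"
  }
}
```

The `^4.17.0` range says "any 4.x at or above 4.17.0", so each fresh install may resolve differently; the lockfile (package-lock.json) then records the one exact version that was actually resolved, which is the same trade as pinning an image by digest.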

[00:28:44] Justin Cormack: I think with containers, you’ve always got some ease of reproducibility, because you’ve always got the bits you shipped to production. That’s a benefit you didn’t have before containers, so I think you should make use of it in order to make things easier for your developers, as a priority. So, if developers use latest and you rebuild frequently — because you also have to rebuild frequently to make this work, and you should be rebuilding frequently — then most of the time you’ll get all the benefits and the security updates, and you’ll still have reproducibility: okay, this particular hash actually broke in prod, but I can debug it. I can go back and go inside the container and take a look.

So, that’s actually probably the best option to combine ease of use and a bit of reproducibility. Right now, if you want more than that, the tooling isn’t great, to be honest. I think we can get better tooling over time that works for everyone and gives you more understanding of what’s going into your containers. This comes down to understanding your supply chain and being able to reproduce where everything comes from. It’s really important to understand where everything comes from, and exactly which version it is.

We’ve not really got to the point where you can take a container in production and work out every piece of source code that went into it. That needs a lot more tooling, and there is some work going on to try and make that easier to understand. But I think that’s the point we need to get to: where it’s just possible to go back to the source code for everything you’ve got and trace how you got it. There’s a whole bunch of work going on with things like SBOM — software bill of materials — which is a metadata standard for describing what’s in a container. So, that’s taking it from one side: we’ve got some software, how do we describe it in a useful way? But then, from the other side: let’s put this software together in a way where we understand the process and know where everything came from. Don’t randomly download things from the internet into your software — commit them to Git first, or get them from an internal repository where you can actually trace where they came from — so that people are not putting things together in a really ad hoc way. If you curl something into your container build, you don’t really know what it was afterwards.
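To give a flavor of what an SBOM looks like, here is a heavily abbreviated, illustrative sketch in the style of an SPDX JSON document (real documents carry many more required fields — identifiers, relationships, license data — and the image name and package versions here are made up):

```json
{
  "spdxVersion": "SPDX-2.2",
  "name": "myapp:1.0-sbom",
  "packages": [
    { "name": "nginx",   "versionInfo": "1.21.3" },
    { "name": "openssl", "versionInfo": "1.1.1l" }
  ]
}
```

The point is that the SBOM is machine-readable metadata that travels with the image, so tooling can answer "what exactly is in this container?" without unpacking it.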

[00:31:58] Guy Podjarny: Yeah, where did it come from. There’s a whole world of complexity here, and I think you make a good point, which is that there’s still tooling needed to allow you to make a better decision, or a better trade-off, between the two. If I come back to the first problem — we talked about two problems. One is, how do you help me know I need to use a different container image, or an updated one, or make a change? The second was the one we just discussed: when do I update versus when do I maintain equilibrium?

For the first one — knowing what to do — I know we’ve had conversations about this in the past. For instance, with Snyk, you can run docker scan and find vulnerabilities. And a lot of the conversation that occurs is also about what you’re surfacing, raising the question of telling you, “Hey, this is an out-of-date library.” There’s always a question around the developer experience involved in doing these things.

As a developer tool — a key piece of technology being used by developers pulling down and making decisions around the operating system and the containers they’re going to be using — how do you drive them towards making the secure decision? How do you drive them to being secure by default, versus how much are you just being annoying? I guess there are maybe two questions. One is, in general, when you think about that balance, what comes to mind around how to strike it? And second, maybe from the perspective of Docker’s role in the ecosystem, how do you see your responsibilities divided here? Is it really one continuum where you have to choose where you sit? Or am I presenting it incorrectly?

[00:33:41] Justin Cormack: We’ve always tried to have a model where Docker provides security by default, and it shouldn’t be something you opt into. I think for this piece around supply chain security, we still haven’t got a good answer. We’ve been iterating and thinking around things to do here for a while. And with this focus on all developers, I think we’re looking at different solutions than when we were looking at enterprise products, because we want to make it really, really usable. Usability means you can’t really have people just opting in, because our experience is that most of them don’t opt in where there’s more work. You get a very small number of people who really care — usually because they’ve been told to, and they work for a few organizations — but you have to make things just happen by default.

We have to build processes rather than one-off things, because you can’t have people manually do something every day or think about it constantly. It’s got to be something that just happens, and maybe they have to help it along sometimes, because the processes aren’t perfect or they’ve got choices to make. When do you upgrade to a new major release? Do you do it early or at the last minute? Developers have to have those kinds of controls because they’ve got other requirements. But fundamentally, security has to be automatic for it to work.

I think the industry as a whole hasn’t been great at doing that. Let’s Encrypt is a great example of actually making security work. There was some amount of evangelism, but a lot of it was just pretty good tooling, for example, rotating certs automatically. Certificate rotation was manual before Let’s Encrypt for almost everyone, and now, for almost everyone, it’s automatic. That’s the kind of thinking about what automated processes we need to put in to make this actually palatable. I think the security business as a whole has not been good enough at thinking like that, and that’s something that’s changing gradually. Obviously, Snyk is a developer-focused security company too, so I think that’s the thinking that you have, but the industry as a whole hasn’t really approached it like that. Security has been sales driven, not developer driven, and about scaring people, not helping people. Those things are gradually changing, because helping people works.
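The automatic rotation Justin describes is typically wired up as a scheduled renewal job. A minimal sketch, assuming certbot is installed and nginx is the server consuming the certificates (both assumptions for illustration, not details from the conversation):

```shell
# Hypothetical crontab entry: attempt renewal twice a day. certbot only
# renews certificates within 30 days of expiry, so frequent runs are safe,
# and the deploy hook reloads nginx only when a cert actually changes.
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```

The point is the shape of the solution: the developer sets this up once, and security then happens by default instead of relying on someone remembering a manual task.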

[00:36:42] Guy Podjarny: Yeah, helping is actually what mobilizes the change. I would say there are also changes on the tooling side. I’d like to think that Snyk had a hand in it, but the developer tooling ecosystem itself now appreciates, more than ever, that there’s real value in, and a real responsibility and commitment for, developer tools to build a certain amount of security functionality in, and indeed to be secure by default rather than requiring an opt-in. I think it’s also hard to balance that against not being a pest, because in the short run it’s easier for a developer not to be bothered. So it’s about finding the right way to do it.

I agree with you, Let’s Encrypt was a brilliant piece of technology, innovation, and UX, figuring out the bottlenecks and really helping drive change, as well as disrupting the business model by making certificate generation free. That removed another big piece of friction; a lot came together there. I guess for the listeners, the good news is we’re working on it, on both the Docker and the Snyk side, to try and help crack this nut. So if you have thoughts on how this balance should be struck, do reach out to me or Justin.

[00:37:59] Justin Cormack: We absolutely want people’s feedback on security problems. These update problems are something that I feel we definitely haven’t helped people enough with over the years. People have grown into patterns of doing things that kind of work, but I think they’re suboptimal, and people know they’re suboptimal. We really want to make these things better and change the way they happen. There are small changes and big changes: some incremental ones that make things a little better, and then some bigger changes as well that probably have to happen after that.

[00:38:45] Guy Podjarny: Yeah, absolutely. So, Justin, while we haven’t solved this problem yet, I think we’re running out of time. Before I let you go, I like asking every guest on the show one final question. If you took out your crystal ball and thought five years out, about somebody facing the same responsibilities, someone in your role in five years’ time, when the world around them has clearly changed somewhat, what would you say would be the biggest difference in that person’s reality? Will it be easier or harder?

[00:39:18] Justin Cormack: We have already made some progress towards fixing these things. I think we’re seeing a Cambrian explosion in software: there’s so much more of it, and it’s becoming much more diverse. In five years’ time, we have to be able to manage this better, because it’s just going to get more and more messy and complex. I think about the shipping industry and containers quite a lot, because I like history and, I mean, we’re in the container business. In software terms, we think a lot about containers as these boxes, and I think the first years of containers were all about, can you escape from the box? Runtime security type things. But the value of containers in the shipping industry was not the box, it was the supply chain and the modern industry it enabled. Everything changed, because you could get things from China to Europe fast and cheaply. You could build things using a lot more external components, and it was more reliable. And we built all the machinery for getting containers off the ships, we built gigantic container ships, all those bits of infrastructure around it.

People think that containers are about the container and the runtime, but they’re really about the contents of the containers, the things that you’re using to build your software from, that reach you in containers. I think we haven’t made a lot of those industrial processes easy with containers yet. We’ve spent a lot of time thinking about the boxes, and not about helping people build stuff using the supply chain of modern software that we’ve created. I think someone originally came up with the Cambrian explosion metaphor for JavaScript frameworks, but in every area of software there is more and more diversity, and we’ve got to be able to manage it. The Cambrian explosion lasted for 20 million years. There’s just a huge amount of stuff going on.

[00:41:33] Guy Podjarny: It’s not going to go away anytime soon. I think that’s well said. We focus on the new technology bits and how to use them, but it’s easy to forget that containers are still relatively new to the ecosystem, and that the practices around them are still evolving. It’s not just containers; it’s a holistic change of process, culture, and means of software development. We have to think about problems a bit more from first principles and understand the right way to approach them, versus constantly iterating and retrofitting past processes to address them. That’s very, very well said.

Justin, it was great to have you on. Thanks for coming on to the show.

[00:42:11] Justin Cormack: Thank you. It’s been good fun.

[00:42:13] Guy Podjarny: And thanks, everybody for tuning in and I hope you join us for the next one.

[END OF INTERVIEW]

[00:42:20] ANNOUNCER: Thanks for listening to The Secure Developer. That’s all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you’d like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don’t forget to leave us a review on iTunes if you enjoyed today’s episode.

Bye for now.

[END]
