
The Secure Developer | Ep 86

Implementing DevSecOps Transformation

with Nicolas Chaillan

About this episode:

In episode 86 of The Secure Developer, Guy Podjarny speaks to Nicolas Chaillan, the United States Air Force’s first Chief Software Officer, responsible for enabling Air Force programs in the transition from Agile to DevSecOps to establish Force-wide DevSecOps capabilities and best practices, including continuous authority to operate processes and streamlined technology adoption. You’ll hear about his career trajectory, what it means to be a CSO, and his approaches to mitigating risk, “failing fast”, and redefining our understanding of security versus compliance. He also shares some interesting perspectives on the role of open source in government security, transparency with commercial parties, and the importance of continuous learning, as well as his advice and predictions for someone in his shoes.


Application Security
Open Source
Secure Development
Security Transformation

Episode Transcript


[00:01:49] Guy Podjarny: Hello, everyone. Thanks for tuning back in. Today, we’re going to talk about security and DevSecOps at a very large scale, in a security conscious surrounding. We have the pleasure of having with us Nicolas Chaillan, who is the Chief Software Officer for the US Air Force, and is really the force behind the DOD’s DevSecOps motion. Nicolas, thanks for coming on to the show.

[00:02:10] Nicolas Chaillan: Thanks for having me.

[00:02:11] Guy Podjarny: Nicolas, before we dig in, beyond my one-liner introduction there, can you tell us a little bit about what you do and just a bit about the journey that got you here?

[00:02:20] Nicolas Chaillan: Yeah. Obviously, you can tell from my accent that I was not born in the US, which is interesting. I was born in France, in Marseille. I moved to the US about 11 years ago. I am an entrepreneur. I created 12 companies and I built software, cyber products, and a lot of great innovations. I’ve worked with a ton of great Fortune 500 companies, helping them move to best practices and lots of innovative technologies. That’s been very exciting for me, building innovation around payments, around cyber, you name it. We’ve done a lot of different things over 20 years.

I created my first company back in France when I was 15. I’m 36 now, so I’ve been doing this stuff for a little bit of time. What was interesting is I wanted to make a difference. After starting my eleventh company at the time, I started at DHS right when there were a lot of terrorist attacks, particularly in France and Paris. Remember the big attack in the theater. I wanted to see, okay, what could we do to make a difference?

I started as a chief architect at DHS, and then moved to DOD. We created the DOD Enterprise DevSecOps Initiative you talked about, which is really helping the DOD move at the pace of relevance and toward continuous delivery of capabilities using DevSecOps. Then, we wanted to find a home for the program to scale it across the department. My boss at the time, Dr. Roper, created this concept of the Chief Software Officer.

Funny enough, I was not a fan of the title, because it doesn’t exist on the commercial side. I really wanted to bring commercial best practices into the DOD, and I wanted to stick to the naming conventions and everything that’s being done on the commercial side. Then I came to realize that maybe, for once, we were leading the pack. Maybe it was a new title that every organization would need, one that’s about empowering the software teams to move fast while having security baked into their software lifecycle processes.

When you look at things like the adoption of containers and Kubernetes and all these complex technologies around abstraction of cloud and elasticity, being agnostic across edge to IoT, to cloud, to jets, bombers, space-based systems, you name it, and the complexity of all that tied to a DevOps, DevSecOps culture. I mean, it sounds like a full-time job, doesn’t it?

[00:04:58] Guy Podjarny: Sounds like you’re not bored. I’m curious, how do you find Chief Software Officer compares to Chief Technology Officer, probably a slightly more common title in the industry?

[00:05:09] Nicolas Chaillan: Yeah. I guess the CSO is more focused on the software components of the CTO role. Effectively, it’s a subset, but a more precise set of capabilities. I see it as bringing enterprise services around the software stack so it’s able to move at the pace of relevance. There are a few things we’ve built. At the cloud office, we created Cloud One, which is our Amazon and Azure cloud offering, all the way up to the classified clouds. Then we created Platform One, which is our DevSecOps team, to bring a CI/CD pipeline with Kubernetes, our continuous monitoring stack, zero trust, baked-in security, behavior detection, all these technologies, for teams that just want to focus on building mission software and not have to build all these layers.

Effectively, the CSO office is responsible for bringing these enterprise services to life. One of the foundational aspects of good teams, obviously, besides the culture and the tech, is continuous learning. I really believe continuous learning is going to be the difference between teams that are able to keep up and teams that just fall behind. It has to be self-learning, powered through AI or technology, because what we have to learn is changing so fast that we can’t just rely on train-the-trainer models; a once-every-five-years certification is worthless.

That continuous learning piece, where we have to train 100,000 people this year, for example, and that’s just one year, tells you the scale we’re getting into when you look at moving some of the big DOD programs, like the F-35, all our nuclear arsenal, space systems, to Kubernetes and containers and DevSecOps, and being able to release multiple times a day. That’s the difference between falling behind or leading, and potentially winning the next war.

Effectively, timeliness is the key to success. Without baked-in security, if you don’t move fast, you’re creating more risk. Of course, with the mission we have, we can’t afford to mess up. We can fail, but we have to define what failure means. It has to be small, incremental, tiny failures. Not big events. Fail fast. Don’t fail twice for the same reason, and move fast.

[00:07:42] Guy Podjarny: Yeah. I’m going to drill into that topic a bit more, because this notion of fail fast in an active duty context is indeed not as easily digestible. It sounds like a big and impactful role, probably more akin to platform teams, but at a larger scale, I guess. Hence the title for it.

On that topic, you and your team have written this illustrious document I have in front of me. 79 pages, I think, on the DevSecOps motion. It’s titled the ‘DOD Enterprise DevSecOps Reference Design’. I’m not 100% sure I have the latest version, but I think I have August 2019. It is rich with a lot of statements. If you’d be up for it, let’s go through some highlights I have and drill down into the meaning behind statements that are well crafted into the doc. Shall we do it?

[00:08:31] Nicolas Chaillan: Yeah, let’s do it.

[00:08:32] Guy Podjarny: The first thing, as you start reading through the document, that really caught my eye is a section that says, “The benefits of adopting DevSecOps include,” and the very first bullet says, “Reduced mean time to production. The average time it takes from when new software features are required, until they are running in production.” I loved that initial quote because, and maybe this is my security lens on things, when we say DevSecOps, this is a true internalization of the DevOps need. That line is oftentimes what you see at the top of a DevOps document, but not so much a DevSecOps one.

The third bullet of the four that you have here says, “Fully automated risk characterization, monitoring, and mitigation across the application lifecycle.” Are these bullets meant to be prioritized? How do you view the balance between the need to reduce mean time to production and this controlling of risk, automating risk characterization, handling risk correctly? How do you see the balance of those in this motion?

[00:09:35] Nicolas Chaillan: Yeah. I think that’s why it’s so interesting to look at the DORA metrics as a cohesive set of metrics. I think if you start tracking just one versus the others, you put yourself at risk, because you’re going to try to optimize one to the detriment of the others. Obviously, that doesn’t work. They are all important. I would probably argue, equally important. If you improve one at the expense of another, you’re not improving anything. Look at some of the numbers we’re seeing: 106 times faster lead time from dev to prod, 208 times more frequent code deployments, seven times lower change failure rate, 22 percent less time on unplanned work or rework. The most mind-boggling one, that’s going to speak to you more, is the 2,600 times faster recovery. That’s pretty mind-boggling.

[00:10:29] Guy Podjarny: Yeah. That’s quite boggling.

[00:10:30] Nicolas Chaillan: If that wasn’t enough, employees are also 2.2 times more likely to recommend the organization, which helps retain and attract talent too. I think they all matter. Obviously, for us, that’s why defining the word fail matters, right? A nuclear explosion is unacceptable. I think that’s pretty obvious.

[00:10:52] Guy Podjarny: You have to draw a line. Yeah.

[00:10:54] Nicolas Chaillan: Right. You probably want to put the line a little bit before that. At the same time, it has to be small, incremental change. I think that’s what people don’t understand. When I started saying, “DOD, hey, fail fast. Don’t fail twice for the same reason and learn fast,” people looked at me like I was crazy, because they assumed fail fast meant wasting an entire program’s funding.

The nuclear enterprise alone is 1.2, 1.3 trillion. A 1.2 trillion budget. Go beat that. We are the largest organization on the planet with the largest budget on the planet. You can’t compare it with anything you know. That’s just one program. That’s just one of my many teams. If you think about it, you realize pretty quickly that fail fast has to mean small, incremental, tiny changes. Otherwise, you’re wasting a lot of money and potentially putting a lot of people at risk, particularly when you touch weapon systems, which, obviously, makes everything easier.

[00:11:51] Guy Podjarny: Yeah. Those are wise words. I especially liked the comment about the nuclear explosion not being an acceptable failure. You have to gauge those as well. Still, if I push back a bit on that point, which I totally relate to, I agree that you have to embrace a sense of failure and you have to prioritize agility in shipping things, as well as recovering from problems you’ve identified. How do you communicate to the teams, who are presumably very, very security aware, when to be willing to accept risk? How do you suggest they draw the line? Clearly a nuclear explosion is past it. How do they define what’s within?

[00:12:40] Nicolas Chaillan: Well, I think we have to define security. Unfortunately, in DOD, many people think of security as compliance. I can tell you, you can be fully compliant and totally insecure. That’s a big concern I have, because we’re checking all the boxes on paper, but then you look at what’s running in the system, and the runtime drifted miles away from the paper. Effectively, your paper is worthless, and you’re compliant with a fake stack that’s not even the one that’s actually running.

Then, security also means, for us, personnel security and physical security, versus, of course, cybersecurity, which creates a whole set of different problems. It’s all about first understanding security and the risk. Back to your point, how do you accept risk? Well, first you have to understand it. The issue is, often we don’t understand the risk. I mean, you look at a container and the number of CVEs it has. I’m the one, ultimately, who approves the containers on Iron Bank, which is our hardened container repo, where we have 350 products centrally assessed and hardened. We open source all of that, by the way, which is pretty exciting for DOD.

We even open source all of the Kubernetes hardening and infrastructure as code and Terraform scripts; everything is open source on Repo One. That’s pretty rare. I don’t think we’ve had such a massive engagement with industry and open source communities before. You look at the CVEs of some of these containers, and you have big companies that, I’m not going to give names today, but they’re going to recognize themselves, massively huge, tremendous, publicly traded companies, with 2,400 CVEs on their containers.

It had to take us saying, “Look, if you don’t fix it, we’re not going to use your product anymore,” for them to start fixing it. Some are doing a very good job and some are not. Some cyber companies actually tell us, “Well, there are a lot of CVEs. We don’t want to take the time to justify why these are acceptable or not. We’re just not going to do it.” That’s mind-boggling for a cyber company.

Often, it’s dependencies of dependencies, and you inherit that risk. When you’re shipping software, you ship the entire piece of software, regardless of what you decided to use as a dependency. You’re responsible for that dependency and for updating it. People just leave these dependencies outdated for 12, 14 years and tell us it’s okay because it’s not their software. That’s wrong. It’s your software. You’re shipping it.
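The point about inheriting transitive risk can be sketched in a few lines. This is a hypothetical illustration, not any real scanner: the package names, release years, and registry shape are all made up, but flattening the tree shows that everything reachable from your app is something you ship and own.

```python
# Hypothetical sketch: flatten a dependency tree to show that every
# transitive dependency is part of what you ship. Names, versions,
# and "years stale" figures are invented for illustration.

# name -> (release_year, direct_dependencies)
REGISTRY = {
    "my-app":          (2024, ["web-framework", "crypto-lib"]),
    "web-framework":   (2021, ["template-engine"]),
    "template-engine": (2012, []),   # a 12-year-old transitive dep
    "crypto-lib":      (2023, []),
}

def shipped_dependencies(root, seen=None):
    """Walk the tree depth-first; everything reached ships with root."""
    if seen is None:
        seen = set()
    for dep in REGISTRY[root][1]:
        if dep not in seen:
            seen.add(dep)
            shipped_dependencies(dep, seen)
    return seen

def stale_dependencies(root, max_age_years=5, today_year=2024):
    """Flag anything in the shipped set older than the threshold."""
    return sorted(
        d for d in shipped_dependencies(root)
        if today_year - REGISTRY[d][0] > max_age_years
    )

print(stale_dependencies("my-app"))  # -> ['template-engine']
```

The app never imported `template-engine` directly, but it still ships it, which is exactly the "dependencies of dependencies" risk described above.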

It’s back to understanding risk. Honestly, many times we see vendors saying, “Well, that’s not applicable, because we don’t use this, or it’s not something that can be accessed.” You don’t know what a bad actor is going to be able to do if he gets a foothold in one of the containers. If you have 1,200 of them, you don’t know how they could be used as a joint capability to get into the system and escalate privileges, or move laterally. Or who knows if one could be used with the other.

There’s no way someone can make the argument that they actually know it’s mitigated. It’s back to understanding risk. When you talk to some of the CISOs at these publicly traded companies, which I do, in a very nice way, asking them to fix their 1,200 CVEs, or whatever, they start to understand, especially after SolarWinds, that we have to address this and we can’t just accept that risk. Again, it’s risk we accepted for years. You have these products being used by the largest organizations on the planet. It took DOD to make them wake up.

[00:16:08] Guy Podjarny: You go through, and I know the document, you actually map out specific types of technologies. It sounds like you’re still setting a high bar in terms of what you consider legitimately secure. You’re talking about the CVEs; you want to understand risk. When you talk about empowerment, and about different teams operating and running at their own pace, a lot of the concepts, or principles, behind DevOps or DevSecOps are about making them independent, allowing them to run fast. How do you define which decisions the teams building the applications are allowed to make, versus which ones move up to, indeed, someone who might be more proficient at assessing risk and making that call?

[00:16:55] Nicolas Chaillan: We’ve set the bar pretty high when it comes to our cybersecurity posture and our DevSecOps pipelines. I would probably argue, and I might be biased, that it’s probably one of the most advanced. I talk to healthcare, financial institutions, retailers, and a lot of people are coming to use our source code, so I’m probably biased, or maybe I’m right. We put the bar pretty high, with zero trust, moving target defense, behavior detection. Because, look, there are all these scanning technologies, and we use multiple scanners, because in fact we find a lot of disparity between scan results.

You’re not going to see the same results between scanners, which is already a problem. They don’t even agree with each other. A lot of false positives, and a lot of false negatives too, by the way, which people forget about. We use multiple scanners, which complicates the problem, by the way. There’s only so much you can do with scanning, because, look, the [inaudible 00:17:46] days, when they’re going to come out, no one knows. It’s dependencies of dependencies. It could be a lot of different things.
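The multi-scanner disparity he describes can be made concrete with a small sketch. The scanner result sets and CVE IDs below are invented for illustration; the idea is simply that findings every scanner agrees on can flow straight into triage, while findings only some scanners report need a human look, since they may be a false positive for the reporter or a false negative for the others.

```python
# Hypothetical sketch of reconciling results from multiple scanners.
# Scanner outputs and CVE IDs are illustrative, not real scan data.
scanner_a = {"CVE-2021-0001", "CVE-2021-0002", "CVE-2021-0003"}
scanner_b = {"CVE-2021-0002", "CVE-2021-0003", "CVE-2021-0004"}

def reconcile(*scans):
    """Split findings into those all scanners agree on and those only
    some report; the disputed set is where triage effort goes."""
    agreed = set.intersection(*scans)
    reported = set.union(*scans)
    disputed = reported - agreed
    return agreed, disputed

agreed, disputed = reconcile(scanner_a, scanner_b)
print(sorted(agreed))    # -> ['CVE-2021-0002', 'CVE-2021-0003']
print(sorted(disputed))  # -> ['CVE-2021-0001', 'CVE-2021-0004']
```

Using more scanners widens coverage but grows the disputed set, which is the "complicates the problem" trade-off he mentions.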

It could be malicious in nature. Look at SolarWinds. It’s not bad code; it’s malicious code in nature. Code signed by the company, so how do you know it’s malicious? Well, you can’t. You can’t find many scanners, or any scanners, that do a good job of detecting malicious code, like a time bomb, or opening ports, exfiltrating data.

At the end of the day, it comes down to detection, rapid detection, and reducing the attack surface. One is zero trust. Container A can only talk to container B if it’s whitelisted, reducing that attack surface and lateral movement. Behavior detection, looking at what the container is doing; if it’s doing weird things, we prevent that and we kill the container, go back to the immutable state. GitOps. Everything is in Git. We don’t make changes in production, so everything is enforced in Git.
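The zero-trust rule above (A can only talk to B if it’s whitelisted) is just default-deny with an explicit allowlist of edges. A minimal sketch, with invented service names; in a real cluster this would be enforced by a service mesh or network policies, not application code:

```python
# Hypothetical sketch of a zero-trust allowlist: a connection is
# permitted only if that exact (source, destination) edge is listed.
# Service names are illustrative.
ALLOWLIST = {
    ("frontend", "api"),
    ("api", "database"),
}

def connection_allowed(src, dst):
    """Default deny: anything not explicitly allowlisted is blocked."""
    return (src, dst) in ALLOWLIST

assert connection_allowed("frontend", "api")
assert not connection_allowed("frontend", "database")  # no direct path
assert not connection_allowed("api", "frontend")       # edges are one-way
```

Note that edges are directional, so allowing frontend to call the API does not allow the reverse, which is what limits lateral movement once a single container is compromised.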

Then finally, the continuous monitoring and alerting capability, where we push all the logs and telemetry to a central place to be alerted on, and all that. It’s the behavior detection, zero trust, and all that combined that reduce your cyber risk. Now, back to your question on how you define who does what. Well, as part of our central hardening of containers, we provide hardened base images for 16 programming languages, 23 databases, tons of commercial products, open source libraries.

We keep them up to date and work with the vendors and open source communities to have the latest versions as soon as possible, sometimes even before they release. It’s on Repo One, it’s on Iron Bank, so people can consume these bits, and then we continuously re-harden those. We do all that work for each programming language. Effectively, that’s inherited by the teams. Each team can then use that as a base image to build their container to do something additional. That delta is going to be scanned the same way, but they have far fewer findings, and they don’t bring in a lot of bad dependencies.
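The inheritance model he describes, where findings already assessed in the hardened base are inherited and only the team’s delta needs triage, reduces to a set difference. A hypothetical sketch, with invented CVE IDs:

```python
# Hypothetical sketch of "delta" triage on top of a hardened base image:
# findings already centrally assessed in the base are inherited, so the
# application team only looks at what their own layers introduced.
# All CVE IDs are illustrative.
base_image_findings = {"CVE-2020-1111", "CVE-2020-2222"}  # centrally assessed
app_image_findings = {"CVE-2020-1111", "CVE-2020-2222", "CVE-2021-3333"}

def team_delta(base, app):
    """Only findings introduced on top of the hardened base need the
    application team's attention; the rest is inherited."""
    return app - base

print(sorted(team_delta(base_image_findings, app_image_findings)))
# -> ['CVE-2021-3333']
```

Out of three findings in the full image, the team triages one; the other two were handled once, centrally, for every program that consumes the base image.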

Effectively, we centralize that assessment, and the teams only have a little delta. If we look at the NIST cybersecurity framework controls, we do 90% of them at the platform or infrastructure layer, and only 10% is left at the application team layer. Much smaller, streamlined, and that’s how we can move faster.

[00:20:10] Guy Podjarny: Yeah, sounds great. That sounds very much like the Netflix paved road model, where you define best practices but you make them easy. I have another quote that I liked; there are many quotes I liked in the introduction. I’ll read one more piece, which is, “There is no uniform DevSecOps practice. Each DOD organization needs to tailor its culture and its DevSecOps practices to its own unique processes, products, security requirements and procedures. Embracing DevSecOps requires organizations to shift their culture, evolve existing processes, and adopt new technologies and strengthen governance.”

The things I liked about it: there are no uniform practices, you need to tailor it to your specific surroundings, and it requires a culture shift in the process, which I relate to very much. How do you balance this? In the conversation we just had, you build this great technology, you make it easy to be secure on one front; how do you balance that with the flexibility the organizations get? Do you have central governance on your end, where you, I guess, govern them and assure it? Does that get disseminated to the different parts of the DOD? What do you find works?

[00:21:20] Nicolas Chaillan: Yeah. I think it’s a tough balance, right? Because DOD, for us, is huge. You can’t compare it to a normal company. We’re just too big. Four million people. I mean, the budget is $800 billion a year. It’s just too big to compare with any other organization. Effectively, I think a one-size-fits-all is probably not a good idea.

Now, there is still a need to have enterprise services that can be adopted with diversity. It does make sense to have central options, like identity management, single sign-on, some of those zero trust stacks. If you want to do east-west traffic, ingress, north-south traffic, you can’t have a bunch of disparate technologies; it’s going to create issues to integrate. That’s why Platform One brings a lot of these enterprise services, collaboration tools, chats. Why do we need 20 of those? Do you want to go on one chat tool to find one team and on another one to find the other team? It’s just not efficient.

The quote is mostly right. You need diversity. That’s how we do it with Platform One too. Like I said, 16 programming languages, 22 databases. We’re not saying, “Hey, use Java, use Python, use whatever.” We give options. That’s the key. There are things where you can give a lot of options, there are things where you should give a few options, and there are things where, maybe, you don’t give options. It’s a tough balance.

Even just the Air Force is as big as, I guess, a Fortune 4 or 5 company. It does make sense to think of it this way and cut it, sometimes, in pieces. But then we have issues integrating services and APIs and things across Air Force, Navy, and Army, and so on. It’s a tough balance. I don’t think there’s an easy answer. Honestly, the number one impediment is the silos that we have for everything. It creates these little kingdoms, egos, fiefdoms. It is difficult to please everybody and get everybody excited at the same pace and on the same page.

If you don’t break these silos, you have different teams doing development, testing, cyber, and, by the way, that’s per program, so you’re talking about how you connect the F-16, F-22, F-35, and so on. That’s why we have so much work to do. By the way, oh, you have multiple classification levels. Think of how complex it would be if your network was air-gapped in multiple different ways, because you have different classification levels, and they can’t talk to each other.

Oh, and by the way, you need the same stack to run with no drift, so you don’t have a disparity of technology across dozens of these environments. That’s where GitOps is essential, to have no drift. It’s tough to do this and find the talent, by the way. If you were to do this separately for the Air Force, the Navy, the Army, how do you even find the talent? I mean, most companies can’t even find the talent once, let alone 3, 4, 5, 10 times.

There is a point where, okay, maybe we need to bring enterprise services and bring diversity of options within them, like multiple clouds, multiple scanning cyber products, whatever. Then a few of them should really just have one option.

[00:24:37] Guy Podjarny: Yeah. If I play this back, it sounds like it’s really about making it easy. You’re providing value back. Again, very aligned to that paved road concept. There are all these things you’re specifying that they should be doing at a higher level; there are security requirements, and not just from you. There’s a myriad of other requirements coming down on each of these departments, on each of these parts. You’re helping them out. You’re providing them with a platform that helps them achieve all of that.

In the process, it sets them straight, or puts them on a certain path, which also helps you achieve a certain consistency in the security controls and the operational excellence that applies throughout the organization, despite its size. Or maybe not throughout the whole thing, but rather enough of it, because they’ve opted into that platform. Does that sound correct?

[00:25:25] Nicolas Chaillan: Yep, that’s correct. Absolutely. Makes total sense.

[00:25:28] Guy Podjarny: In that process, I think it’s a powerful perspective. The other piece that you have highlighted here is transparency. Beyond the security mindfulness, the DOD is not naturally transparent in all its parts. What has been your perspective around being transparent when we’re talking about vulnerabilities and potentially mission-critical systems here? How do you balance transparency with secrecy, I guess, of some sensitive information?

[00:25:58] Nicolas Chaillan: Yeah. The issue with obfuscation, or secrecy, like you said, is the false sense of security. You think you’re secure, but you don’t really know. Look, we have to know. When we put some of our systems at DEF CON and it took three minutes for white hats to get into the system as root, it was pretty clear that it wasn’t good enough. I argue for open sourcing some of the DevSecOps pieces, using some of the work from the Cloud Native Computing Foundation, incubating containers and all the suite of products around them, having that transparency, open sourcing this back, and having multiple entities outside of DOD use it, more organizations, we have healthcare, finance organizations, and broad adoption by contractors that sell capabilities to the department.

Effectively, we become more secure, because you have more eyes on the code. Sure, you’re also opening up the risk of less friendly nations looking at what we’re doing and having insight into how we’re doing it. But we also make it more secure. There are pros and cons. We think open sourcing that layer of automation and abstraction with Kubernetes and containers and DevSecOps and the CI/CD pipeline will really, ultimately, make it more widely adopted outside of DOD, which will make it more secure and create a community around it, with more people contributing back.

We have dozens of commercial organizations that have announced they’re putting full-time engineers on our open source repo. Lockheed Martin is putting six people, Cisco two. I mean, it’s endless how many organizations are looking at putting people full-time on that open source work. That is a first for the government, to have that engagement at scale with commercial organizations, because they can also make money by reselling and adding services on top of what we open sourced. Effectively, I think it’s a win-win.

We have to be more comfortable living in a world where people will find it. Let’s face it, that false sense of security doesn’t exist once the system is in production, because if they manage to get any foothold on the network, in any way, they’re going to be able to get access to those systems pretty quickly, no matter what we think of perimeter security, which is completely outdated. That’s where zero trust, all the stack we’re building, and that behavior detection will help proactively detect these things, instead of it taking months to even know that bad actors are inside the network.

[00:28:48] Guy Podjarny: Yeah. Perfect. Very well said. I love this notion of commercial companies coming along and putting people on it. In part, clearly, it’s a benefit, and maybe some public service, that they are helping improve government capability. But also, for a lot of these organizations, it’s in their interest. Given the federal government’s and the DOD’s buying power, it’s to their benefit to actually help the DOD advance some practices if they’re looking to sell a service that might be most beneficial in that more modern surrounding.

It’s really nice to see the way you harness, I guess, the power and the needs, the agenda, if you will, of the commercial entities in favor of strengthening the actual security and capabilities of the platform. I think that’s awesome.

[00:29:35] Nicolas Chaillan: Yeah. The other benefit is that the companies, sure, they want to make money; that’s the point of any company on the planet. But they will also now contribute back and make it better and stronger and faster, together. Also, there’s less risk of getting locked into one vendor for the department and the taxpayer, because now you still have that open tools landscape.

Effectively, we mitigated a lot of different things. We improved security. We mitigated the risk of vendor lock-in. We made companies contribute to a single codebase. If you look at SpaceX, 80% of the code is shared across nine platforms. Meanwhile, not even 5% of code is shared between our jets, for example. We effectively paid to rewrite code that could have been reused from day one.

We want to improve that. By open sourcing and moving to these Lego blocks, microservice capabilities, we can effectively start accrediting microservices centrally, make them available across DOD, and have that reuse across the department, streamlining that process. That’s a game-changing aspect too. We have over 2,000 microservices that were built in six months.

Now, organizations that used to sell us those big legacy systems can cut them up and bring us containers, so they can get accredited on Iron Bank. We have a process for any organization that wants to sell to DOD, even a startup that never did any business with DOD, to be onboarded within two to four weeks on Iron Bank, to be accredited and consumed as a container, as a Lego block. That’s game-changing, also in extending the opportunity to smaller companies, startups, so we can benefit from best-of-breed innovations all across.

[00:31:25] Guy Podjarny: Excellent. Yeah. It all makes perfect sense and is very forward thinking. Loving that, for sure. I have two more core questions for you, though I’m not committing to not asking a third one at the end. Let’s talk about skills for a sec.

We talked about the different technology stack. We talked about a different approach here. How did you think about the talents required amidst the team? You talk about how both infrastructure as code and security as code are treated as software and go through the rigorous software development processes, including all these different steps. That’s a different skill set than maybe a more sysadmin-background security person coming along. How did you think about and approach the delta in talents, or skills, required across the DOD to implement this new approach?

[00:32:22] Nicolas Chaillan: Well, that’s obviously a giant list. That’s why I think continuous learning is foundational, because even if you get the best talent today, if you don’t keep them learning, and we give an hour a day on Platform One for people to learn, if you don’t give them that time and you don’t invest in your people, you’re making them stale and letting them fall behind, effectively ruining your own talent. Of course, you need diversity of talent. We have site reliability engineers and all these new fancy titles around Kubernetes and cyber and container security, and understanding zero trust and how to enforce it and what that actually means, because most people think they know what it means, but they don’t.

We have curriculums. We have self-learning capabilities. We give access to a learning hub, where we have curated unbiased content from the Linux Foundation, CNCF, and O’Reilly, plus the books that we created. Then, of course, there’s the cloud sandbox to put it to good use, so they can learn and continue to learn.

It’s not just about getting talent that already has that knowledge. I find, honestly, the best people are people who can learn and self-learn, because of the pace of IT anyway, quite honestly. I mean, I’m not just saying that because I didn’t go to school. Don’t get me wrong, I don’t have a degree, other than a high school diploma, which doesn’t really count, I guess.

[00:33:39] Guy Podjarny: I’m in the same boat here.

[00:33:41] Nicolas Chaillan: Yeah, I mean, we can obviously learn fast. I think that's foundational, because if you look at the landscape we're talking about, even things like service mesh, which didn't exist three years ago, are now foundational to our zero-trust architecture: things like Istio, and Envoy, the reverse proxy. Effectively, I think it's about learning and not just skills. Skills will change, and technology will change. It's all about continuous learning. How do you empower your people to learn?
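[Editor's note: for readers unfamiliar with the tooling mentioned here, one reason a service mesh like Istio is described as foundational to zero trust is that policies such as mandatory mutual TLS between services can be declared in a few lines of configuration, which the Envoy sidecar proxies then enforce. This is an illustrative sketch only (the namespace name is made up), not the Air Force's actual policy:

```yaml
# Require mutual TLS for every workload in a namespace.
# With mode STRICT, Envoy sidecars reject any plaintext,
# unauthenticated service-to-service traffic.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production   # hypothetical namespace
spec:
  mtls:
    mode: STRICT
```

Because this is declarative configuration, it can live in version control and go through the same review pipeline as application code, which is what "security as code" refers to earlier in the conversation.]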

[00:34:13] Guy Podjarny: Yeah. That's a great insight. It's not about the reskilling; it's about embedding a competency of learning, because it always changes. I guess, on that note, if I can ask, what's an example or two of mistakes you've made in earlier iterations of this? These are very sharp answers, and I'm sure it wasn't all perfect out of the box. Take us a bit along the journey. What's an example of something that you tried, thought was a good idea, and backed out of, or considered and backed out of?

[00:34:43] Nicolas Chaillan: Yeah. I think the biggest mistakes I made were due to the fact that I was dropped into the DOD with no background in the department, the military, the bureaucracy, the silos, how the system works. You get a commercial guy in there, and it's like you're talking a different language. You don't even understand how the system works and how to get things done. I always struggled a little when it came to engaging the leadership to get buy-in faster. I was trying to convey that sense of urgency, and I was doing this for our kids, so we can be a stronger nation.

At the same time, those people have been there for years. They understand the system and they didn’t always see that urgency, so it was frustrating for me. I cut corners, or tried to push things to happen maybe faster than I should have. It was more about the culture shift and getting the momentum started. Once I got the trust and started to deliver and show that, hey, the French guy is going to be able to actually get stuff done and not just talk about stuff all day, I think that’s where I started to get a little bit more buy-in from people and then you have a momentum going.

I wish I had spent more time trying to understand the system, why those silos exist, who does what, and who is leading what, instead of just showing up and saying, "Yeah, you're doing everything wrong. We need to cut the silos. We need to ship fast. We need to rethink security versus compliance. We need to automate, automate, automate." People were always freaking out. We were trying to remove humans from the loop and to trust automation, when nuclear surety or airworthiness is foundational.

I think that was my biggest mistake: how to bring people from a completely different background, outside of IT, along with what we were trying to do. Once I got the gist of it, it got easier. I think what the department is not doing very well is the process to bring people like me in; it's a big, annoying, and complex nightmare, including having to divest stock and potentially lose a lot of money, on top of the pay cut you're taking on your paycheck, because of conflicts of interest. You're going to be managing a lot of money, so you can't hold stock in companies that could potentially benefit.

There is no process to train people on “Hey, here is DOD 101, how the bureaucracy, the Pentagon, the building works.” That, I think, is a big issue if we want to bring more people from industry into the department. That’s probably the biggest mistake I made.

The tech stuff, I think we got right 99% of the time. I don't have a big story that comes to mind, other than, I guess, that we need diversity in execution. Instead of awarding contracts to a single company to execute that vision of DevSecOps, we ended up working with 36 companies, cutting the work into value streams and mixing teams together to bring more options. That's probably one of the lessons learned. Again, it's more acquisition and contract related than it is about the technology vision.

[00:38:11] Guy Podjarny: Yeah. Still a good learning there as well. Thanks for sharing that. It's hard; people are always the hardest bit. Listeners may know I have a different closing question for the podcast this year, but actually, I feel the one I was using in the previous year is more fitting here.

Think about someone in your shoes at a company, maybe not quite at this scale, a Fortune 100 company or other large organization, not quite running the Air Force or DOD. If you could give them one bit of advice, one key focus, or a do or don't for implementing this type of DevSecOps transformation, what would that be?

[00:38:52] Nicolas Chaillan: Organizations will only be as successful as their people. Invest in continuous learning, and give people time; I would go as far as an hour a day, paid, for people to learn. Then, build those enterprise services to remove the impediments. I can tell you, if I were still an investor with my VC, I would look at companies not just in terms of product and people, but also at their maturity in DevSecOps and how well they enable the team to learn, move fast, move at the pace of relevance, and adjust based on that learning.

It's all about timeliness. If you don't have a strong investment in DevSecOps, and not just DevOps but with security baked in, zero trust, service mesh, and cloud agnosticity, so you're not locked into a single cloud provider, you're not building software the way it should be built in 2021.

[00:39:49] Guy Podjarny: Yeah. Well, that's good advice and encouragement. Thanks for that one. Hey, so one parting question, which is my regular question here on the podcast. All of this is present and getting ready for the future. If you take out your crystal ball and look at it, for a person in your seat trying to do what you're trying to do now, but in five years' time, what would you say is the biggest change?

[00:40:17] Nicolas Chaillan: That's a good question. I think cybersecurity is going to evolve. I think it's going to be a lot about that continuous monitoring capability. I think the big risk is the dependencies, the products you use that you don't know much about. On the cyber side, my biggest fear is the follow-up to the SolarWinds breach and that supply chain risk: paying attention to your bill of materials, paying attention to how you control your updates, your CI/CD capability, multi-party signing, multiple sets of eyes on code changes.

It's all going to be about automation in GitOps, and multiple sets of eyes on code to approve a change, so no one person can make a change that could be malicious in nature, and really, that supply chain management. I think that's going to be the big risk in the next two to five years.
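[Editor's note: the "multiple sets of eyes" control described here is commonly enforced in GitOps workflows with required reviews tied to code ownership. As a hypothetical illustration (the paths and team names below are made up), a CODEOWNERS file, combined with a branch-protection rule that requires code-owner approval, prevents any single person from merging a change to sensitive code:

```
# CODEOWNERS (hypothetical example)
# Changes touching these paths cannot merge without approval
# from the named teams, on top of any branch-protection rule
# requiring a minimum number of reviewers.
/deploy/        @platform-team @security-team
/charts/        @platform-team
*.tf            @infra-team @security-team
```

Combined with signed commits or artifact signing, this gives an auditable chain from code change to deployed artifact, which is the supply chain control being described.]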

Past five years is tough. I think it's going to be a lot about AI. We are spending a lot of time on Kubeflow and AI, machine learning, and deep learning capabilities. I think, obviously, that's going to be a way to automate and scale, including in cyber risk and scanning capabilities. Beyond that, I think it's probably too far away right now.

[00:41:36] Guy Podjarny: Yeah, it's hard. We're getting out of the agile mode. Thanks for that perspective. Hey, Nicolas, this has been great. Time flew here; we've been going for quite a while. I'm sure you've got wisdom to share for a full day more. Thanks for coming on the show and sharing your wisdom and experience with us, and congrats on this super impressive program that you're driving forward.

[00:41:57] Nicolas Chaillan: Thanks for having me.

[00:41:59] Guy Podjarny: Thanks, everybody, for tuning in. I hope you join us for the next one.


[00:42:06] ANNOUNCER: Thanks for listening to The Secure Developer. That’s all we have time for today. To find additional episodes and full transcriptions, visit If you’d like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don’t forget to leave us a review on iTunes if you enjoyed today’s episode.

Bye for now.


Nicolas Chaillan

Chief Software Officer at United States Air Force

About Nicolas Chaillan

Mr. Nicolas Chaillan is the Air Force's first Chief Software Officer, responsible for enabling Air Force programs in the transition to Agile and DevSecOps to establish force-wide DevSecOps capabilities and best practices, including continuous Authority to Operate (c-ATO) processes and streamlined technology adoption. In addition to his public service, Mr. Chaillan is a technology entrepreneur, software developer, cyber expert and inventor. Mr. Chaillan founded 12 companies and is recognized as one of France's youngest entrepreneurs. He has created and sold over 180 innovative software products to 45 Fortune 500 companies.

The Secure Developer podcast with Guy Podjarny

About The Secure Developer

In early 2016 the team at Snyk founded the Secure Developer Podcast to arm developers and AppSec teams with better ways to upgrade their security posture. Four years on, and the podcast continues to share a wealth of information. Our aim is to grow this resource into a thriving ecosystem of knowledge.

Hosted by Guy Podjarny

Guy is Snyk's Founder and President, focusing on using open source and staying secure. Guy was previously CTO at Akamai following their acquisition of his startup, and worked on the first web app firewall & security code analyzer. Guy is a frequent conference speaker & the author of O'Reilly's "Securing Open Source Libraries", "Responsive & Fast" and "High Performance Images".

Join the community

Share your knowledge and learn from the experts.

Get involved

Find an event

Attend an upcoming DevSecCon, meetup, or summit.

Browse events