
The Secure Developer | Ep 135

What AI means for Cybersecurity with Sam Curry

with Sam Curry

About this episode:

Artificial Intelligence is innovating at a faster pace than ever before. Could there be a better response than fear? Sam Curry is the VP and Chief Information Security Officer at Zscaler, and he joins us to share his perspective on what AI means for cybersecurity. Tune in to hear how AI is advancing cybersecurity and the potential threats it poses to data and metadata protection.

Sam delves into the nature of fearmongering and a more appropriate response to technological development before revealing the process behind AI integration at Zscaler, why many companies are opting to build internal AI systems, and the three buckets of AI in the security world. Sam shares his opinion on eliminating the offensive use of AI, touches on how AI uses mechanical turks to get around security checks, and discusses the preparation of InfoSec cycles. After we explore the possibility of deception in a DevOps context, Sam reveals his concerns about the malicious use of AI and stresses the importance of advancing in alignment with technological progress. Tune in to hear all this and much more!

Tags:

AI security
Application Security
Secure Development

Episode Transcript

EPISODE 135

 

“SAM CURRY: The AI is using mechanical turks, ironically, to get around the robot checks, right? If it can’t do it, it can hire humans to do it. How is AI actually engaging as the lead in some of these attacks, and simulating, and trying to be like humans in attack? At what point are they taking over identities? How are they able to recognize decoy credentials or applications more effectively than a human could when they encounter them in an environment? I actually blogged recently on something I called negative trust rather than zero trust, which is, how do we make the environment hostile to attackers when they come in?”

 

[INTRODUCTION]

 

[0:00:41] ANNOUNCER: You are listening to The Secure Developer, where we speak to leaders and experts about DevSecOps, Dev and Sec collaboration, cloud security, and much more. The podcast is part of the DevSecCon community found on devseccon.com, where you can find incredible Dev and security resources, and discuss them with other smart and kind community members. This podcast is sponsored by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open-source containers, and infrastructure as code. To learn more, visit snyk.io/tsd.

 

[EPISODE]

 

[0:01:27] GUY PODJARNY: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning back in. Today, we’re going to talk about all sorts of interesting things, but maybe specifically focused on AI security and some sort of experienced operator views on it. For that, we have Sam Curry, who is currently the VP and Chief Information Security Officer at Zscaler, but has a really impressive list of CISO and CTO roles over his history. Sam, thanks for joining us here.

 

[0:01:53] SAM CURRY: Guy, thanks for having me. It’s exciting. Man, this is cool.

 

[0:01:56] GUY PODJARNY: I started sort of looking through the resume, and I knew of some and even of others. But you were a CISO, or security leader, anywhere from CA, to RSA, Arbor, I think Micro Focus.

 

[0:02:06] SAM CURRY: McAfee. 

 

[0:02:08] GUY PODJARNY: I’m sorry, McAfee it is.

 

[0:02:09] SAM CURRY: Oh, MicroStrategy you might have been thinking of, yes.

 

[0:02:12] GUY PODJARNY: Then of course, Zscaler now. Maybe you don’t need to give us the full resume over here. But can you give us a quick highlight on how you think of your journey, and maybe a little bit of context on the field you’re tied into today?

 

[0:02:25] SAM CURRY: My resume is a mess; I think I told you that earlier. But the reason for it is that I think I’m fundamentally curious about stuff, and I think that’s really important. My formal, I guess, education, and training has been in a few areas. The first is physics and linguistics, strangely enough. I speak a few languages. Also, literature and philosophy, and then strangely in counterterrorism. My master’s is in counterterrorism. These are areas that interested me at different times, so I just dove into them. I’m also a fellow of the National Security Institute, but I think I have a habit of not saying no. That’s the problem. People always said, “Hey, Sam. You got to learn to say no.” I’m like, “Actually, no, you don’t.” I mean, yes, it’s important to say what you’re not going to do, and I do that from a professional perspective, and been managing portfolios and things. But if you want to do something, you will find a way to do it. And if you’re fundamentally curious, you can be deep in many areas.

 

AI, I think we’ve abused as a term, but it is actually a bucket of many, many different neat technologies, some of which have been around for hundreds of years. I think we throw it around from a marketing perspective. I’ve been thinking about, and using as a practitioner, AI for a long time, especially the interface with security and privacy. When the latest wave hit us, it wasn’t a big surprise, even if the applications were surprising. I’m certainly excited about where it’s going, and I’m a little bit sceptical of some of the fear-mongering going on, that it is belying some rather nasty intent. But I’m actually super excited about some things you’re doing, Guy, and I’m looking forward to this chat.

 

I don’t know if you want me to unpack anything in my background. I’m sure people can look at LinkedIn. But most of it, I think applies quite readily here.

 

[0:03:59] GUY PODJARNY: Yes, I think it’s long and impressive. I think maybe one thing I would note is, above and beyond sort of the security leadership roles, you do have quite a bit of sort of education background and you’re an adjunct –

 

[0:04:10] SAM CURRY: I still teach. I really want to pay it forward so I teach, especially undergrad and grad in cyber, and some adjacent areas. Yes.

 

[0:04:17] GUY PODJARNY: That’s great. Maybe we just sort of jump a bit into the AI security path. You mentioned you were sceptical; I thought your continuation there was that you’re sceptical of the capabilities of it. But maybe before we go into AI security, how do you feel about AI technology? You’ve sort of seen it evolve, but were you kind of surprised, amazed? Maybe not so much.

 

[0:04:39] SAM CURRY: Delighted, actually. I think delighted is the word I would use. Now, that’s the first reaction, because there’s a lot to unpack there. We tend to mix how we think and how we feel, and it’s very difficult to unpack it. The first thing I’m going to say is, it is going to be disruptive in ways that we don’t entirely expect. Many years ago, I forget who said it, it might have been Tim Berners-Lee, that the internet is simultaneously the most overrated and underrated technology we’ve ever had. I think this is even more so; maybe this is supplanting that. But I had a series of conversations recently, with Daniel Miessler most recently, about how – it’s really about augmented humans, and the human experience is fundamentally interesting to me.

 

I know it’s going to be very difficult for society in some places to deal with the introduction of AI, just as it was with automobiles, just as it was with the Internet. I’m also super excited about what it means for what we can do and what the human condition can be like as we learn to apply this. I’m interested in how it will empower everyone to be a developer, and in artistic creation, and innovation. Where we can take that is really an amazing thing. The fear of AI, though, has been grossly exaggerated by people with very, very little to no evidence to back it up. Very often, special interest groups in particular are very, very protective around that, and they’re amplifying that voice, and that really disturbs me, especially because it’s backed with very little science or evidence.

 

My initial reaction was delight when I first interacted with it, and saw what it can do at scale. I mean, forget the ones that were rushed out too quickly. Certainly, the sort of ChatGPTs, when they hit the market, they hit it amazingly. But learning how to use it correctly, and how to interact with it, is non-trivial. I never thought my linguistics would collide so completely with my cyber so fast, and with my own background in the AI toolkit, all of it came together, and I went, “Okay. Well, this all makes sense to me.” Because there were so many different domains in AI that were mutually exclusive in the research. If you go look at the research, they were in different areas. The advances were mutually exclusive. What you’re finding now is, if you look at the real academic research, it’s all mutually reinforcing. The rate of advancement in AI is actually accelerating, and the applicability of it in more domains is becoming much more interesting and accelerating, which is really cool.

 

[0:07:06] GUY PODJARNY: I love the word delight for it. I haven’t sort of used this word.

 

[0:07:09] SAM CURRY: I only just thought of it now, because it was a reaction.

 

[0:07:12] GUY PODJARNY: Yes, it’s a good one, because I feel when I talk to my kids or something, and talk about this, I really feel like a bunch of these tools give you an opportunity to feel like a kid and believe in magic again. You interact, and it is different. With all the fallacies, and we’re going to talk a lot about them, and the areas where there are pitfalls. But fundamentally, this sensation of it is indeed one of delight: generating an image in Midjourney, or having some juggling that ChatGPT manages to kind of get through when you ask it, just sort of feels fun.

 

[0:07:44] SAM CURRY: Yes, and magic is a great word, by the way. What we want is a world that responds to us magically. Your children, my children grow up in a world that magically understands them better. We almost can’t understand the degree to which that’s going to be true. The line will blur between what is the digital world and what is the physical world. The flip side of that is the word hallucination, and it’s true for AI, it’s true for humans too. How do we keep them grounded in what is real? Which was a philosophical question, also part of my academic background. It is a very distinct question. We were like, “Yeah, what do we really mean by what’s real, on the epistemological side of that?” But it’s going to have real meaning soon: what is really real, and what am I hallucinating? Yeah, you’re hallucinating. And when is it magical is a different question. Magical as opposed to magic, right?

 

[0:08:35] GUY PODJARNY: I guess, at the risk of kind of belabouring this analogy a little bit, the magic and the history with magic also includes, of course, the witchcraft and sort of the evil magic aspect, and countless tales of magic that spun out of control.

 

[0:08:48] SAM CURRY: The Golem story, for instance.

 

[0:08:49] GUY PODJARNY: To some extent. I didn’t think it might sort of stretch that far. We’re going to get into this critique a little bit in a sec, but I just want to get a bit of a picture. Have you managed to find yourself using any of these tools beyond just exploration or fun? ChatGPT, or any of these others – do you use them in your day-to-day?

 

[0:09:05] SAM CURRY: I don’t, I don’t. My brother does. My brother is also in cyber. For instance, I write very quickly, and I express myself almost transparently as a writer. I speak and I write – I think the styles are maybe a little bit different. For instance, I tried, but the writing didn’t come out as well. My brother, and I don’t think he would mind me saying this, is very verbally proficient, but he’s never been a good writer. He’s found ways to make ChatGPT spin, and to represent him in writing, as an extension of himself, much, much better than he can write.

 

In about the same period of time – for me, it takes 20, 30 minutes to write 500 to 1,000 words, and it comes out well. You can read my blogs. My brother now takes about the same period of time. He spends most of his time massaging the inputs, and the prompts, and the edits. Whereas I spend most of my time on direct input. Then, I maybe get someone to edit it for me for quick typos and things, because that’s painful to do for yourself. Maybe for other people, no.

 

Now, I’m reminded of a conversation with Garry Kasparov back in 2018. He was speaking at a show, and I buttonholed him after. He was talking about Deep Blue, and about what it was like to be beaten by a chess machine. The conclusion of his talk was that, for about a 10-year period, the best chess was played by assisted humans. That’s where the excitement comes in: what’s it going to be like to live in a world where people who’ve had a barrier in how they communicate in writing, or in art, or in any form of expression, where they haven’t built the ability to do that, now have an AI, or LLM, or whatever tool you want to call it.

 

[0:10:41] GUY PODJARNY: Whatever sort of –

 

[0:10:42] SAM CURRY: Or coding.

 

[0:10:43] GUY PODJARNY: Is AI the right term in that context, which is –

 

[0:10:45] SAM CURRY: Or coding or scripting. Yes, QA, those sorts of things. Somebody who can now have the tool made transparent, where you focus on the task rather than the tool. Now, the question is, can you identify what’s good, which is something I tell my students. I’ve said, I expect you to use these tools, because they’re part of the workforce. Now, I’m going to judge you on how you use them and the fact that you cite them. That’s an interesting world, because so many more people can enter into the more advanced things, and productivity can actually go way up.

 

[0:11:17] GUY PODJARNY: I absolutely agree with that. It’s interesting too. There’s a whole – we’re not going to open that realm, but there’s a whole area of what schools should educate on with regards to these AIs. Universities are one flavour of that; in earlier stages, even more so. AI is amazing, and so we could probably ramble on here for a while on the set of great potential advancements that we can have in a future world.

 

Let’s sort of shift gears to a security perspective. AI security is also a concern, one that many security leaders, company leaders, frankly, individuals, are facing very quickly for the first time, where companies are quick to commit large percentages of their companies to really invest in AI. Many people are caught unprepared for something like that and say, “Well, what is it that I even need to think about for AI security?” Maybe let’s start high level. When you say, theoretically all the –

 

[0:12:12] SAM CURRY: In theory.

 

[0:12:12] GUY PODJARNY: Well, you’re a security leader of an organization with some significant business, and you are now tasked to say you should do something about AI security. How do you start? How do you break it down?

 

[0:12:23] SAM CURRY: It’s funny, because Zscaler, of course, is in authorization, and we talk about zero trust. We’re right in the authorization pathway of should somebody do something or not, right? So we get this one a lot. The first thing that people worry about – and it reminds me of, I’ll just say, three other things that came to mind as you were talking. The first one was, should companies allow these browsers, and specifically search engines? This came up. The second one was, should they allow the use of instant messaging, and this came up most immediately with respect to trading.

 

If you remember, brokers started to use instant messaging, because they were instantly able to communicate and they could get – we would call it insider information, but they were able to get data sooner. The same thing happened later with SaaS applications. Should people be able to interact with these unknown entities that have services? Now it’s coming up with AI, and with large language models, and the interfaces. What happens if people start asking for briefs, and if they start entering data, and asking for summaries? You can look at this at scale, right? What does it do for privacy? What does it do for IP? What does it do for intellectual property that isn’t yours, that you hold on behalf of a customer?

 

The questions come up immediately now: who owns them? How are they tapped? Which other learning sets do they go into? Is it private? Of course, almost none of that is answerable. When you look around, most of them were built on dodgy inputs from a copyright perspective, to begin with. There are fights going on over whether these trained on public domain things, or on private IP, to begin with. Then the question is exactly where it went with some of these SaaS models. What are the APIs into those? Who’s tapping into that? Who’s instrumenting? Who’s saying – well, how many people asked questions about gold and mining in the last 24 hours across all these LLMs, and is that changing?

 

I picked that one at random, right, but that can be useful insight and trending for somebody who’s trading on a commodities exchange. I just picked one. There are so many. The first thing people are saying is, don’t allow it. Just stop it. But that doesn’t work, because if you just stop it, people will find a way to do it anyway, just as they did with instant messaging. They go to their phone, or they go to another machine, especially in a work-from-home or hybrid environment. Or people start to disguise these services as other things; they change the shape of it, they change the input, it’s now another – it doesn’t look like an LLM interface. Then you get dodgy offerings where people actually go, “Wait a minute. I can actually not only offer this. I could offer it as a rogue service and sell the information on the back end.” The next thing people do is, okay, how do I vet?

 

[0:15:01] GUY PODJARNY: Before we go to the next thing – so you’re saying sort of the first concern, if I go back, the first thought that comes to mind for a security leader is, “Well, can I stop this?”

 

[0:15:08] SAM CURRY: Well, yeah. If you do business and says –

 

[0:15:11] GUY PODJARNY: You’re saying the answer is, no, you cannot. This is an opportunity, you can kind of waste some cycles, trying to put something –

 

[0:15:18] SAM CURRY: But you can actually stop it.

 

[0:15:18] GUY PODJARNY: And prevent it. But people will find a way to use it anyway, and the demand, the appeal is too high for the security leader to say, “Thou shalt not pass for anything, but like a very brief period of time.”

 

[0:15:30] SAM CURRY: Right. I mean, usually, it’s actually the business saying, “Security leader or IT leaders, stop it.” It’s sort of like putting sandbags up for a flood; it’ll work a little bit. As the flood continues, it starts to leak, right? Then the next phase is, they say, “Okay, we’re going to allow some, we’re going to try and vet these.” The next phase after that is, “We’re going to build our own, because a lot of it is open source, and these companies exist; they’re going to make a private instance of it.” Then at that point is when we start to say, “Okay. So now, what can tap into it?” That’s when you get into the more mature phase of sort of figuring out how to use it for the business.

 

The faster you get there, by the way – it’s not easy, you can’t just jump to it, you have to think about it. You’ve actually got to process what it means to the business. If you just jump to it, you’re going to make a bunch of mistakes, but you can accelerate it by starting the strategy talks now. When you actually start to integrate it into the business, and say, “Okay. What does it mean for how we operate, and processes, and what jobs look like?” It is disruptive, but it is going to be a force multiplier for a lot of productivity. It’s going to be scary for many people, but it is going to open up new jobs, it’s going to open up new ways of doing work. It is one of those things. You couldn’t hold the minicomputer back from the business, you couldn’t hold the mainframe back from the business, you’re not going to hold this one back.
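To make the “allow some, vet these” phase Sam describes a bit more concrete, here is a minimal sketch of what an egress policy for AI services could look like. Everything in it is an illustrative assumption: the service names, the policy fields, and the function are placeholders, not any particular product’s configuration or Zscaler’s implementation.

```python
# A minimal sketch of the "vet and allow some" phase: only let traffic reach
# AI services the security team has reviewed, and constrain what is allowed
# per service. Service names and the policy shape are hypothetical.

from urllib.parse import urlparse

# Hypothetical allowlist produced by a vendor-vetting review.
VETTED_AI_SERVICES = {
    "api.openai.com": {"allow_prompts": True, "allow_file_upload": False},
    "internal-llm.example.com": {"allow_prompts": True, "allow_file_upload": True},
}

def check_ai_egress(url: str, wants_file_upload: bool) -> bool:
    """Return True if a request to an AI service should be allowed."""
    host = urlparse(url).hostname or ""
    policy = VETTED_AI_SERVICES.get(host)
    if policy is None:
        # Unvetted service: block and log for review rather than silently allow.
        print(f"blocked: {host} is not a vetted AI service")
        return False
    if wants_file_upload and not policy["allow_file_upload"]:
        print(f"blocked: file upload to {host} is not permitted")
        return False
    return True

# Example: a prompt to a vetted service passes; a document upload does not.
print(check_ai_egress("https://api.openai.com/v1/chat/completions", wants_file_upload=False))
print(check_ai_egress("https://api.openai.com/v1/files", wants_file_upload=True))
```

As Sam notes, a policy like this is sandbags, not a dam: it buys time while the organization works out vetting, private instances, and what is allowed to tap into them.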

 

[0:16:47] GUY PODJARNY: Yes. It’s interesting, you’re tying – like when you talk about using your own. So step one, put the sandbags up, try to sort of stop the flood. It might buy you a moment, but just sort of know that it’s not going to last long. Then after that, some form of controlled usage, which also, I guess, stems the flood a little bit. That’s maybe the drain, but you allow some of it. But then from there, you’re saying that the next evolution is really to build your own, versus to really lean in and embrace.

 

[0:17:13] SAM CURRY: Or you could do both. You might say, “Hey, this vendor, we’ve vetted them, we’re working with them, they’ve passed our privacy controls, we trust them.” Now, that’s a big step it would require, but we did it with IaaS and PaaS. We trust AWS, we trust Azure, we trust GCP, we trust OCI. Those things have happened, but we didn’t originally. That can happen here too. These will become platforms like them. At some point, whether it’ll be these vendors or not, it remains to be seen, because that’ll play out. But the other possibility is, just like in the cloud, we have private cloud, right?

 

[0:17:45] GUY PODJARNY: It’s following the analogy, yes.

 

[0:17:47] SAM CURRY: Private AI, yes. It’s happening, by the way. Companies are saying, “Oh, we’re building our own, so we’ll be able to do our own internal search through our knowledge bases. Don’t worry, it’ll be great.” But the fact remains, just like with the cloud, which was, of course, other people’s computers, as we denigrated it in the old days – with AI, the larger models, with more training and more experience, really will be more adept at, let’s just call it, seeming to think.

 

[0:18:15] GUY PODJARNY: The advantage of the professional at scale – the scale economies themselves that AI can tap into are even stronger. Those played a key role in cloud, which, I guess – I wouldn’t say the private cloud is dead.

 

[0:18:27] SAM CURRY: It’s not, no, no.

 

[0:18:29] GUY PODJARNY: But I think it has diminished. It started, you’re right, to a large degree, from the fear factor, from the lack of trust.

 

[0:18:34] SAM CURRY: Very much so, yes.

 

[0:18:35] GUY PODJARNY: I want to use cloud, but I can’t because it is not trustworthy enough. I’m going to run my own. I think that specific use case has diminished a fair bit as we built that trust.

 

[0:18:44] SAM CURRY: It’s a good intermediate step, and it may develop longevity of its own. One of the things that I think will come about is how to keep it fresh and private. How will you keep it in sync, up-to-date, relevant, as intelligent as the other AIs? Think of these AIs as going through school. They’re going up through various grade levels. The public ones are going to be going up through grade levels much more quickly.

 

[0:19:06] GUY PODJARNY: In all of these cases, you’ve kind of taken us down the path, which I understand, I think is almost infrastructural and data. So you’re saying, the starting concern you have is: there’s this data that I have, whether it’s my employees who typed in something, and that data is mined, or whether it is the data access that I got, and the authorization of who can access what, even on the internet. Then from there, you kind of get it down into infrastructure. How much do I allow? How much do I contain that data? Do I even let the data leave my premises, or do I –

 

[0:19:36] SAM CURRY: It’s more than the data. It’s also the metadata, and how you’re interacting with it, what you’re asking of it, but yes, you’re right. That’s the first concern.

 

[0:19:43] GUY PODJARNY: That’s the first concern, it makes sense. Artificial intelligence, machine learning, has data very much at its core. There’s something very sensible about that being the first concern. I guess, if we take it down to sort of the systems level, then what’s the next concern? So you’re worried about data. Or, you know what, before we go to systems, let’s dig into the part in which you build your own, and you use your own data. This isn’t about data leaking because someone typed it into ChatGPT. But rather, you want to leverage this technology, whether you’re building your own LLM, or using OpenAI, or whatever. I guess, how do you think about the security concerns there? Maybe, should that be taboo in the authorization world at the moment? I mean, is it sort of safe enough for that?

 

[0:20:29] SAM CURRY: In the taxonomy, we’ve sort of started at the highest level. Think of a taxonomy in nature: we start at the highest level with AI for the business, and then the security and privacy concerns around that. At another level, there’s also AI for security, right? Then there’s another one that’s security of AI. Those are three big buckets. In AI for security, if we break it further down, we’ve got how it helps humans do the security job – the assisted human, right? This is the – how does it help you predict, find signal from noise? In the blue team, how does it help you do guided response? How does it help you in the red team to build better tools for attack?

 

A super cool example: I was at a conference recently at Duke University that was talking to regulators, in particular from the developing world. Somebody suggested, you should not allow offensive use of AI. I said, with all due respect, “No, no, no. You need to find the ethical way to do it, because the opponents are using this. Most of it’s open source, how it’s done. It’s not like this is secret.” If you hamstring pen testers and the red team, then the blue team never builds the calluses and the ability to resist the adaptive adversary that’s using these tools. It can help you in defence as well, in blue teaming. It can help you with scripts.

 

I mean, I’d love to get to a world where the defenders – and I think this is particularly relevant for Snyk – where the defenders become iterative developers very quickly, where they build little devlets, little hacklets. Most of the blue team environment is a development environment, and they’re iterating questions very quickly, assisted by, and maybe accelerated by, AI. There’s also this notion of yellow teaming, which isn’t spoken about very often. That is not security for DevOps, but DevOps for security. How do you build better tools for the blue team and the red team? They can have their sprints accelerated as well. Then there’s the AI actually on offence, not just helping the attackers, but the AI engaged.

 

You may have heard, the AI is using mechanical turks, ironically, to get around the robot checks, right? If it can’t do it, it can hire humans to do it. How is AI actually engaging as the lead in some of these attacks, and simulating, and trying to be like humans in attack? At what point are they taking over identities? How are they able to recognize decoy credentials or applications more effectively than a human could when they encounter them in an environment? I actually blogged recently on something I called negative trust rather than zero trust, which is, how do we make the environment hostile to attackers when they come in? That’s just the second bucket, and then there’s the security of the AI itself.

 

[0:23:22] GUY PODJARNY: So AI for security, that’s a fascinating space. I guess, again, just sort of echoing back here. You’re pointing out two things, right? One is the arms race. The attacker is going to use it. You either use it or you fall behind, and you lose the battle, the war. That’s kind of the meta. Then secondly, as you were saying also on a practical level, with a whole bunch of examples: why wouldn’t you? AI can do all these other things, solve all these problems. I like the evolution you’re describing: at the beginning, it can do the grunt work for you. At some point, you might be doing the grunt work for it. But eventually, the end system should be one that is more secure, and it can aspire to maybe negative trust, and a hostile environment.

 

The UK hostile environment is something totally different at the moment, in immigration. Yes, tricky terms you use. But I mean a very self-defending, adaptable defensive environment that tries to identify an attacker that is in the network and seal them off. Those are things that are basically impossible to do at human speed, so it requires AI.

 

[0:24:23] SAM CURRY: There’s one of the buckets, though, Guy, which is highly relevant to Snyk. During the triage in OpSec, it’s the predicting – it’s not just what’s the most important thing to address, based on the sort of measure, countermeasure world that will come in active conflict, but where are you likely to have vulnerabilities? There are the known vulnerabilities, and there’s a prediction of where you’re going to have vulnerabilities, and therefore, where you should be focusing things like virtual patching. You’re likely to have a buffer overflow here, you’re likely to have a vulnerability to hide shellcode, or something, or SQL injection, et cetera. But that only exists in that same bucket, and the attackers will likely be predicting similarly: “Oh, there’s probably going to be a vulnerability there.” It will accelerate their research cycle. That’s one step removed from the warfighting, if you will.

 

How do you prepare in the InfoSec cycles to be efficient, to reduce the attack surface, and how do they predict, likewise, where the attack surface is going to exist? Call it the zero-day predictor.

 

[0:25:21] GUY PODJARNY: Yeah, I think very relevant. We have – there are like 15 different AI strands going on at Snyk. Many of them, by the way, from far before the current trend. But indeed, for instance, when we talk about – you’re talking about sort of the risk areas. But also, there’s a lot around automatically bolstering the code, or sort of strengthening, hardening the code. It starts from the more straight vulnerability fixes. We have this auto fix at the moment. Right now, auto fix is like a spell checker that shows you the squiggly line, and you right-click, and fix it. So it’s still a little bit of human action, but it makes it easier. But when you think about the destination of all the fixes, why wouldn’t it automatically correct as you go? That freaks some people out. But over time, it feels like it’s an obvious destination. Then as well, if you can do that, why wouldn’t you just glance at a system, find those issues, and just fix them, and kind of make it happen on the go.

 

From there on, you can take it further, just like you’re talking about with risk. You can go from systems that are vulnerable, that are just not sufficiently robust: can you add defence in depth, and add to that automatically on top of the code? It’s not feasible to do in a non-AI manner, because the automation is what takes it to the next level.
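For readers who haven’t seen this kind of “squiggly line” fix in practice, here is a minimal illustration of the sort of rewrite an auto-fix suggestion typically proposes. It is a generic example of the technique, not Snyk’s actual implementation, and the table and function names are made up.

```python
# Before: the pattern a scanner would flag, roughly the "squiggly line" case.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # String concatenation lets a crafted username change the query itself.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

# After: the kind of rewrite an auto-fix suggestion would propose.
def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameter binding keeps user input as data, not SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

# Small demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'ada@example.com', 'ada')")
print(find_user_safe(conn, "ada"))
```

The point Guy is making is about scale: a human right-clicks on one squiggly line, while an automated system could apply this class of rewrite across a codebase as the code is written.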

 

[0:26:34] SAM CURRY: We never thought about this because it’s such a waste, because it’s so expensive from a debt perspective. But right now, we think of deception as something that’s done in an IT context. It could be done in a DevOps context, it could be done in code. For instance, we’re busy closing vulnerabilities, and we’ve attacked that as security debt. But it’s entirely possible at some point, we could say, you know what, we would normally have a vulnerability here of type X, right? Why don’t we put a decoy there that looks like there’s a vulnerability, so that I can have a little signal that I can send to an incident response team if this app is attacked in the obvious place?

 

Because the reason we don’t right now is, there are just not enough resources to do that. But what if it was trivially inexpensive, and we could code those in, de facto, as we were going, without the proverbial expense? It doesn’t expose a real vulnerability, it’s just – it looks like a kick me sign. But actually, it sends a signal. That’s the kind of negative trust I’d like to get into the application layer. But I don’t think that’s imminent; I think that’s probably a ways down the road, when we get much more efficient in our DevOps cycles, but yes.
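A minimal sketch of the “kick me sign” idea Sam describes: a decoy field that looks like a classic injection point but never touches a real query, and only raises a signal for incident response when someone probes it. The route, field name, and alerting function are hypothetical placeholders, assumed for illustration.

```python
# Application-layer deception: a decoy parameter that looks exploitable but
# exists only to trip an alert. Nothing here reaches a database.

import re

SUSPICIOUS = re.compile(r"('|--|;|\bunion\b|\bselect\b)", re.IGNORECASE)

def alert_incident_response(detail: str) -> None:
    # Stand-in for paging / SIEM integration.
    print(f"[decoy tripped] {detail}")

def handle_report_request(params: dict) -> str:
    # 'legacy_id' is the decoy: it is documented nowhere, does nothing,
    # and is never used in real logic.
    decoy_value = params.get("legacy_id", "")
    if decoy_value and SUSPICIOUS.search(decoy_value):
        alert_incident_response(f"injection attempt on decoy field: {decoy_value!r}")
    # Real work ignores the decoy entirely.
    return f"report for account {params.get('account', 'unknown')}"

# Example: a normal request is served; a probing request trips the tripwire.
handle_report_request({"account": "1234"})
handle_report_request({"account": "1234", "legacy_id": "1' OR '1'='1"})
```

The design choice matches Sam’s framing: the decoy exposes no real weakness, it just makes the environment noisier and more hostile for an attacker who goes looking for the obvious flaw.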

 

[0:27:40] GUY PODJARNY: Yes, for sure. To be frank, we’ve stayed in fairly positive land here, right?

 

[0:27:44] SAM CURRY: Uh-oh. Do you want to go – how about security for AI, by the way?

 

[0:27:47] GUY PODJARNY: We did the AI for security positive land, of how AI could really help us, not just with functionality, but actually be more secure, with a slight warning sign that if we don’t, the attackers will. Let’s indeed kind of open up the Pandora’s box, sort of the dark side, and talk about security for AI. Maybe let me start with a bit more of a meta question. What scares you the most? What security concern around the use of AI or LLMs do you find most scary?

 

[0:28:13] SAM CURRY: Okay. Yes. There are two concerns. I’ll go from the biggest to the second biggest. The biggest that I have is: so far, for the last 20 years, attackers have been evolving their capabilities. They’ve been advancing and adapting faster than defenders. What do I mean by that? Well, the market for doing bad things is growing faster than the security market. Now, there’s a healthy, active, robust dark market out there. The time to pop a company is getting shorter. Toolkits that blast through most people’s security are becoming more readily available at a faster rate. Compare that to the security debt most companies have, and the rate at which it is burned down. Why is it that way? Well, attackers, nation-state level and financially motivated ones, put 100% of their energy into busting in.

 

In defence, we have to worry about convincing a CFO if we need a tool. We don’t get to put all of our resources on defence. We are a fraction of the IT budget in terms of people, and in terms of spend. If you think your typical company spends 12% on R&D – we realize small companies are higher, bigger companies can be a little bit less. Let’s imagine the bad guys are a typical company. That means 12% of their budget goes to R&D. That’s not the number that goes to security in a typical company. It’s a fraction of that. It’s maybe 1% in a really leaning-forward company. Compare those numbers; it’s frightening.

 

My number one fear is that AI will hyper-accelerate that: their curve will go higher, faster than the impact on defence. So that’s my number one fear. What’s the fear of that, really? It’s a business fear. It’s that we love to embrace technology because of what it means for us societally, and what it means for the economy, and it’s one of the big engines for human society right now, and all the wonderful things we want to come from it. But the risk curve is catching up behind it because of these threats. My fear is that it will cross over and become a real inhibitor to new investment in new business. That’s number one.

 

[0:30:36] GUY PODJARNY: The thing to do about it beyond the meta is to invest in the use of AI for defence.

 

[0:30:44] SAM CURRY: Oh, yes, and to rethink in defence how we apply ourselves. The biggest problem in security is the gap between the security department and the business. We have to make advances in alignment. We don’t just have to spend more. We have to adapt faster. We have to get more relevant. One of the great things that regulations and auditing do is force companies to pay attention to security. One of the terrible things they’re doing is locking us into how we spend our money for decades. Most of our spend is checkboxes of stuff we’ve been doing for 20 years. It’s great that it made us do it. Now, we’re stuck doing it.

 

Most of the spend as a CISO is on things like antivirus, and firewalls, and VPNs. Most of that stuff hasn’t changed in 20 years. How do you free money up to place new bets, to try new things? We’ve got to get more agile and flexible in security operations, and toolkit, and stack.

 

[0:31:39] GUY PODJARNY: Yes, a really important point on regulation. You’re not objecting to regulation existing; you’re sort of encouraging it to force someone to invest. But you’re saying, once invested, in the zero-sum game that is my security budget, it is forcing me to spend money in areas that I do not want.

 

[0:31:55] SAM CURRY: I call it discretionary versus statutory spend. Your discretionary spend should be about reducing risk. Whereas your statutory spend is about avoiding fines. How do you convince the regulators that you’re moving out of the checkboxes because the risk behind those things is resolved? Now, you’re moving the funds to where it’s useful to reduce risk in an emerging, changing, adaptive world, where the opponent is a human being innovating and doing R&D.

 

Now, the second risk has to do with the human side. We talked about a talent gap and not enough people. One of the great things that LLMs and AI will do is rapidly increase the rate at which people can learn and enter the industry. But engineering has this horrible thing called leaky abstraction, which is, whenever you build something, you create an abstraction layer that hides complexity behind it. If you just deal with the abstraction layer, and don’t truly understand what’s underneath, you begin to make mistakes.

 

For instance, if the LLM starts to tell you, “Hey, you’re being attacked by this attacker, and it fits here in the MITRE ATT&CK framework, and here’s the next step, and here’s the recommended next step or counter from the MITRE Engage framework.” It becomes a rote ability without much intelligence. The operator doesn’t know where this is coming from under the hood. Now, that’s an opportunity for two kinds of mistakes. I used to call this the mirror chess problem with automation.

 

The first is you become predictable. In mirror chess, you mirror your opponent. Sorry, I love chess. I mentioned that earlier, right? But if you mirror your opponent, you can be defeated, because they know what you’re going to do in response to them. Chess is an asymmetric game, believe it or not; there’s a reason for it. In time, it’s asymmetric, and on the board, actually. That’s the first problem. They know what you’re going to do when you do something. That’s a weakness.

 

The second thing is you can be subject to poisoning. Meaning somebody will know how your LLM is learning, or how your AI is learning. They already do this with virus samples. Over 30% of virus samples are planted to create false noise. They will start to poison the feedback loops such that their signal will get through, and they will test against it. They’ll get the exact same LLMs and the exact same AIs and say, “Let’s run attacks until they don’t find us.” I’m worried about that leaky abstraction, that ocean of talent that will come out, and we will no longer be able to find the deep subs that are going through.

 

[0:34:29] GUY PODJARNY: Yeah. I guess that’s part skill and part blind trust, right? Because for you to be able to benefit from an AI-powered system, there has to be an element of trust. If you go off and you verify everything, then you would lose the very efficiency that you sought out to get with the AI in the first place. We see there are a bunch of studies from Stanford, and the like, and they show how code completion suffers from this type of element. On one hand, more people can generate code. Notably, developers can become 10 times more efficient because they can generate code, and that’s amazing. It truly is.

 

What’s happening in parallel is that the code that gets generated gets more lightly reviewed. If you needed to invest the same amount of effort that you would have had to if you had written it, you would lose the productivity benefits. Almost by design, you do not scrutinize it that much. Whether the code is equal in the number of vulnerabilities depends, of course, on the individual developer that you compare it to, but it definitely frequently has them; I think it’s something like 40% of the generated code has vulnerabilities.

 

That’s probably a similar number to human-written code. Most code does have vulnerabilities in it. But I think the difference is that it doesn’t get scrutinized, it doesn’t get properly reviewed, because it’s being trusted. You can’t not trust it, or you’ll lose the benefit. You have to trust it to a degree. But if you trust it too much, then you become –

 

[0:35:48] SAM CURRY: Yes, so you change how the sausage is made. The question is then, do you get more sausage? Or does it taste better at the end? 

 

[0:35:55] GUY PODJARNY: Yes, you do end up getting more. It’s like when you get a burger –

 

[0:35:58] SAM CURRY: That’s right.

 

[0:35:58] GUY PODJARNY: But you don’t really know what’s inside. On the human side, so we have – like one concern is really one of investment. That sounds like a very practical thing. We need to invest in AI.

 

[0:36:07] SAM CURRY: And quality of investment.

 

[0:36:08] GUY PODJARNY: The quality of investment, specifically on AI defence. It was a very good point. I love this definition of the statutory versus discretionary spend. Then the second one is on skills, or loss of the skills.

 

[0:36:23] SAM CURRY: The dip in skills that comes over time, yes.

 

[0:36:25] GUY PODJARNY: This conversation happens also in art, where you talk about, fine, everybody will be able to produce art, but the masters will continue to be valuable. But lo and behold, to become a master, you need to be a junior at some point; you need to work through those stages.

 

[0:36:38] SAM CURRY: I forget who – was it Picasso who said it – somebody’s going to get mad. It took me four years to paint like a master, and a lifetime to paint like a child.

 

[0:36:47] GUY PODJARNY: Those evolutions – you have to go through a journey to get there; maybe if you’re practising for years, you can get to that. How do you feel about – as we talk about trust, and we talk about skills, there’s also the human threat element. How do you think about that? How do you think about the top concerns that come to mind when people talk about AI, and notably LLMs, sort of social manipulation, social engineering attacks at scale, whether it’s on the phishing front, or the supply chain front, or a variety of others?

 

[0:37:17] SAM CURRY: It’s huge. Spear phishing becoming almost like a sniper rifle, right? It’s very real. It’s precisely because it can iterate, it can test, and it can get very, very good. Look at most phishing attacks right now, the way that we spot them. If you do the awareness training, you’re looking for the stupid errors made by the people who built it. Did they get the logo right? Why is their grammar off? Does the link look weird? That’s stuff solvable at scale, and reliably.

 

A friend of mine, [inaudible 0:37:54], and I did a webinar back in February where he had ChatGPT write a phishing email in the voice of – if you’ve seen Succession – Logan Roy, the CEO. He said, write it, and he wanted it to cite – I forget who the poet was. But he wanted to have a quote from this poet. It was a phishing email, and it was perfect. It was his voice, it cited the poet – it was Robert Frost, actually. It came up with, obviously, the path less travelled, but then he said, “Find a better poem that’s more subtle,” and it did. It was frightening how believable it was, and you could in your head imagine somebody from this company – if you’ve seen Succession, how much it was in his voice – and you could imagine somebody actually going, “Yeah, this is from him, click.”

 

And that was when it had been on the market in this version for only a few months. You could see how – if a bad actor had a version, and trained it, and then kept improving it, and refining it, tuning it – just how effective it would be. That is a very real threat. Now you combine that with things like deepfakes, and other technologies, and voices, so it really sounds like the voice of your boss calling. It’s really their personality, using words they use, responding to you as they have. It happens to follow your social media network, so it knows when you were with them and can cite those instances. You’re in trouble.

 

[0:39:23] GUY PODJARNY: Yes. What do you do? I share the concern. I did a little workshop at my kids’ school, a secondary school, and within 10 minutes, we’d generated a video of Taylor Swift saying something in her voice. ChatGPT created text in her style, a voice sampled from a podcast created her audio, and a bunch of other tools – sort of D-ID, and ElevenLabs – created the actual video clips. It was impressive. A good conversation there in the school, talking to the children about both the power of AI and the negative implications. But generally, I am seeking good solutions here. What do you do about it as a security leader? The concern is real, the phishing is real. I mean, is this a training solution? How do you prepare for this in the organization?

 

[0:40:09] SAM CURRY: Training is not the solution. Training is part of the solution, because you never go without it, for two reasons. One, you’re regulated on it. Two, it’s always smart to have people be more informed. You can try to even gamify things, have them be more active, but you don’t want to be a complete pain, right? It’s got to be practical and usable. But the real answer here is, we have to stop being terrified of what could be done, and deal with what is being done, even when the rate of innovation is very, very high. What do I mean by that? I’ve been in cyber, or as we called it, InfoSec back in the day, for 30 years. The doomsday scenarios have existed; most have not happened, the vast majority, 99.99%. That doesn’t mean that it will always be that way. But I used to go to conferences like RSA in the early days, back in the mid-90s and late 90s.

 

Somebody would put their hand up and say, “I have come up with an amazing security control.” Everyone’s like, yes, and then somebody finds one corner case that breaks it, and it never goes anywhere. They’re all so depressed. The fact of the matter is, most things are in the middle. We can talk about hypotheticals of what the bad guys could do. What we have to focus on is what they do do, because there’s an economy of scale associated with doing it. It may seem trivial in the proof of concept; it may seem trivial in the execution. But until they actually do it, you don’t have to do much about it. The first cases will break through, fine. But we can’t be preparing for every possible application. That may sound really weird, and someone’s out there going, “Oh, hell, no.” But at the end of the day, you’ve got to deal with what really happens, and there are controls for this.

 

Let’s take your scenario. Not Taylor Swift, but let’s say somebody tries to be your CFO, and says, “Go buy gift cards.” Okay, got that. The way to deal with that is that you have very specific ways in the company that money can move, and it never moves in other ways. So even if the real CFO called you and asked for gift cards, you simply can’t do it. You put business controls in place to deal with that. The terrifying scenario that gets you in the gut, and that could happen right now for low cost – you can actually put a relatively simple business control in place to avoid it. You will do that; a few companies will get hit when someone does it. But you know what, that’s the main reason it hasn’t got that sophisticated. Because relatively simple controls can prevent it for large companies, certainly. For very, very small companies, [inaudible 0:42:39] – they used to refer to something called the security poverty line.

 

Below a certain level of size, company, and income, and in some verticals, there’s simply not even a security person. In many cases, not an IT person. For those companies, it’s a much more realistic threat. Let’s say you’re, I don’t know, a small restaurant owner with three restaurants in a town, right? I’m actually thinking of one right now. Your accountant calls you and says, “Hey, the IRS is after you, you’ve got to pay $15,000 in the next 24 hours, or else.” You’ll do it. How are you supposed to prevent that? That’s much more real than the Taylor Swift scenario applied to a large bank.

 

[0:43:20] GUY PODJARNY: Yeah, I think those are good practices. I think there’s a broad application of the suggestion you just made. For the bank, I think it’s relatively straightforward. But in practice, you can also do this by applying common sense on who gets access to what. Don’t have passwords, for instance; work with two-factor authentication in those cases, so you can’t really give someone access to it. Then, I guess, explaining and teaching people not so much about the fraudulent email, but rather about the ask, about the oddity of the ask itself.

 

[0:43:47] SAM CURRY: You’ve also reminded me. I tell my kids that Taylor Swift will not contact them. So, yes.

 

[0:43:52] GUY PODJARNY: It was someone’s recommendation, and well, I don’t know if we’re going to use it. We actually defined a safe word in the family. One thing to use for it, which kind of – I don’t know if anybody will remember it if the situation actually takes place, but it’s a good exercise to even just remember to ask it, or notice the oddity –

 

[0:44:08] SAM CURRY: Yes, a safe phrase to trigger it as well.

 

[0:44:11] GUY PODJARNY: No, it wasn’t really an “I’m in distress” mode. It’s more, if someone wants – if a kid gets that, and it asks for me to meet them at some spot, or to send something for them, or to do – then they know it seems like an odd request that it would –

 

[0:44:23] SAM CURRY: You have to practice that, right? You got to tabletop it. 

 

[0:44:26] GUY PODJARNY: Yes, indeed. I need to do it. I’ve not done a red team tabletop on the kids yet.

 

[0:44:30] SAM CURRY: That’s awesome.

 

[0:44:32] GUY PODJARNY: I don’t know that I’ll get to that anytime soon. We’re kind of approaching the end of time, so I’d love to just sort of try and wrap up. We’ve tried to unpack here the high-level excitement about AI, the acknowledgement that you can’t stop the train, you have to accept that you’re doing it. We talked a little bit about the separation of AI for security, and its positive opportunities, and maybe the necessity because of the AI arms race, and talked a little bit about security for AI, your top two or three concerns there.

 

If we bring it down to the world of the very practical, a situation that many CISOs are in today, some groups in their team – I mean, even putting aside the use of AI, because I think you actually gave a good answer to it. More people want to add AI to their apps; they want to add, whatever, a chatbot on something that is not public data, corporate data, in their system, and they want to run it. What’s the primary, the kind of high-level, actionable advice that you would give someone in that position?

 

[0:45:34] SAM CURRY: The first thing I would say is have first-hand knowledge of it; like, play with it. I can’t stress enough how well-directed play is a good tool for learning, and your brain is a place where ideas meet and recombine. That happens when you’re reading, and listening to a podcast, and when you’re actually using a tool. So go do that. Personally, go do that. That’s number one. Number two, convene a group of people, and socialize the ideas; don’t turn up with the answers. Number three, start a project and make sure it’s business-driven. It’s not just for the sake of doing it, and it’s also got security present.

 

For instance, most of the companies I’ve worked at have had multiple places where there are knowledge bases, and tools, and competitive battle cards, and PowerPoints, and brochures, and training material. But after three or four years, try to find something. I mean, really, try to find something. Especially if you’re a new employee. [Inaudible 0:46:35]. You’re like, “Oh, now I’ve got to go see a customer when they’ve asked about how we stack up against that company.”

 

[0:46:42] GUY PODJARNY: It’s a little bit funny, because like one of the solutions at the moment is to create a chat interface to search that data.

 

[0:46:48] SAM CURRY: Precisely.

 

[0:46:49] GUY PODJARNY: But that said –

 

[0:46:50] SAM CURRY: But everybody is trying –

 

[0:46:52] GUY PODJARNY: – that data gets lost in the system. You might not even know that you need to search for anything.

 

[0:46:55] SAM CURRY: Yes. By the way, we had this exact problem with search. We tried to build a Google for the inside, or whatever – an AltaVista for the inside, if you go back far enough, right? But a lot of those projects are business-driven. If they can produce results internally on the internal data, fantastic. So now, go socialize it. That’s a business-driven thing. You can put real metrics and a business case around it, and you can have security present at the table. Because now, you’ve got to ask the questions: who can get to it? Under what conditions? How do you authorize it? Where can they get to it from, et cetera? What are the APIs that plug into it?

 

Can it see everything? Should it? What can it see? If a bad guy got in and got the interface, they could say, show me all contracts. My advice is: first start at the simple level of understanding it, and then go right through to the point of having a project, and then – this is the key point – seek external communities to either advise and validate, or to socialize the ideas of what you’re doing and see what they’re doing. Because you’re not the only people going through this; every company’s going through this right now. And governments are going through this right now, by the way. Imagine what it’s like to do this for a national government – the risks to things like national security, but also the opportunities, are huge.
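Sam’s “show me all contracts” question is essentially an authorization problem in front of retrieval. As a minimal sketch of the idea, under assumed roles and a made-up document store (nothing here is a specific product’s API), an internal chatbot can filter what it retrieves by the caller’s existing entitlements before anything reaches the model.

```python
# Entitlement-aware retrieval for an internal chatbot: the model only ever
# sees documents the asking user could already read. Roles, documents, and
# the filtering scheme are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Document:
    title: str
    allowed_roles: frozenset

KNOWLEDGE_BASE = [
    Document("Q3 battle card", frozenset({"sales", "exec"})),
    Document("Customer contract - Acme", frozenset({"legal", "exec"})),
    Document("Employee handbook", frozenset({"everyone"})),
]

def retrieve_for_user(query: str, user_roles: set) -> list:
    """Return only document titles the caller is entitled to see."""
    visible = [
        doc for doc in KNOWLEDGE_BASE
        if doc.allowed_roles & user_roles or "everyone" in doc.allowed_roles
    ]
    # A real system would also rank by relevance to `query`; here we only filter.
    return [doc.title for doc in visible]

# Example: "show me all contracts" from a salesperson never surfaces the
# legal documents, no matter how the prompt is phrased.
print(retrieve_for_user("show me all contracts", {"sales"}))
print(retrieve_for_user("show me all contracts", {"legal"}))
```

The point of putting the check before retrieval, rather than relying on the model to refuse, is that prompt wording can’t widen access: the chatbot simply never holds data the user isn’t authorized to see.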

 

[0:48:08] GUY PODJARNY: Yes. That’s the, I guess, the truth for most amazing technologies. This one just sort of landed on us.

 

[0:48:13] SAM CURRY: And that is the truth, yes.

 

[0:48:15] GUY PODJARNY: I think that’s great advice, demystify it, get a seat at the table. It’s going to be part of the conversation. But remember to keep it business-oriented, and then kind of collaborate not just for the people inside on the business, but also with the community.

 

[0:48:26] SAM CURRY: Have fun. Have fun with it, by the way. It is scary, but you’ve got to play with it.

 

[0:48:30] GUY PODJARNY: I think the demystification and playing with it is such sound advice. For what it’s worth, we’ve been at this for a bunch of months now. But even just having AI primer conversations within the company, opening people’s eyes towards it – it’s amazing, especially if you are one that is a bit more in it, to realize how many people do not have that exposure, and the level of understanding you get from just a few hours of playing with it versus not.

 

Literally, as we’re recording this podcast, there is an AI hackathon happening right now at Snyk, which will be fun. We’re already seeing a bunch of very cool initiatives. But above and beyond whatever comes out of that, just the fact that people are exposed to it, both from an ability and a risk perspective, is priceless.

 

Also, to steal the last couple of minutes – I don’t know, I’m asking you something that wouldn’t really have a two-minute answer. But I want to ask you to take out your crystal ball – well, we started with magic, so crystal balls are totally in line with that analogy – and say, in five years’ time, an eternity in the AI world, what do you see as the state of AI security?

 

[0:49:36] SAM CURRY: What was it Einstein said? I don’t know what the next war will be fought with, but the one after will be fought with axes and stones. I’m paraphrasing. It’s been a long time since we thought this would be a simple problem. It was the 1960s when someone said, “We’ll solve this in a couple of years.” We still don’t have a general intelligence. We still don’t have another sentience to speak to.

 

One of the biggest insights that I heard was from Daniel Miessler, who I mentioned earlier. He said, ChatGPT and LLMs have understanding, but they don’t have sentience, and I would add that they don’t take initiative. They don’t have reasoning. ChatGPT doesn’t come and ask you questions, right? That’s very important. Now, there are other forms of AI, other tools in the AI toolkit, that can do things like that. But we haven’t broken through a number of very critical barriers to intelligence. This is why a lot of the hype that I hear is really hype. But I think the most important thing, if we’re talking about the sort of three-to-five-year timeframe, if that’s what you’re concerned about, is it will become boring and commonplace.

 

We’ll hit what Gartner calls the plateau of productivity. We may be faced with two big things, and I say may because the thing about crystal balls is, it’s easy to predict a hundred years out; it’s really hard to predict a short period, right? But there are two big things that could come along. One is a Luddite-like revolution, where people want to break the mills and really put the genie back in the bottle. The other one is further breakthroughs in sentience, in reasoning, or in the ability for intelligence to take initiative.

 

One of the things that’s fascinating is the degree to which intelligence appears to be emergent in complex systems. Actually, not even as complex as we’ve always thought. Many more animals appear to be intelligent, many more forms of life – believe it or not, plants and ecologies seem to have emergent intelligence, which is taking the world by storm right now. Even some seemingly natural structures seem to have intelligence-like patterns. So hey, if this is a simulation, maybe the starting parameters for the simulation make consciousness easy, I don’t know.

 

But if something happens there, it would change the game, and then I would expect another wave of fear, I would expect another wave of concern, and, frankly, more disruption. It seems like disruptions are coming more quickly. The most likely thing is that plateau of productivity boredom, and we adjust and adapt what we’re doing. We start to find AI in every part of the business. We start to get used to it. I think we’ll probably still be in a hybrid workforce, most of which is about real estate prices, by the way, just putting it out there. But it’s those two things. Will we have a general revolution against it, or will we break through the true sentience barriers? Those are the things that would change it. That’s my high-level crystal ball sample –

 

[0:52:35] GUY PODJARNY: A significant deceleration or a significant acceleration.

 

[0:52:38] SAM CURRY: Exactly.

 

[0:52:39] GUY PODJARNY: Super interesting. I guess we’ll wait and see.

 

[0:52:41] SAM CURRY: Otherwise, it can be boring.

 

[0:52:42] GUY PODJARNY: We’ll try to keep the doors open, and the security locks still functioning. Sam, this has been great. Thanks a lot for joining us and sharing all these views.

 

[0:52:48] SAM CURRY: Thanks for having me. This has been fun. I appreciate it. Thank you.

 

[0:52:52] GUY PODJARNY: Thanks, everybody for tuning in, and I hope you join us for the next one.

 

[END OF EPISODE]

 

[0:52:59] ANNOUNCER: Thank you for listening to The Secure Developer. You will find other episodes and full transcriptions on devseccon.com. We hope you enjoyed the episode, and don’t forget to leave us a review on Apple iTunes or Spotify, and share the episode with others who may enjoy it and gain value from it. If you would like to recommend a guest, or a topic, or share some feedback, you can find us on Twitter @DevSecCon, and LinkedIn at The Secure Developer. See you in the next episode.

 

[END]

Sam Curry

VP, CISO at ZScaler

About Sam Curry

Sam has spent 2+ decades as an entrepreneur, infosec expert, and executive at companies like RSA, Arbor Networks, CA, McAfee, Cybereason, and more. Sam is dedicated to empowering defenders in cyber conflict and fulfilling the promise of security, enabling a safe, reliable, connected world. He is also a public speaker, holds multiple patents, hosts a podcast (Security All-In), sits on some select boards and publications, and is an InfoSec mentor.

The Secure Developer podcast with Guy Podjarny

About The Secure Developer

In early 2016 the team at Snyk founded the Secure Developer Podcast to arm developers and AppSec teams with better ways to upgrade their security posture. Four years on, and the podcast continues to share a wealth of information. Our aim is to grow this resource into a thriving ecosystem of knowledge.

Hosted by Guy Podjarny

Guy is Snyk’s Founder and President, focusing on using open source and staying secure. Guy was previously CTO at Akamai following their acquisition of his startup, Blaze.io, and worked on the first web app firewall & security code analyzer. Guy is a frequent conference speaker & the author of O’Reilly “Securing Open Source Libraries”, “Responsive & Fast” and “High Performance Images”.
