
The Secure Developer | Ep 88

The Changing Landscape of Security

with Dev Akhawe

About this episode:

In episode 88 of The Secure Developer, Guy Podjarny speaks to Dev Akhawe, Head of Security at Figma, the first state-of-the-art interface design tool that runs entirely in your browser. Dev pulls back the curtain and gives us a look at what security at Figma looks like. The relatively small organization has a culture where the security team earns the trust of developers and works in the open, which has resulted in far greater cohesion between the security team and developers. Along with this, we discuss some of the positive changes in how startups are thinking about security, the value of exposing people to different parts of an organization, the place of security champions, and having a curious mindset as a security professional.

Tags:

Application Security
AppSec
Open Source
Secure Development
Security Transformation

Episode Transcript

[INTERVIEW]

[00:01:35] Guy Podjarny: Hello, everyone. Thanks for tuning back in to The Secure Developer. Today we have a guest that I’ve wanted to bring on, even from his previous job at Dropbox. Now that he is head of security at Figma, I managed to get him on here to the show. That is Dev Akhawe. Thanks, Dev, for coming onto the show and kind of sharing some learnings here.

[00:01:54] Dev Akhawe: Thank you. Thank you for inviting me. Glad to be here. Thanks.

[00:01:58] Guy Podjarny: Dev, before I dig in, tell us a little bit about what it is that you do, and a bit of your journey of how you got into security and into this role.

[00:02:06] Dev Akhawe: Yeah, sure. Right now, I'm the head of security at Figma, and pretty much all aspects of the security program at Figma are my set of responsibilities: helping the company shape secure products, sharing with our customers about our security program, as well as just improving latent risk, security hygiene, and best practices across all aspects. Before that, I was leading a bunch of teams at Dropbox, working on abuse prevention, application security, infrastructure security, detection and response on the product, threat intelligence, and stuff like that. I got started in security almost by accident. I was never a person, I think, who was breaking into things as a kid.

I started this PhD program at UC Berkeley in computer science, and one of the things you have to figure out when you start a PhD program is what you want to work on and what you find interesting. I remember thinking that everything's interesting. I love computers, I love technology, I love all the things we can do with it. Security was really the only field where I felt I could do all aspects of technology and computing. I did measurement studies, I did formal methods, I did programming languages, I did usability studies, systems building. It was amazing. It was a great time, and I got into security for that reason. I like to say I got into security because I like building interesting things, and I think at that time, and even today, probably the best place to build the most interesting things is security.

It has some of the most challenging problems, and you have an adversary that you're continuously playing chess against. It's fun. I'm glad I picked working on security, and it's been fun ever since.

[00:03:49] Guy Podjarny: That’s cool. That’s kind of a great motivation to get into the space, that’s awesome. Dev, maybe we’ll start by just understanding a little bit the way security is structured at Figma and maybe even, we’ll kind of slip back into Dropbox to compare. Can you tell us a bit about how is the Figma security organization is structured?

[00:04:11] Dev Akhawe: Yeah. I mean, right now, the Figma security team is pretty small. We have two engineers and me, really being the security engineering function. Then we also have a team that is focused on governance, risk and compliance, focusing on getting our practices and policies audited and then showing our customers our success at achieving certifications like SOC 2, ISO 27001 and stuff like that. That's also important. It's not enough to just be secure; you have to also show your customers that you are secure. Both of those things are probably what we focus on.

Roughly speaking, we currently work on AWS security, securing our AWS setup and starting to use best practices, and then application security: shipping hardening frameworks, systems and platforms we use. For example, the security team worked on content security policy, sandboxing on the server side, and additional security best practices like stronger cookies and authentication logic.
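To make those mitigations concrete, here is a minimal sketch of what shipping a content security policy and hardened cookies can look like. This is illustrative only, not Figma's actual configuration; the Express framework, the policy directives, and the cookie name are all assumptions.

    // Illustrative sketch only, not Figma's real setup: a restrictive
    // Content-Security-Policy header plus hardened session-cookie flags.
    import express from "express";

    const app = express();

    app.use((req, res, next) => {
      // Disallow inline scripts and third-party sources by default;
      // real policies are tuned per application.
      res.setHeader(
        "Content-Security-Policy",
        "default-src 'self'; script-src 'self'; object-src 'none'; frame-ancestors 'none'"
      );
      next();
    });

    app.post("/login", (req, res) => {
      // Hypothetical session cookie. HttpOnly keeps it out of reach of
      // scripts (limiting XSS impact), Secure forces HTTPS, and
      // SameSite=strict blunts CSRF-style requests.
      res.cookie("session", "opaque-token-value", {
        httpOnly: true,
        secure: true,
        sameSite: "strict",
      });
      res.send("logged in");
    });

    app.listen(3000);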

A pretty broad space. I think one of the fun things about a startup is that the structure of the organization, the lines in the sand, aren't that strong. I would honestly say that the security team at Figma is also, in a way, a lot of the engineers at Figma; some of the smartest security people at Figma are not in security. They just know security so well, and that's just an amazing place to work. Being a security person where engineers are teaching you about security is absolutely amazing. So, a very different setup than what I experienced at Dropbox.

[00:05:53] Guy Podjarny: Yeah. Well, I think before we go there, that was almost my next question. But just to clarify: is the application security team separate from the security engineering team?

[00:06:05] Dev Akhawe: No. The security engineering team is a broader function. We do AWS infrastructure security and application security and even some aspects of corporate infrastructure security, cloud security, but it’s just all security engineering. It’s all building systems, and frameworks, and reviewing code, and advising developers on how to do secure development and how to keep Figma secure and customer data safe.

[00:06:32] Guy Podjarny: Who drives the backlog of the security engineering team? Who decides — I imagine there are different requirements coming from these different facets of security.

[00:06:43] Dev Akhawe: I think we as a team maintain a risk register of what our biggest risks are and what our biggest priorities are. Separately, we also maintain a backlog of tickets and asks from the rest of the company. But really, the execution of what we work on is driven by, first, sitting together and prioritizing what we believe are the biggest risks. The impetus for this might be the bug bounty program, pen tests, input from other engineers and leadership. Then deciding, "Okay, how do we want to tackle it, and in which order do we want to tackle it?" Then we work on that quarter by quarter. That's been working pretty well for us, I think. Again, at a startup, the set of things to do and the potential set of things to do is pretty large. As a manager and a leader, I feel like my contribution is to make sure the team is focused and executing. It's very easy to thrash by working on lots and lots of different things. But I try to get us to, "Let's pick something. Let's pick this as a priority. Focus on executing, and let's move to the next thing."

The decision on what to prioritize and the decision on what's our biggest risk is done as a team, but my role is to make those discussions happen and then make sure we focus and execute. And yeah, do it with a great, positive culture. That's really important to me too.

[00:08:01] Guy Podjarny: Awesome. I guess one more question, and then we’ll kind of maybe contrast this a little bit to what was the Dropbox reality. Can you describe a bit maybe the ratio or relationship between the AppSec folks and the engineering team?

[00:08:16] Dev Akhawe: Yeah. I think the ratio is tricky because Figma is a tiny company. I would say we are at about 5% security team, and we're talking about a company that's doubling every year, so it's hard to keep an exact count. Last I checked, I think we're at 70 or 80 engineers. It's a pretty healthy ratio, I think, in terms of that number. The application security function is probably two people working on application security and product security more broadly. For 70 engineers, that's roughly where we're at.

But again, we are growing. The security team is doubling, and the engineering team is doubling. It's an exciting time. The relationship, I like to think, is super positive and helpful. I feel like we learn from each other, we talk to each other. Obviously, there are some constraints. For context, I joined Figma at the start of the pandemic; I was in the first class of newcomers who joined Figma remotely. There were initial hiccups around how to build strong relationships over Zoom. I had learned how to do that with a beer or a coffee, but those were no longer an option. But building a strong, trusted relationship is something that we really value.

There are lots of different aspects to it that I'm happy to dig into. But yeah, the relationship is not very bureaucratic or defined in terms of processes. Figma is pretty small right now, so the relationship runs more on a level of trust, an almost social relationship, where hopefully everyone sees us as helpful, positive, and someone who adds value to the discussion. In my experience, most teams are actually reaching out for help. One of my core theses is that most engineers want to write secure code; most engineers want to do the right thing.

If the security team is helpful and seen as a positive force, one that is going to help them do their work more securely and explain it to them in a way they understand, developers reach out for help, right? If you look at Y Combinator's Hacker News, any security topic is just so hot; every developer has opinions, and thoughts, and questions. That might be a privilege of the Silicon Valley population, but developers are super curious about security and want to learn more. They have their own lessons and learnings from previous jobs, as well as the Internet. We come in as people who are experts, so they love talking to us. It's been really positive and really great.

[00:10:57] Guy Podjarny: That’s awesome. I agree, that size, you don’t really need sort of formal relationships. There’s enough personal familiarity there. Out of curiosity, what did you land on as a replacement to sort of sitting down and having coffee with somebody. What works best for you when you form a relationship over Zoom?

[00:11:16] Dev Akhawe: Yeah. I mean, it's been tricky. I'm almost finishing a year at Figma, and there were many theories I had, but in hindsight, the honest answer is that the thing that has worked best is working together on an incident. If you have an incident at a startup, it doesn't necessarily even have to be a security incident, right? Reliability, availability, any incident, anything where there's a fire and you're putting it out together. Man, that builds a social bond. You've been in the trenches together, as one could call it. That's a strong bond, and it's hard to top. Not to say we want to have incidents, but when you do have them, squeeze that lemon.

[00:12:06] Guy Podjarny: That's pretty awesome. You can probably even apply that learning a bit more concretely, because it does imply something. For example, when there's an incident, you can be tempted to keep the distraction light and say, "Hey! Can I just deal with this without involving the other group?", whether that's the dev side or the security group. But there is a plus, maybe indeed the lemons-to-lemonade thing, in saying, "Well, make sure that at least if this happens, you use it as a collaborative opportunity."

[00:12:38] Dev Akhawe: Yes. I mean, I personally feel even more strongly about this, and maybe again, this is particular to smaller companies. But when we started the team, one of the exercises we did was a values exercise, right? I'm lucky enough to work with a lot of smart people on the security team, and really, they are much smarter than me at figuring out the technical choices we need to make. But we wanted to make sure that the team agrees on values. One of the values we chose is that we will earn trust, not require it through authority or a policy that says you must stop for the security team. We'll earn the trust of the rest of the company.

Second, openness. I think some security teams will say, "We can't tell you. You need to do this, but we can't tell you why," or "Something is happening, but we can't say." It's secret. Sometimes that is necessary, but often it's not. There's real value in a security team that is open to everyone in engineering, open to, "Hey! You can come in and help us out," or "You can come in and correct us." One, we learn a lot from the engineering team, because often the engineers on the team are the ones closest to the problem. But second, it creates a culture of trust amongst the different teams. The biggest currency for the security team is how much the rest of the company trusts the security team. If the rest of the company is trying to figure out how to bypass it, there's no way [crosstalk 00:13:57 – 00:13:58]. Yeah, exactly.

[00:14:00] Guy Podjarny: Let's indeed go back a little bit and contrast. Before this, you were at Dropbox. Dropbox, clearly a substantially bigger company. Tell us a bit about the layout of the security organization there, maybe with a bias towards the AppSec side of the fence, which is where you were most focused.

[00:14:20] Dev Akhawe: Yeah. I started at Dropbox at a very similar size in terms of the security team. On a side note, one of the more gratifying things as a security practitioner has been that the first security hire at startups seems to happen at a smaller and smaller size over the years. Back in 2013, 2014, the first security hire happened at around 300 people. Now, I know startups hiring their first security person at 30, which really shows how important it has become for companies to have secure software.

But yeah, I started at a similar size, focused on application security. I think at Dropbox, what was interesting was just the sheer number of users, and how attractive a target that made it. Again, Figma is currently going through that growth spurt, but Dropbox, when I joined, was already really popular and a very attractive target. And the technology stack was very different. When I think of application security, a lot of the things that we are now able to use at Figma just didn't exist then. Something like Snyk, or just a tool to scan your dependencies: back in 2013, 2014, oh my God, you had to do it yourself. Something like sandboxing: if you wanted to sandbox some code on the server side, you had to do it yourself.

I think the Dropbox security team in the AppSec space had to be larger and had to do more things, partly because the ecosystem was much younger in security, and partly because Dropbox as a product was very widely used and also had a desktop client, an iOS app, an Android app, a massive server-side application, and a very popular API. So the breadth and scope of things there was much bigger than right now at Figma. Obviously, I hope Figma grows and becomes that successful. But it was a fantastic learning experience to get that whole breadth and gamut of things to secure.

But yeah, the application security team is where I started, initially as an engineer, then staff engineer, and then I became the manager. Then over time, I grew into more functions. In terms of values, very similar. Again, something I'm passionate about: open culture, let's earn the trust, right? Ship software rather than act purely as an advisory function: ship mitigations like content security policy, ship dependency scanning, ship sandboxing and stuff like that. That's an approach that we have continued at Figma, and that's been really positive. I think the thing that was most noticeably different about Dropbox for the application security team was, again being an earlier company, AWS.

We were a hybrid company, so we had our own datacenters. We were using internal tools, like a classic shop, so we had Phabricator installed internally; we didn't use GitHub. All these sorts of things mean that today I can integrate Snyk with GitHub with like [inaudible 00:17:07]. But integrating something in those days, when your core systems and CI system are only reachable over VPN, is a much more complicated problem. Some advantages: you got to build more stuff. Some disadvantages: a lot of the ecosystem was harder to start using trivially.

That's been really exciting about moving to Figma: the tools available to a modern security team are amazing. You can set up a very solid security program very rapidly today. That's great. It makes me very optimistic about the future of software security.

[00:17:39] Guy Podjarny: That's awesome, and that's a really great perspective, just understanding the progress that the industry has made. We keep beating ourselves up about all the things that are not right yet, but there has been substantial improvement, and improvement that increasingly orients towards the software development side of the fence. I'm curious, beyond the history. You made a few points here, I think. One is the history: the fact that this organization was built up and the tech stack was built in a different era. There's not much you can do about that; you come into a company and see the reality, or not.

The second was the size, which is also probably something that just changes over time, and in short order Figma will reach each of those sizes as well. But the third thing is interesting, which is that Dropbox has a bunch of those different clients and all that. For example, Figma, I believe, is much more a single app on a server that you work with. Would you structure your security organization differently on that alone? If at Figma you had an iOS client, and an Android client, and a desktop client, there are conversations oftentimes around whether you should structure your team to be more aligned with those deliverables and packages. Or do you still keep teams more mission-oriented? How do you think about that?

[00:19:04] Dev Akhawe: Yeah. I mean, one, again, it's a very long topic, right? Organizational structure for security is a thing that gets discussed a lot, honestly, even in the broader industry. I think of it in two different steps. One, I think about organizational structure as a means to an end. You are looking to get some particular outcomes. If your structure isn't aligned with the outcomes that you want, something is probably off. One of the exercises I remember doing that was very effective was: let's look at what our biggest risks are, the things that we're really worried about, and then let's look at what our best people are working on, right? It was pretty apparent very quickly that our best people were not working on our biggest risks or biggest problems. It kind of makes sense, because over time, your best people have reduced the risk; that's why they are your best people.

You need to be continuously re-evaluating your organizational structure and seeing whether the highest performers are working on the most important problems. In my experience, they also want to work on the most important problems; that's what motivates them, what excites them. Organizational structure is about: what is our biggest risk, what are the goals that we want to achieve? Let's structure around that.

I think the mission orientation is very useful, but only in as much as it helps to serve those ends. Forcing an AppSec team to exist just because everyone has an AppSec team, I don't think that's the right structure. It depends on the particular company, right?

Dropbox, for example, had an AppSec team, but there are other companies that are a pure mobile play, so having a team that is dedicated to mobile security makes complete sense to me. If you're an Uber, where everything for all of your users is happening through your mobile app, it makes complete sense. The Snap security team, I don't know, but it probably does have a focused mobile security function. Understanding the business, understanding the risks and the outcomes that you want to achieve, and structuring your organization around that is probably the most important thing.

The second thing I think about when it comes to organizational structure, and this one is more controversial, is what I call atrophy; I think it's the biggest danger of organizational structure. The AppSec team starts thinking, "Yeah, we are the AppSec team, and those other people don't know what they're doing in security." Then someone else is like, "Well, we are the Mac security team, and all these other people in security don't know what they're doing."

I think these organizational atrophies are very dangerous. In the end, we are one security team and, in the end, we are one company. There is some value in just shuffling things around to make sure people don't atrophy into structures that were made a while back to achieve particular aims.

This one is tricky because, again, no one likes reorgs and shuffles. But moving people around between teams, and especially encouraging it for the people who want to expand their horizons, is a really positive outcome, because you don't want these sub-silos inside the team. In the end, all of us are working for security and for the company.

[00:22:12] Guy Podjarny: There is a lot of wisdom in the stuff you said. First of all, I want to say that I love that [inaudible 00:22:17] sentence there: you need to be re-evaluating the organizational structure so your highest performers are working on the most important problems. That's just gold, I think. If all managers kept thinking about that, the world would be a better place. I very much relate to switching people around so that people learn skills; you don't have to reorg the whole thing. Do you encourage that between dev and security as well? Is the picture in your mind mostly moves between the different security teams, or is it more transitioning people in and out of the security team?

[00:22:47] Dev Akhawe: We did both, I think, at different levels. One thing: some of the strongest security engineers we had at Dropbox were internal transfers from the dev team, from the software engineering functions, people who were always passionate about security. They had been there for a while, were looking for the next challenge, and were excited to come work on security, right? One of the most popular security projects to come out of Dropbox is the zxcvbn password meter, which is widely used; most companies that you've seen have probably used it. It was written by a software engineer at Dropbox as part of a Hack Week project. He was always interested in security, and I was excited that, in the end, he moved to security and worked with us for a couple of years. That was amazing.
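For reference, zxcvbn is an open source strength estimator that scores a password from 0 to 4 by modeling how guessable it is. A minimal usage sketch follows; the acceptance threshold and the sample inputs are illustrative, not Dropbox's policy.

    // Minimal zxcvbn usage sketch; threshold and inputs are illustrative.
    import zxcvbn from "zxcvbn";

    function checkPassword(password: string, userInputs: string[] = []): boolean {
      // userInputs lets the estimator penalize e.g. the user's own name.
      const result = zxcvbn(password, userInputs);
      if (result.score < 3) { // scores run 0 (guessable) to 4 (strong)
        console.log("Weak password:", result.feedback.suggestions.join(" "));
        return false;
      }
      return true;
    }

    checkPassword("correct horse battery staple", ["dev", "figma"]);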

The other direction has also been really effective. I haven't seen an example of someone who really loves security moving to a dev team full-time, but the devs who came to security and then moved back to dev teams again, that has worked well. The thing that I encouraged, almost calling it an OKR, was what we call rotations. Let's say there's a new product being built. At Dropbox, I was on Dropbox Paper for a while. We would just do a rotation, where someone from the security team goes and sits with that dev team and works on anything. It doesn't have to be security; just be a part of the team for a quarter or a sprint.

That was super effective, because nothing beats the relationship building when you are in the trenches, working on something, building something new. You understand the developer's perspective, the product manager's perspective. I call it the metric of success if, when you come back to the security team, the dev team still invites you to their social events, to their team events, because that means you really jelled with the team and became a member of the team. That's been super effective, I think. The dev team learns the security team's perspective. But also, more importantly, the security team learns the dev perspective.

It's very easy to be a security team and opine, "Well, developers should do this and that." Then you try to actually ship a product and feature and see everything: the user feedback, the design choices. The complexity that goes into building a new product is amazing. I'm always humbled by it, and I think more security engineers should experience that.

[00:25:08] Guy Podjarny: I love the embedding approach, and I think it works well. In the vicinity of embedding, there's also the practice of security champions. What's your view on it? Have you employed it? Do you employ it? Do you like it? Do you dislike it?

[00:25:24] Dev Akhawe: Security champions, absolutely, we have employed them. There are different ways that companies talk about security champions as a program. I think we always had security champions on different teams. We actually identified people and talked about them during reviews of risk: are there teams where we have high risk and we don't have a security champion? The concept of security champions has been very effective. I personally am cautious around creating a program, creating a forced education thing, or saying, "Well, if you come to this training, then you're a security champion," and stuff like that.

What I've seen to be the most effective, going back to what I said earlier, is that some developers just really are interested in having their code be secure. Using that internal motivation, encouraging it, rewarding it, giving feedback to their manager during performance review cycles: this person helped prevent tremendous risk for the business; this person helped make sure this thing shipped on time and without any security issues. Stuff like that has been really positive. I've struggled with structured trainings and stuff. I find that security champions learn the most because they're deep in the code and we work with them together and build that relationship.

[00:26:39] Guy Podjarny: I love that view. If I can echo it back, it boils down to: security champions are well and good, and they might be a tool to a degree. But what you should be wary of is stopping at, "Hey! I have these security champions." You want to still say, "It's a step, a step in a journey in which developers do represent security activities and security knowledge, and actually implement security controls and capabilities in the product." And you want to make sure you take it all the way to actually celebrating that, acknowledging that, whether or not you did it through security champions. That's the eventual outcome. Is that correct?

[00:27:14] Dev Akhawe: Yeah. I think really, the celebrating and rewarding is something that is really important. In the end, developers caring about this out of the goodness of their hearts is only going to work for so many quarters. Everyone wants to be rewarded and celebrated for their work. So talking to their managers during performance reviews and saying that this person really helped the security of the company is also pretty important, I think.

The other thing I would say, the thing I was also trying to add, is that the champion program is good, but you have to be really respectful of the champions' time; they're not a part of the security team. Any time we ask them to do something or ask for their help, being very respectful of their time, because they're helping us out, is really important.

[00:28:07] Guy Podjarny: Yeah. Very important indeed. It's not just free cycles; they have their own job as well. I wonder, Dev, if we can flip this around a little. You have all these different techniques and approaches that work. If I flip it around, what are a couple of learnings you learned the hard way during these journeys? Something you wouldn't repeat, or would do better?

[00:28:36] Dev Akhawe: Man, a lot. I mean, life is full of these lessons, right? Probably the most important lesson I learned the hard way: I was an academic, right? I finished my PhD and started at Dropbox. I had this urge, and I honestly still do, that when I see a bug, I used to say, "Oh! Why didn't they just do it this way?" or "Why would anyone write something like this?" It's been my experience repeatedly that almost always, there is complexity that I didn't think of, that there is a reason why someone did something, even when at first glance I was thinking, "Why would anyone ever do this?"

I think the lesson is probably humility when seeing something new, and curiosity: if I see something that doesn't make sense to me, rather than saying, "That doesn't make sense," I should say, "Why? There must be a reason. What could be the reason? Let me try to learn it." Having that sense of humility and curiosity is something that I learned the hard way. It's so easy to forget, but it's the most important lesson. Thankfully, security is a field where you have to be humble, because your life as a security engineer is designing defenses that sooner or later someone breaks, either in a pen test or a bug bounty. So you're repeatedly taught to be humble.

The lesson around humility and curiosity is probably the biggest one, but there are a bunch of other lessons. Something as simple as: I once set up a process around threat modeling and security reviews. I set up this whole form where you could structure everything; you press this button, and if you say, "I'm working on web," then we ask you these five questions. This interactive application completely failed, because it turns out threat modeling as an exercise is collaborative. You don't do it alone, filling out a survey-questionnaire sort of thing. You do it by working with engineers, and ICs, and product managers, and designers, and thinking together.

A Google Doc, even though it's not interactive in that way, is a place where you can comment amongst each other and collaborate together, and that is much more effective than a structured questionnaire. We did this work, it completely flopped, and we had to throw it away and go back to Dropbox Paper or a Google Doc, so that people could collaborate, work together, and leave comments, and questions, and flags, and stuff like that. That's another, smaller example maybe.

[00:31:07] Guy Podjarny: No, that's an awesome learning. The best ones are obvious when you say them out loud, and yet you didn't really think about them before. Well, I guess Figma demonstrates that even a collaborative process can be helped by the right tooling and technology. But it does mean you can't structure it terribly much; you do need some degrees of freedom in there. Maybe pulling a little more on this thread, because I think there's a ton of experience here, and I don't know if we can extract enough of it in one podcast episode. The comment on threat modeling was a great learning; what are some other tactics or tools that you feel were most effective, that you've carried in your toolkit?

[00:31:52] Dev Akhawe: Again, this is controversial, but a rubric or a litmus test that I think of is: I try to apologize whenever I have to tell someone to do something via training, or knowledge, or consulting. I want to build a system where you can't write insecure code. I want to have an ORM where you can't have SQL injection. I want to use CSP and React or something like that, so that you can't have XSS. The goal of a security team is to make sure that we can reason about invariants, to be sure that this cannot happen, that I have enough frameworks, and platforms, and monitoring in place. Any time I find myself asking a developer, "Make sure you remember to do this," or "Remember to do that," that is not a great outcome, because relying on humans is almost always a bad idea.
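A small sketch of that "can't write insecure code" idea for the SQL injection case: the unsafe version splices user input into the query string, while the parameterized version, which is what an ORM issues under the hood, keeps data out of the SQL grammar entirely. The node-postgres driver and the table and column names are illustrative assumptions.

    // Sketch: parameterized queries (what an ORM does for you) vs. string
    // concatenation. Driver, table, and column names are made up.
    import { Client } from "pg";

    const client = new Client(); // client.connect() omitted for brevity

    // UNSAFE: user input becomes part of the SQL grammar, so a value like
    // "' OR '1'='1" changes the meaning of the query.
    async function findUserUnsafe(email: string) {
      return client.query(`SELECT * FROM users WHERE email = '${email}'`);
    }

    // SAFE: the driver sends the value out of band; it can never be parsed
    // as SQL, so there is nothing to "remember to escape".
    async function findUser(email: string) {
      return client.query("SELECT * FROM users WHERE email = $1", [email]);
    }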

Not that humans are the problem; I make mistakes. I do the wrong thing when I'm coding. I forget something. So any time I find myself saying, "Oh! Remember to do this," or "Make sure you check this," I try to rephrase it as, "I'm sorry that I don't have a better answer, that I haven't automated this, that I haven't written the system that will automatically prevent this bug. So, while I try to figure that out, you need to take care of this." This goes everywhere in security. Phishing is another example. Training users to watch out for the URL, or to watch out for your password and where you type it, is an anti-pattern. Security keys are the right pattern, where I don't have to think about whether or not I get phished. I cannot get phished.

At Figma, we have deployed security keys everywhere, but for a while in the middle of deploying them, I literally used to apologize, saying, "I'm sorry, we haven't deployed security keys yet. You have to be cautious around phishing." That mental trick has been very effective: remembering that really it's on the security team to make sure that the systems we use and the systems we build cannot misbehave, rather than telling developers they need to do these hundred things.
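On the security-key point: the reason they beat training is that WebAuthn credentials are scoped by the browser to the registering origin, so a look-alike phishing domain has nothing it can ask the key for. A heavily simplified browser-side sketch follows; the function name and arguments are invented, and server-side challenge issuance and signature verification are omitted.

    // Simplified WebAuthn sign-in sketch. The credential is bound to the
    // relying party's origin, so a phishing site cannot request or replay
    // it. Names are invented; server-side verification is omitted.
    async function signInWithSecurityKey(
      challenge: Uint8Array,     // random bytes issued by the server
      credentialId: Uint8Array   // ID stored at registration time
    ) {
      const assertion = await navigator.credentials.get({
        publicKey: {
          challenge,
          allowCredentials: [{ id: credentialId, type: "public-key" }],
          userVerification: "preferred",
        },
      });
      // The assertion is sent back to the server, which verifies the
      // signature against the public key stored at registration.
      return assertion;
    }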

[00:34:04] Guy Podjarny: I love this from an empathy perspective: you're reminding the person you're talking to, "I'm resource constrained as well. I can't do everything that I want to do." It's an acknowledgment that you could have helped in theory, so we're in it together. And yet there's the last piece of the sentence, which is: you still need to do XYZ, because in the meantime, we need to be secure even though we haven't built the system that simplifies it. I think that's pretty awesome.

You've had some great answers, Dev, so I have to ask you the tough one that I don't think really has an answer. How do you know that you're doing security right?

[00:34:48] Dev Akhawe: I think let's break it down into what I call metrics of progress around a particular strategy, and key performance indicators on whether security is doing well. A lot of the time those aren't separated and it becomes confusing, right? But let's say you've decided that you want to ship CSP, or you want to sandbox [00:35:10] code on the server side. Have a progress metric: what percentage of our code is not sandboxed, how much of our code is not using modern frameworks, how much of our code is still relying on old CSRF protection, how many of our repositories are not doing dependency checking?

That is a very concrete, measurable outcome: these are the things that we believe are best practices that will reduce security risk, so let's just measure how much progress we have made against these end goals or final targets. You're right, that won't tell us how well we are doing on security as a whole, whether I'll be more secure. But it can tell us how well we are doing on the execution of things that, let's assume, will improve our security. I think that's one thing we should all do. A lot of times, I've heard people say, "Well, security is hard to measure, so we have no metrics." I don't think that's a fair outcome, right? Make a strategy, and then you can measure progress on that strategy.
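A toy sketch of one such progress metric, coverage of dependency scanning across repositories; the repository list and field names are invented for illustration.

    // Toy progress metric: what fraction of repos have dependency
    // scanning enabled? Repo names and data are invented.
    interface Repo {
      name: string;
      hasDependencyScanning: boolean;
    }

    function scanningCoverage(repos: Repo[]): number {
      const covered = repos.filter((r) => r.hasDependencyScanning).length;
      return (covered / repos.length) * 100;
    }

    const repos: Repo[] = [
      { name: "web-app", hasDependencyScanning: true },
      { name: "desktop-bridge", hasDependencyScanning: false },
      { name: "internal-tools", hasDependencyScanning: true },
    ];
    console.log(`${scanningCoverage(repos).toFixed(0)}% of repos covered`); // 67%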

Separately, for how we are doing on security itself, I think the goal there is to just look for insecurity. This is a lesson learned in many other fields. In safety too, you want to look for the things going wrong, and you want to encourage people to report things going wrong. In AppSec, for example, that's a bug bounty program, an open vulnerability disclosure program that tells people, "Hey! If we have bugs, tell us," and then using that to measure the success of your program, using that to say how well we are doing. Again, that one is tricky, right? What's the metric there? Because if your metric is how many reports, then it's easy to game. Especially since a security team is a bunch of people who love breaking things, they'll break that metric.

So paired metrics are another really critical tool in any manager's toolkit. For example, maybe you have a metric for how many valid reports are submitted to the bug bounty program, paired with: our bounty amounts should keep going up, so that you're encouraging people to report with higher and higher incentives. Then you also say, "But we'll measure how many valid reports we get," and that number should keep going down. That's one. Those are the outcomes to measure against: how many employees got phished, how many compromises of employee laptops or malware did we detect. But you have to pair that with how many laptops have security monitoring built in, because one easy way to not detect malware is to not have any security monitoring.
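The paired-metric idea can be expressed very simply: track an incentive that should not fall alongside an outcome that should not rise, and flag any quarter where the pair moves the wrong way. A toy sketch; the numbers and field names are invented.

    // Toy paired metric: bounty payouts (incentive) should hold or rise
    // while valid reports (outcome) hold or fall. Data is invented, and
    // the check assumes at least two quarters of history.
    interface QuarterMetrics {
      quarter: string;
      maxBountyUsd: number;  // incentive: keep raising this
      validReports: number;  // outcome: want this trending down
    }

    function pairedMetricHealthy(history: QuarterMetrics[]): boolean {
      const prev = history[history.length - 2];
      const curr = history[history.length - 1];
      return curr.maxBountyUsd >= prev.maxBountyUsd &&
             curr.validReports <= prev.validReports;
    }

    console.log(pairedMetricHealthy([
      { quarter: "Q1", maxBountyUsd: 10_000, validReports: 12 },
      { quarter: "Q2", maxBountyUsd: 15_000, validReports: 7 },
    ])); // true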

I think pairing metrics is probably the key tool when measuring outcomes. But also, going back to my earlier point, this doesn't need to be that complicated. We know how people get hacked; we know phishing [inaudible 00:37:46] appsec flaws and stuff like that. Rather than focusing on measuring that and spending a lot of time there, let's get to a place where people can reason, "Yeah, we won't be phished because we have deployed security keys," or "We won't have XSS because we have deployed CSP," or "We won't have vulnerable dependencies because we have this system in place that automatically scans and updates dependencies." Those are engineering projects, and let's just measure them like any other engineering projects.

Maybe we get to a world where we then have to really measure security, but most breaches are not happening because we don't know how to measure security; they're happening because we are not doing stuff that we know we need to do. Scan so that S3 buckets are not open to the Internet; make sure that phishing is not possible, through security keys.

[00:38:33] Guy Podjarny: I love that. It's similar but different to the conversations I've been having. I've seen three different means of measuring security, and I think you're touching on all of them. One is measuring security controls; I think you'd call it measuring progress against strategy. The two others I've seen are measuring the output of the tools, like number of vulnerabilities, and measuring red team outputs: finding actual exploits, or flaws, or almost pseudo-breaches. I like your approach. I see a lot of merit in pairing the latter two. You're saying, "Okay, look for vulnerabilities, because those are leading indicators of how many vulnerabilities you have in the system, but pair that with quality: how many of those are real, or how many of those are reported." Yeah, I think that can be a powerful combo.

[00:39:22] Dev Akhawe: Yeah. There's also measuring by capability maturity, right? You have frameworks like BSIMM or OpenSAMM, and you see: these are the capabilities we want to have. Then it again becomes a metric of how many you have covered, so you can track progress that way. Or we have internal risk metrics where we track risk in terms of simple qualitative colors, let's say red, yellow, green, blue, and we say, "This quarter, we will go from red, to yellow, to green," or whatever. I would still call all of those internal measures: you are internally measuring your own success and saying, "These are the capabilities and we have them."

What those don't tell you is whether you're actually effective. So make sure that you also have a different metric that is completely external. Something like: when someone looked for a bug, and we offered to pay $50,000 for any critical bug, no one could find one. Or we ran a pen test and no one could find anything. Separating out those metrics is really important. Use them as an input to your risk metrics. For example, if you're repeatedly getting bugs that are XSS, your risk metrics should say that the probability of XSS is very high. Then your goal should be: let's reduce that probability to low, and that should be reflected in the bug bounty reports or pen test results saying, "Okay, we are no longer getting XSS reports." That's another way I would think about it.

[00:40:39] Guy Podjarny: Yeah, they all combine quite well, because based on those learnings, you adapt your strategy, and then you measure progress towards it, for sure. Sadly, we're out of time; this is probably already one of the longer episodes. Before I let you go, Dev, one final question I ask every guest this year. If you fast forward five years, take out your crystal ball and think five years out about someone sitting in your chair doing the job, what would be most different about their reality? Not at Figma specifically, but in the industry.

[00:41:12] Dev Akhawe: I would say, let me break it down into maybe cultural, technical, and people. The cultural change I think will happen is that more and more people will say, "Security has to be a positive for us. Security has to be a team that solves for a yes rather than figures out a no." That cultural relationship between security and the rest of the company and the rest of engineering, with security being seen as an enabler and a positive force in the company, will keep growing. That, I'm excited for.

On the technical front, I touched on this, but the ecosystem around security just keeps improving. It's an exciting time to be in security; the different startups and tools are making it so we can do more and more powerful things. In particular, one of the areas I'm excited about is data. Security teams right now don't use data as widely as they should, because a lot of our tools haven't been great and logging was expensive. But with modern data infrastructure, for the same reason everyone else is able to use insights from data, security teams will also grow. We'll have logs and analysis, and machine learning models or something, that will be much more powerful on that front.

Even in AppSec, a lot of the tools around static analysis, dynamic analysis, and automated testing will just become more and more powerful. I'm excited for that. Around people: honestly, every year as we hire, I meet new grads and new engineers joining security teams. It's easy to forget, but 10, 12 years ago, new grads did not join security teams; new grads joined software engineering teams. But every year, new grads and new engineers join security who, and I feel like an old man now, are fresh out of school and know more about security than I did after my PhD. They're so smart, they're writing amazing systems, and they have a passion for fixing security. The security ecosystem will have a lot of really talented people doing a lot of amazing work, with passion and an engineering culture. That's going to be amazing too. Yeah, I'm super optimistic about the future.

[00:43:20] Guy Podjarny: Yeah, that's awesome, and that's very reassuring, because I do think you have a pulse on things. Hopefully that's a good crystal ball; I think all of it makes a lot of sense. Dev, this has been a true pleasure. Thanks a lot for coming onto the show.

[00:43:33] Dev Akhawe: Thanks a lot. Thanks for having me. This was a lot of fun. Thank you.

[00:43:37] Guy Podjarny: Thanks everybody for tuning in and I hope you join us for the next one.

[END OF INTERVIEW]

[00:43:45] ANNOUNCER: Thanks for listening to The Secure Developer. That’s all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you’d like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don’t forget to leave us a review on iTunes if you enjoyed today’s episode. Bye for now.

[END]

Dev Akhawe

Head of Security at Figma

About Dev Akhawe

Dev is the Head of Security at Figma, the first state-of-the-art interface design tool that runs completely in your browser. Before that, Dev was at Dropbox where he was a Director of Security Engineering, leading application security, infrastructure security, and abuse prevention for the Dropbox product. Dev also has a PhD in Computer Science from UC Berkeley, where his thesis focused on web application security.

The Secure Developer podcast with Guy Podjarny

About The Secure Developer

In early 2016 the team at Snyk founded the Secure Developer Podcast to arm developers and AppSec teams with better ways to upgrade their security posture. Four years on, and the podcast continues to share a wealth of information. Our aim is to grow this resource into a thriving ecosystem of knowledge.

Hosted by Guy Podjarny

Guy is Snyk’s Founder and President, focusing on using open source and staying secure. Guy was previously CTO at Akamai following their acquisition of his startup, Blaze.io, and worked on the first web app firewall & security code analyzer. Guy is a frequent conference speaker & the author of O’Reilly “Securing Open Source Libraries”, “Responsive & Fast” and “High Performance Images”.
