Redefining CyberSecurity

The Application Security Audit Adventure: Unpacking Penetration, Whitebox, and Blackbox Testing | A Conversation with Andrew Woodhouse and Dr. Mario Heiderich | Redefining CyberSecurity Podcast With Sean Martin

Episode Summary

Andrew Woodhouse, Dr. Mario Heiderich, and Sean Martin share their insights, experiences, and perspectives on blackbox and whitebox testing, the potential vulnerabilities in software, and the significance of a comprehensive security initiative.

Episode Notes

Guests:

Andrew Woodhouse, CIO at RealVNC [@RealVNC]

On Linkedin | https://www.linkedin.com/in/ajwoodhouse/

Dr. Mario Heiderich, Founder of Cure53 [@cure53berlin]

On Linkedin | https://www.linkedin.com/in/marioheiderich/

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin
____________________________

This Episode’s Sponsors

Imperva | https://itspm.ag/imperva277117988

Pentera | https://itspm.ag/penteri67a

___________________________


This Redefining CyberSecurity podcast features insights from Andrew Woodhouse, Dr. Mario Heiderich, and host Sean Martin, who explore various aspects of system and application security. Woodhouse introduces software composition analysis and the importance of security initiatives like ISO 27001. Dr. Heiderich discusses the roles in security testing, and the parallels between traditional QA testing and security testing methods. The use of C++ as a core language, the intricacies of managing large-scale software, and the complexities of auditing entire tech stacks are also highlighted. The discussion provides an overall comprehensive understanding of tech stack security tests and audit processes.

____________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel
📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

____________________________

Resources

White Box Testing – What is, Techniques, Example & Types: https://www.guru99.com/white-box-testing.html

____________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-cybersecurity-podcast

Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording as errors may exist. At this time we provide it “as it is” and we hope it can be useful for our audience.

_________________________________________

Voiceover 00:15

Welcome to the intersection of technology, cybersecurity, and society. Welcome to ITSPmagazine. You're listening to a new Redefining CyberSecurity Podcast. Have you ever thought that we are selling cybersecurity insincerely, buying it indiscriminately, and deploying it ineffectively? Perhaps we are. So let's look at how we can organize a successful InfoSec program that integrates people, process, technology, and culture to drive growth and protect business value. Knowledge is power. Now, more than ever.

 

Sponsor Message 00:53

Imperva is the cybersecurity leader whose mission is to protect data and all paths to it with a suite of integrated application and data security solutions. Learn more at imperva.com.

 

Voiceover 01:10

Pentera, the leader in automated security validation, allows organizations to continuously test the integrity of all cybersecurity layers by emulating real-world attacks at scale, to pinpoint the exploitable vulnerabilities and prioritize remediation towards business impact. Learn more at pentera.io.

 

Sean Martin  01:37

Everybody, you're very welcome to a new Redefining CyberSecurity Podcast episode here on ITSPmagazine. And, as you know, I help, or try to help anyway, organizations understand the role of technology and people and processes in building out security programs in a way that fits operationally with the business. So enabling the business and protecting it as it grows, not getting too deep into the weeds and missing the real point of why we even bother in the first place. And this topic was presented to me, and given my background in quality assurance engineering, I was super excited to talk about this in the context of security testing. So we're going to be looking at different methodologies for security testing applications and systems. And I'll just include the range of black to gray to white here. And we'll understand a little bit what those mean, and if there are other terms or shades that we can talk about as well, we'll figure that out. So thanks, everybody, for tuning into this. I'm thrilled to have Mario and Andrew on. They both have a different perspective on this particular topic and on seeing a program or project through to completion. And hopefully some tips and some stories to help us all kind of visualize a way to tackle this problem. So without further ado, a few words from each of you on who you are and what you do, and why this is an important topic. Mario, we'll start with you.

 

Dr. Mario Heiderich  03:23

Cool, thank you very much, and very happy to be here. My name is Mario. I'm the founder and director of a team of penetration testers, a penetration testing firm from Berlin, in Germany, and we're called Cure53. And yeah, we do pen tests, source code audits, and those kinds of things. And I'm usually the person you talk to for everything contractual, quality assurance, report handover, and those kinds of things.

 

Sean Martin  03:49

Love it. And Andrew.

 

Andrew Woodhouse  03:51

Hi. It's great to be here. My name is Andrew Woodhouse. I'm the CIO of RealVNC. So we do remote access software.

 

Sean Martin  04:03

And neither of you answered why it is an important topic. Why did you want, first, to do the program, the project? And then why is it important to you to share with others?

 

Andrew Woodhouse  04:17

I'll take it from my perspective. So the audit that I think we're going to be talking about today was part of a wider security initiative that I put in place for RealVNC. And that comprised a number of things that I'm sure are going to be familiar to your listeners. One of them was software composition analysis: we bought a tool in to look at our software, understand where we were using libraries that may be out of date, may be old, may have vulnerabilities in them, and help the developers fix those issues. And also highlight any libraries that had, for example, license risk, so there might be GPL licenses that maybe introduce risks into our commercial software that we have to be aware of. Another program was an ISO 27001 program, aiming to be ISO 27001 certified, which is a very large project. And at some point, we can maybe talk about why we're doing that and why we think that's important. And another one was basically bringing Cure53 in to audit our code. And not just from a marketing point of view to say, hey, look, we've done this, aren't we great, but to actually help us improve our software and our software development process as well.
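The software composition analysis Woodhouse describes, flagging outdated libraries and copyleft license risk in a commercial product, can be sketched roughly as follows. The dependency inventory, version feed, and license policy here are invented for illustration; a real SCA tool pulls this data from package manifests and vulnerability advisories.

```python
# Sketch of a software composition analysis (SCA) pass: flag dependencies
# that are outdated or carry copyleft license risk for proprietary software.
# The inventory and policy below are illustrative, not real data.

COPYLEFT = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}  # risky for proprietary linking

def audit_dependencies(deps, latest_versions):
    """Return findings for each dependency in the inventory.

    deps: {name: {"version": str, "license": str}}
    latest_versions: {name: str}, as an advisory feed would report them.
    """
    findings = []
    for name, meta in sorted(deps.items()):
        if meta["license"] in COPYLEFT:
            findings.append((name, "license-risk", meta["license"]))
        if latest_versions.get(name, meta["version"]) != meta["version"]:
            findings.append((name, "outdated",
                             f'{meta["version"]} < {latest_versions[name]}'))
    return findings

deps = {
    "libfoo": {"version": "1.2.0", "license": "MIT"},
    "libbar": {"version": "0.9.1", "license": "GPL-3.0"},
}
latest = {"libfoo": "1.4.2", "libbar": "0.9.1"}
for name, kind, detail in audit_dependencies(deps, latest):
    print(f"{name}: {kind} ({detail})")
```

The point of the sketch is the two distinct checks: a currency check against an external feed, and a policy check against the license list, exactly the two risks named above.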

 

Sean Martin  05:34

Both important parts. So, I mean, raising the posture, but also, I mean, it's smart to demonstrate it and use it to help companies or customers make a good, informed decision on which companies actually care, right? Not just a checkbox of SOC 2. Mario, anything to add there?

 

Dr. Mario Heiderich  05:54

No, I would agree with those. And what I would say is, what we do with penetration testing and source code auditing is just one of many small components that can help to make a company or product more secure. So it's not done once we're done; there's lots more stuff to do. But yeah, it's part of that construct, part of that plan that helps you to make your software more secure, hopefully, I guess.

 

Sean Martin  06:22

Absolutely. All right. So I'm gonna dig back into the recesses of my mind from when I was a QA engineer, and I was responsible for many things in validating the quality of the software that I was building at the time, at a big yellow company delivering tons of software products to the market, if you can picture who that is. And so the blackbox testing that I did there was usually scenario driven, user story driven. We kind of understood the workflow and the business process and the business logic that the app was supposed to work by, or work around, or under, I should say. And we would test it externally, trying to get it to do that, to ensure that it did, and then try to get it to not do what it was supposed to do, to find those anomalies. And then I was also responsible for coding tests to test function calls and API calls and networking calls and data manipulation calls, and then validating data, things like that. And that was the white box test. So black and white. Is it the same idea in security testing? Does that translate? I don't know. Mario, can you maybe share with us your view of white box and black box? Are there some shades in between there as well?

 

Dr. Mario Heiderich  07:56

I think it's very close to what we would use as labels to describe penetration tests or similar security assessments. I'm not really sure if it's exactly the same. But if you, for example, talk about a blackbox test, we talk about a test where we don't have much information compared to anyone out there on the internet. So we basically slip into a specific role, and after slipping into this role, the test becomes a blackbox test. And the role here would be the uninformed attacker. Or we could do a white box test where we don't slip into the role of the uninformed attacker, but rather assume the role of someone who has all the insights, has access to the sources, to the configuration files, potentially SSH on the server, and so on. So we become more of an auditor, still with a main interest of doing bad stuff, or finding the spots where bad stuff can be done, but simply with a different pair of goggles on and with a different level of visibility and insight. So we can get more stuff done, but we might also miss things that are only visible from the outside. So it depends a little bit on what you want to achieve and how you want to achieve it.

 

Sean Martin  09:01

And Andrew, the role of both of those in your program?

 

Andrew Woodhouse  09:07

So yeah, we do both, and both are important in different ways. So we commission blackbox pen tests. And from my point of view, and we might disagree about that, it's a much more common thing, a much more well-known thing; there are many organizations that can do it. And very often a lot of the things they're doing, they're running a lot of standardized tools. Without sounding controversial, it tends to be a lower-cost exercise because there's a lot of tooling out there. For example, for web applications, a lot of the issues and the things you're testing for are already known, things like cross-site scripting, etc., well-known issues that have been solved. So you're looking for coding errors, configuration errors. And we do blackbox tests for, basically, our web apps. But due to the nature of what we do, there's also client software, and we don't do blackbox tests of that. So the reason we wanted to do a white box test is I felt, and I might have been wrong, that in order for the test to be really valid, to help us improve our software, and to be beneficial to us, not just a rubber stamp on a piece of paper that a customer can see, I really wanted an external organization that had people familiar with the tech stack we were using to actually go through it. My view was we need to go through our code with a fine-tooth comb, because with the best will in the world, it doesn't matter how good your developers are, there will be things that they miss, right? So sometimes another pair of eyes, another perspective on looking at the code, is useful. So that's a very long way of saying we do both, and I think they're both quite different.

 

Sean Martin  11:26

Yeah, absolutely. And I think we could probably spend the entire conversation, probably not even debating, just discussing where the value of one is and where it stops, where the other one picks up, and where they overlap. But let's stay focused on the white box, so, you know, some information. You mentioned tech stack, Andrew, and I want to maybe start here in terms of scoping. Right? So with as much detail as you are comfortable sharing, what does the tech stack look like? I presume it uses some cloud. Do you have some shared responsibility agreement with the cloud provider? Can you kind of walk through the stack of an application? Maybe not everything you're looking at, but something you want to share?

 

Andrew Woodhouse  12:20

Yeah, sure. I guess the first thing to say is, if there's a technology out there, we're probably using it somewhere in our tech stack. And I think Mario probably did a bit of a gulp when he got a list of the technologies we were using. But I think like most complex applications, it's very hard to limit the tech stack, because the reality is you have to develop in the tech stack that (a) meets the solution and (b) can be done in a timely fashion with the resources you have. So, you know, ultimately, the core of our application is C++, right? There are clients and servers and things like that, that are installed on the client endpoints. So that automatically added a level of complexity to an audit, because we needed a team who could do not just the more mainstream technologies, maybe like JavaScript and Python and things like that, but actually, in our case, C++ and Go were two of the big ones where we really needed to engage a specialist organization that could cover those.

 

Sean Martin  13:39

And quickly before you jump in, Mario: with the C++, clearly not a fresh-off-the-shelf language. So the decision there to use it: legacy applications continuing forward, or just the power and capabilities at the endpoint that you need from that language?

 

Andrew Woodhouse  14:02

I think it's a combination of all of them. You know, the company I work for has been in business for 20 years; it's evolved over 20 years. And there's never been a compelling reason to move away from C++, to be honest. There's nothing wrong with C++. One of the things is, we really need performance, and we really need C++: if we had an interpreted language, it arguably might not be as fast. Performance is important to us, as is the size of the client software. You know, we have to use Microsoft Teams every day at work for doing business, and if you look at Electron apps, yeah, they might be really nice and easy and cross-platform, but they're massive and probably riddled with a whole lot of security issues that are outside of the code that you're writing. So with C++, I mean, we're a bit of an extreme example, in that we've written everything from scratch, because 20 years ago the tooling, a lot of the tools weren't there, and the libraries weren't there. And that's why the white box audit, I think, was quite a big exercise. Because, you know, we have a protocol; we're not just an off-the-shelf solution where I kind of pick up the bits and bolt them together. It is a solution that's been developed from scratch, with a protocol and things like that. So while C++ is possibly legacy, and if we were doing it now we might use something else, there is no compelling reason to move away from it. And we've got 20 years of learning how to write C++ as an organization.

 

Sean Martin  15:41

A lot of tribal knowledge. Mario, tell me about the gulp moment.

 

Andrew Woodhouse  15:48

Well, that was me saying that, Mario.

 

Dr. Mario Heiderich  15:52

I think there wasn't that big of a gulp moment. But what could be interpreted as one is when you learn about how many lines of code there are. Because if you're talking about scoping for a pen test or an audit, you initially need to know what's there. So you ask two questions. The first question you ask is: what's there from a high-level perspective? And then you can answer, like, two mobile apps and one client and one web application and one API, something like this, just really roughly described. And then the second question is: okay, so for each of those, what's there in terms of code? What's written in what kind of frameworks, how many lines of code, how many API endpoints, how many user roles, basically everything that gives you some insights about the metrics of the scope objects. And when we learned about the number of lines of code in this particular example, we were like, oh, that's a lot. But that was also expected, because it's a lot of software, and there are a lot of scope items that have been placed into scope. So needless to say, we were, yeah, I'm not sure if I can talk about this, but somewhere in the seven-digit numbers, if I remember correctly, with the lines of code. And that is, of course, a lot.
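Heiderich's two scoping questions, a high-level component list and then per-component metrics (language, lines of code, API endpoints, user roles), can be captured in a small data model with a rough effort heuristic. The components, numbers, and throughput weights below are entirely invented for illustration; they are not Cure53's methodology or RealVNC's actual figures.

```python
from dataclasses import dataclass

@dataclass
class ScopeItem:
    """One scope object: the per-component metrics from the second question."""
    name: str
    language: str
    lines_of_code: int
    api_endpoints: int = 0
    user_roles: int = 0

    def effort_days(self) -> float:
        # Crude, invented heuristic: review throughput depends heavily on
        # language; memory-unsafe code is slower to audit per line.
        loc_per_day = 2_000 if self.language in ("c", "cpp") else 5_000
        return round(self.lines_of_code / loc_per_day
                     + 0.5 * self.api_endpoints
                     + 0.25 * self.user_roles, 1)

# Hypothetical scope list answering question one:
scope = [
    ScopeItem("core client/server", "cpp", 400_000),
    ScopeItem("web API", "go", 60_000, api_endpoints=40, user_roles=4),
    ScopeItem("mobile apps", "kotlin", 80_000),
]
total_loc = sum(s.lines_of_code for s in scope)
print(f"total LOC in scope: {total_loc}")
for s in scope:
    print(f"{s.name}: ~{s.effort_days()} auditor-days")
```

Even with made-up weights, the shape of the output mirrors the point made above: a C++ core dominates the budget long before the seven-digit line counts mentioned in the episode.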

 

Andrew Woodhouse  16:52

But just to pick up on that: I specifically didn't want just one part of our solution tested. My perspective was, there could be vulnerabilities anywhere; we hear all the time about attackers pivoting from one vulnerability to another vulnerability. So I decided, for the white box audit, pen test, whatever we want to call it, it really needed to involve our entire stack. And that includes all the mobile apps, you know, the cross-platform applications, the web app, the back end. Because if we hadn't done that, I would be worried that we didn't have a 360 view of everything that was going on. And it would kind of hobble Mario's ability to do his job by excluding certain things. And I'm a bit paranoid, and I'd be thinking, if I were Mario: why aren't they showing me everything, and what are they hiding?

 

Sean Martin  18:00

Yeah. Maybe, Mario, if you don't mind drilling deeper into this. I mean, yes, there are elements or components, right? You have a web app, desktop app, mobile app, back-end service, data stores, networking, cloud service, whatever, all the things. And individually they work a certain way, but connected, they also allow the system to work a certain way, perhaps differently than they might on their own. And so, Mario, how did you go about breaking down and analyzing the subcomponents of the subcomponents to understand? Because, I mean, a mobile app might work differently than a desktop app, right? It may call different APIs, may call them differently, may pass different parameters, and so on. How do you get to that point where you're looking at the right elements in the right way?

 

Dr. Mario Heiderich  18:58

That is actually not that complicated in most situations, because the different kinds of products that a company is offering, that are supposed to be looked at, already give you an orientation for how your work packages should look. And then the technical details about each of the components tell you exactly who from your team should be looking at the whole thing. And then what you need to do internally is to find out which of those components are actually security critical, and which ones are not that security critical. I would go as far as to say that the core libraries that a software might be using, and the API that it might be exposing, are critical, because all the good stuff happens there. And I might also argue that the web application, if it's just consisting of a UI, or something like a single-page web application running in your browser, is important, but not that super important, because you can't get direct access to the data. You can maybe find a cross-site scripting or something like this, but then you remain, most of the time, in the client. And then, last but not least, you look at the mobile apps: okay, so they're installed on someone's device. Those are important; they have their own threat model. But they're in no way as important, in most situations, as, for example, the API code, or pretty much everything that is written in C++, because there's an entirely different threat model, entirely different risks behind it. And then you already kind of start seeing a map shaping up, and knowing, like, this is one component that talks to the other component, so those people assigned to this component need to talk to the other people assigned to that component.
And then at some point it becomes reality, and it gets specific, and something concrete falls out, and then you have a plan. It usually depends on the technology, it depends a little bit on the customer, it depends a little bit on the domain-specific parts, but at the end of the day, you usually find answers to those questions quite quickly. And you know exactly: okay, this is one element, this is one compound of things, let's audit this with one level of priority and the other ones with another level of priority, and so on.

 

Sean Martin  20:55

So you want to add something there. And

 

Andrew Woodhouse  20:58

I was just gonna say, in our case, the bit that I think we were all most interested in was the security of the protocol that we use, that underpins our application. And actually, most of that code is shared between the desktop apps and the mobile apps. So I think that made it achievable, because it's a different front-end kind of wrapper around fundamentally the same protocol, and a lot of the libraries are shared.

 

Sean Martin  21:31

Yeah. Because I'm gonna go out on a limb here and assess that Andrew has kind of a good view of what the environment looks like. And, Mario, I don't know if that's typically the case, where somebody can say: this is what my infrastructure looks like, this is what the ecosystem's components look like, tell me where I need to invest in some additional controls or policies or whatever it might be. Because Andrew has been building this stuff for 20 years, right, or at least the company has. Versus a new startup that basically builds apps using 90% shared libraries, a lot of them open source, and then writes 10% of the code themselves, and has very little view, perhaps, of what's what. So any thoughts on that dichotomy? Or does it not change things much?

 

Dr. Mario Heiderich  22:34

I mean, the word that we're after is probably inventories. So if you're working for a company, you need to know your inventory if you're in the respective role. If you do not know your inventory, then you cannot really determine what needs to be tested; you cannot really determine where the risks are, where threat actors might actually become active. So this is one of the key bits: you need to know what you have, and you need to know what you're sitting on, to be able to make a conscious decision about what should be looked at in the next audit, pen test, vulnerability assessment, or whatever you're going to be running, even if it's just a background scan or something like this. You need to know what is where, and what is the risk, and can people even look at it without breaking stuff. Because it could also be that you have something in your stack that, once it is being contested and receives more requests than usual, just collapses, and then something critical collapses with it. We've had that in the past; that can happen. But I doubt that it's possible to do security productively without having a clear idea of what your inventory is.

 

Andrew Woodhouse  23:30

I think, Sean, you picked up on one of my little bugbears. And, you know, some of your listeners might fundamentally disagree with this, but a lot of the time, particularly in startups, we've all seen the developers who copy and paste code from Stack Overflow, or they use stuff from cloud providers that maybe they don't fully understand. And as soon as you start doing that, you raise the risk, right? Because you don't know what you're dealing with; you don't necessarily know how the code is working. And if we think about things like some of the horror stories around npm recently, where developers are just pulling packages, and they don't necessarily know what they're doing and how they're working, there could be vulnerabilities in them. So I think I'm in a nice position, in that because we've been around for a while and we've written everything ourselves, we're not a startup; I think we're in the luxurious position of probably understanding more about our code base than a startup that is under massive pressure. And, you know, I'm not a developer, I'm an IT guy, but dev teams are under massive pressure to ship quickly, and we all know that corners get cut, and sometimes security is not necessarily front and center, because there's this drive to ship, from investors, from the market, wherever it's coming from. And I personally think that with the rise in the number of data breaches and security vulnerabilities, there is a correlation with the number of startups that are writing stuff really fast and getting it out there. They might be doing some really cool stuff, but is it secure? I don't think even they would claim it was.

 

Dr. Mario Heiderich  25:18

I do agree. I mean, to be one of the first who actually offers a specific kind of software, a specific kind of service, you have more time than others. If now someone came and said, well, we're also going to get a foot in the door with, like, remote desktop, remote connection services, how are we going to be able to catch up as quickly as possible with RealVNC? Well, we use as many of the things that are already out there as possible, and glue them together, and hope that it flies. And then the consequence, of course, is that you're definitely gonna have some level of bugs, some level of security problems, because you do not know exactly how each of the components that you glue together works and what the pitfalls are. And then one of the components might have an enhanced set of features that you never heard of if you didn't read the docs properly. And then it turns out that it can do things that you never anticipated, and the whole thing falls apart, because something that's user-controlled goes into an eval or something like this. We've seen that, not in the remote connection business, needless to say, but in other businesses where this tends to happen. There is some first mover, then another company wants to catch up, and then more want to catch up because it's good money. And then software comes out that, yeah, it works, but that's pretty much all. So, yeah.

 

Sean Martin  26:29

I was gonna say quickly, and then I want to hear what you say. We can't forget that security is at least the three-legged triad, right? Confidentiality, integrity, and availability. We always focus on confidentiality and whether it's been compromised, but a service can also be used to do something else, where availability is key, as well as integrity, right? We want to know that the app and its data have integrity too. Sorry, Andrew, go on.

 

Andrew Woodhouse  26:58

Yeah, I was just gonna say, I think this problem is actually, if you think about things like OpenSSL, which pretty much every piece of software out there uses: I don't know how many lines of code there are in OpenSSL, but there are regularly things that come up where some really old feature that nobody uses anymore is still in the code and suddenly gets exploited. So understanding that inventory is super important. And, you know, just be aware, I guess, if you're writing software and you have TLS libraries that are fundamental to your solution: it's very easy for developers to start using one; it's not necessarily easy to make it work securely. Even though it's TLS, it doesn't necessarily mean it's secure.
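Woodhouse's point that "it's TLS" does not mean "it's secure" is largely a configuration problem. As one concrete illustration, in Python's standard `ssl` module a hardened client context takes a few explicit lines; the TLS 1.2 floor shown here is a conservative sketch of one reasonable policy, not a universal recommendation, and says nothing about how RealVNC's own C++ stack configures TLS.

```python
import ssl

def hardened_client_context() -> ssl.SSLContext:
    """Build a TLS client context with certificate checking and a modern floor.

    create_default_context() already enables hostname checking and certificate
    verification; we additionally pin the protocol floor to TLS 1.2 so the
    legacy TLS 1.0/1.1 code paths can never be negotiated.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    assert ctx.check_hostname and ctx.verify_mode == ssl.CERT_REQUIRED
    return ctx

ctx = hardened_client_context()
print(ctx.minimum_version)
```

The failure mode he describes is the opposite of this: taking the library's raw defaults, or disabling verification to "make it work", and ending up with TLS that encrypts but authenticates nothing.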

 

Sean Martin  27:54

So, Mario, where does the white box, the understanding of what's inside, really come into play here? I've heard Andrew speak to the protocol, I presume the standard protocol that the app's been built around, and he just mentioned OpenSSL, a standard way to securely communicate, right? So those are perhaps two inside elements that external people might know enough about to then compromise from the outside, or from the inside, if they gain access inside somehow. So where does the white box stuff come in, where it really drives better value, I guess, in the assessment, the audit, than a black box?

 

Dr. Mario Heiderich  28:50

I think the key word here is complexity. So if you take a piece of software, and that software is pretty much like any other software out there, like a simple web application with a bunch of forms, then you might do a white box pen test, or you might do a gray box pen test, or a black box pen test, but in the end, the results are not going to differ that much, because that thing is simple. That means even without lots of insight, you can get a good grasp of it, and you can understand it, and you can make the right decisions and the right estimations as a tester, and eventually reach something that you could label as completeness in terms of coverage. Of course, you're never going to be fully done with your coverage, but you can get close. But as soon as you have a certain level of complexity, you have to make too many assumptions about what is there, what might be there, how it works internally, what kind of layers are there. So at some point, white box, or crystal box, is the only way to actually go, because you need that insight, and you need to have that visibility over certain complexities in the stack, because otherwise all you can do is guess. And, to be quite frank, blackbox testing is usually guessing, and the amount of guessing that you have to do grows exponentially with a rising level of complexity. So at some point, you just can't guess enough anymore to reach a level of coverage that is sufficient.

 

Andrew Woodhouse  30:06

I think a good, practical example comes to mind with our software: for example, we delegate authentication to the operating system. An outside attacker is not going to see how that works, how those system calls are done, you know, whether they're handled properly, whether failures are handled properly, things like that. I think only from seeing the code are you actually going to know how that process is working.

 

Sean Martin  30:34

So that leads me to two things, and this may be what we talk about for the rest of this. The first part is: you do the audit, you get a ton of stuff back. How do you absorb that? You hear umpteen stories of: we got this pen test, nice PDF, it gets put on the shelf, and we never look at it again. So how do you ingest what you find? But then also, how do you use that in your learning to build the next iteration, not just fixing the bugs or vulnerabilities that were found? Or perhaps designing in the way you just described, leveraging the OS, which does give you some separation there, which makes it more difficult to understand and therefore difficult to compromise. How do you do those two things?

 

Andrew Woodhouse  31:32

So the way we worked with Cure53, which is probably normal for Cure53, but it worked very well for us, is, effectively, Mario mentioned workstreams. The audit was divided into four pieces of work that mapped onto our four development teams, so we had the right people available. And effectively, as they were finding things, we were using a system like Teams or Slack or whatever an organization uses, and Cure53 had the ability to ask questions of our developers: hey, why have you done things this way? Can you clarify what this is doing? And then anything that was found which was concerning to Cure53, basically, we use Jira like most organizations, so Jira tickets were created to fix those things. I don't think it would be realistic for any audit of the size of what Cure53 did for us to find nothing. They did find some things, thankfully no criticals. But effectively, as Cure53 were finding things, the dev teams were raising Jira tickets. And the second phase was Cure53 reviewing the PRs, the pull requests, and confirming with our dev team that, from their point of view, each one does fix the issue. That process worked really well. There were some 30-odd issues that were fixed in this process, and all of them were validated by Cure53. So the external party that found the issues then validated the fixes, and that worked really well for us. So they certainly weren't left on the shelf. And as I said, we didn't do it as a rubber-stamping exercise for customers, to say, hey, look, we've done this thing, aren't we great? We genuinely had the desire to make our product better and more secure.
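The workflow Andrew describes, where findings become Jira tickets, fixes land as pull requests, and the external auditor validates each fix before it counts as closed, could be modeled in miniature like this. The ticket IDs and PR names are invented for illustration:

```python
# Miniature model of the audit workflow: each finding becomes a ticket,
# a fix lands as a PR, and the external auditor must validate the fix
# before the finding is considered closed. All IDs are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    ticket_id: str                      # e.g. a Jira issue key
    fixed_in_pr: Optional[str] = None   # pull request carrying the fix
    auditor_validated: bool = False     # external party confirmed the fix

    @property
    def closed(self) -> bool:
        return self.fixed_in_pr is not None and self.auditor_validated

findings = [Finding("SEC-101"), Finding("SEC-102")]
findings[0].fixed_in_pr = "PR-457"    # dev team raises a fix
findings[0].auditor_validated = True  # auditor reviews the pull request

open_items = [f.ticket_id for f in findings if not f.closed]
print(open_items)  # ['SEC-102']
```

The useful property of this shape is that "fixed" and "validated" are separate states, so nothing silently drops off the list between the dev team's fix and the auditor's confirmation.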

 

Dr. Mario Heiderich  33:46

Which is also probably something that we can use in the discussion around black box versus white box, because with white box engagements you usually have way more communication with the client. You have the possibility to verify fixes based on the PRs. You have communication during the test, you can ask anything, they can ask you anything, because you're not the external attacker, you're someone sitting next to them, etc. And that also gives you more depth, especially when the complexity levels are high.

 

Sean Martin  34:16

So, before we go to the next round of the development cycle, how did this fit into the broader picture of your security program? And how did you communicate what you did, how it was completed, and the results, to those that matter? And who were those?

 

Andrew Woodhouse  34:46

Well, I guess ultimately, those who matter are (a) our investors and (b) our customers. So how did we communicate it? Internally, we circulated the full Cure53 report to everyone. And I was particularly interested in the improvements that Cure53 could identify in our software development process. Some of what Cure53's report contained was recommendations on how you might want to consider doing X, Y, Zed. And that was taken by our engineering team, by our CTO, and actioned. A good example: Cure53 said, hey, you might want to consider doing fuzzing, and here's why. And that went into our dev pipeline. And then for customers, you know, I really believe in transparency around security, because ultimately, anything less than transparency is a little bit smoke and mirrors. We couldn't publish the full Cure53 report, simply because it contained quite a lot of proprietary code extracts that we didn't necessarily want out in the public. But we published a summary report, and that was a warts-and-all report. You know, it wasn't perfect, and we weren't looking for it to be perfect. We wanted to give customers a level of confidence that there are no skeletons in the closet, and also, as I said, to help us improve our software.

 

Sean Martin  36:29

And Mario, from your perspective, stepping out a bit. My perspective is: pen tests always give me all the negative stuff that I need to go address. Do you find, generally, or specifically with Andrew's program, some insights into where their development process has been working well for them, where you didn't find things you would normally find in other organizations, that you were able to communicate back to Andrew to say, keep doing this, because that is helping you reduce exposure and reduce risk?

 

Dr. Mario Heiderich  37:14

That's a bit hard to answer without sharing any internal details. But it's not unheard of that after a penetration test, in general, you can say a lot of negative things. And we need to say those things, because if we don't, then someone else will, and they won't write a report, at least. But there are oftentimes lots of good things that you can report. You can, for example, report good things such as fantastic communication during the penetration test. You can also report good things such as every question that the team asked was answered within minutes, or half an hour, or an hour. You can, for example, also make a good attestation about the people in the right roles having the right insights and really knowing what they do. And you can also positively attest that there's a fantastic security climate, or a fantastic security culture, or that certain things that you usually find in comparable platforms are simply not present in this particular example. If, let's say, we run into a situation where we look at the client's website and we just don't find any cross-site scripting, then this is weird, because we always find cross-site scripting, at least in most cases. And then we say something about this, because then it's clear that this topic is really under control. And we can say, well, if you folks are going to plan for a training or something like this, skip the XSS module, you've got this under control, it's actually looking good. I mean, we don't know, of course, whether this is really the case, because we cannot see the entire company, only snippets of it, or the parts of the products that we look at. But still, you know where I'm coming from. It is oftentimes visible that certain vulnerability patterns or certain classes don't exist.
And then you can say a nice word about this: whatever concept is there, it's working well, and you don't have this particular problem. And I believe there were a bunch of issues that we actually flagged as very positive with RealVNC as well, and noted positively in the final conclusion of the internal report: this is fantastic, there weren't any criticals, this is a good thing. There were a bunch of highs, but not too many, given the

 

Andrew Woodhouse  39:18

size of the codebase. That's probably quite,

 

Dr. Mario Heiderich  39:21

It's pretty impressive, indeed. We have found similar examples with similar networking and desktop software coming back with like six, seven, ten, twelve criticals, and that's a disaster. And that was not the case here. So that is also something that was worth a positive remark.

 

Sean Martin  39:36

So, Andrew, talk to me about the culture there. That doesn't happen accidentally.

 

Andrew Woodhouse  39:44

So, I don't know if you know the background of the company, but the company was founded back in 2002 by the original developers of VNC, who worked in the AT&T research lab in Cambridge. So they were academics, you know, they were computer scientists. AT&T decided to close that research lab. And there were some organizations, which I probably can name because I think it's pretty public, organizations like Intel, who said to the founders, hey, we really see a future in this technology, we don't want this technology to die. So they effectively gave some seed money to start this company to continue development of this thing that they had developed under the AT&T umbrella. Originally, it was open source. And effectively, since 2002, the company has been continuing to develop the commercial versions of this thing that was originally open source. So I think in terms of culture, it's a very engineering-led culture from the very earliest days. And I think security is something everybody knows and everybody takes seriously. It's not marketing speak to say that, as an organization, we know we live or die by the security of our product. Because in remote access, if customers get compromised and we're culpable, if our product has fundamental weaknesses, it's existential for us, right? So from the CEO down, security is absolutely something that we will not compromise on. Like all commercial organizations, we're under pressure to release more, to do more numbers, to sell more software, etc. But there are some red lines, and compromising on security is absolutely one of those red lines. And we all understand that, and that's very much central to the culture. We like to think that we invest in our product by investing in the security of it.
So in a lot of organizations, security is seen as a drag factor. It's something you have to do with a kind of sad look on your face: we've written this amazing web app, now we need to make sure it's secure. But for us, security is absolutely essential, and it's at the core of what we do, actually.

 

Sean Martin  42:26

I love that, and I've been there. It's an exception to the rule, I think, where people understand the importance, and fewer still believe, and are passionate about proving or demonstrating, that it is important. But then there's a reality of doing it. And I want to close with this, and I'll give you a final word as well before we wrap it: maybe one more slice on the culture, because this is an important piece. You've built a team that not only believes but actually does it. So are there ways you run your meetings? Are there tools you use? Are there rewards? How do you keep it going beyond just "we believe in security"?

 

Andrew Woodhouse  43:16

That's a really good question, and a really hard question, I guess. So in our SDLC, we try to follow the Microsoft SDL. You know, we're trying to do that whole shift-left thing of bringing in security very early, in the earliest conceptions of new features and functionality. So there's that side of it. The developers never get a chance to write stuff that's insecure, because we have security people saying, hey, where's your threat modeling? You haven't done that as part of the non-functional requirements for a new feature, for example. That's the kind of stick approach of, hey, we can't release that, have you thought about X, Y, Zed, from our security team. The guy who heads up that function, interestingly, picking up on something you said earlier about QA, joined us as a software tester. He found he was interested in security, so he gravitated towards specifically the security stuff. But he has the background of QA testing, of automated testing, of what good tests look like, of what challenges developers and testers are under, so he can talk their language. And I think one of the places where a lot of organizations fail is where security is imposed, where it's not a conversation, it's just imposed. And yes, it helps that we're a fairly small company, but between devs and security it's a truly collaborative thing rather than just a brick wall that you can't talk to. And I think that's super important, that security can be a conversation. Rather than security coming in with a big hammer and hitting someone over the head, they can talk to the dev, explain things, and bring them up to speed with why maybe that isn't the best approach.
And that seems to work well for us. You know, I used to work in big companies, and security was a very removed function from development. It tended to be, and I think this is where the whole shift-left thing comes from, the final check before stuff was released. And that clearly causes antagonism, because the developers have spent a lot of time and a lot of blood, sweat, and tears developing something, only for it to be kicked back at them because there are security issues with it. And I think the more you can bring the security teams and the security experts in, and integrate that into the development process, the more beneficial it is.

 

Sean Martin  46:20

I love it. And Mario, we're speaking to those security teams here. So, any advice for them on how best to approach a white box security test? I presume it's going to be driven by communications to start, but what else could you say to help them really prepare for a program that succeeds?

 

Dr. Mario Heiderich  46:47

I think most of the good ideas and thoughts about this that actually work in practice were already mentioned by Andrew. Communication is key. And you can never have a situation where the security people become czars and just randomly make decisions, but do not offer any form of transparency to developers or other people about why those decisions were made. So there needs to be an understanding from the security team towards the development team, to understand what their needs are and what their concerns are, and the other way around as well. And that is eventually what you need for a good culture. As pen testers, we can always at least try to chime in and not work against that. That is why we, for example, try to be as close as possible to the customer during the test, and always recommend doing at least gray box, if not white box, because we don't want to be those sheriffs. We don't want to be the external cavalry that comes riding in, shoots everything to bits and pieces, and then leaves again, without leaving anything behind that you can actually work with to repair your stuff. That doesn't make any sense. You can follow that approach, but it doesn't really help. And what I've noticed over the past years as a pretty good indicator of whether you have a good security culture in your company already, or whether there's still room for improvement, is not to look at the tests and the test results, not to look at the criticals and the bugs that you found, but to look at the fixes. Because from the fixes, you can read so much. You can read from the timeliness of the fixes how focused the teams working on them are. You can also read from the fixes how good they are, how spot on they are, whether they actually address the issue, or whether they fix an entirely different problem, or fix the problem only halfway.
And you can also learn from how those fixes are being created and how they're being presented. In some situations, a client comes to us and says, well, all the fixes are in, the application is now fixed, have another look. And that's it. That only gives us the possibility to tell you, in a blurry and vague way, we guess it's all good, but we don't really know. So, yep, that wasn't really worth anything. Or they could say: look, here is a zip file, and in that zip file you have diffs, and the diff file names are actually numbered after the ID of the bug, so you know exactly which issue each one is addressing, and there's only the code in there that is necessary for the bug to be fixed. So, happy review time, you're going to be done in half an hour. And we're like, oh wow, that is fantastic, that is exactly how it should be. So, as I said, I do not really know what and how many things you need to do to build up a good security culture. I know a couple, but not all of them. But I do know how to measure it, and that is pretty much by looking at the fixes and understanding how well they were made, how fast they came in, how accurate they are, and how they're presented. And then, last but not least, also looking at the fixed application afterwards and checking: did all the same bugs come back, or did they actually have a learning effect and eliminate that particular bug pattern, which you will never find again in this application, because they understood it in full, communicated it well, and now it's gone.
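The ideal fix bundle Mario describes, with diffs named after the bug IDs they address, suggests a simple automatable check. The ID scheme and filenames below are hypothetical:

```python
# Sketch of checking a fix bundle the way Mario describes: every reported
# bug ID should map to exactly one diff file named after it, so the
# reviewer knows precisely which change addresses which finding.
# The ID scheme and filenames are hypothetical.

def check_fix_bundle(bug_ids: list, diff_names: list) -> dict:
    """Return which bugs are missing a diff and which diffs match no bug."""
    expected = {f"{bug}.diff" for bug in bug_ids}
    provided = set(diff_names)
    return {
        "missing_fixes": sorted(expected - provided),
        "unexplained_diffs": sorted(provided - expected),
    }

report = check_fix_bundle(
    bug_ids=["CUR-01", "CUR-02", "CUR-03"],
    diff_names=["CUR-01.diff", "CUR-03.diff", "misc-cleanup.diff"],
)
print(report)
# {'missing_fixes': ['CUR-02.diff'], 'unexplained_diffs': ['misc-cleanup.diff']}
```

A reviewer, or a CI job, running a check like this knows immediately which findings still lack a fix and which changes need a separate explanation.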

 

Sean Martin  49:59

Super cool. That's brilliant, Mario, I really appreciate that. I presume a lot of our listeners can resonate with that as well. It's always difficult measuring success in this world of security, so thanks for sharing that. And Andrew, thanks for sharing your program with us, giving us some insights into the why and the how and the results that you've had. And I think, for me, the big takeaway is that you not only care and believe, you respond with that belief as well, it seems, based on this conversation.

 

Andrew Woodhouse  50:36

I think security is an investment. Very often, it's seen as a cost, but we need to change it round to being an investment. In the same way, security is often seen as a blocker, and there are possibly good reasons for the things it's doing, but it's about how you communicate why you're doing this stuff, so that you're not seen, as Mario said, as the sheriff coming in, shooting everything, and then riding off into the sunset.

 

Sean Martin  51:09

That's right. Bang, bang, bang. Nice one. Well, thank you both again, I appreciate the conversation. And thanks, everybody, for listening to this episode. We will include notes, of course, for you to connect with Mario and Andrew. And I believe there was a blog post that highlights some of the stuff that was the catalyst for this conversation, so we'll share that as well in the notes. So stay tuned for more conversations here as we continue to redefine cybersecurity. Thanks, everybody. Thanks very much. Thanks.

 

Voiceover  51:46

Pentera, the leader in automated security validation, allows organizations to continuously test the integrity of all cybersecurity layers by emulating real-world attacks at scale to pinpoint the exploitable vulnerabilities and prioritize remediation towards business impact. Learn more at pentera.io.

 

Sponsor Message  52:12

Imperva is the cybersecurity leader whose mission is to protect data and all paths to it, with a suite of integrated application and data security solutions. Learn more at imperva.com.

 

Voiceover  52:30

We hope you enjoyed this episode of the Redefining CyberSecurity Podcast. If you learned something new and this podcast made you think, then share itspmagazine.com with your friends, family, and colleagues. If you represent a company and wish to associate your brand with our conversations, sponsor one or more of our podcast channels. We hope you will come back for more stories and follow us on our journey. You can always find us at the intersection of technology, cybersecurity, and society.