Redefining CyberSecurity

Book | Irreducibly Complex Systems: An Introduction to Continuous Security Testing | A Conversation with Author David Hunt | Redefining CyberSecurity Podcast with Sean Martin

Episode Summary

In this episode of Redefining CyberSecurity, host Sean Martin engages in a thought-provoking conversation with David Hunt, author of the book, Irreducibly Complex Systems: An Introduction to Continuous Security Testing, about the importance and challenges of continuous security testing.

Episode Notes

Guest: David Hunt, Author

On Linkedin | https://www.linkedin.com/in/david-hunt-b72864200/

On Twitter | https://twitter.com/privateducky

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

____________________________

This Episode’s Sponsors

Imperva | https://itspm.ag/imperva277117988

Pentera | https://itspm.ag/penteri67a

___________________________

In this episode of Redefining CyberSecurity, host Sean Martin engages in a thought-provoking conversation with David Hunt, author of the book, Irreducibly Complex Systems: An Introduction to Continuous Security Testing, to explore the topics presented in the book.

David introduces the concept of irreducibly complex systems, explaining that continuous security testing requires a system in which every individual component must be functioning correctly for the system to work at all. He uses the analogy of a mousetrap to illustrate this idea: remove even one component and the entire system is rendered useless.

The conversation also digs into the challenges of testing in changing environments and the need to understand how defenses perform during specific time frames. They discuss the value of continuous security testing in gaining visibility into the effectiveness of security defenses and shedding light on techniques used by malicious actors.

Sean, having been a software quality assurance engineer in previous roles, and David, having held numerous roles in the commercial, public, and non-profit realms, explore the differences between continuous security testing and traditional security testing. They explain that continuous testing focuses on evaluating how defenses respond to attacks, rather than testing offensive capabilities. Moreover, continuous security testing operates at complete scale on production systems, unlike traditional testing which is often limited to development environments.

They also discuss the importance of overcoming the dichotomy of skill sets required for continuous security testing. David explains that the offensive skills needed to create effective tests and attacks are often separate from the software skills needed to build a safe, high-assurance command and control center.

Throughout the episode, Sean and David provide listeners with valuable insights into the world of continuous security testing and its significance in the evolving cybersecurity landscape. They emphasize the need for organizations to adopt this approach in order to gain better visibility and understanding of their defenses in the face of emerging threats.

There’s a lot to take from this conversation, including an extreme example of how continuous security testing results have redefined cybersecurity in David’s organization.

____________________________

About the book

Continuous security testing (CST) is a new strategy for validating your cyber defenses. We buy security products that promise to protect us, like EDR, but how do we know they're working? CST takes the stance that endpoints are the center of your infrastructure universe. Whether the operating system verticalizes defense or a third party is bolted on, it is the job of the endpoint to protect itself from within. This new concept dictates testing should occur around the clock, in production and at scale. It provides an open model that others can use to approach testing and finally answer the question: Do you know with certainty that your defenses will protect you against the latest threats?

____________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

____________________________

Resources

Irreducibly Complex Systems: An Introduction to Continuous Security Testing (Book): https://www.yellowduckpublishing.com/books.html?title=icsd

____________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring an ITSPmagazine Channel?

👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Sean Martin: Hello everybody, this is Sean Martin, and you're very welcome to a new episode of Redefining Cybersecurity here on ITSPmagazine, where I get to look at, uh, all kinds of things related to security programs. Uh, not just to protect the business and what it delivers, but also to protect and hopefully help generate growth as well, in a secure way. 
 

So, um, there's of course no lack of topics under the realm of cybersecurity, or InfoSec if you want to call it that. And yeah, I think continuous security testing, having a view of where you stand at any point in time, is something that the industry is constantly struggling with, um, across the realm of networks, infrastructure, apps, data. 
 

IoT, OT, you name it, right? The world is all there. And of course, I know just about this much about, uh, many things. And that's why I get to have amazing guests on, like David Hunt, to help broaden the conversation and dig deeper. David, thanks for joining me.  
 

[00:01:11] David Hunt: Yeah, thank you very much for having me. 
 

[00:01:14] Sean Martin: And, uh, let's, before we get into... I mean, the catalyst behind this conversation is a book you've written on continuous security testing. And we're going to get into a bit of that: how you decided to write a book, your first one, evidently. And I presume it's built on, or a result of, a lot of things you've experienced over your career. 
 

So first, let's start off with, uh, your current role and what you're up to.  
 

[00:01:44] David Hunt: Yeah, definitely the book is based on a lot of those experiences. So yeah, I'm the CTO and a co-founder of a company called Prelude Security. And what we do is security testing. We focus on defenses and how defenses are responding to various attacks. 
 

And so, uh, my role at the company is leading our engineering and a lot of our, um, more advanced areas in offensive security, which is where my background comes in.  
 

[00:02:13] Sean Martin: And let's go there. Because I was looking at, uh, your LinkedIn profile, and I know LinkedIn does a decent job of sharing where you've been. 
 

And if you're proactive and proficient, you can kind of share what you've done, and perhaps even some of the outcomes, the results, of your efforts. Um, but hearing it from you is probably going to be even more interesting, because, I mean, you started off, and let me go back to some of my notes, with clearly a lot of software engineering. Um, but you had some logistics experience, I'll call it supply chain operations perhaps, uh, connections to big data. Uh, with Mandiant, some threat intel stuff, and then moving to MITRE, another organization that I love, looking at adversary emulation. A nice collection of environment, ecosystem, operations, workflow, building stuff, testing stuff. 
 

And, uh, I think that gives you an interesting perspective on where cybersecurity fits into a business, just from the technical perspective. And you had a stint with the government as well, which doesn't say much; maybe you can share a little bit about that. But talk to me a little bit about that history. 
 

And some of the key learnings, perhaps, coming from the different roles that help you as a cybersecurity professional connect the risk and the solutions back to the business.  
 

[00:03:41] David Hunt: Yeah, I appreciate it. I think, to me, the varied experience has helped shape how I view cybersecurity holistically. 
 

Looking back, I didn't set out to get that varied experience. Uh, my career, I've been doing this for 17 or 18 years, and I just kind of stumbled into it through opportunities and so forth. But looking back, I can see clearly that, you know, I spent time in big data as a manager at John Deere, where I learned a lot about tractors and agriculture, but, you know, spent a couple of years managing big data. 
 

At that point, which was, you know, call it the 2010 to 2012 time period, when data was starting to explode, that actually started to build quite a bit of how I view data and technology in general. That was a very influential role for me. Um, and at the time, John Deere was basing all of their systems on something called IBM DB2 databases, which nobody ever works with nowadays, but it's a relational data system, and they were trying to move to Hadoop. 
 

And so transitioning, and kind of guiding that transition, was really eye-opening. So I've had roles like that, managerial and data related, but then I've also, you know, gone from John Deere to a variety of security companies like Kenna Security, uh, where I spent about a year doing vulnerability analysis and combining vulnerability scan data. That kind of exposed me to the world of CVEs and exactly how companies try to, um, prioritize them, because you can't solve all your vulnerabilities, but which ones do you solve? 
 

So that was kind of eye-opening, just to see that process. And then I moved on from there to FireEye, uh, specifically Mandiant at the time, and the goal with Mandiant was threat intel. It was: hey, can we take all of the information that we're gathering internally through IR, incident response engagements, and red team assessments, and toss it into a data system internal to the company called Nucleus? I actually go into the technical specifications of it in the book, which are 
 

kind of wild when it comes down to it. Um, but what we would do is take all that data from threat intel, aggregate it, try to get context from it, and then ultimately try to spit that out into something that is actionable. And so I did that for a number of years, uh, overseeing a handful of projects, building things like the central repository at FireEye, which is a collection of malware samples that ranges into the billions, collected on an ongoing basis. 
 

And I moved on from there and joined MITRE, where my big project, the reason I went to MITRE, was to build what's called the Caldera framework. Caldera was still in a prototype, version-one state when I went to MITRE, and they asked: hey, can you build this into something bigger and better? And so when I went there and took over that project, the whole goal was: can you build a system that can mimic how a hacker would actually navigate through a system and make the same types of decisions with a small amount of data? 
 

It's a really difficult problem set, and it requires a lot of, uh, what's called automated planning, which is a very small sector of AI that doesn't get a lot of attention because it's not super sexy, but it's really interesting. And so I did that for a number of years. Caldera got so big and so popular in the purple team space 
 

that it actually is the impetus for starting Prelude, because it was: hey, we've got this giant research project that's really, you know, dug into a lot of organizations. Is there something to continuous testing? And that's kind of where the journey of the specifics around continuous testing started. 
 

[00:07:11] Sean Martin: Yeah. And I've, uh, had the pleasure of speaking to a number of MITRE folks, and my experience with it is: the research is great. The tools are great. The frameworks are great. The standards are great. The community surrounding all that stuff is fantastic. And then you have to actually deploy it and use it and get it to work. 
 

And I found, in folks like Fred Wilmont, a good friend of mine, that bringing solutions that wrap around all the good stuff that MITRE does, and shim it or connect the abstractions between MITRE and the business, or security operations if you want to be more specific, is super crucial. And I want to, it might be a slight tangent, but I'm interested in this point because it's something that's been on my mind, and I think you have an interesting perspective on this. 
 

My view of some of the newer security technologies, and I'm not speaking specifically to the one you're working on, but just in general: they're looking at growing and enhancing the new, modern way of operating. So it's always chasing the containers and the multi-hybrid clouds and IoT and OT, and I feel that a lot of the new stuff forgets about the old stuff. 
 

And you talked about IBM DB2 and, uh, sadly, if I can say that, I have experience in that, building a SIM for Symantec. That was the technology we started with, and it was a behemoth that wasn't easily deployable or easily maintainable. We ended up moving to MySQL, which had its own issues. 
 

And my point is, there's a lot of legacy stuff that companies have, and a lot of new security technology is kind of aiming toward the new frontier, if you will. What's your experience? I don't know if you have any specific experience along the way, but the DB2-to-Hadoop move is maybe one example. How do you see organizations 
 

overcoming the challenge of: we're transforming, our security products are transforming, and the programs that connect those things together need to transform as well? Any thoughts on that, and perhaps even advice for the audience on how to overcome some of those challenges?  
 

[00:09:31] David Hunt: Yeah, I think I'll go on a tangent on this one. 
 

I call it chasing the shiny ball. Uh, we hit this a lot. I've seen this a lot in my career in general, which is: the new thing comes out, everybody kind of moves in that direction, and all the old stuff is forgotten. Yet the old stuff makes up, as you said, the majority of your infrastructure, the majority of your endpoints, and so forth. 
 

And so, um, I think it's good to try to get that coverage across all the new things if you're using them. But from a technology standpoint, my advice is always: use the tried and true basics, like the smallest, simplest version of everything that you need. Do that and do it really, really well, and only get pulled into the new stuff 
 

if you can create a really good justification for it, technology or business. So an example is moving from, you know, say, spinning-drive servers to containers. You need security in either one of those cases, but as an organization, do you make the move from spinning-drive servers to containers, and how do you make that decision? 
 

And so I think those decisions in today's world happen too quickly, and people just chase the shiny ball. And then all of a sudden you've got this infrastructure that you didn't really plan out how you're going to manage, and you don't really understand the pros and cons. Whereas if you go all the way back to the basics and you stick with just the boring technologies that have been around for many decades, and really perfect that, every decision you make after that becomes very calculated and measured. And so I'm really big into that simplicity angle. 
 

[00:11:03] Sean Martin: Yeah, there's no lack of complexity making things more difficult. So let's, um, let's talk about the book. You decided to write a book on continuous security testing. Describe to me the catalyst behind it. You kind of touched on it a little bit already, but... I mean, there's having the idea, or having the knowledge, realizing I have knowledge that I should document, then turning that into a book, multiple hard steps there, and then getting it published. 
 

Of course, that's a whole other thing. But talk to me a little bit about the process of turning your knowledge into a book, and why that was important for you to do.  
 

[00:11:49] David Hunt: Yeah, it, um, happened really organically. So I never really sat down and said: you know what I'm going to do? I'm going to write a book. 
 

And what am I going to write a book on? I'll write it on what I do for a living. That's kind of not the way it happened. Um, that'd probably be even simpler. But the way I ended up coming to the conclusion that I needed to write a book was based on just conversations I was having on repeat. 
 

It was the same conversation, be it with a customer, a security person, you know, friends and family that I conversed with. And I kept running into the same conversation and having the same aha moment, which is: there's a new frontier happening in security today, and nobody really realizes it yet. 
 

And every conversation I get into, it becomes about a two-minute, you know, convincing period, and then people hit the light bulb moment, and then they're like true believers in this thing. And there's no material, no documentation, nothing on this sort of concept. And what it boils down to is an assumption that our defenses are working. 
 

It has been really interesting over the last six or 12 months, as I had conversations with people before writing the book around: hey, I believe my EDR is working, or my AV is working. Um, and that kind of base knowledge that they have, when they take a few steps back, is built on nothing outside of marketing and promotional materials and so forth. 
 

Like a lot of things are. And then you step back and you say: okay, I bought an EDR, how do I know it's working? And when you start to look at how you're approaching security from that angle, you start to say: oh, well, maybe I need to test my hypothesis and see if it's actually correct. 
 

And so that's where I ended up writing the book: enough of those conversations that I didn't want to keep repeating them. And it was, uh, actually on a trip to Vancouver. I was on vacation, slash visiting my co-founder, our CEO, Spencer Thompson at Prelude, and we're out there in Vancouver. 
 

And I was like: you know what, Spencer, I think I'm going to sit down, I'm going to write a book. I keep having these conversations. Um, I don't think anybody's ever going to read the thing, but I've got to write it down, and I can at least send it to people. So when they ask me the questions, instead of having the convo, I'll just send them the book. 
 

And I actually gave myself only 30 days to do the writing. I sat down and I was like: I'm going to do it all in the month of June, and I'm just going to write what's in my head. And it's probably going to be like 50 pages; it'll be a small book. And I did the 30 days, I used up all of it, and it ended up being a 370-page book that goes into a few different parts, but it's all on that topic of continuous security testing. 
 

[00:14:26] Sean Martin: And the title, which I want to dissect a little bit with you, is... it's called Irreducibly Complex Systems: An Introduction to Continuous Security Testing. It is a mouthful. That is complex. But does one require the other? And maybe I'm jumping the gun here on this, because... 
 

Maybe folks don't even know what continuous security testing is. Um, so maybe as part of the answer to this one, do you have to reduce complexity in order to achieve a continuous view of your posture?  
 

[00:15:04] David Hunt: Yeah, no, it's a great question. And it is a mouthful and  
 

[00:15:08] Sean Martin: It is a mouthful. And so maybe as part of that, what is continuous security testing? 
 

So we can kind of frame it.  
 

[00:15:14] David Hunt: Yeah, so I think that's a good place to start. So continuous security testing is the process of, of course, continuously testing your defenses against all of the emerging threats occurring in the world. And so what you're doing through continuous testing is you want to know how your defenses respond to different attacks. 
 

Now, everything is done from the perspective of your endpoints and devices. So your workstations, your laptops, your containers, whatever that endpoint is, that's the perspective you're taking. You're continuously testing it; you want to know how it responds. So this differs in two ways from traditional security testing. 
 

The first big way is that it is from the perspective of your endpoint defenses, not from the offensive operator. With things like red teaming and pen testing and so forth, uh, breach and attack simulation, you're doing it from an offensive operator perspective, which means you're trying to determine what you're capable of doing. 
 

So: I'm the adversary, what can I do? And you can do a lot of things from the adversarial perspective, but the real thing that matters is: how does your defense respond? Am I purchasing the right defenses? Am I configuring them correctly? And so forth. So everything in continuous security testing is done from that perspective. 
 

The second real big differentiator from security testing in general is scale. Because it's focused on defense, continuous security testing is designed to run at complete scale on production. So if you have 100,000 computers in your environment, in your infrastructure, you're designed to run this on all 100,000, every day, on repeat. That's something that just doesn't happen in traditional red teaming, breach and attack simulation, and so forth. 
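
To make the scale David describes concrete, here is a minimal sketch of what that daily dispatch loop might look like. This is not Prelude's implementation; the inventory and transport functions are hypothetical placeholders, and the results are randomized purely for illustration.

```python
"""Minimal sketch of a continuous-testing dispatch cycle (illustration only).

`load_inventory` and `send_test` are hypothetical placeholders for a real
asset inventory and agent transport; results here are randomized.
"""
import datetime
import random

TESTS = ["benign-credential-access-check", "benign-encryption-check"]

def load_inventory() -> list:
    # Placeholder: a real deployment would enumerate every production
    # endpoint, not a sampled lab machine -- that is the whole point.
    return [f"host-{i:06d}" for i in range(100_000)]

def send_test(endpoint: str, test: str) -> str:
    # Placeholder transport: a real probe runs the test locally and reports
    # how the endpoint's defenses responded.
    return random.choice(["blocked", "detected", "missed"])

def run_cycle() -> dict:
    tally = {"blocked": 0, "detected": 0, "missed": 0}
    for endpoint in load_inventory():   # all endpoints, not a sample
        for test in TESTS:              # every test, every day
            tally[send_test(endpoint, test)] += 1
    return tally

if __name__ == "__main__":
    started = datetime.datetime.now(datetime.timezone.utc)
    print(started.isoformat(), run_cycle())  # scheduled around the clock
```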
 

And, uh, for those two reasons, to achieve that, it introduces a new paradigm, which I dub irreducibly complex systems in the book's title. And I go into this in the book in quite a bit of detail, but let me define it really quick. 
 

So if you're unfamiliar with the term: an irreducibly complex system is a system that only works when all the individual components that make the system up are running and functioning correctly. They all have to be there. If you remove one of them, the entire system is not functional, and it actually doesn't even resemble itself. 
 

A common example that gets used in that space is a mousetrap. A mousetrap only has three or four pieces, right? You've got a base, you've got a spring, and so forth; you've got the metal thing that goes down. You remove one of those, and the mousetrap is completely rendered useless. It doesn't even operate close to its purpose. 
 

So you need all of those three or four pieces for that to operate. And that's what irreducibly complex means: it kind of reduces down to the least amount of complexity. Now, continuous security testing, at 100,000 machines in an infrastructure, I believe hasn't been done to date, because we haven't solved how to make an actual infrastructure, a system, that is irreducibly complex, and I think that's required in order to conduct that level of testing. And the reason why I think that is, is because building a system like that requires two almost completely different skill sets. 
 

It requires advanced offensive skills to create the types of attacks and tests that you need to write to test the defenses. And then it requires software skills to be able to build a complementary system that is effectively a very safe, high-assurance command and control center. And you don't often have a lot of deep offensive security people with the skill set to build software systems at that level. 
 

And vice versa: the people that are typically building software systems at that level have never written an offensive exploit in their life. And so there's a dichotomy of skill sets that I think has to be overcome for continuous security testing to be viable.  
 

[00:19:14] Sean Martin: I'm stuck on the mousetrap. Cause, and I don't know, my view of this is operationalizing 
 

purple teams, or a software-enabled purple team. Yeah, no, that's a great way to look at this. So I go back to the mousetrap now. There are the three or four parts, whatever it is. And the first thing that came to my mind was: the mousetrap is worthless if there's no mouse. Um, but then I took it to: well, how do we test the mousetrap? 
 

Well, we're not going to go get a mouse. Maybe some people do, I don't know. But typically it's a pen or a pencil or something that we stick in there, right? And you need to have the bait, if we want to connect it to, yeah, the cheese, exactly, to connect it to some deception or whatever. Um, so, 
 

expanding on that analogy: once you set off the trap, right, either through legitimate means or, well, yeah, do you have a separate trap that looks the same? So, is it a clone environment or is it a production environment? That's another common thing that, uh, pen testing always runs into, right? 
 

Yep. You knock over a machine, and then your irreducibly complex system fails. Yeah. So talk to me a little bit about that, um, maybe using the mousetrap as an analogy to help paint a picture.  
 

[00:20:49] David Hunt: Absolutely. That's actually a great point, 'cause this one comes up a lot in continuous testing. 
 

This comes up in security testing in general a lot, which is: hey, I need to do all of my testing against the development environment because I don't want to cause any issues in production. Or, what I've actually seen a lot in my career, which I've always found astonishing, is security teams are given anywhere from one computer to maybe 20 to do all of their testing on, and the goal there, or the kind of directive, is: do your testing on that one machine or 20 machines, 
 

and then extrapolate the findings to all 100,000. And that's how we'll do our security testing, out of a fear of causing a disruption in a production environment. Of course, the more extrapolation you do, the more guessing and assumptions you're taking along the way. So if we look at that from a mousetrap perspective, we would say: hey, we can achieve this. 
 

We're going to clone the mousetrap and have two mousetraps, and we're not going to touch the production mousetrap; we're going to do all of our testing on the test mousetrap. The problem with that approach is you're assuming out of the gate that these are perfectly identical mousetraps. And potentially, when you first set your experiment up, even in a one-to-one, call a mousetrap a computer, you might actually get that on day one. 
 

On day two, things start to change. One of the mousetraps is going to get updated, because maybe somebody comes along and cleans the base of it, right? That adds some new friction on the table that may actually change how it behaves. The more times it's tested, the more the spring is going to get worn down. 
 

That's now going to impact the behavior: the amount of time it takes to close may actually get slower, because it's gone off so many times that the spring is now a little bit slower, where the sharp one, which doesn't get tested as often, is very quick. And so you start to lose that connection of how identical these things are, even in something as clonable as a mousetrap. When you take a computer and you do the one-to-one, two completely identical containers, and you leave them running for 24, 48 hours, the amount of delta between those two different environments is significant. 
 

Different processes start to run, different users engage, different interactions happen as incoming packets come in, and that actually makes the environments quite different.  
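
One rough way to see that drift for yourself is to snapshot what is running on a host at intervals and measure how far it moves from its day-one baseline. A hedged sketch, assuming the third-party psutil package is available; a real system would collect these snapshots from both "identical" machines via its agents.

```python
"""Rough sketch: measure how far a host drifts from its day-one self.

Assumes the third-party psutil package is installed; a real system would
gather these snapshots from both 'identical' machines via its agents.
"""
import time
import psutil

def snapshot() -> set:
    # Names of everything currently running on this host.
    return {p.name() for p in psutil.process_iter()}

def similarity(a: set, b: set) -> float:
    # 1.0 means identical process sets; lower means the clones diverged.
    union = a | b
    return len(a & b) / len(union) if union else 1.0

if __name__ == "__main__":
    baseline = snapshot()
    for hour in range(1, 49):  # watch drift over 48 hours
        time.sleep(3600)
        print(f"hour {hour}: similarity to baseline "
              f"{similarity(baseline, snapshot()):.2f}")
```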
 

[00:23:13] Sean Martin: Yeah. And I'm just thinking of, uh, seasonal businesses, where a system during back-to-school might look similar to a holiday buying season, but completely different from the rest of the year, for example. Or, I don't know, throw another holiday where people buy a bunch of stuff in there. So even just the time of year, or the way 
 

people interact with the systems, can change.  
 

[00:23:38] David Hunt: Yeah, time of day, believe it or not. I don't have an answer for a lot of these, but you find it in testing. So one of the most interesting things that I found is actually what I can best tie to, like, a moon cycle, 'cause I have no idea, which is: different EDRs and AVs will actually respond very differently based on the time of day. 
 

Now, if you're testing continuously, you're going to run it, and you're going to actually highlight a lot of this, and you're going to see it. You may not have a reason and knowledge about why it's occurring, but at least you know that you have an open risk there, which is: hey, if it's between 10 and 11 PM, or, say, a Windows update just ran on the computer, then the EDR becomes 
 

non-responsive for the next, you know, 30 minutes, or maybe even a full day. And if you're not continuously testing, you're actually not going to have any visibility on that whatsoever. And it's really interesting to start to get those statistics once you are testing.  
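
That kind of pattern falls straight out of the result data once tests run around the clock. A small sketch of the aggregation, assuming each stored result carries a UTC timestamp and an outcome; the sample rows below are fabricated purely for illustration.

```python
"""Sketch: surface time-of-day effects from continuous-testing results.

Assumes a store of (UTC timestamp, detected?) pairs produced by the test
loop; the sample rows here are fabricated for illustration only.
"""
from collections import defaultdict
from datetime import datetime

results = [
    (datetime(2023, 6, 1, 10, 15), True),
    (datetime(2023, 6, 1, 22, 30), False),  # e.g. a post-update quiet window
    (datetime(2023, 6, 2, 9, 5), True),
    (datetime(2023, 6, 2, 22, 41), False),
]

by_hour = defaultdict(list)
for ts, detected in results:
    by_hour[ts.hour].append(detected)

for hour in sorted(by_hour):
    outcomes = by_hour[hour]
    rate = sum(outcomes) / len(outcomes)
    print(f"{hour:02d}:00 UTC  detection rate {rate:.0%} ({len(outcomes)} tests)")
```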
 

[00:24:38] Sean Martin: So let me ask what seems obvious to me now, having heard what I've heard. 
 

Are the bad actors, um, using this method to understand where existing technologies fall apart, or slow down, or leave things exposed, so that they can pounce on those moments of opportunity?  
 

[00:25:05] David Hunt: Yeah. I think the bad actors have been using these technologies for quite some time, at a smaller scale. 
 

And what you get from the defensive side, by starting to use these yourself, is you actually start to get visibility on what's been happening from the adversarial perspective. So I'll give an example. In my career, I've done a significant amount of offensive operations, 
 

be it red teaming or something called OCO, offensive cyber ops. And so I've done quite a bit of that. And through that time period, I pulled out a number of techniques over my career that have been very successful. A couple of them are actually still successful today; I've been using them for 17 years. 
 

I'll talk about two of them right now because, um, I hope they get closed down one day, and maybe more visibility will help. One of them is driving up the CPU utilization on a computer. One of the first things I do post-compromise, when I exploit a machine, is actually drive the CPU up, 
 

roughly 50 to 60 percent. Once I get it up to about that percentage, the EDR, AV, or any other defenses on the machine are forced into deciding whether they should act on things, or take their pedal off the metal, because they don't want to add to the fact that there's so much going on on the CPU. So they start dropping packets intentionally. 
 

That gives me the chance to start to use that machine completely undetected and unprevented, just by raising CPU. So that's a technique that's actually been happening under the covers for quite a few years. And when you're running continuous testing, you start to shed light on that, because if you're testing a hundred thousand machines, you're naturally going to get CPU running high on quite a few of them, and you're going to start to see which of your EDRs are actually dropping packets in that scenario, which ones are doing really well, and you can start to draw correlations. 
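
From the defensive side, a purple team could check for that behavior deliberately rather than stumble on it: generate sustained CPU load, fire a harmless canary action the EDR is known to flag, and record whether it still alerts. A rough sketch; `run_canary` and `edr_alerted` are hypothetical stand-ins for a benign test action and a query against your EDR's alert feed.

```python
"""Sketch: does the endpoint defense still respond while the CPU is busy?

`run_canary` and `edr_alerted` are hypothetical stand-ins for a benign,
known-detectable test action and a query against your EDR's alert feed.
"""
import multiprocessing
import time

def burn(stop_at: float) -> None:
    # Busy-loop one core until the deadline to push utilization up.
    while time.time() < stop_at:
        pass

def run_canary(marker: str) -> None:
    # Hypothetical: fire a harmless test action your EDR is known to flag.
    print(f"running canary {marker}")

def edr_alerted(marker: str) -> bool:
    # Hypothetical: poll your EDR console/API for the canary's alert.
    return False  # placeholder; wire this to your real alert source

if __name__ == "__main__":
    deadline = time.time() + 120  # two minutes of sustained load
    # Loading roughly half the cores approximates the 50-60% range above.
    workers = [multiprocessing.Process(target=burn, args=(deadline,))
               for _ in range(max(1, multiprocessing.cpu_count() // 2))]
    for w in workers:
        w.start()
    run_canary("cst-load-check-001")  # fire the canary while loaded
    time.sleep(60)                    # give the EDR time to report
    print("detected under load:", edr_alerted("cst-load-check-001"))
    for w in workers:
        w.join()
```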
 

The second example I'll give real quick is chaining techniques. What I mean by this is: a lot of existing security testing is done in what we call malware, which is: I drop a binary on a machine, I run it, and it runs as a program, a connected program that does a bunch of different behaviors. 
 

Defenses are tuned to catch that. What they're not tuned to catch are micro-behaviors, where you run a single technique, a benign action, on a computer, then take the output of that benign action and chain it to the input of another benign action. I call the process chaining. And if every individual action is benign and runs in a separate process, it's actually quite difficult for a defense to catch. 
 

An example of that would be: I want to, um, ransomware an entire computer. I could do it in a ransomware binary, like a LockBit or some of the things that we see out in the world today. I'm going to get caught most of the time if there's a defense there. So what I choose to do instead is, as step one, in one process, I will locate the files I want to ransomware. 
 

Completely benign. People search file systems all the time; you can't really shut that down. And then I will feed that list of files into a completely detached, separate process that will then copy those files into a location where I can work with them together and do one thing. That's called staging the files. 
 

And then I will pipe that location into another separate process that will encrypt them. And then I will pipe the results of that into another process that will compress them into a zip, and into another process that will exfiltrate them. Now, each one of those steps, those five elements in the chain, the links, are completely benign at the root level. 
 

So things won't catch it unless they see the connective tissue. And if you run in separate processes, it's hard to get that connective tissue. And therefore, that's a technique that is commonly used, that you start to flesh out when you're continuously testing.  
 

[00:28:53] Sean Martin: Interesting. Just when I thought it couldn't get more 
 

complex. I feel like I could talk to you for hours about some of these scenarios. Perhaps we will have another chat for that. I want to go back to the book in the few minutes we have left for our chat today. Maybe an overview of what's in it, I don't know, an overview of the chapters; who do you think should read it, and what do you hope they get out of it when they do? 
 

[00:29:23] David Hunt: Yeah, absolutely. So I structured the book into three sections, four if you count the conclusion, which I actually will, because there's some interesting things there. Um, the first section of the book is a number of different chapters, four or five chapters, where I go through a lot of my own personal journey. 
 

And so what you'll read in that first section, in those chapters, will be a lot of behind-the-scenes experiences I've had: in the government, at places like Kenna Security, at places like Mandiant. Uh, I go into my experience at MITRE. So at MITRE, I was on the ATT&CK evaluations... uh, sorry, I was on the ATT&CK leadership team. 
 

I worked alongside evaluations, I built Caldera, doing all that work. There were a lot of interesting behind-the-scenes decisions, and I go into that in the book. Um, and the reason I go into a lot of detail in a lot of cases is to highlight the connective tissue behind a lot of those experiences, and how I've kind of reached the conclusion around continuous security testing being that new frontier. 
 

And so that's what section one is all about. Section two of the book is really where a lot of the meat happens. That is where I lay out the groundwork. I've actually structured it kind of like a thesis, if you've ever looked at a PhD program; it's laid out in a similar way, where I give an argument for continuous security testing. 
 

The pros, the cons, the reasons against it, I go into all of that. And then I go into what kinds of attacks need to be part of continuous security testing. How do you determine what tests to write, and where do they come from? I go into things like surface area. Where are you testing? What are you testing? 
 

How do you decide that? And so forth. And then I go into section three, which is an architecture tour. This is where the irreducibly complex systems get broken down into seven subcomponents that I would consider an architecture appropriate for continuous testing. So that would involve things like: hey, you need agents or probes that can run the tests. 
 

You need tests themselves. You need a planning engine that is capable of making decisions about what tests to send to which endpoint at what time. And so you need all of these different components, which can be looked at and isolated individually. And I go into each one of those and describe the component. 
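
As a loose illustration of how a few of those subcomponents might relate, the contracts could look something like this. All names are invented for the sketch; the book breaks the architecture into seven subcomponents and covers each in real depth.

```python
"""Loose sketch of three of the subcomponents: tests, endpoints, a planner.

All names here are invented for illustration, not the book's architecture.
"""
from dataclasses import dataclass

@dataclass
class Test:
    technique_id: str   # e.g. the ATT&CK technique the test exercises
    platform: str       # which endpoints it is valid to run on

@dataclass
class Endpoint:
    hostname: str
    platform: str
    last_tested: float  # epoch seconds of the most recent run

class Planner:
    """Decides which test goes to which endpoint, and when."""

    def __init__(self, tests):
        self.tests = tests

    def plan(self, endpoints):
        # Simplest possible policy: least-recently-tested endpoints first,
        # matched against platform-compatible tests.
        ordered = sorted(endpoints, key=lambda e: e.last_tested)
        return [(e, t) for e in ordered
                for t in self.tests if t.platform == e.platform]

# Example: one macOS-targeted test planned across two endpoints.
planner = Planner([Test("T1059", "darwin")])
queue = planner.plan([Endpoint("mac-01", "darwin", 0.0),
                      Endpoint("mac-02", "darwin", 100.0)])
print([(e.hostname, t.technique_id) for e, t in queue])
```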
 

And then I actually give a lesson learned that I went through building components like that in the past, to try to make it a little bit more interesting. And then the conclusion section is one of my favorites. For the conclusion section, I'm actually crowdsourcing with a number of folks in the security community that have really interesting stories to tell. 
 

And, um, I've already got two conclusions written for the book, which sounds crazy, but it's probably going to end up being six or seven in total. And it's cool: if you have a copy of the digital book, you get these updates automatically. As conclusions come in, the digital book gets updated, and the conclusions get sent to anybody that has a copy. 
 

And what these conclusions contain are the perspectives of interesting people in security, and how they see the future of security testing in particular: where it's going, the importance, the future, and so forth. So, a few really interesting names: people that are working in places like SOCs, people doing threat intel and security engineering, and CISOs, trying to get different perspectives to kind of round out where people think the industry is going. 
 

[00:32:40] Sean Martin: I love it. So that's probably the audience as well, then, those four roles.  
 

[00:32:45] David Hunt: Exactly.  
 

[00:32:46] Sean Martin: Um, I'm curious. I always like to do one more question; when Marco joins me, my co-founder, he says: Sean's got one more question. The thing that's sticking in my mind, so people listening... I mean, I've done testing of security products at Symantec for three years, and setting up a testing program is fantastic. 
 

It's pretty significant, and I would imagine it's no small feat for an organization to set up a security testing program themselves. So for folks thinking about this, um, actually, I have two questions. What's maybe one of the first things they can do to look at their current program to say: I'm doing it right. Or: I haven't started. 
 

Here's how I can start.  
 

[00:33:35] David Hunt: No, that's the right first question. The number one thing is: have a defensive security product of some type, uh, covered. Like, that's number one. It sounds like something you wouldn't have to say, but it's actually shocking how many environments you go into that have no defensive protection, meaning no antivirus capabilities, no EDR running, and so forth. 
 

So the first thing is: make sure you have some type of defensive product in place that can do endpoint protection. The second thing that you want is something that can validate it's working. And to me, this has been one of the most astonishing things that hasn't happened as a norm yet: people are used to buying the defensive product, and then they have no idea if it actually works. 
 

And so the second thing is: get something that can validate it's actually working. People can do that in various ways. You can do it manually, through like a red team or a purple team, or you can do it in a repeatable way, like a BAS (breach and attack simulation) vendor, or you can do it in a continuous security testing way, which is kind of the most automated you can go. 
 

And so that would be my advice for starting off.  
 

[00:34:36] Sean Martin: Love it. And another question I either ask or comment on as part of the show, and I'm going to do it here as well. Because clearly, analyzing your current security posture in a continuous manner will give you an opportunity to create, I'll say, a better mousetrap for your security program, right? 
 

You can fine-tune your protections and your response and all that stuff. Where I like to take it next, and I'm going to pull on the irreducibly complex system point here, is where teams are constantly doing something because of the way the business process was built. And I'm wondering, do you see, or have you seen, any impact on how business systems are built so that, and I'll go back to your CVE thing, 
 

they're not throwing a bunch of, uh, bones out that teams have to then patch, wasting time patching instead of responding or building better protections?  
 

[00:35:38] David Hunt: Absolutely. I'll give you a good one, because it's actually impacted us internally at Prelude, which I think is pretty interesting. 
 

So, um, one of the things that I've kind of seen through continuous security testing, and just the experience of doing it in so many different realms, has been the insecurity of computers as we know them. So, like, actual laptops, actual servers, the things that we have taken for granted as the devices we work on, right? 
 

Businesses run on laptops and desktops and that type of thing. Um, what I've kind of recursed down to is: these devices actually try to balance two things. They try to balance security with privacy, and they're at what I call in the book the equilibrium of those two elements, security and privacy. 
 

And because they try to be in the middle of those two, they can't do either really well. They can't perfect either one of those, because the operating system, for example, is far too forgiving of the user. The user can do anything, which makes them hard to secure, right? And so, because of that, you can reduce or improve the security only to a certain extent. 
 

You cannot bulletproof an actual laptop or a server, because there's too much surface area. Now, because I've gone through all that recursion and learning, one of the things we've actually done about it at Prelude itself is we're in the process of getting rid of all of our company laptops, desktops, all of our computers, which sounds crazy. We're switching the entire staff over to tablets, Chromebooks, and other secure-by-design devices that are running operating systems that were built and designed with security first. 
 

So instead of an operating system that was designed in the 1950s, consider an iPad, designed around 2010. It has a completely different architecture, with sandboxing and protections in place, that makes it kind of a dummy-proof scenario from a security perspective. And so it's impacted our business to the point of shifting everybody over, including software engineers, designers, front-end engineers, 
 

legal, marketing; it kind of runs the whole gamut.  
 

[00:37:47] Sean Martin: I love it. It's interesting. Some folks may know I worked for Good Technology for a short stint. And, uh, the whole goal there was to move everything to the mobile environments, right, and in a secure fashion, of course. BlackBerry ended up buying them. 
 

Um, yeah, I think what I found challenging a few years back was just the interoperability of apps and transferability of data in those closed systems. Yep. And I'm sure things have moved a long way since then to enable streamlined business processes, but still do it in a secure way, and in a container 
 

that's built by design to be secure. So,  
 

[00:38:31] David Hunt: But without continuous testing, we never would have come to that conclusion. We had to kind of go through all the testing, and then we hit that light bulb moment of: wow, the OS actually isn't as secure as we thought it was. 
 

[00:38:42] Sean Martin: I love that example because you probably don't have a whole team dedicated to patching.  
 

[00:38:47] David Hunt: No patching at all. Everybody has auto-updates enabled on their iPads and Chrome OS.  
 

[00:38:53] Sean Martin: That's a fantastic example. Well, David, it's a pleasure to meet you, a pleasure to chat with you. I'm thrilled to have learned about your experiences and your view on continuous security testing. 
 

And, um, yeah, I hope everybody grabs a copy of the book. I want people to read it. I think there's going to be some good things in there for teams trying to figure out: are they making the right investments, doing the right things for their program to protect the business? The book is called Irreducibly Complex Systems: An Introduction to Continuous Security Testing. 
 

I got it all out in one straight sentence there. Um, we'll include a link to that, a link to David's profile, and any other resources that David thinks are useful to help folks learn from this conversation beyond what we talked about today. So thanks, everybody, for listening, and thanks, David, for joining. 
 

We hope to have you on again soon.  
 

[00:39:46] David Hunt: Thank you very much.