Redefining CyberSecurity

The Great AI Debate: Does It Belong in SIEM? | Dissecting the Impact of AI on Modern SIEM Solutions | A Conversation with Mick Douglas and Dinis Cruz | Redefining CyberSecurity Podcast with Sean Martin

Episode Summary

Dive into the debate on the role of AI in Security Incident and Event Management (SIEM) systems with experts Mick Douglas and Dinis Cruz in this episode of the Redefining CyberSecurity Podcast with Sean Martin. Witness the gripping discussion on the potential advantages, the looming issues around trusting AI, and the significant computational costs tied to the implementation and maintenance of these AI systems in SIEM.

Episode Notes

Guests:

Mick Douglas, Founder and Managing Partner at InfoSec Innovations [@ISInnovations]

On LinkedIn | https://linkedin.com/in/mick-douglas

On Twitter | https://twitter.com/bettersafetynet

Dinis Cruz, Chief Scientist at Glasswall [@GlasswallCDR] and CISO at Holland & Barrett [@Holland_Barrett]

On LinkedIn | https://www.linkedin.com/in/diniscruz/

On Twitter | https://twitter.com/DinisCruz

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

____________________________

This Episode’s Sponsors

Imperva | https://itspm.ag/imperva277117988

Devo | https://itspm.ag/itspdvweb

___________________________

Episode Notes

In this episode of the Redefining Cybersecurity podcast, Sean Martin is joined by Mick Douglas and Dinis Cruz to delve into a debatable topic: The role and effectiveness of Language Model (LLM) AI in Security Incident and Event Management (SIEM) systems.

Mick, with a rich history in cybersecurity, contends that while AI has its place, he doesn't believe it belongs in the SIEM itself. In contrast, Dinis cites the potential of AI to make SIEMs more productive by cleaning up data, reducing noise, and improving signal value. They discuss the issues of handling vast data sets, the potential for AI to help identify and manage anomalies, and how to create learning environments within SIEM. However, concerns were also raised regarding false positives, trust issues with AI and the significant computational costs to implement and maintain these AI systems.

Key Questions Explored:

___________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

___________________________

Resources

Inspiring LinkedIn Post: https://www.linkedin.com/posts/mick-douglas_first-let-me-be-really-clear-for-the-near-activity-7146143942739124224-a4vl/

Inspiring Twitter Post: https://twitter.com/bettersafetynet/status/1740370001973154010

___________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring an ITSPmagazine Channel?

👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] Hello everybody, this is Sean Martin, your host of the Redefining Cybersecurity podcast here on the ITSP Magazine Podcast Network, where I get to talk about all things cybersecurity in the business. Hopefully, uh, in a way that we can operationalize all this crazy tech that we've, we've built and spent money on and, and, uh, in, in the guise of, or with an aim to actually Not just protect the revenue, but actually generate growth for the company as well on a safe way and, uh, many aspects and facets to talk about there today. 
 

We're going to be looking at the SIM security event and incident event manager. I guess I should probably know that the correct order of that having built one for a big yellow company many years ago. Um, but more specifically, we're gonna, we're gonna look at the role of AI within it. The incident response process and more specifically within a sim [00:01:00] and that assumes it belongs there, which one of one of my guests suggests maybe it doesn't. 
 

The other might disagree. And uh, so I'm thrilled to have Mick Douglas on and Dennis Cruz on. Thanks guys for joining me. That's  
 

Mick Douglas: great  
 

Dinis Cruz: to be here.  
 

Sean Martin: Yeah, this is going to be fun. A shout out to Casey Ellis. Uh, he was hoping to join us. He's caught up on travel. I'm sure you'd have some good insights. Uh, maybe, maybe I'll throw some comments in on uh, on the social media when this comes out. 
 

Who knows, we might not get to all of it in the time we have allocated. We might need a second episode. So we'll see how this goes. Um, this is inspired by a post that you put on. I think I first saw it on Twitter and then saw, also saw it on LinkedIn. So, two places, and I think the, the interaction mostly came from, uh, there's a lot on both platforms, but the interaction I saw that, that prompted me to say, let's talk about this was on LinkedIn. 
 

So I'm going to include links to both of [00:02:00] those. But Mick, you put this out there, uh, and it basically says, LLMs, AI don't belong in the sim and boy, the, the fireworks started to flow. Um, take a moment if you would, please, to, uh, share a bit about who you are and, uh, some of the things you're working on and then, uh, and then very briefly why you put that out there. 
 

And then we'll go to Dennis for, for a quick intro from him as well.  
 

Mick Douglas: Sure. All right. So, Hey everybody, my name is Mick. I do a lot of defensive hacking. I. Helped stand up MSSPs. I've been a client of MSSPs. I've been using sim since before we called them sim back when they were called smart log aggregators in 2007. 
 

Um, I remember. Q1 labs back before IBM bought QRadar. So I've been doing this a long while, [00:03:00] and I want to be clear. I think that AI belong in the SA, but they don't belong in the SIM itself. And specifically, I see a lot of people talking about using LLM. For various logs, and I, I'm really struggling with some of the stuff that I'm seeing people do, and unfortunately, a lot of it boils down to the fact that they don't know the environment, nor do they want to spend some time understanding the environment, so they just sprinkle this magic AI pixie dust on it, and everything gets suddenly better, and the short Answer is that it doesn't because IBM had the Watson, the AI plugin now for 10 years, and it's actually not a bad plugin, but for the level of effort it takes to set up and maintain, you can be doing a whole lot more interesting work in terms of.[00:04:00]  
 

Configuring your, um, doing things like just basic stats. I mean, my goodness, in that thread to keep on saying, sorry, I'm having an alert go off here that I got to turn off. Um, IR alert? No, no, no.  
 

Dinis Cruz: That's your AI in your sim going, oh, oh, oh, careful.  
 

Mick Douglas: That's the AI sim. Don't, don't talk bad about, you're going to be the first against the wall. 
 

But, um, you know, AI, AI can help, but I don't think, again, LLM are not the tools for this. You'd get better return on your effort for doing things like just standard deviation analysis, least frequent occurrence, most frequent occurrence. These are set theory things. You don't need AI for that. I'll be happy to talk about where AI works great in terms of like helping build out [00:05:00] your, uh, Knowledge base, like my God, I love using LLM for like, Hey, here's a couple bullet points of what we saw and then it puts it into a narrative like that's awesome. 
 

Use that, but all too often I keep on seeing people just saying, Oh, AI and all your threat detection becomes easy. It's not true. It's not neat.  
 

Dinis Cruz: Yeah. So that, so a lot of great comments in there. So first of all, I see I'm Dennis Cruz. I'm, I actually sit on a whole bunch of different fences these days, which really cool. 
 

So I'm, I'm the CISO of, uh, Ralph Holden Barrett, sort of UK retail organization. I ran a, you know, an amazing team. You know, we, we run our own sim. We actually had an outsource for a little while, but now we run most of it. Although we work very close with a whole number of partners that provide us a little bit of that sort of extra inside of things. 
 

I also work for a company called GlassWall, which basically. Makes file safe at scale for government. So, you know, it's quite interesting because it's, you know, we feed in there. We feed into sims in that [00:06:00] world. Right. And then, uh, I'm also playing around with, uh, with the idea of creating a bot for board members. 
 

And I call it the cyber boardroom and, uh, you can play around with it, it's called the cyber boardroom. com. But that helped me a lot to understand how it works, where it adds value, et cetera. So the, the first thing I want to say is that, you know, in a weird way, I think, and by the way, just to clarify. I would completely agree with you a year ago, before Gen AI, before, you know, chatGBT, because I was the guy that would go through for a sec and say, anybody here talks to me about AI, I'm walking out, right? 
 

Because everybody said they did AI, none of them did properly. At best, they had machine learning, right? And I always felt that you guys much better spend the time doing exactly what you described, clean up the data, make sure it was properly done, filter it out. You know, every Every vendor, in a way, and I get it because they, in a way, I think the market failed itself. 
 

I think the market went, tried to buy silver bullets. So the, [00:07:00] so the customer, sorry, tried to buy silver bullets and the market go, Hey, cool. Like since you're buying, here's a bunch of them, right? So the people that want to sprinkle AI on top of a gen AI specifically, right? On top of the existing solutions are the same ones that six months ago bought XYZ, a year ago bought the other thing, two years ago bought the other thing. 
 

Probably they are the teams that have. 30 tools in their disposable, right? Or 20 or whatever, right? And they barely, they can barely work. And I agree, you know, in that world, to be honest, they probably even need more Jenny eyes and the other crowd. Right. But I, for me, the, the, the big moment was when I realized that the new generation, and by the way, I still have fuzzy word because the thing doesn't think right. 
 

It's just very clever. It's a very clever Google, but understands context. Even at the level that we don't fully understand how it works, which should make you think that, hold on, we don't fully understand how actually chatGBT really, really works, right? But it's not sentient, right? It's nowhere even close, right? 
 

This is about very good context. The key part for me was when I understood that [00:08:00] It's the thing, right? Actually understood context and understood meaning. And more importantly, you can give it lots of complex data, and it produced very accurate analysis of that data. The problem with most NLMs, in fact, I think all of them, is when you don't give it data, and then you ask it things. 
 

And then he makes up stuff. Right. But , this guy, I said he was very funny. He says, look, people hallucinate right, left front and center right? Like the data is the way Hallucinates left, front and center, right? So it's not that the LLM is doing anything new, it's just they're doing at a bigger level, the, the, the, the key points that I feel you made, which is we should be dealing with the basics, right? 
 

I kinda have a problem with the concept basics. 'cause if there were basics, people would do it right? And the reality is that we have a lot of bad data in. A lot of mix, mix, mix, mix. And then we get some meaning out. Right. So I feel that the LLMs, the ability to have an engine that can understand data, that can understand context by [00:09:00] a prompt, produce an output. 
 

Can basically help in all sorts of areas in that in that chain, right? And the key part here, and this was a big paradigm for me shift was to to understand that what we're doing is we're moving a lot of the business logic into a prompt, right? So, so let me let me give you an example of an area that I feel we're in SIM, right? 
 

And by the way, I'm a massive fan of SIM. We're going to eventually talk a little bit more about. Where it can play a massive role in the business, right? But one of the biggest things I think everybody talks about was, you know, a lot of data coming in, right? Too much stuff going in, too much noise into the system, right? 
 

So I feel that that's, for example, the first place where LLM can make a big difference. Now, this is where I draw the line. I'm not saying LLM. You take it out and put into the same. That's not what I'm saying, right? I think that's very dangerous. I think what you can do right is to use the same to figure out what is the [00:10:00] best ways to clean up this data, right? 
 

What is the best ways to analyze this data and, for example, build a parser that allows you to go from that amount of data to this amount of volume of value, right? So you go from that amount of data, that amount of noise, this little amount of value, how you go from there to that amount of value that you feed into your same. 
 

So I think the first area I will argue, but don't go in bits, right? I think LMS can can make a massive difference in how you understand your data, in how you understand what's coming through, in how the humans and the engineers, right, who making the same better can for the first time had a fighting chance because they could, for example, Build custom parsers very easily. 
 

They can, for example, understand data structures much more easily because they have the inputs and the outputs. So where, where do you sit on that one? Right. So using gen AI, right. And I think let's distinguish all the other AIs that, like you said, but then before, because they were [00:11:00] not really AI, there was machine learning on steroids, right, that we, we can now describe intent, right. 
 

I can say this is a feed from that type of environment, with this type of security requirements, with this type of risk, that it has this type of structure, here's what it looks like, here's what I would like to know of it, or even better, what are the things that is very good at this, can you help me go from megabyte to megabyte, to megabyte to 2K, Of data, and then you feed that to your sim. 
 

Mick Douglas: It's not in the sim. Let me be clear, you can do that. You can. My question would be, what is the more practical approach for most organizations? Um, for quite some time now, what we do with our clients when we're tuning their sims, we will turn on query logging. And we will look to see what are the time [00:12:00] horizons that they search. 
 

What are the data sets that they search? What are the fields that they're searching on? And then, uh, after, you know, 30 days, we have a very, very strong understanding of what they're using in their data and, um, You could do that with an LLM, but you, I don't think you need to. Um, and what we would then do is after, you know, another, another month, another, like even a quarter of monitoring, then we'll. 
 

Have confirmation that this is the data set that is needed and then purge it either further up at the, you know, log source or somewhere along the way before it gets into the same because I'm totally in agreement with you that there is a lot of garbage and So here's  
 

Dinis Cruz: the thing, right? You, you could take what you just described, right? 
 

And you could build that using [00:13:00] an LLM in the middle, right? And I want to clarify that most of the things I talk about here, I call it human. Enhance or human owned LLM results, right? And, and it's not about the black magic of dropping a chat GPT in the middle and going, Wee, it's gonna work, right? It's, it's about, so it's, for example, it's taking what you just described. 
 

And making it in real time and doing it because this is interesting, right? You said, I do this, but your time frames were three months to six months, right? I would argue that I would like to do what you just described daily, right? And this is this is the difference, right? The difference is you can now build that logic in a prompt in a series of prompts, right? 
 

That literally start to analyze that data. And then what you can do. So this is how I would design the prompt for that. I would design the prompt to say, you know, chat GPT, blah, blah, blah. Here is the queries that the users are doing today that they did yesterday, right? Here is what it seemed. This is the, this is now.[00:14:00]  
 

Uh, 20, 000 words analysis, right, of what a good sim will look like or what the data feeds will look like. This is, this is what good looks like. So you can create a prompt where you can say, so what I want to do, here's the current situation. This is what just happens. Here's the baseline. Here's what they have, right? 
 

And then you run a query and going, right. What could they add to their sim, right? What are the, what is the two things they can do in the next couple of days that will make those queries. That they just did a more productive and actually, what, what can you delete from the existing data set that they have, which can also feed to the prompt? 
 

So you can see that what I'm doing here is I'm packaging a huge amount of business intelligence right now. In a way you have in your head. But at the moment, I would argue that you when you go to organizations, you don't scale because you don't leave behind that thinking. So you do what exactly I described here and you come in the end with the recipe. 
 

You say, Oh, what you need to do is delete that data of that feed, et cetera. What I'm saying is let's leave that [00:15:00] prompt that you created in your head behind so that they can do that. And then we can keep doing the more, more, more interesting stuff.  
 

Mick Douglas: So there's a lot to unpack here. And again, what you're proposing is absolutely doable. 
 

And I'm just saying it's not needed. So. All of that logic that I said about analyzing and looking at what log logs are being looked at. That's a script that is a leave behind artifact that is done daily. In fact, for most of our clients, what we will do is hourly summaries. So the, I hear what you're saying. 
 

I, and I, there are potential use cases. One that you said a little, um. Bitigo was on, uh, parsers. That is a fairly interesting use case. But for, um,  
 

Dinis Cruz: Follow your use case, man. I don't think that [00:16:00] you But let's follow your use case. That script that you created, who wrote it?  
 

Mick Douglas: So that's the script that my crew wrote, and we have different implementations for the different, uh, Zim platforms that we support. 
 

Dinis Cruz: That script is very tied to the inputs and outputs. It has to be, right? By design, right? In the way it works, right? That script will break. As soon as anything new comes along, and you won't have the flexibility. My point is that you should be, that you should not be giving a script. You should be giving a prompt that creates the script, right? 
 

That evolves the script, that maintains the script, right? In a, also in a continuous way. And what, and the reason is. Your script won't have intent. Your script will not  
 

Mick Douglas: have intent though. You keep on assigning these terms to AI that doesn't, it doesn't follow. The AI and LLM does not understand the prompt and it doesn't understand the response. 
 

What most, [00:17:00] I'm assuming. Just for the sake of argument, you're talking about vectorized databases. Yeah. What AI are doing are using proximal vectors to understand how close something is to another thing. Yeah. Understanding at all. And the problem that I worry about is that it's a lot easier for somebody to. 
 

Who's up a prompt than it is to have a script and you, you know, I want to push back very firmly against the notion that the script somehow is fragile. The script is consuming the query logs that come from the SIM. So the vendor would have to be the one that changes their, what their query logging structure is, or somebody would have to turn off query logging. 
 

I, I, again, I think AI are very cool. And yes, you can use AI in the way you're describing. I just feel that like what you're doing is. Using a Formula One race car to go down the street [00:18:00] when a regular family car would be more than sufficient.  
 

Sean Martin: Let me ask this question. Because I want to bring it. We're kind of, which I love, by the way, and I'm sure we'll get back here very quickly in into some of the weeds here but I want to look at. 
 

The broader definition of a SIM and what are we pulling in from which sources? What are we looking at in that data? And because I can, uh, I can envision a world where we have data from, from a attack, uh, attack paths like MITRE and that kind of thing, and then threat intelligence feeds and other things that we can continue, we, it's AI potentially could continue to. 
 

Analyze and quote unquote learn from and do it in a way where a query doesn't have to take place, right? The machine could [00:19:00] constantly re evaluate that script against new information. So, I guess what I want to ask is, what is the definition of a sim in its current form? What, what it needs to be? Does it need to broaden for it to become available for your LLM use cases? 
 

Mick Douglas: What do you think? I think that the, and this is something that I think Dennis and I are on very tight alignment is I think that industry wide, we need to start thinking more creatively of how we use our sim. One of the things that makes me really sad is, you know, back when I did a business analysis before I became a consultant. 
 

The amount of data that a sim has, most lines of business would do shameful and horrible things to have that sort of visibility. And for a lot of my clients, I love to use a sim in a [00:20:00] very novel way. Like, um, one of my favorites, if you do any kind of retail or transactional stuff on your website. Your SIM is going to be monitoring your production web server, like, or at least I hope to God it is. 
 

And the same logs that you're consuming to see attack indicators. Can also be used to show health of the business. And for many of my clients, if like, let's say they've got a shopping cart, if too few elements are going into the shopping cart and being bought, an alert gets generated, but not to the soc. 
 

Instead it goes to the marketing team and says like, Hey, start driving eyeballs to this environment and like the, the sim with the visibility that it has, we should be squeezing that thing dry. A meta way of doing it is asking an LLM, Hey, here's the telemetry we have. What sort of, you know, [00:21:00] uh, types of value add, can we do with it, but that's not inside the SIM that's about the SIM. 
 

Dinis Cruz: That's part of the SIM infrastructure. So I think we've found agreement. I, I would even double down. And I think SIM of all the stuff we're doing security is probably one of the biggest misses missed opportunity that we have, right? Like in security, we have. The thing that is unique to cybersecurity is that, and I would argue when a good security team operates in an effective way, is that we're the only team that can speak to everybody, get data from everybody at all levels of the organization, all the way from strategic to business, to executive, to functional, every single team. 
 

And by the way, organizations don't speak to each other, right? Like organizations are these massive schizophrenic entities, right? That half of them don't like each other. Half of them hate each other. Even when they like each other, they don't collaborate because the things are sometimes engineered in ways that the rewards. 
 

system is different. We in the middle are the ones have to pick up the pieces because if there's an incident, we almost like we don't, the attackers don't care, right? The attackers don't care that this division doesn't talk to that division, right? They'll go right through it. So we have an insane asset, which is we can [00:22:00] get data from everybody, right? 
 

And even more interesting, we can process it. And I'm going to say out of the bat, if a team doesn't have an engineering function, then they might start there. Like literally, like, you know, that's your biggest problem. Forget everything else. You need engineers, you need developers, you need to build stuff, right? 
 

You need to be able to do that. If you don't have that, it doesn't matter how much you spend on tools, right? And that's another topic for another thing. I really want to pick up your point of what the sim is, right? I actually think that there's multiple layers of sims, and I almost view this in a three dimensional, four dimensional way, right? 
 

Like, I think there's the first, maybe you can call, The virtual data lakes, not data like I don't like that term you have almost like the sum of every raw piece of data that you have access to, right? And you collect it. So that's fundamentally is your potential same data, right? And that should be massive, right? 
 

And I'm a big fan of taking that and dumping into a storage, right? And then coming out of it, right? So that's the first I would say same element that that can come from it, right? Then what we do is we take that and we [00:23:00] feed it. to environments, which is what we tend to call Sims. But I, the reason I want to make a distinction is because I feel that we should, we should almost have free Sims in sequence. 
 

We should have the top big one, right? Who has the potential of all the data, which can be used for some type of analysis. Then you have a created one, which is already a massaged version of this, where you, you go from this amount of data to that amount of data. And that already has a lot of signals. And I think what we should even do next is take that and process it even further. 
 

further to create what the analyst has in front of it. And this is where I feel that the analysts and the data that you should see on a day to day should be a very small subset of your SIM data set, right? And should be very focused as making sure that you have high signals, but it should have the ability to load data on demand. 
 

So it's kind of like this idea that you want to zoom in. And as soon as you mean, you go, Oh, give me a second. Let me load up. Okay. Now, is this what you want? Oh, let me load up. And then you have the ability to go back in time. And what's, and the bit I don't think a lot of teams do is that these [00:24:00] transitions have to be codified because you need to be able to view it as code. 
 

So you need to be able to almost lose the value in your final data set, because that final data set is just a transformation of what it is and, and actually, and the point that, uh, makes that about the consumers. I think the biggest mistake we do is where we don't open up the same data to other teams. 
 

And this is one of the challenges I keep giving my team is that, and it's not easy because you need to solve a lot of things, right? It's very easy when you're seeing is a walled garden and everybody that's in there is very highly trusted, but it means that you cheated because you haven't solved the problem of how do you share data in a secure, in an effective way. 
 

But the reality is that if the people paying for the same. I say architecture, right? Foundations is a security team. That is a failure because the marketing team, the business team, the development team, every team in the business should get value from your sim. [00:25:00] Because like Nick was saying, your sim has all the data that they need, right? 
 

To do a lot of the analysis. And the interesting thing is It can do a lot of correlations because what the seam should be doing and I saw these guys once describing amazingly In a way the seam should be doing a judo move So it's not about one system to rule them all which is again the mistake that everybody does because it's a bit like hey We have five standards. 
 

Let's create the big standard and now we have six standards and then let's create the uber style You have seven so I also think that the clever seam is the seam that In a way, leverage other data lakes, leverage other systems, so it doesn't go against the weather. Oh, you, you, you have Elastic, you have, you know, you have Datadog, you have this, you have this, you have Chronicle, cool. 
 

So you, you feed that in and you build connectors. So you can almost, from your sim. It's also you shouldn't care where the data come from. What you care is that you can connect the dots, right, in an effective way. The problem, I think, [00:26:00] and I, when I, when I drill down, I think there's two fundamentals, is that we don't expose the data to others, so we don't get more eyeballs. 
 

Like, you know, you know, Mick was saying that. You know, actually, you might have the marketing team picking up an attack because they don't. They didn't want to pick up an attack. They just realized that there was an impossible journey, right? And selling what the hell, like the profitability just went off the charts or down the charts, right? 
 

And they think something is wrong and X. Oh, shit. That was an attack, right? I think that's one element. I think the other element is that We create puppies, right? Or, or, or, or, or pets in the whole cats, pets versus cats analogy, where most of our sims are the gigantic pets, right? They are one offs, right? You go to any team and you say, if I deleted your entire sim, yeah, everything, configurations, blah, blah, blah, how fast can you rebuild it? 
 

And I can actually argue that most of them cannot even get Anywhere close the way I know. And that means that the freaking Jagat in pet, right? And that means that they have no change control. There's no [00:27:00] automation. There's everything is really dangerous. So the amount of innovation that happens in that pet is very limited because also it's very fucking dangerous, right? 
 

You do the wrong query, you blow up half your data, right? So, so that's why I like this going back. I like this principle where you have your data sources throughout organizations. Some you collect, some you get, then you have transformation, transformation, and then you democratize the assets. Right. Access to it. 
 

And I, and I go back that gen AI is now a key pool of making this happen because he allows workflows that before were very, very hard to do. And that's my vision for where the SIM supposed to be, right? It should be.  
 

Sean Martin: So I want to pull on this and maybe make you have, may have some thoughts on this, of course. 
 

Um, because before we recorded, I brought up the idea of the team and as a team. Capable of utilizing a SIM. And I think, I think the team is capable in the pure security operations perspective, when you describe it as you [00:28:00] just did, Dennis, and pulling all these data points together and all these systems together, there's no way a human is going to connect all those dots. 
 

And I think we'll need, 
 

Mick Douglas: you know, think of it this way. Even the most complex river system just starts with a single spring. And, um, what a lot of organizations try to do is they try to solve the entire river basin. And instead what you really should do, especially if you're just starting on your SIM journey or You find that your SIM deployment is like in a really weird state. 
 

Let's be like, you know what, like, what's let's just drop everything. Forget the noise. What's one thing that if we really nail, it makes our life just a little less painful. Just, you're looking to make it a little better, find that one thing and then start walking that log through the system. [00:29:00] And you're going to learn a ton about how things are set up. 
 

You're going to learn a ton about how other. Data sets should also be set up and you're not, um, you know, the best SIM deployments that I've seen, they try to have a continuous improvement model, but their continuous improvement goals are fucking most. Like, serious sim players are looking for 1 percent improvement per week, if they're very aggressive. 
 

Some as light as 1 percent a month, if they're very mature. But if you think about that Like 1% a a week, you're looking at about a 50% performance improvement over a year. Mm-Hmm. . Yeah. Which is insane. So you need to start reframing what a SIM is and a big problem. Dennis, I, again, I'm in in big agreement with you [00:30:00] on your other points, but one that I also need. 
 

To add, or I would like to add to your list is that the way we're selling SIM solutions is wrong,  
 

Dinis Cruz: but that's why you need the Gen ai, man, because you need, 
 

Mick Douglas: it's, it's not the gen ai. It's No, it's a, it's a turnkey ready ecosystem. 
 

Dinis Cruz: No, no, no, no. It needs a lot of tuning. No, he's the point. You're making the selling is not just selling is the usage. 
 

And this is where I've been playing with the cyber boardroom, where the big thing that I realized that for the first time I can scale is how do you create messages that are target to each person, right? So, for example, when you say we need to sell the same will actually What we need to do is we need to make sure the output of the same is understandable and is in context to the recipients. 
 

So that means if you think about it, that if I ask you to say, Hey, I want to take the data from the same and I want to create it that is target to this person, that team, that language, that culture, that environment, right? That workflow will be fucking good. You crazy, right? There's no possibly can do it. 
 

That's not possible. Right. It's possible because again, you can [00:31:00] describe that in a prompt and then the communication can be done. Right. And you can still own that transformation. But I want to I want to go quickly back to the point I shouldn't make because I think that goes to the heart of all this, right? 
 

Is I think explainability. Right. And I would even argue version control. It's super important. Like the idea that you have something over there. It doesn't matter a sim, a person, a tool, a script that data goes over the fence and they come back with a solution. I found it, right? Like, it's a bit like you send the soldier and the guy comes back with the fricking. 
 

Yeah, I found it, right? I found, I found the magic thing. That's the opposite that we want, right? The thing about Sims and this is about scalability is how do you, it's about force multipliers, right? How do you get it. A new engineer, a new person to have super capabilities, right? That is able to operate at a massive level. 
 

And the way you, the only way you do that is you kind of take the example that Mike, Mark, Mike was saying, but I will describe it like explainability of what happens is about increasing abstraction layers. Right? So more and more abstraction layers. [00:32:00] But what it means is every abstraction layer is highly documented. 
 

So what it means, it means that the analysis should get simpler and simpler and simpler and simpler the higher you go up and should require less and less and less domain expertise, right? To act on it. But what we need is to make sure that that whole chain is well documented. And I'm a big fan of trying to figure out how to do even with version control. 
 

So that means that what you don't have 
 

Sean Martin: I'm going to jump in here quickly, quickly. Because this, this goes To the opposite of what Nick just described, at least in my simple mind, mixing, find that important thing and trace it back down. Yeah. You're just describing all the feeds. I don't know. I'm just trying to think that I'm, I'm, I'm trying to be on the side of LLMs belong here somehow in that exploring things, discovering things that you might not naturally have here. 
 

Dinis Cruz: It's the jump, right? Like you, you cannot go [00:33:00] from this amount of complexity to that negative solution that that never works, right? And that's not what you want. What you want is a chain of analysis, right? And each analysis, because that's what we do, right? We take a bunch of feeds, we consolidate it, we create this, then we analyze, then we analyze. 
 

But that whole correlation, the whole abstraction layers has to be documented. And that's what we want to scale. And I think But I think the case you're saying is that starting one place, right? Starting one place and say, I want to make this better because the other really cool thing about our world is that we operate in real world stuff, right? 
 

So I'm a big fan of taking P3 incidents, running MSP ones or real world. It's almost like we know this happened, right? Look, the user told us that this has happened. Right now we couldn't see it. Now let's understand. Let's look at everything. We failed so that that one little thing that we know we missed, right? 
 

So it's almost like I like when you know the solution already, right? You know that this happened. You know that this alert occur. You know that this thing is there, right? Can you now just zoom [00:34:00] in on everything that is missing or is there right so we can get that connection and then that allows that have one thing. 
 

That you can then keep evolving. But I think the LLMs bring a huge amount of explainability into the loop when we get it there. 
 

Sean Martin: Mick, thoughts? I mean, I, I can keep going with, uh, my thought on this because I, I, I believe that if you start with something that you know you want to tackle. You're starting with a premise or an understanding of, of how you've done it already. And I guess what I'm trying to point to is that you might find something new in the data. 
 

By looking at it differently with the help of technology and more specifically, gen AI that may say that path, that starting point, isn't the right starting point, that path doesn't really matter. This, these are, these are really the things. [00:35:00] That matter to whatever it is you're trying to, trying to figure out. 
 

Mick Douglas: Yeah. And maybe I'm, maybe I'm arguing in a way that's unfair. What you're talking about is just the investigative process, whether it's AI or a non, like a, just like a, like a, a canned Yara that you run. It's looking for some anomaly. And what's more interesting to me. Is not when the AI gives me expected output or when the candy art gives me expected output. 
 

What I love is when things go sideways and like, why is this thing behaving differently and unexpected? And then you can start beginning your investigation because all we're looking for is anomalies. I don't really care what tool you're using. I just care that we can find anomalies. Like, here's another way to think of it. 
 

All the different data sets that we're dealing with are just slices, [00:36:00] right? Stacks and stacks of slices. And what you've got are two complex curve shapes. So certain occurrences happen, that's anomalies. Above that, that's alerts and you need to investigate those. The hope that a lot of people have is that AI can help define what those curved spaces are. 
 

And that'd be great if they could. The problem is that the LLMs are not aware of what controls you have. And if you have a good asset inventory, you might be able to train or build a model that's appropriate to your environment, but like out of the box, they're not going to know that you're going to have to do some real hellacious, um, work in order to have them understand what your environment looks like. 
 

And that, that's, that's where I think a lot of the, um. [00:37:00] Lessons that I learned the better way back in the LLM days when we first started doing things like Markov chains, we thought that was going to be an amazing win. It helped in a lot of ways, you know, um, you know, the machine learning is a good thing. 
 

AI is a good thing. It's just I contend that for most SIEM specific, we don't, for most organizations, we're not yet at a spot where they need, uh, Gen AIs.  
 

Dinis Cruz: Yeah, I would agree. But I also think that if people, again, want to throw all the data to Gen AI and get the answer, first of all, we don't have any, you know, that has the amount of capacity, right? 
 

I don't think that's the right way. What I the way I look at it is that I want to use Jenny. I to create to have the least amount of data in the same with the most amount of meaningful data in the same to be able to communicate effectively to be able to have explainability left front and center and actually to teach and train the thing that [00:38:00] remember that the biggest challenge we have is that There's that curse, right? 
 

Where you train somebody to be a SIM operator by the time they're really good. They want to leave because now they have a lot of great other skills, right? And that doesn't scale when it's a zero sum game. What I think is more interesting is to, again, use other gen AI capabilities to create learning environments, to create simulations. 
 

So, so you can do that. So for example, like it's very easy these days and assume that we're going to get better and better platforms to do this, right? So for example, I could take a dump from a SIM or the outputs and I could create a simulated game for a new. Cyber analysts, right? And say, by the way, this is the kind of alerts that you're gonna get. 
 

And you can create simulated environments much better than anything else we had before, right? You can even say, here's the expected answers. Now start asking questions and now teach the person, right? But teach them into context, right? So and that's what I feel that it's almost like it's more like You know, Jarvis, the good one in, in Tony Stark's world, right? 
 

It's kind of like, that's what it is, right? It's not the uber duper, like it's almost like we then asked Jarvis to [00:39:00] sing and dance and to compose things and going, well, the hallucinates, that's not what it is, is this is about giving a mini Jarvis to every single person, right? And have them that insane ability to suddenly be really good and have access to really good environments. 
 

And I even see this as. A generation of bots, right? So it's almost like, imagine when you start, you get given almost like, okay, this is level one sim, right? You get a very small bot, you get small data, data, learn. And then as you get better, you have access to more and more and more data sets, more and more complexity. 
 

And the key point I was trying to explain before, Nick, is that, I feel that a lot of the stuff that we do, we have the prompt in our head, but we leave behind the photograph of that output. I think what's really interesting is to start to leave behind the thinking behind it. And, and the power of the thinking I prompt is that is cheaper by orders of magnitude. 
 

To modify that or to adapt that right [00:40:00] then it is the code and the structure will leave behind right so you can think of this as a sequence of prompts and the beauty of this is that if you start to go into a world where you have prompt after prompt after prompt after prompt of every input and output is reverse engineered and version controlled, you can then. 
 

Every time there's a bad output or something that doesn't make sense, you can go back, right? And you can go, Oh, okay, where, where did my script fail or my prompt failed? And then you tweak it, right? And then you learn the ones that have high confidence, the ones that have low confidence. But I feel that that world allow us to get a huge amount of more value from the same, and then we can communicate upwards and downwards, right? 
 

In a much more effective, I would say. Language and contacts because again, the contact is a prompt, right? If you see what I mean, like, you know, when 
 

Mick Douglas: I follow what you're saying, and I, I even agree in a large part with [00:41:00] what you're saying. Um, I guess the main concern that I have for the foreseeable future is what you're talking about would be very computationally expensive and, um,  
 

Dinis Cruz: Those guys already solved the problem, right? 
 

Because if you think about it, the Gen AI crowns and they're losing money with it. But it doesn't matter. In fact, that's coming down very quickly, right? Well, I get that, I get that. And I've actually, that's the competition, expensive part, right? Is the larger, and I am,  
 

Mick Douglas: I am working with small AI. I've got a, um, a project that I'm hopefully going to be. 
 

Open sourcing here in the not too distant future, and it runs on a laptop that runs really nicely, but even that amount of chain of prompts to coin a phrase, that would be a fairly expensive, uh,  
 

Dinis Cruz: Look at that cool model. Imagine you've been able to describe. Imagine I'm telling this. You now have. A micro LLM SIM or agent [00:42:00] running on your devices, and you can describe it what you want. 
 

We know that far, man. We know that far. Apple is releasing a new thing with the open source. If you look at innovation on the open source LLMs, we're getting to the point where you're going to run it everywhere, right? And here's the thing I've been playing with, right? The interesting part, and Amazon actually had a good gotcha on this. 
 

Microsoft had a good gotcha with Copilot. Top marks for the folks who come up with that on that one, because it's a Copilot. But what Amazon, I think, would be right, is the concept of foundational models, FMs. So I actually think the name of the game is not on customizing models. I actually think customizing models is fucking dangerous, because we still don't understand how the hell that works, right? 
 

I think what you want is to have foundation models that become really good, that you understand very well how they behave, how they operate, what they do, and even the cost matrix of those models, and then you build prompts and workflows on top of it. And in a way, what you want is to run the cheapest possible model. 
 

[00:43:00] To achieve the same goal that you want, so let's say that you want to have something that analyzes, you know, Splunk logs or, or, or, or, you know, what's it called? Peacocks or, you know, the output of a freaking AV or, or, or EDR or whatever is this scale, whatever tool you've got, right? You know, and you want them to analyze it eventually should be able to say, well, I don't need GPT four for that. 
 

I probably need three. I probably need two. I probably need llama. I probably need this guy here and this. And then eventually the model comes small, small, small, smaller until you find, you know, the sweet spot, right? And that's why I'm saying, like, so the foundational models becomes that ability to supercharge, right? 
 

An analysis where And this is where I want to clarify, when I say intent, I'm not saying the intent of the AI, I'm saying the intent of the analyst that wrote the prompt. Right? So when I write a prompt, I have an intent, right? And the intent might be, I want you to look at this, look at that, collect this, map this, here are the skulls, now give me these outputs. 
 

That's what I mean by intent, [00:44:00] right? The, the GenAI is just a great way that I can explain these in English, or in JSON, or in whatever format I want. That before was, it's like before I needed half a million pounds of engineering to do that, right? And every time I want to change, I have to go back to my engineering and change everything. 
 

Where now you can modify the prompt. Run it again. Right? And that's very different. It's like code. Data is now code, and we need to start treating data and descriptions and prompts as code. So it's like another abstraction layer, right? You go from assembly to C to Python to prompts, right? So, you know, there you go. 
 

You write your scripts in Python. I'm saying start writing your scripts in prompts, right? And then have toolkits around it, right? But then the explainability is super important because Blackmagic has no space, especially in Sims. I would argue that anything that happens that you can't explain is a P1. 
 

Right? It's like the freaking Air Force, man. Like, a door blows up of a plane, they can't understand it, [00:45:00] they ground the fleet. That's it. Like, there's no, there's no ifs, there's no buts. The same thing here. We should have a case where a signal comes along, we don't understand how it works, the LLM did something, we don't get it, fucking ground it. 
 

Everybody, P1, solve that problem. Right? And that, I think, gives you a lot more confidence in the outputs and a lot more explainability at all these layers. Right? So you don't have black magic, where you have these layer, layer, layer, layer, layer. And then you can even use the LLMs to explain what each of the layer is. 
 

And I want to do this with source code, by the way. Because for me, you know, I love the fact that people talk about LLMs hallucinate. I'm like, dude, our fucking source code hallucinates. Right? Nobody understands the products that we build, right? They tell you, like, software above a certain amount. You can't find anybody who actually understands how that thing actually works. 
 

Mick Douglas: That's not true. I always get upset when folks say that because that's It can be very expensive to decompose how an AI came to a solution. Um, [00:46:00] but we absolutely know how AI work. We know how they, uh, function. It's just,  
 

it's economically  
 

costly to go back, especially with like neural net. AI to figure out how it came to understand a particular element, but what you're talking about, there's actually a bit in a book called, um, the grounding, or I'm sorry, the alignment problem, and they talk about, uh, how, how that's not true. 
 

Like, we, we know what they do. We've known how these,  
 

Dinis Cruz: well, then explain to me, right? Like, cause I haven't seen a good question. How, how does charge of it, how gen AI, where's the reasoning part of it?  
 

Mick Douglas: There's no reasoning. Let's keep it simple. Let's keep it simple, okay? So, um, what a simple, the best solution is, okay, you have some prompt, and usually that [00:47:00] goes into something like a vector database, and all vectors are is just a fancy way of numerically representing some list. 
 

Dinis Cruz: And it doesn't even matter  
 

Mick Douglas: Doesn't even matter what those numbers are as long as you're consistent. And you have a uniform approach for how you vectorize whatever text you're using. You're going to wind up with this vector space where all these vectors are living in this database. And then when a prompt comes in, and you say, Hey, I want to know this information. 
 

That prompt, like that you enter into like a chat GPT or a BARD or whoever. It's vectorized and that vector gets plopped into that vector space and it says, Oh, these two vectors right here are the ones that are closest to that particular prompt and then starts. Using that as its content  
 

Dinis Cruz: again, but that's layers.[00:48:00]  
 

Mick Douglas: There's no reasoning. It can't reason. All it is is a database lookup.  
 

Dinis Cruz: Well, that that's where I draw the line because in the past I could always see how most a eyes was like that is very binary. But again, example, you LM these days and say, here are 10 questions, right? He's a knowledge base, right? I want you to write Right. 
 

You know, questions. I want to create a quiz. In fact, I want a quiz with seven questions or with five questions based on that data sets. And I want you to run as a host. Now think about it. Everything here is instructions, right? Like the only way to pull this off. Right? Is there has to be layers upon layers of intent of logic of mappings, right? 
 

And even like you say, do this with humor, do this with dry, do this in bullet points. All those guidance. You know, my understanding is at the moment what we know is that given of compute, given of vectors, given of layers, there's layers that exist at the top that we [00:49:00] don't fully understand. Right. That make that happens, but we look good. 
 

Okay, boy, me too, right? Because my understanding is that the way, for example, the way we can explain compute today doesn't exist on a gen AI model because there's layers in there that were created with almost brute forcing. Of content and learning and learning and learning that we today are not able to explain how the output is created 
 

Mick Douglas: Yeah, so there are some there are some elements of this right now that are kind of strange I I will admit like some of the stuff like i've talked about vectors for instance a lot of times Um, we're and i'm being very broad here. 
 

We I am not personally involved. I'm just talking about AI hackers and stuff, a lot of the stuff that we're uncovering is that, um, we don't know what the optimal vectorization process [00:50:00] is. So if you have say a reference document that you want to be able to ask questions. How do you represent that in vector space in a way that is consumable? 
 

You could make an entire page a single vector. You could make a single word on the opposite side. You could make a single word a vector. But if you go that small, the vectors aren't going to be really, yeah, you're going to get really weird answers. But if you go a full page, you're also probably going to get weird answers because a page would typically contain multiple thoughts, you know, multiple ideas and what you're really trying to have is a way that your vector contains a bite sized. 
 

Complete idea. But [00:51:00] how do you know what the proper vectorization is to do that? Because unless you have somebody saying, oh, this to this, that vectorize that, like, and nobody's got the time to do that.  
 

Dinis Cruz: Well, not at the time. I think we need a model.  
 

Mick Douglas: That's something that I'm playing with. It's like, you know, having multi vectorization of a particular data set. 
 

Dinis Cruz: And so I want to bring a topic I think is super important here, which is Part of the other reason why I really, so just to be clear, I'm a big fan. I think LLMs can modify the way, not just Sims, but a lot of security space. The thing I would say is people don't put the LLMs in line, right? And I tell you one thing, this is for the ones who's been in security for a long time. 
 

This is like SQL injection on fricking steroids, right? Like we went from, or, or buffer overflows like on fucking like another level, right? Because we went from data streams. That we initially we didn't understand what it was, then we realized, Oh, maybe, maybe copying data from a to b is not a good idea. 
 

Maybe concatenating string queries in [00:52:00] databases is not a good idea, right? We went from, okay, now we understand that this bit of data is malicious. We're going to parse it, analyze it. And we, you know, let's say SQL injection. We went from raw SQL concatenation to nice parameterized queries, separation of code and data. 
 

All that jazz, right? Now we have the prompts, right? And and they think the reason I'm very worried about prompts in line, not prompts to analyze other things, right? It's because I think we're going to see an insane type of attacks that we have not even dreamed on, right? So when As we get better, like I think we're going to have cases where people are going to do jailbreaking. 
 

It's not even jailbreaking. They're going to start telling the fricking models to completely change their behavior. It's almost like you have to imagine that we have no evidence that you cannot through a prompt actually completely change. It's a bit like you have SQL injection that patches, not just the query, but patches the, the, the, the, the codes at whatever level you want. 
 

Right? Imagine a SQL injection query that you're patching SQL Server or SQL [00:53:00] themselves or the kernel or the server or the OS, right? I think putting things in line that making decisions Based on prompts that come from hostile environments, which by the way, you know, seems all right. It's it's borderline crazy, right? 
 

And I don't think and that's why when people says, Oh, I just feed the data to the lamb and let you do the analysis. I'm like, dude, wait until you see the kind of exploits. Because then, you know, remember, we said that we are the epicenter, right? We become one of the best place to attack, right? You freaking attack. 
 

You pop the seam, right? You know, you pop the scene, you pop the organization, right? So that's why I like to put LLMs to augment analysis. But also more and more, I want to find ways that everything the LLM does is version controlled, and it's almost like we need to use LLMs to LLMs into analysis that it's almost like code is still the best thing we have. 
 

Data is still the best deterministic. Things we have. So the LLM should be creating the code. It should be [00:54:00] creating the data, right? But it's all version controlled. Or else, you know, you're gonna see some crazy shit. You know, when, when people put a prompt, it's a bit like, you know, the, the cool SQ injection that you put a a, a payload in a form field that was executed four streams down when it actually hits the final thing that literally transverse your entire organization, even companies, and suddenly that, that simple SQL query that you put in or that payload. 
 

That you put in is then loaded up, let's say, you know, JavaScript injection, right? It's finally loaded up on some same analyst dude that is actually reviewing the data. And then that pops the whole organization, right? Because you put the payloads in the failed password attempt, right? Or the username, right? 
 

And guess who sees all the failed usernames is the people who analyzing failed usernames, right? So I think prompt injection. Is going to be a massive right?  
 

Mick Douglas: It is going to be a problem. There's no two ways around it. Anybody who says otherwise, I think is delusional. The [00:55:00] other thing that I'm worried about, too, is that we're kind of training people to trust these machines more than they should be. 
 

And, um, you know, you may have heard of that G. I. G. O. And a lot of people think it stands for garbage in Garbage out. With the realm of AI, what I'm worried about, and this comes from a buddy of mine, John Strand, he says it's garbage in gospel.  
 

Dinis Cruz: And But that always happens, right? Is that different from what we have today? 
 

Mick Douglas: I've seen so many times when people just blindly trust the output of these models, of these tools, and, you know, I think that they can be great force multipliers, but you need to treat them as like, Interns, you know, like sometimes they give you delightful and very meaningful insights. Other times, they're so crazy. 
 

Dinis Cruz: We have that problem today, right?  
 

Sean Martin: And I'm, and I'm going to say we do. And I'm going [00:56:00] to, cause we're getting close to, to the time here. I'm happy to have another session, but what I want to do is one of the things that was sticking in my mind was false positives. You just kind of touched on that a little bit there, Mick. 
 

And so what I want to do as we, as we wrap here. If you were to pick one thing that the sim could do in 2024 with ai, let's assume, let's assume you put your fears away for a moment, and, and some of the things you talked about. I, I wanna know what, what's that? What's the area in the sim that would benefit from AI in 2024? 
 

Who wants to go first? Where, where would you, where would you spend your money? Making the SIM better with AI.  
 

Mick Douglas: You know, the, the SIM vendors will never ever do this and they'll actively try to [00:57:00] stop it, but I do feel that not LLM, but other AI tools would be exquisitely helpful in reducing the gigs per day, the. 
 

Um, events per second, whatever your license model is, those tools inside a sim would be exquisite at telling you what the noise is that can be cut down. They have perverse economic incentives to make sure that that never ever gets deployed.  
 

Dinis Cruz: But that's changing, by the way. There's some SIEM providers who are playing on that game, and I think, I think the market will change very quickly. 
 

Mick Douglas: I would love that. Those, the, the, anyone, like, the, the market is way open for disruption.  
 

Dinis Cruz: Well, I can name them. I say Chronicle plays that game, right? Chronicle says, fucking, just dump everything. We won't charge you for that.  
 

Mick Douglas: Cripple's, Cripple's playing some really cool stuff. Exactly.  
 

Dinis Cruz: So, so I would actually echo Sean, what Mike said. 
 

I think the place where I would use [00:58:00] LLMG24 is to clean up your data. And it's and I would say that reducing the gigabytes of injection is not the objective is the side effect of having better signals where you can use the other lamps to understand better what you got and to help you with the parsers with the data with the mapping. 
 

So you reduce the amount of noise in a way that you get to the same. So you improve what you get on the other side, right? So and this is what I would go. So the reason I say, you know, the lambs belong To the heart of the same is because they can play that role of making the same more efficient in ways that before we couldn't scale right? 
 

Like in before it would be impossible for me to go to my team and say, Hey, guys, I would like to see an analysis of those five data feeds for value for this for that. You know, what, what are you ingesting? What's in there? Explain me every single field. Explain me everything that's going on. You'll be impossible, right? 
 

Now you can feed that in the land. You can feed that to the environment. [00:59:00] So I would say use the LLMs to make the seams more productive and now maybe going full circle. And yes, some of those things you can do, you don't need a freaking gen AI for that. Or like, you just, you can just apply common sense, but I think there's a, there's a diminishing returns that you hit if you're not powered by your gen AI, you know, rocket ships, right? 
 

And as you hit them, that's when you apply. The gen AI. So make your SIM and your data and your team more productive. I feel that's, that's where I would put the gen AI in, in the mix.  
 

Sean Martin: I love it. I love it. You guys are amazing. I, I, uh, I had my popcorn here.  
 

Dinis Cruz: Well, they went up some rabbit holes, but I think we come out of it. 
 

Sean Martin: I tried to pretend that I knew, uh, knew what was going on half the time, but, uh, now you guys are really cool. I appreciate, uh, all your, all your perspectives and insights and, uh, Clearly, you [01:00:00] have your fingers in all the good, fun stuff that I don't get to deal with anymore, which is a bummer in some instances. 
 

Dinis Cruz: I'm hiring, right? And I'm hiring lots of freelancers. If anybody has time and they want to join a really great, um, model, so I'm doing a plug. So as long as you can work through Upwork, we can hire talent, right? You know, so I'm building the best possible team, you know, I can assemble. So that's my, my plug. 
 

So if you guys want to do this kind of stuff, right, you know, ping, ping, ping me. We'll, we'll get you guys on board. Right. And then we'll see if it works or not. 
 

Sean Martin: Love it. I always need more, more analysts and, uh, people in the sock.  
 

Dinis Cruz: And more diverse. We actually found a little thing. This is a great opportunity for more diverse candidates from other industries to join cybersecurity because it's a freaking shit show, right? 
 

Like we need better thinking. We need talent. We need people with fresh perspectives. So neurodiversity women, that is the best time to join in. Right. Because, you know, it's so ripe for disruption and we need better thinking. We need new, fresh batch of talent. You're like the way I look [01:01:00] at it, you take a PhD person, right? 
 

Or you take somebody who worked at the local restaurant, right? Like I, I build an amazing instance response. People like literally work from home moms from Wales, right? Middle of nowhere, you know, took the kids to school with content writers. They become some of the best. This is responders I ever had because they're logical. 
 

They have common sense. They understand. So we need a lot more talent, right to join in our industry. And I think now is a great time to do it because it's again. Jenny, I allows those individuals. To supercharge their learnings in a way that in the past I just couldn't right, you know, but now now you can, and I think it's a great opportunity. 
 

So let's bring on the talent  
 

Sean Martin: and, uh, with that, I want to I want to thank you both. And Mick, I want to thank you as well. Um, I mean, you're, you're always thinking and sharing. And, uh, clearly this, this conversation was spurred and spawned from, uh, from that post that you made. So, uh, keep keep doing the good work. 
 

Keep, uh, [01:02:00] Keep sharing and getting us all to think and, and getting us to talk to each other. It's really cool. I always, always enjoy having you on the show. Didn't see you as well. Good to have you on again, too. 
 

Who knows? Maybe we'll do another one. Maybe Casey will. Yeah, Casey and I will get the whole, uh, How do you, uh, hack the system and how do you disclose stuff properly, uh, from him? But anyway, for now, thank you both. And, uh, of course we'll, we'll link to the posts, uh, and, uh, please do share. Uh, this is a conversation that needs to go around for sure. 
 

And I, I suspect people will have. their own thoughts on whether it belongs or doesn't belong. And if so, where we're not. Uh, so please, please comment on the threads, uh, that the thread that Mick posted in, uh, and, uh, on our posts on ITSB magazine. So thanks everybody. Talk to you soon.  
 

Dinis Cruz: Thank you, Sean.  
 

Mick Douglas: Bye. 
 

​