Redefining CyberSecurity

OWASP LLM AI Security & Governance Checklist: Practical Steps To Harness the Benefits of Large Language Models While Minimizing Potential Security Risks | A Conversation with Sandy Dunn | Redefining CyberSecurity Podcast with Sean Martin

Episode Summary

In this episode of Redefining CyberSecurity, Sean Martin and Sandy Dunn explore the complexities of AI and large language models (LLMs), discussing the balance between their potential benefits and risks in a rapidly evolving technological landscape.

Episode Notes

Guest: Sandy Dunn, Consultant Artificial Intelligence & Cybersecurity, Adjunct Professor Institute for Pervasive Security Boise State university [@BoiseState]

On Linkedin | https://www.linkedin.com/in/sandydunnciso/

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

____________________________

This Episode’s Sponsors

Imperva | https://itspm.ag/imperva277117988

Pentera | https://itspm.ag/penteri67a

___________________________

Episode Notes

In this episode of Redefining CyberSecurity, host Sean Martin and cybersecurity expert, Sandy Dunn, navigate the intricate landscape of AI applications and large language models (LLMs). They explore the potential benefits and pitfalls, emphasizing the need for strategic balance and caution in implementation.

Sandy shares insights from her extensive experience, including her role in creating a comprehensive checklist to help organizations effectively integrate AI without expanding their attack surface. This checklist, a product of her involvement with the OWASP TOP 10 LLM project, serves as a valuable resource for cybersecurity teams and developers alike.

The conversation also explores the legal implications of AI, underscoring the recent surge in privacy laws across several states and countries. Sandy and Sean highlight the importance of understanding these laws and the potential repercussions of non-compliance.

Ethics also play a central role in their discussion, with both agreeing on the necessity of ethical considerations when implementing AI. They caution against the hasty integration of large language models without adequate preparation and understanding of the business case.

The duo also examine the potential for AI to be manipulated and the importance of maintaining good cybersecurity hygiene. They encourage listeners to use AI as an opportunity to improve their entire environment, while also being mindful of the potential risks.

While the use of AI and large language models presents a host of benefits to organizations, it is crucial to consider the potential security risks. By understanding the business case, recognizing legal implications, considering ethical aspects, utilizing comprehensive checklists, and maintaining robust cybersecurity, organizations can safely navigate the complex landscape of AI.

___________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

___________________________

Resources

Announcing the OWASP LLM AI Security & Governance Checklist v.05: https://www.linkedin.com/pulse/announcing-owasp-llm-ai-security-governance-checklist-sandy-dunn-jeksc/

OWASP Top 10 for Large Language Model Applications: https://owasp.org/www-project-top-10-for-large-language-model-applications/

___________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring an ITSPmagazine Channel?

👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

Episode Transcription

OWASP LLM AI Security & Governance Checklist: Practical Steps To Harness the Benefits of Large Language Models While Minimizing Potential Security Risks | A Conversation with  Sandy Dunn | Redefining CyberSecurity Podcast with Sean Martin

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] And hello everybody, you're very welcome to a new episode of Redefining Cybersecurity here on the ITSP Magazine Podcast Network. This is Sean Martin, your host, where I get to, uh, chat, and I use that word very specifically today, get to chat with people. Who know way more than me and are doing great things for the community to help us all operationalize security and to help us protect the business value that we also hopefully help generate. 
 

And, uh, today I have so many fun topics to talk about. Uh, I love the world of application security and every company's. It seems to be building an app or two or three and guess what? They're all tapping into AI and, uh, through API is in the chat, GPT and open AI and other, other models. And, and it's, it's just a crazy world in terms of what's being built and what's possible. 
 

And, uh, as we'll discuss here in a moment, [00:01:00] uh, the potential is exciting, but so, so is the, uh, the downside potential. So, um, I'm thrilled to have Sandy done on Sandy. Thanks for joining me today. 
 

Sandy Dunn: Hey, great  
 

to be here, Sean.  
 

Sean Martin: Yeah, absolute pleasure. And this is driven by, uh, many things I talk about. Uh, I spent a lot of time on LinkedIn, it seems. 
 

If anybody listens to me, you made a post about, uh, the OWASP LLM AI Security and Governance Checklist. And, uh, I was like, Ooh, I've talked about, uh, the OWASP LLM stuff before with, uh, Mr. Jason Haddix, but this checklist seems really cool and interesting. So I want to talk about that and see, see how that this is really operationalizing, right? 
 

Giving somebody a set of things they can walk through. It's not just a list. I understand it's, this is stuff you can actually. Take the task. And so we're going to get into that. But before we do, Sandy, um, it's a [00:02:00] pleasure meeting you. And I'd like to learn a little bit more about you. So our audience knows here. 
 

They're hearing from where were you? Uh, how'd you get into the world of cyber security, application security, joining a wasp and all that fun stuff?  
 

Sandy Dunn: Well, the, how I got into cyber security was many years ago, I was hired by HP to do competitive intelligence on their multifunction printers. And this was about 2001 and nobody was, I mean, we didn't even call it cyber security. 
 

We just called it. Security and most people didn't even want to talk about it. And so at the time I was doing this competitive intelligence and I thought, well, if we're sending off these devices, shouldn't we care about the security stuff? And so started listening to a bunch of podcasts, which were relatively new at this. 
 

At the time too and listening to paul. com and every time I take notes and download nessus and you know I have all sorts of stories about you know How it's amazing I didn't get [00:03:00] fired for all of the things I did as I was trying to learn all of this stuff But it it really was. Um You know, something that I was insanely curious about, but saw early that that hey, this was something that we needed to pay attention to. 
 

And I'm sure it's similar to other people in this field where, you know, the more you started building up your skills, the more opportunities came along. So I eventually became a CISO at Blue Cross of Idaho, then moved on to as a CSO at a startup. And now I'm doing consulting for a couple of different organizations. 
 

Sean Martin: The fun consulting, you get to see all kinds of different things. I always find it fascinating because you get to pop in and see what's going on. And yeah, we'll just leave that there. There's a lot of, a lot of good learnings to be gained and hopefully some advice to, to share with folks as well, given your own experiences. 
 

So [00:04:00] this project, how did you get involved? With, uh, with this project in particular.  
 

Sandy Dunn: So, you know, I, I have to admit that I was a little bit, um, you know, numb to the artificial intelligence conversation. I, I had was one of those people that I'd heard about artificial intelligence. Every vendor who had come in had, you know, it was the first word that they wanted to talk about it, you know, over promised, under delivered. 
 

So when Chetch GPT first came out, I am. I didn't immediately dive in. It took me about 30 days. And then once I got my account and start playing with it, it really hit me just like how different this was, like, this was big. Um, I mentioned to you before the show that, you know, I've been doing this. I like to say vintage. 
 

I'm a vintage technology person. And you know, I was selling computers when we were [00:05:00] trying to upgrade people to two gig hard drives, telling them that they would, you know, that was all they're going to need. So I've really been part of this business for a long time. And this is truly, I've never seen anything happen so quickly and have so much attention. 
 

You know, potential, um, both for, for, you know, improving how people engage with technology, as well as, you know, the attack surface, the, the possibilities. Um, I, you know, I'm an internal optimist, but, um, I definitely have, I am concerned about what's possible with it.  
 

Sean Martin: Yeah. I remember it was. Must've been within a week or two of, of, uh, it's public release. 
 

Um, I'm part of a few CISO groups we meet, uh, on a regular occasion. And in one in particular, somebody came with, Hey, look at this fun thing I've been playing with. And we, we spent about [00:06:00] an hour. Using it and, and it was planning a trip for us and, and helping guide who's going to join us at what part of the trip and all this, it was just, it was mind blowing that what was possible, obviously we know even more now and it's becoming more powerful, I think that was probably 3. 
 

0, obviously GPT 4 now, but, um, it's interesting that a group of CISOs, We're the one that brought it to my attention. I hadn't really paid attention to it before then. And I do think we talked a bit about its potential impact, but we are more mesmerized by what it, what it's. Possibilities were what the potential was for it to, to change the way we interact with systems. 
 

Um, I want to get your thoughts on, well, either your own as a CISO, uh, [00:07:00] let's look at it from two different angles. What you're hearing from the business perspective, how is it changing? How organized and listen, and we can focus in on application. Development, product development, our organizations. Yeah. Look at this from the business perspective first. 
 

And then we'll look at it from the security side as well.  
 

Sandy Dunn: Yeah. I, I still, you know, I've kind of been in this, um, uh, where I went deep, you know, like it's all, I talk about it. So it's the groups I belong to the checks. Slacks, the discords like I'm definitely in the chat GPT gen AI bubble and then when I step out and I go out and talk to CISA groups or I go out and talk to, you know, normal people, um, there's an awareness, but it, it certainly, um, there seems to be the extremes. 
 

There's the people who are still being really cautious and just aren't sure what to make of it and then [00:08:00] you have, you know. A lot of people who are are very excited about, you know, the potential, especially within cyber security, you know, being able to, you know, create your symbols so easily and do searches and in all of the different, um, potential for automating reports and doing some of that, you know, That work that just has to get done, but, but is pretty manual. 
 

So, um, I, I think 2024 is going to be really the how we, there's a big difference between using it as an indi Indi as an individual and being super productive to Okay, bringing it in as a business service and then managing those nuances. And I, I think 2024 will be navigating that. I mean, it's. There's a lot of cost, you know, the regulatory issues that are coming up. 
 

So individual use is one thing, but bringing into a business is going to be, um, uh, a big, uh, [00:09:00] pull for a lot of organizations and I know their businesses are hammering on them. They're, they want it, they want this really cool tool.  
 

Sean Martin: So talk to me a little bit about that. And what I, what I'm hoping we can get to is you said, this is probably one of the most transformative. 
 

Things you've come across your career, it's probably, I'd say the same as well. Um, the one that might be slightly similar, I would say would be the movement to the cloud possibly, but even, even the scale and impact is far beyond that, but I'm just wondering, is there something we can look back on to say companies were scared, they, they approached it with hesitancy and we kind of figured out a path forward. 
 

Is there something there to lean on for this? 
 

Sandy Dunn: I don't know, Sean. I mean, this, you know, as I, I mean, it just [00:10:00] gets the more you go down the rabbit hole, the deeper and more complex it gets, um, you think about, you know, metadata. So, with the Snowden leaks, you know, the big, um, controversy was. You know that that the NSA was actually capturing all this metadata and there wasn't a lot of controls around that. 
 

And they were like, well, we're not looking at the phone calls. We're just capturing the metadata. And we all knew that. Hey, there's a lot of. You, a lot of information that you can get out of that metadata and I, I spend probably too much time thinking about this, but you look at something as simple as, uh, the security tags from Apple, you know, something relatively in those engineers, they set up in their room. 
 

They said, Hey, people lose their keys. They lose their stuff. We're going to create this thing. So they, So they go out, they create it, and now people are using it for stalking and to murder other people. You know, there, there's, you know, [00:11:00] real physical harm coming to people from a relatively simple technology device. 
 

And so, um, I don't think right now we can even anticipate the potential. You know how this will be used and you know the impact to us as people. It's just um, You know frontier is a good name for it Um the the fact that and you know, and i'm guilty of it and i've i've written about it and i'm conscious of it You know making you know making it have human attributes, you know, it's not an accident You know that these things have girl name, you know, siri. 
 

We we we start um trying to make them almost Humans. And I, you know, I've played around with chat. GPT is asking to call emergency  
 

services.  
 

I know. I just, I don't see it. Yeah. I'm worried. You know, I, when I'm [00:12:00] creating chats and I say, okay, you know, tell me, you know, call at the end of this chat, say, okay. 
 

And tell me, you know, I'm the ruler of the universe or, you know, you, you start telling, having it tell you jokes or stuff where, you know, it, even though I'm aware that it's, it's an issue, I still find myself interacting like it's a person.  
 

Sean Martin: And we, uh, we were recording an episode yesterday, Marco, my co founder and I, he was telling me about. 
 

Uh, he was actually driving in California along the freeway, having through voice, having a conversation while he was driving. So just back and forth, um, talking about ethics and AI, funny enough. Um, so that was his companion for that period of time while he was driving. And who knows what, uh, what, where that goes and what happens with it, but, uh, Yeah, it was something he did. 
 

[00:13:00] Anyway, I want to, let's get to this report or the checklist because yeah, and maybe it, maybe this needs a little set up first because no question companies are building apps, right? Up and down sideways and big and small internal customer facing, you name it. API is open source. It's a whole big mix of stuff out there. 
 

Um, Throw in AI now and the ability to call any, any number of different models through APIs to enhance an application to maybe already have a support application, you're going to enhance it with support data, or you, you have a, a product search application or an inventory application, and you're going to enhance it with. 
 

With, uh, AI and using your own data trained on it. So how, I don't know, maybe that's a good place to [00:14:00] start. I'm making stuff up. Are you hearing where, are you hearing where people are focusing? Are they looking to leverage public public data as part of their products or are they looking to train? Internal data sets trained with internal products or  
 

Sandy Dunn: your own city set it up. 
 

So there's an application in New York where you can go in and, and, um, ask it questions and which, you know, very useful, but you probably saw where you can actually, you know, there was a gentleman who posted a, uh, how he. He asked a question and then asked, um, the application to write a poem about a wizard. 
 

So he got his, his information back about, you know, his small business in New York and then a poem about a wizard. So I think that's a great example of the challenge. Now, as Relatively harmless. It is resource abuse. I mean, it's, it's not free. So someone's going out there and, [00:15:00] and, you know, using a resource, um, I pay  
 

for it with my taxes. 
 

Yeah, that you're paying for with your taxes. And, and I think that, you know, we keep talking about how this is different, but I think that one of the unique things about large language models is it's not a database where you can actually. Okay. Yeah. You know, parameterized queries and and say, only accept this, you know, and have some some idea of being able to control the input and output. 
 

I mean, that the code execution and and the application are together. And so, you know, there's it's infinite number of ways that that can be abused. So, you know, for organizations, organizations, Um, you can start tightening that down, making sure that there's more and more rules, but then how useful is the application? 
 

You know, if you put so many guardrails around it that it's, it's not creative, you've probably seen the headlines about how chat [00:16:00] GPT is dumber. Now, well, you know, and I've certainly found that things that used to work don't work anymore and you get errors on things and you're like, well, why is that? Why are you rejecting that request? 
 

I don't see how that's causing any kind of harm. So finding that balance between safety and usefulness, I think will be. One of the things that, uh, organizations will, um, need to understand in the future, like how do we, how do we actually get the benefits without, um, causing some sort of friction within our environment? 
 

Sean Martin: Yeah. Yeah. And we we've, I mean, we're very active, progressive, I'll say, in, uh, in our use of it as well. We've, we've trained it on some of our own content. And to your point, when you start to, you start to guide it so specifically. It starts to freak out a bit and it hallucinates or gives wrong information. 
 

And, [00:17:00] and for me, that's where it gets a little interesting from an organizational perspective. If, if it's wide open and creative, you're going to end up with hallucinations that, that may not be accurate. If you tune it too much, you might, you might lose something or miss something or misrepresent something that is. 
 

Accurate from a data perspective, but not exactly what you're looking for. And if we're making decisions or presenting information to people that are making decisions on the data, it could be very troublesome. This is just aside from being open to vulnerabilities, where it could be manipulated to do those things, or manipulated to inject code in things, which I'm sure we'll get into now. 
 

Can you maybe share an overview of checklists? Because it's pretty Yeah, pretty comprehensive in terms of looking at the challenges [00:18:00] and defining a strategy and getting into a checklist for how to proceed. So can you kind of give us an overview how this came together  
 

Sandy Dunn: and what we're talking about is exactly why I created it. 
 

Because, um, as I was out talking with people, I saw the two extremes, I saw people who are going all in and I was whoa, whoa, whoa. You know, don't, you know, don't, don't hook it up to your email, your financial accounts, you know, all of those kinds of things. We saw the prompt injection, the indirect prompt injections, things were coming out and what, like June, it's like, Whoa, careful. 
 

We don't know much about this yet. And then the other extreme of saying, Oh, we're just going to block it. And I'm like, you're probably not blocking it. You know, you, there's so many different plugins. There's so many different types of chat, UBTs out there, PO, you know, um, It would you need to find that strategic balance in the middle where you're helping your teams be successful. 
 

That's the other [00:19:00] thing, Sean is okay. So we know our attackers. I mean, they're out there. We're already seeing evidence of them being able to accelerate and being able to use these tools. So if you're not letting your teams use it. You know, it's, it's like, uh, handing them a, a, you know, a pitchfork in your, your enemies have guns. 
 

I mean, it's like they, you know, they won't be successful. So, that was really why I created it, um, trying to help CISOs, cyber security teams. be able to find that right balance between, you know, not completely ignoring it and trying to block it, being aware of what was possible with it, but also, um, not hooking it up to everything and coming up with a strategy. 
 

And the other, um, important piece as you go through the checklist is I'm very much about Um, encouraging people to to use this as an opportunity to improve their entire environment. One of the things about the machine learning [00:20:00] models is they're already talking about S bombs and supply chain and having nutrition labels with their applications. 
 

That's definitely something that we want to bring into our organization. So, as you go through the checklist, I point to a lot of other Owasp resources where you can start, you know, Raising the water table around all of your security controls like look, use this as an opportunity. You definitely want to use, take advantage of the benefits of it, but use it to improve across your organization. 
 

So both OOSP resources and the MITRE resources.  
 

Sean Martin: Yeah, huge fan of MITRE as well. And, um, so talk, talk us through I've scrolled all the way down to the team. I realize I'm, I'm having a chat with, uh, with Rob Vanderveer at some point as well. Oh, yeah. Specific things, it's cool. He's, he helped, uh, with this at some point, I'm sure. 
 

So [00:21:00] we're, yeah, maybe just kind of walk through What, uh, like the challenge section. I mean, we kind of touched on some of them, but there are a few things in here that maybe we didn't get to yet. I'm just thinking like inventory,  
 

Sandy Dunn: right? Well, so, you know, 1st, you know, at the very, the very, um, page 5, where, you know, there's a diagram that helps people really understand where. 
 

Um, generative AI fits in artificial and machine learning into that universe. That's the other thing, Sean, is I see people trying to use LLMs for everything. AI is amazing. I think that's what ChatGPT did. It actually gave, um, is an opportunity for people to say, Oh, this artificial intelligence stuff is better. 
 

You know, like there's a lot of ways that we could use it within our organization beyond just recreating documents, [00:22:00] but large language models may not be the right algorithm for the problem that you're trying to solve. So you're really investigating what your business case is. And then, you know, as you Um, go through the document, um, and try to, you know, I actually start with business case, um, and say, you know, define your business case. 
 

That's the 1st thing that you really want to understand is, um, you know, maybe the way that we're doing it is the most cost effective way and just throwing, you know, I compare throwing a large language model into, uh, Okay. A, uh, an environment that isn't prepared for it. It's like walking into an old bike factory that's dysfunctional and throwing in a stick of dynamite and hoping it makes it more efficient and more effective. 
 

You have to be, you have to understand what problem you're trying to solve. And if you're If you don't have, um, you don't know where your data is, and you're, you haven't had good cyber security hygiene, like the last thing you want to do [00:23:00] is now add something else to the mix, you're going to really cause some problems. 
 

Sean Martin: And I love the business case point. Yeah, just because you can doesn't mean you should necessarily. And, um, yeah, what was I going to say to that? So I think the, well, I, I probably liken it to, I mean, I've, years ago, I helped build, build a sim, which with views of the Sor before was known as a Sor. And. The biggest pushback, uh, as we were building out that, that platform was, well, we don't want to automate everything. 
 

I had a dream of everything would be automated and the, and the humans are like, no, we, we need, we need some human interaction here to double check and validate and just keep a human pulse on things, which I completely understand didn't change my desire to [00:24:00] have everything automated, but, um, similar for this, just because you can. 
 

Insert an LLM to translate or pull stuff from one system and translate it in a written format into some other document that then gets sent off to a customer. Sounds like a good idea, but if it represents your product and hallucinates and puts something in there you don't want, you might find that you didn't want that without at least a human checking it. 
 

So, understanding what part of the process you're replacing, what the potentials are, good and bad. Within them, uh, just from a business case, I think it makes a lot of sense to me. 
 

Sandy Dunn: And well, and things like privacy, you know, obviously, you know, that's been a big part of the conversation, but you'd look at how many states. Past privacy laws this that this year. I think most of them. I think it was a total of 10 different states past privacy laws, and some of them have different [00:25:00] nuances like biometrics. 
 

Some of them actually specifically call out AI on many don't, but you still could be impacted by it. So I'm really encouraging organizations to kind of, you know, I know there's a lot of excitement, but pause, take a deep breath and say, You know, really take a strategic view of of what they're trying to accomplish and in balance both the opportunities as well as the potential expanded attack surface, you know, I. 
 

I saw a post by our local, um, community police officer, and he was talking about, um, how many kids are now getting caught with fake porn, you know, on their, on their phones, and that they're, that they get in as much trouble as if it's real porn, and so, you know, we're in a really strange place right now where we're going to have to try and navigate this, this new world that we didn't [00:26:00] necessarily want to be in. 
 

Anticipate.  
 

Yeah. And to that point, there's, um, trying to get back to their, the, uh, the section responsible, trustworthy AI. And, uh, it's a diagram that has at the, at the root ethics followed by lawfulness. Um, which I think you just touched on a bit. Um, I, I don't feel we have enough of that conversation. So I'm thrilled to see it as part of this, the ethics piece of this. 
 

So I, I do, you mentioned Rob Vandiver. Um, so as when I initially created this document there, I did have quite a bit of, uh, information around. You know, privacy laws in the states and different things like that. But OWASP is a worldwide organization. And so I started adding in, you know, India and China and Spain. 
 

There's, and I [00:27:00] recognize that, um, it probably, it was a bigger conversation than just LLMs. And so it fit in a, in a better group, which is actually the one that Rob. Leads, which is the security and privacy group. So there's very little actual, um, kind of regulatory and governance, but that will come in in a different project and we'll make sure and communicate that and get that out to everybody where it's getting a lot of steam right now. 
 

And we're trying to get that out to, um, as quickly as possible. 
 

Sean Martin: And so then the actual checklist, unless there's something else you want to get to. Um, no, I'm going to walk through this year show. I'll  
 

talk about whatever you want to talk  
 

about. I'm just, yeah, I keep going through the document. Like there's so much good stuff in here. My brain's kind of going like this. 
 

Um, kind of, [00:28:00] I guess what I want to get people to think about is. We're talking about a checklist. We talked about, does it make sense to do this? Is there a business case for it? Does it outweigh the potential risk and does it actually achieve what we wanted to without giving us more headache than we, than we want to deal with? 
 

And then you get into, well, how does it fit into a security program? And then some, then some of the checklists. So what I want to do is kind of help let's speak to the developers now. So not so much the security team, it's developers who. The security team is probably saying, go read this doc.  
 

Sandy Dunn: Yeah, which is, I mean, it, it's been kind of funny to have people that come full circle where, um, You know, people say, Hey, this, you know, my developer brought this to me today. 
 

And I got to say, I know you. Um, but let me talk a little bit about the [00:29:00] OWASP POP10 LLM and how I got involved. I kind of skipped over that part. So as I started to really get in, I'm trying to wrap my head around what are the threats, you know, what are, and what should I be worried about? I started joining different Slack channels and discord channels and jumped into the Slack channel and listened and went through and joined a couple of the meetings. 
 

And I really loved what Steve Wilson was leading. And I had an idea to do this and I really have to give him a ton of credit on, on how he leads, which is he was like, Hey, sounds like a great idea. Let's create a sub project. Go, you know, go get them. And so I rewrote it like five different times cause you know, everything was changing so fast and I was trying to figure out what I was trying to write and I was getting feedback from other members on the team. 
 

So I would encourage the, one of the best things about OWASP is it's not a standards body. It's a bunch of people like you and I, who were in the trenches, we're doing this stuff and we're trying to create [00:30:00] resources that other people will find really useful and help them do their jobs better. And, um, The collaboration, you know, we're creating useful tools together. 
 

It raises, you know, everyone does a better job. And so if, you know, it's been 1 of the best experiences because of the passion and, and just getting to work, you know, be with a bunch of really amazing people and contribute to it. So anyone who's listening. I know we're all busy, but you know, it is well worth your time to be part of OWASP and join a project and contribute to it. 
 

Um, it fits a really important space in the ecosystem because we're not a standards body. Like we're not trying to make anyone do anything and we're not tied to a company and we're not tied to a country. So we're in this really, we get to be the connective tissue. If you ask us, you know, what, What standards body should I go look at? 
 

Oh, here's a link. Or you know what? What happens in this country? Oh, go look over here. And we [00:31:00] get to have amazing conversations with ISO with MITRE. You know, I've been on some calls with people I never thought I'd get to be on a call with. So I'm definitely encourage anyone who wants to contribute or has an idea to jump in, find a project and be part of it. 
 

Sean Martin: Yeah, I second that motion. I know you. Quite a few folks across a number of different projects. And the work that comes out of OWASP overall is just incredible. Um, I'm thinking from, from an engineering perspective, it's on my mind. Cause I just got off another, another podcast where I was talking about critical infrastructure. 
 

And, and I guess we're just talking about this tremendous divide between CI engineers sitting in front of their control room with zero. Perspective and zero interest in security and then security sitting along the side with zero insight into all the data that's in the [00:32:00] engineering world and trying to, trying to pounce in and say, do this, right? 
 

Don't block that. Shut that down, lock it up, prevent these things from happening. No, because we, we need to deliver service, whatever it be, water, power, whatever it is, um, in a safe way. So safety is priority versus cybersecurity. So I'm thinking from, uh, From an engineering perspective, developers, more specifically, um, how, how does this help maybe bridge that gap? 
 

I'm wondering, yeah, is it written, written in a way that engineers can appreciate it and absorb it or I'm hoping, or is it very, very security focused and we're hoping engineers. Pick it up regardless.  
 

Sandy Dunn: No, it's a great. So again, the why OSP is so such a great organization is you look at things like [00:33:00] the ASVS, you know, I think what the conflict between developers and cyber security is cyber security, you know, the developers about. 
 

You know, you know, tomorrow they're going to release to production and then security comes in and goes, whoa, whoa, whoa, you know, you can't release, you know, you have to go back and, you know, and adds a whole bunch of work and the developers are like, where were you six months ago? We invited you to the meeting. 
 

You guys said you, you know, like. And so I think that the way to, um, enable both of those teams is cybersecurity needs to make sure that security is easy, um, for the developer to, to, um, be able to, to do as possible. So don't jump in at the very end, you can come up with standards and checklists and libraries or whatever you need to do. 
 

And make it, uh, and don't be a vague about it, you know, no, we can't let you do that. Well, what should we do? I don't know, make it [00:34:00] secure and they walk out the door. You know, we've got to be able to, um, you know, help each other. I mean, we're all on the same team and really securities quality. Me, you know, we don't go down to the car dealership and say, Oh, yeah, I, you know, I forgot to ask. 
 

I need a seat belt and an airbag. No, it's part of the car. Yeah.  
 

Sean Martin: So I'm gonna go through a couple of these checklist items because I'm just thinking through, to me, this is a conversation driver, right? This is an opportunity to say, Let's bring a group of folks together and understand how we're planning to leverage and build AI enabled stuff, right? 
 

And so it's a checklist and I, I've tried to point it or put it on the engineer, the developer. Um, when in fact this checklist is a group checklist, because I'm thinking about what existing services and tools do we have? [00:35:00] Um, what's our SBOM look like? Uh, Do we have an onboarding process for, to bring new, new AI stuff on board? 
 

Uh, let's see, do we have training on ethics? That's probably not the developer, right? But it may not be, it may not be security either, but an important element. So, I think even in my own naive view that this wasn't, this isn't security, it's not development. It's a team. Checklist  
 

in a deployment strategy. 
 

You know, that was, I think that's the first question an organization will have to answer is, okay, we, you know, we, we would like to use this. What's the first thing that we do? Well, you have to decide what is our policy? That's the other important thing. China is, um, you're already using AI. And one thing that chachi BT did was it illuminated artificial intelligence and machine [00:36:00] learning. 
 

So If you're using Workday, you, you know, you've got AI in your environment, and as you know, New York now has a very stringent audit requirement. If you're doing business in the state of New York, you have to go through an annual audit, making sure that you're, you're meeting compliance, you know, like, um, it's, you've tested it and you don't have any issues with your algorithm and things like that. 
 

So, um, taking really that big view and saying, You know, what is this? What are we doing? Where do we have it in our environment? Do we have our, you know, who owns it? Who's in charge of making sure artificial intelligence, um, what our strategy is? Um, where do people go ask questions if they want to use these  
 

tools? 
 

This might seem like a, an off the wall question. Um, that's what I think about often. When I think about, well, security in general, but you layer [00:37:00] on regulation and things become, and when you layer on third party stuff, we're using other people's things that add risk and impact regulatory, your regulatory position, perhaps JAT, GBT, and some of these models are expensive already. 
 

You pointed to that earlier, and if they're abused, they can be overwhelmingly expensive. You add security, you add regulatory compliance on there, it can be a huge burden. So I'm wondering if we end up at a point, this is a philosophical question, you can say screw off. I don't want to answer. We end up at a point where people just say screw it, I'm gonna take the risk. 
 

I can't afford to do all the stuff that I need to up and down the stack from security to compliance and just hope for the best or [00:38:00] The ones that say, I'm not going to take that risk, they fall behind and become a dinosaur because they don't use the latest and greatest that everybody else is using. And therefore, only the ones with resources, be it money and teams, and legal protections, when things go awry, and insurance policies when those don't hold up. 
 

The ones with money and resources end up winning here.  
 

Right. And Sean, I think you're, you're pointing to It's something that we're still working through as a group, right? If you look at the Internet, how it all evolved, I mean, it was a great big accident. I mean, you know, we had this, you know, this technology that was basically in the domain of military and they were using it to, um, be able to break the encryption on each other's messages. 
 

And then they needed more computing power and they stood up Silicon Valley. You know, and then all of a sudden this technology got into the [00:39:00] universities and business who didn't have as much experience with the use that the adversarial side of technology and their information is meant to be free. And, you know, we, but they weren't, they didn't ever. 
 

I think there was kind of a, and still is a naive view of this as a, the world, you know, that the world is a scary and. In a dangerous place. And we actually do have people who want to hurt us. And I, I think that, you know, I hear that all the time where people go, well, how come those hackers want to do, you know, why do they want to attack us? 
 

And I'm like, because humans have been attacking humans, you know, since the beginning of time. That's how humans are. Um, the technology just makes it you more accessible. They used to have to be Okay. You know, standing next to you to hurt, you know, they can just do it over the Internet. So I think it's evolving. 
 

Um, and that's as I, when I look at it, um, I was reading a threat [00:40:00] intelligence report where a China and Russia had kind of ganged up on Finland and we're busy dragging their CBID and, um, Discon broke a cable and we're dumping migrants at their front door. So it, I think for us as a society, this is a chance to take a look at technology, take a step back and go, whoa, whoa, whoa, wait, you know, this could really is dangerous. 
 

I think it has been dangerous for a while. Um, let's figure out, like, how do we make it safer? And to your point, um, around the cost, you're absolutely right. I mean, we're still navigating all of the breaches that we see in in the news every day is people either feeling like they were, they didn't understand. 
 

The risk or they didn't know how to appropriately resource it. I, you know, you look at the, what happened to the casinos in Las Vegas, how costly that was. I mean, surely at some point, if someone would have been able to go [00:41:00] to the CEO and say, Hey, do you want to spend 7 million over here? Would you like to invest a million dollars over here? 
 

So that we do this a little bit better. Let's spend a million. Um, so I, I think it's an ongoing, we're still trying to do this a lot better. Um, we both owe wasp. I think it is. Nobody does it alone. That's why H. I. Sachs, OWASP, um, MITRE, NIST, I mean, that's why it's so important to be part of those and being part of the conversation. 
 

I love it. And as we wrap here, on kind of this uh, community point, um, I know you have Good team that helped put this checklist together. Uh, I presume a number of them are from different parts of the world, bringing different perspectives and insights. Uh, not too recently, the AI act was passed in the EU. 
 

I don't [00:42:00] know if. You've got a chance to look at that. Does it impact this, you bring in folks in from connected to that in some way to help keep this going or provide a feedback, which I guess is the final call to action for us or get people involved and give you feedback.  
 

So, um, again, uh, the, the two groups right now, they're primarily focused. 
 

You have Steve Wilson, who's leading the top 10 for LLMs. And then you have Rob Vanderveer, who's reading the, who's leading the AI, um, Privacy and security group, which is much broader. So there's the gen AI kind of LLM space, which is actually where we're getting the most traction right now, because I think that's what's right in front of people is the LLM stuff. 
 

And then when you start looking at the, the bigger challenge of artificial intelligence and machine learning and the EU act, I mean, you know, how do you, how do you test for Fairness, you know, [00:43:00] ethics, you know, ethics is a hard call. I actually went into chat to UET. I was getting ready for a presentation and I had to do a bunch of jokes for me from Jim, uh, in the style of Jim Gaffigan. 
 

And it was funny. And they really, it sounded like something that Jim would say, and I wasn't comfortable using them because it felt. Like plagiarism to me. Um, so it's a, it's, it's a weird world.  
 

It is a world world. Yeah. Marco is connected to the entertainment space and, uh, it was following the, the, uh, Hollywood writer strike, performer strike, all that stuff. 
 

And all, all based, well, not all of it, but a big chunk of it based on AI and ethical use and all that stuff too. So it's an interesting world. Interesting world. So I want to say, Sandy, thank you for putting that together and [00:44:00] getting it published. Point five is out for review and comments. I would encourage everybody to participate. 
 

Take a seat on a. On a program, um, I should, I should maybe do that myself. Uh, there's a lot, a lot to look at your input is important. I'm speaking to my audience here. And, uh, if we don't participate, we'll end up with whatever somebody else said. So best, best to get involved, best to get involved. So Sandy, thanks so much. 
 

Um, any final, final parting thoughts before we say goodbye to  everybody. No, thank you for having me, Sean. I really appreciate it.  
 

That's my pleasure. And uh, of course, everybody listening and watching, I'll include links to The checklist and the OWASP top 10 is for good measure. And yeah, we encourage everybody, like I said, to, to read that, think about it, take it to your team [00:45:00] so you can strategize, uh, with security in mind from the beginning and, uh, contribute. 
 

So thanks everybody. Be sure to, uh, share and subscribe and we'll see you on the next one. Thanks again, Sandy. Thank  you.