Redefining CyberSecurity

Navigating the Ethical Maze of AI Usage: Curtailing Misuse in Cybercrime | An Imperva Brand Story With Ron Bennatan

Episode Summary

The promise of AI is indisputable, but so is its potential for misuse. With cybercriminals able to leverage AI for sophisticated social engineering attacks, the need for ethical constraints on AI applications has never been more urgent.

Episode Notes

In this Brand Story Podcast, hosts Marco Ciappelli and Sean Martin join forces with Ron Bennatan from Imperva to embark on a journey into the world of generative AI. The conversation is a blend of philosophy, technology, and cybersecurity, offering listeners a deep-dive into the complexities and opportunities of AI.

The trio explores the accuracy and unpredictability of AI, discussing its ability to handle complex prompts and the unexpected 'hallucinations' it can produce. Bennatan highlights the challenges this poses in a software development lifecycle, emphasizing the non-deterministic nature of AI outputs and the difficulties this creates for automated testing.

The conversation also delves into the scalability of AI, discussing the potential for automation at scale despite perceived slowness. Bennatan provides an interesting perspective on AI's tendency to never repeat the same answer, viewing it as both a source of creativity and a potential issue.

Cybersecurity is a key theme in the discussion, with Bennatan acknowledging that AI's ability to mimic human communication could elevate the sophistication of social engineering attacks. He also raises the potential for AI to mimic specific individuals, increasing the risk of impersonation, deep fakes, and insider threats. Despite these risks, Bennatan maintains that AI can be a powerful tool for defense, making cyberattacks more sophisticated but also enhancing defenses.

The conversation also gets into a philosophical exploration of the Turing test and AI's potential to fool someone into believing it's human. Bennatan suggests that AI doesn't need to excel at everything at once, but can be highly effective in specific tasks. He also envisions AI improving customer service and operational efficiency by handling complex tasks more efficiently than humans.

In this episode, listeners get a taste of the intriguing possibilities, challenges, and ethical considerations that AI presents, making it a must-listen for anyone interested in the intersection of technology, philosophy, and cybersecurity.

Note: This story contains promotional content.

Guest: Ron Bennatan, General Manager, Data Security at Imperva

Resources

Learn more about Imperva and their offering: https://itspm.ag/imperva277117988

Catch more stories from Imperva at https://www.itspmagazine.com/directory/imperva

Driving Innovation and Protecting Growth: The Intricate Relationship Between Information Technology (CTO) and Information Security (CISO) | A Their Story Conversation from RSA Conference 2023 | An Imperva Story with Kunal Anand: https://redefining-cybersecurity.simplecast.com/episodes/driving-innovation-and-protecting-growth-the-intricate-relationship-between-information-technology-cto-and-information-security-ciso-a-their-story-conversation-from-rsa-conference-2023-an-imperva-story-with-kunal-anand

Are you interested in telling your story?
https://www.itspmagazine.com/telling-your-story

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as is,” and we hope it can be helpful for our audience.

_________________________________________

[00:00:00] Sean Martin: Marco,  
 

[00:00:00] Marco Ciappelli: Sean,  
 

[00:00:02] Sean Martin: you know, I like to, I like to go faster and get more stuff done than humanly possible sometimes. And, uh, I think thankfully there's a way that technology can help with this.  
 

[00:00:22] Marco Ciappelli: Yeah, but I think nowadays I can still tell that you cheated. That's the problem. Exactly. I mean, it's getting there. It's getting there. 
 

Yeah. Here's some advice for you. You should use it as a tool, where you work with this new technology named, I heard it's called Artificial Intelligence, or, even more specifically, Generative Artificial Intelligence. There is no easy button. It's not going to do everything you want.  
 

[00:00:53] Sean Martin: I think many people have opinions. 
 

Is it, you said cheating. Is it cheating if it achieves the results? No other rules? I don't know. That's, that's a question that maybe we'll dig into today.  
 

Is it cheating if, to go somewhere faster instead of walking, you get a bike or a car? That's not cheating. You're just using technology that humans invented. 
 

So see, we're already going philosophical. So, brakes on this. Don't, don't let me go there yet. Let's introduce our guest, who is someone that I really enjoy talking with. We had a couple of episodes in the past, so I know it's going to be a fantastic conversation.  
 

Ron Bennatan, thanks for being on again. 
 

Always, always fun having the Imperva team on to talk about interesting topics, and no question, AI is a hot topic and an interesting one. And I'm excited to hear some of your perspectives on, uh, the world of technology and the world of cybersecurity, and how, uh, this new wave of, of capability is impacting both of those, and ultimately society as well. 
 

So thanks for, uh, thanks for being on.  
 

[00:02:14] Ron Bennatan: Thanks. Thanks for having me Sean. Marco.  
 

[00:02:18] Sean Martin: So before we get into, uh, all the good stuff, can you read the profile description that CHAT GPT wrote for you? 
 

[00:02:30] Sean Martin: I want to hear, I want to hear who you are and what makes you tick, or think, either way. And my guess is CHAT GPT knows far better than any, any of the three of us. I'm joking, of course. Now, a few words about what you're up to at Imperva, and maybe, maybe some thoughts on why this topic is an important one to discuss, not just because everybody is, but there's a reason that everybody is, right? 
 

[00:03:01] Ron Bennatan: Yeah, yeah, yeah. So I'm a, I'm a data security fellow at Imperva, uh, which means I, I am on the data security side of the house. I focus on technologies that help our customers secure their data, and, uh, and I get to tinker with, uh, uh, new stuff and things that, uh, could change the landscape of how we do things, or change the landscape of the threat profiles, and, um, you know, and sometimes I tinker with things that make a difference and come to fruition, and sometimes they don't. 
 

Um, if you ask me, this is probably the biggest thing that I've had to work with, or I got to work with, uh, probably since the internet. Uh, and you know, so it is, I, I cannot think of something bigger and more disruptive than this, or at least I haven't experienced, or at least in the last 30 years I haven't experienced, something this disruptive. 
 

I, and, and, and disruptive for, overall for the good. Like now, it, it's also disruptive for the, in, in a bad way, but in my opinion, overall on the balance of things, it is a very, very good thing. And I'm very excited about it. And I wake up, uh, every morning, uh, with, uh, thinking, Oh, I, I just, I, I actually wake up and, and, and sometimes, and I write myself an email in the middle of the night just so I don't forget to try something. 
 

Because it's, it's, it's, it is very magical in my opinion. Okay. It's, uh, it's not, you know, it's not like, Oh, well, let's do a search engine or, or, or, you know, or, or ask something. And it'll give me an answer. And it's also not really about cheating, but it is about, um, It, okay, so, oh, uh, I, I think what people don't understand or, or, or, or haven't completely assimilated into themselves is the fact that, um, we as humans, there are some things that make us human, okay? 
 

And, and one of the things that makes us human is our ability to communicate. Okay. And what this thing, you know, I've, in my entire life, I've heard about AI. Okay. I, when I did my undergraduate, I had a course called AI. It wasn't even close to what, what this thing is. Okay. It, it's like, it shouldn't have the same name. 
 

What, what, what the current generation, I think, you know, that only exists for the last, maybe two years, at least, at least for the majority of us. Okay. I'm sure other people have, have worked on this for longer, but for most of us, it's been around for between a year and two years. The thing that it has been able to do is kind of hack the communication protocol between people, okay, and I think that's what makes it so, so fundamental, right? 
 

It's, it's, it, it truly understands, or it seems to truly understand what I'm asking it. And it seems to truly give me answers that I would expect somebody human to give me. And from that perspective, it's, it's hacked our humanity. Okay. Because if, uh, if it can replace, you know, if, if, if, if it can replace some of the communication, then it's, it is on the one side, very magical. 
 

And very, very deep. And that's why it's very exciting to me.  
 

[00:07:01] Marco Ciappelli: Wow. I was, I was going to open the can slowly, the philosophical one, and you just went and pretty much said, here it is. Just broke it. Just break it. Let's, let's put it out there. And I agree with everything you said. And, and I guess then we can go more in the specific of maybe cybersecurity. 
 

I'm sure Sean wants to go there. But I want to, I want to point to something that you said, and, uh, as you were talking about communication, I, I was thinking about that movie, Her, from a few years ago, right? And, and that, that still seemed like a dream at that time, to have that kind of conversation. 
 

Now, I'm not saying that CHAT GPT is going to fall in love with any of us, although, who knows, it could, but, but we're there. I mean, I, I feel the same. I feel like I am talking to someone, and, uh, in a, in a human language, in a way that I don't need to put the idea there that it's just the probabilistic, random positioning of one word after another. 
 

Because, in the end, I don't even know how we think. So I just know that it makes sense for me to talk to it; and if it's doing it randomly, it's doing a pretty good job. You know what I mean?  
 

[00:08:18] Ron Bennatan: Yeah, no, I agree. I totally agree. I, I think it's, you know, I catch myself sometimes, you know, there's this, um, You know, anybody who goes to, uh, who studies computer science learns about the Turing test, okay? 
 

Can you put up a screen, and you have on the one side a human and the other side a machine, and you talk to both, and can you identify which is which? And so people often say, does CHAT GPT pass the Turing test? Oh, no, it doesn't, because you can kind of see a pattern or a style. You know, I, I have to tell you, I, I completely think this thing passes the Turing test, because I find myself, for example, writing stuff, and, and, you know, it doesn't always give me the right answer. 
 

Or it doesn't, or, or I can see that it didn't completely understand what I wanted, or I wanted to focus on something else, but I find myself, you know, asking one thing and then, and then when I, when I asked for something a little different, like, for example, I might, I might say, Oh, you gave me, like I, I'll give it a big, a big task and it only gives me. 
 

The first part and I, and I want to say something like, uh, please do it more thoroughly for everything. And did you see what I just said? I said, please do it more. So why am I saying, please? It's a, it's not, it's not going to be offended if I don't say please, but I, I, I am saying please. Okay, so to me, it, there's something going on here, which is, which is, you know, which is different and, and, and also, you know, I'll be, I'll be, anyway, I find myself that, um, sometimes I, I think that the thing that it produces And I use this in a bunch of places, not just CHAT GPT, sometimes it's, uh, you know, copilot for writing code. 
 

And, and I look at some of the output and, and I, and I, and I use the word uncanny. It's uncanny, this thing, okay, which you don't say that about tools, right? You don't say it's uncanny, okay? You say it's good, you say it's fast, you say, but But I, I, I, I get that feeling a lot and, and, and yeah, it's, you know, it's, and it's, it's, it's in a very broad spectrum. 
 

Okay. It's in a, it's in a broad spectrum, and it just gets more and more. What, what was it that I, that I recently realized? I recently realized, I read this tutorial that you can, that you can, you know, sometimes, oh, oh, I remember what it was. Um, when I moved from 3.5 to 4, I suddenly noticed that, you know, besides the fact that it's a little, you know, it takes more. 
 

It has a bigger set of tokens it can, it can, uh, it can use, and, and by the way, it's also slower. Okay. But, but it, it explains itself. Okay. It's like, it doesn't just give you the answer. It, it gives you its, you know, its thought process. Okay. Whatever, whatever this is, but it's, it's like, it gives you the steps. 
 

You know, it's, uh, it's marvelous. It's just marvelous.  
 

[00:11:52] Marco Ciappelli: So I have experienced that with DALL-E, which was a huge step. And that's how I, I jumped back from Midjourney, to say, okay, you know, DALL-E is pretty good. And I loved that explanation, and how it creates a more advanced prompt than the one you gave, right? 
 

So there is that conversation, and, and I'm wondering, does it do the same thing with code? Because I don't play with code. Does it explain why it decided to write the code in one way or another? 
 

[00:12:22] Ron Bennatan: Not, not specifically, not if you're using, like, GitHub Copilot, but, but if you are using it in, um, in the CHAT GPT part, when they're now more integrated, then yes, it's exactly that. 
 

What, what, it's like, if you give it a simple task, if you give it a task, which is really complicated, it gets, you know, sometimes it gives you an answer. And then. Uh, like, like for example, even in a security, uh, context. Okay. I'll give it, I'll give it a, I'll give it a, you know, like, say, say I'm trying to do some, uh, PII classification. 
 

Okay. I'm, I'm, I, I have some, some data, and I'm trying to, and I'm trying to tell it, uh, tell me if there's PII here, or find me the PII, or tell me what to do with this PII. Okay. Uh, if I give it a simple text, it'll do it. If I give it, for example, a zip file with 10,000 files, it, it'll try, it'll try to do it, and it'll give me a very, very, very partial answer. 
 

And then when I tell it, but wait a minute, I asked you to find all the PII in this zip file. Like, I didn't actually change exactly the prompt, but I kind of scolded it for doing a partial job. And by the way, sometimes it apologizes. Sometimes it says, uh, you know, apologies, uh, you know. And at that point it goes into a prescriptive algorithm. 
 

It says, okay, you gave me a zip file. The first thing I need to do is open the zip file. Then I need to iterate over the directory structure, and it writes the few lines of Python code to do that. And so it's, it's, I, I don't know what, listen, I don't understand what's going on inside this thing, okay? 
 

But, but it's, it's, it's almost like you're talking to a different person at that point, okay? Which is, again, you know, a lot of fun. 
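
As an aside for readers: the kind of script Ron describes the model producing, open the zip, walk the entries, flag the PII, can be sketched in a few lines of Python. The regex patterns and function name here are illustrative assumptions for this sketch, not anything Imperva ships:

```python
import io
import re
import zipfile

# Illustrative patterns only -- real PII classifiers are far more sophisticated.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def scan_zip_for_pii(zip_bytes: bytes) -> dict:
    """Open the archive, iterate over every entry, and report PII hits per file."""
    findings = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            text = archive.read(name).decode("utf-8", errors="ignore")
            hits = {
                label: pattern.findall(text)
                for label, pattern in PII_PATTERNS.items()
                if pattern.search(text)
            }
            if hits:
                findings[name] = hits
    return findings
```

A real classifier would also handle nested archives, binary formats, and far richer PII patterns; this only shows the shape of the task the model was asked to script.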
 

[00:14:40] Sean Martin: It can be fun. And, um, I want to get your thoughts on this, 'cause you're talking about, uh, tokens, and whether or not the prompt is too, too big to understand. And we can actually, so you can, you can prompt it a little differently, or give it different information, or use multiple prompts to break things up. 
 

And I'm wondering, we're not just talking about the UI prompt through a web browser or the mobile app that's available for CHAT GPT, right? Um, these things are opened up by APIs, where I can, I can run some automation and just feed and feed and feed and prompt and tweak and, and whatever's coming out. 
 

Automate a lot of this stuff at scale, right? And so I'd like your thoughts on this, 'cause sometimes when I do that, it's 99 percent accurate. And then there's that one little thing in there where I thought, you know, I don't know where the heck you got that, but that's not part of it.  
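
The API-driven automation Sean describes, feeding prompts programmatically rather than through the web UI, might look roughly like this sketch. Here `call_model` is a placeholder standing in for a real chat-completion client, not an actual API:

```python
from concurrent.futures import ThreadPoolExecutor


def call_model(prompt: str) -> str:
    """Placeholder for a real chat-completion API call; in practice this
    would be an HTTP request with per-request latency of a second or more."""
    return f"response to: {prompt}"


def run_batch(prompts: list[str], workers: int = 8) -> list[str]:
    """Fan prompts out concurrently; a serial loop at multi-second
    per-request latency cannot reach any meaningful throughput."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map preserves input order, so results align with prompts.
        return list(pool.map(call_model, prompts))
```

In practice each response would still need validation before use, since, as the conversation turns to next, a fraction of outputs will be off-target.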
 

[00:15:53] Ron Bennatan: Yes. Yes. Yes. But by the way, that, that. 
 

That is a big challenge. That last bit is a big challenge, so maybe I'll start with this. And it's not one percent, okay? It's much higher than one percent. That's all I'm capable of. And it's called hallucinations, okay? It's like, there's, you know, and there are ways that you can control, you know, how creative it is and how much it hallucinates versus how precise it is. 
 

Um, and it's funny when you think about When you think about, uh, you know, how it fits into a software development life cycle, there's all kinds of other challenges because the outputs are not exactly deterministic. Okay. And we're used to when you develop stuff, you're used to creating, you know, automated testing, but the thing answers differently. 
 

Sometimes you can be running the same tests, and they, and they fail, they fail. Okay, so that, that's, that's a challenge. Okay, uh, there's another challenge with the at-scale piece: that it's slow. Okay, when you run an API that lives over there and, you know, when, like when you're sitting there interactively in front of the prompt, you don't care that it's typing slowly. 
 

But when you run it from your code, and you're, you're waiting for an answer, and it takes four seconds to come back, and you're trying to build a system that has a throughput of a thousand requests per second, that's not going to work. Okay, so, so, you know, so you have a lot of, uh, other challenges. You know, the good thing is it's not a monopoly; there's not one game in town. There are different, you know, there's OpenChat, uh, which you can host yourself, but, you know, different levels of quality, different... But, but yeah, it's a new tool, and, uh, you know, whoever's in the software business that wants to use this new tool needs to deal with, uh, all the implications of something that's new and different. 
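
One way to cope with the non-determinism Ron raises is to stop golden-testing exact strings and instead assert structural properties of the output. A minimal sketch, with a stub standing in for the model (the JSON contract here is an invented example):

```python
import json
import random


def classify(text: str) -> str:
    """Stub model: the JSON is valid every run, but the confidence value
    varies, mimicking an LLM that never answers quite the same way twice."""
    label = "pii" if "@" in text else "clean"
    confidence = round(random.uniform(0.7, 0.99), 2)
    return json.dumps({"label": label, "confidence": confidence})


def check_output(raw: str) -> dict:
    """Property-based check: instead of comparing against a golden string,
    assert that the output parses and satisfies the contract."""
    parsed = json.loads(raw)
    assert parsed["label"] in {"pii", "clean"}, "unexpected label"
    assert 0.0 <= parsed["confidence"] <= 1.0, "confidence out of range"
    return parsed
```

The same test then passes run after run even though the raw string differs each time, which is exactly what exact-match assertions cannot do.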
 

[00:18:12] Marco Ciappelli: So you said something that, actually, sometimes pisses me off and sometimes is a blessing in disguise, which is that it never, usually never repeats the same answer, even if the prompt is the same. It's very hard to have it redo the same image and say, just change it, add three clouds. You usually get some difference in the output. 
 

So if it does it in a creative environment, okay, whatever. But if it does it in code, just for the heck of changing something, because it just has to be innovative all the time, that could be an issue. But it could also prompt, I'm guessing, something that you would not expect. So when we say it is not creative, I think sometimes, because of this randomness, it could bring some creative aspect to it. 
 

And where I'm going with this is, how can an attacker, a cybercriminal, leverage this? Because we do try to social engineer. So, you know, like, I was listening about writers now that have a lawsuit, uh, against, well, not CHAT GPT, but OpenAI, because they ingested, obviously, entire books. 
 

And, uh, and the lawyers were trying to trick CHAT GPT to, to say whether it actually did read the books or not. You know, so I'm wondering how the bad guys are going to try to, to skim this.  
 

[00:19:50] Ron Bennatan: So I think, you know, it, it is a tool, and it's a great tool, and the same way it's a great tool for us, 
 

it's also, it's an amazing tool for, for the other side. It's, uh, there's no doubt about it, that it's both, it's both a good tool for the attacker, in making them more productive in scripts and things like that, and maybe even more important, it's a great tool for, you know, that, that concept that we hacked the communication layer of humans. Okay, you know, first of all, a lot of defenses relied on something that we thought humans could do and scripts could not do. Okay. The whole CAPTCHA thing, the whole thing that, uh, you know, uh, somebody is going to call a phone number and read something out and you have to... those are trivial now, like trivial to the, to the complete degree. So a lot of defenses are worthless right now. 
 

Okay. Um, everything that we kind of got used to and got under control, that we used to call social engineering attacks or phishing attacks or whatever, you know, is going to become way more sophisticated, okay? And, and I don't think we've even started to skim that, because, because it's not, you know, it's not just about the, the ability to sound like a human, but potentially it's the ability to sound like Marco. 
 

Okay. Like, if Marco is important enough that, if we compromise Marco, we can compromise the, I don't know, the entire bag, then it's worthwhile investing in mimicking Marco, okay. And, and the ability to take an existing large language model and just do very, you know, very not difficult fine-tuning, 
 

you know, means that, you know, we can approximate Marco much faster. Okay. So, so, and I don't think that's happened yet, but, but, um, you know, these are all things that are going to make defending harder. Um, you know, in our world, for example, much of what we do is insider threat. Now, insider threat means that the, the person inside the company, who has all the privileges, is in many ways the threat. Okay. But, but not because they're necessarily bad guys. Okay. It, it could be that they're bad guys, or it could be that they have been compromised. The ability, or, or the threshold necessary in order to compromise insiders, has just gone up. 
 

Okay. There's no doubt. And, and yeah, it, it is, uh, but going back to the beginning, I still think that even though it makes kind of the attacks more sophisticated and harder to defend, I still think that overall this thing is, is, is for the good. 
 

[00:23:20] Sean Martin: So you made an interesting point that I want to maybe expand on a little bit. And it's this idea, 'cause I think we started off with the Turing test. Can it, can it fool somebody, or at least make somebody believe that it's, that it's human? And it's easy. I mean, we've, we've seen purpose-built bots. 
 

We've seen all these things, like specific social engineering attacks. When we start talking about AI and generative AI, I think we tend to take a step back and think, can this do everything all at once really well: visually, audibly, written, code, you name it, right? And, but to your point, it doesn't have to do all of that today, all really well. 
 

It can do something very specific. And you said you haven't, you haven't seen anything like that. Um, but what are some, what are some examples of what that might look like? You gave the, you gave the example of Marco's voice, um, to, to impersonate him. But let's look at this from a... I'm, I'm an organization, I, I build software to interact with my clients to manage claims, or process, uh, orders, whatever it is. 
 

What are some of the things I might experience with some very finely tuned, AI-enabled bots, let's say,  
 

[00:24:58] Ron Bennatan: as, as a user or as the company providing the service. What, what did you  
 

[00:25:03] Sean Martin: mean? I, I meant the, I meant, yeah.  
 

[00:25:08] Ron Bennatan: Well, you, so, you know, I, I hope I don't say something that I'm not, that you can't say on a recorded line, but I think, I think, you know... 
 

This has the potential to be better than 70 percent of humans, okay? So, so I think it elevates a lot of, a lot of, uh, communications and a lot of tasks that you, as a consumer, or as a, or as a customer, or as a user, you know, might have been very frustrated with. Okay. I, you know, I know, I know that it's just a, you know, just a person, whether it's, uh, you know, dealing with, uh, with, uh, my, uh, okay. 
 

My, my, uh, medical plan and, you know, a claim, and, you know, sometimes I feel like I want to pull the hair that I no longer have, okay. And, and, and sometimes it's not because somebody is intentionally doing something wrong, but because it just seems to be like a lot of disconnected processes or disconnected systems that don't talk to each other. 
 

So I don't think AI is going to solve everything. Okay, because it's not enough. Okay, you need, you need the communication, you need it to have access to the data because without the data, you know, like it's, it's not going to solve my problem of why my, uh, foreign claim has taken 90 days to, for, for somebody to look at. 
 

Okay. However, I will tell you that it is absolutely not something that should take 90 days. And you can see that I'm talking from personal experience. Okay. I mean, I look at the documents that I sent in. It's a no brainer to decide if you accept the claim or don't accept the claim. Why does it take 90 days? 
 

So if you, if you're asking me, do I believe that a year from now, or two years from now, it'll take 90 seconds? I absolutely believe it should take 90 seconds. It is a no-brainer. Look, by the way, actually, it's an excellent example. Okay. It's an excellent example. Why? I'll tell you why it took 90 days. 
 

Okay. I sent it in. I don't know. It went around and around and around. But because it was in a foreign country, it wasn't in English. Okay? It wasn't in English. So they sent it to translation. That took another month to come back. Okay? Then, something else was missing. They sent it again to translation. These things do the translations themselves. 
 

It would have taken 90 seconds with the right AI. And, and that's an example of how our lives are going to change. And by the way, in security too, when you think about it, the triaging process is the most expensive and lengthy process, right? And it involves people. And it involves people that need a certain skill set. 
 

And it involves different contexts and different aspects. Okay, so are all these things gonna go like that? I truly believe they will.  
 

[00:28:44] Marco Ciappelli: Well, after all, the, what, what AI, generative AI is doing now, well, is summarizing. And so, understanding a document quickly, looking at all the options. Creatively, yeah, there are some weirdos like me and you that really have fun with it. 
 

And I'm sure there are others, but most people, they use it to summarize: this article, yeah, read through this and tell me the main five points of, I don't know, the new contract for SAG-AFTRA, because I don't want to read it all, right? And it does it really well. So I'm wondering, when you said that it could improve the, the control, the look into the asset, look into maybe some, uh, lack of defense in threat programs, maybe, or in the way that the, the computer is accessible from the outside... 
 

I'm thinking that that's exactly the kind of work that AI can do very, very, very well.  
 

[00:29:47] Ron Bennatan: You know, I haven't, I haven't actually thought about what you just said about that. It's a, it's a really good question. Um, it's, it's an excellent question. Uh, unfortunately I don't have an answer, because it's a, it's a deep question. 
 

I, I will say one thing, though: even though you said it was exactly that, it's actually exactly the reverse, meaning what you, what you want to ask it is not to summarize what exists, but to ask it what doesn't exist. You're, you're saying, for example, you'll say something like, you know, this is my policy. 
 

Where is it broken? Or, you know, these are my controls, what are they lacking? Or, you know, what are the pitfalls? Or, where do I have, uh, contradictions? You know, I, I don't know, Marco, you might have stumbled onto something, like, uh, like maybe a new startup or something. I, I have to think about what you said, but it's, but it's, uh, it's a good question, and it could be a brilliant question. 
 

Okay. Uh, but I don't know. I don't know the answer.  
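
Ron's inversion, asking the model what is missing rather than what is there, amounts to a different prompt shape. A hypothetical illustration (the wording and function name are invented for this sketch, not a product feature):

```python
def build_gap_analysis_prompt(policy_text: str) -> str:
    """Invert the usual summarization request: ask for what is absent."""
    return (
        "Here is our security policy:\n\n"
        f"{policy_text}\n\n"
        "Do not summarize it. Instead, list the controls it lacks, "
        "the scenarios it fails to cover, and any internal contradictions."
    )
```

The policy text goes in verbatim; the instruction steers the model away from restating what exists and toward the gaps Ron describes.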
 

[00:30:56] Marco Ciappelli: Maybe it was completely random and I just came up with something, something brilliant. I don't know, but,  
 

[00:31:02] Sean Martin: uh. You'll never be able to re, re, re make that statement.  
 

[00:31:06] Marco Ciappelli: I, yeah, that same question is going to have to be rephrased a million times. I'd have to add tapestry. 
 

And, uh, some kind of, like, dance and technology that it puts in, but, uh, I don't know, Sean, a lot to think about here. This is not a conversation that is going to end today.  
 

[00:31:24] Sean Martin: No, there's, uh, so much to consider. And Ron, I actually want to, as we begin to wrap up, sadly, um, your thoughts on this. I mean, there are so many opportunities for security teams to leverage this: to paint a picture, identify the gaps as you were describing, um, perhaps respond faster, I think you used that as an example earlier, uh, to, to looking at attack information and compromised data to find new paths. Those are fairly tactical examples. What would you suggest, maybe security teams, let's focus there, security teams do to ensure that their organization has the best posture possible, given AI is gonna, gonna exist at some point in their organization? 
 

It's a big question, but, um, yeah, highlight big, big areas to focus on.  
 

[00:32:31] Ron Bennatan: So, so I think that, I think you're right that, um, this is, this is, uh, significant enough that it's not enough to look at this tactically. I think it's, uh, you know, it's not incremental. It's not linear. It's, it's not enough to just, you know... I, I think a lot of, a lot of, uh... Like, even the, even the companies, okay, the vendors, okay, like it's clear that every vendor needs AI, okay, so everybody's looking at, where do I put AI? But, but, but you're right that, you know, security organizations, I think they almost need to go back to basics in order to figure out where it 
 

fits or how it best fits, okay, and, and, and, and make a decision on, you know, what are the absolute axioms that I have built my program on, um, and then, and then figure out where this fits best and how this fits best and not, not look at this as just, oh, okay, I have, uh, I have a data security program. I have a DLP. 
 

I have a SIEM. I have this. Each one of them should have some AI. I don't think that's, you know... that's the easy one. I think, um, I'm not saying don't do it. I'm just thinking that, you, I think it's important to realize that it blurs, it blurs the line between human and not human, and then go back to basics and look at the axioms and, and say, which of my axioms don't work anymore, okay, which just have to be completely thrown out, and, and therefore it'll be more significant, okay, more, um, more fundamental. 
 

[00:34:28] Sean Martin: And I'm going back to... what was the example you gave? Uh, oh yeah, CAPTCHA presenting, presenting text that it then audibly reads out to, to get past the CAPTCHA. I mean, that, that's an example of something you're doing now that is not going to fly.  
 

[00:34:51] Ron Bennatan: Or tell me where the bus is, okay? What do you think this thing doesn't know where the bus is? 
 

Or the bridge is?  
 

[00:34:58] Marco Ciappelli: It can see images better than us.  
 

[00:35:00] Ron Bennatan: It's dead. All of that is dead.
 

[00:35:04] Marco Ciappelli: But if you think about the process, again, it's always getting better. It's always this chess game. And I'm going to get to my last thoughts on this, but before I get there: passwords.
 

Yeah, passwords were great during the Roman Empire; they were fantastic. You know: who's there? Oh, if you guess the word, I open the door. Then we got computers that started cracking those passwords, so we made them longer. Same with CAPTCHA: it was good, now it's not. So we go on like this.
 

[00:35:37] Ron Bennatan: But look how long it took us to get away from passwords. 
 

We knew we needed to get away from passwords 20 years ago. Look how long it took us. And we don't have 20 years now. We don't have 20 years.
 

[00:35:51] Marco Ciappelli: And that's my last observation: things change so fast now that I don't know where the battle is going to happen. I keep envisioning taking two adversarial AIs and putting them in a match. One is the good one.

One is the bad one. And I'm sure somebody is already doing it. Put them in the ring: here's all the information you need, figure it out yourselves, and let me know where the defenses are not good enough, or how creative you can get to knock them down. So I'm not going to be in that ring, Sean.
 

[00:36:28] Sean Martin: The battle is going to be in the mapping of controls, mitigating controls, against risk and exposure, and that's going to happen faster than our human brains can deal with. So we're going to have to use AI to understand that, I think. And, to your point, Ron, strategically look at every part of the business, every part of the security program, and every part of the risk management program.

Ask, where do we sit today? And know that it's going to look very different tomorrow, and then start... I don't know, it's a tough problem. It's a tough problem. I think the only thing I can say beyond that is surround yourself with smart people who are looking at all these tough problems as they surface.
 

[00:37:17] Ron Bennatan: Curious people. Curious people. I think that's the key.
 

[00:37:24] Sean Martin: And I would say, within reason, be curious as well, because that's where you'll sometimes find something that only you can find as a human. So Ron, it's clear to me, and probably to all of our listeners, that we could talk about this for days.

But I think we painted a good picture today: this is a topic you can't ignore. You need to think about it, and surround yourself with smart people like you. Get curious, get curious. So thanks, Ron, and to all of the team at Imperva. Maybe almost a year ago now, we had a chat with your CTO, and now CISO as well, Kunal Anand, and we touched on this topic then, with a lot of interesting things there too.
 

I'm going to include a link to that episode, because it's still...
 

[00:38:19] Ron Bennatan: And he's a brilliant, curious person. Exactly.
 

[00:38:23] Marco Ciappelli: Exactly. But you know, Sean, we talked to Kunal a year ago. You know how many things happen in AI in a year? I think it's time to bring him back.
 

[00:38:32] Sean Martin: What makes a human brilliant is the foresight to say, this is going to matter, regardless of some of the details.

And Ron has that, and Kunal has that as well. So with that, we'll include any links that Ron has that he thinks might shed some additional light on this, and of course links to connect with the Imperva team. And many more conversations coming up, Marco.
 

[00:39:09] Marco Ciappelli: Yeah, I want Ron to come back, because that jar is still full. We need to dig deeper and unleash
 

[00:39:15] Sean Martin: it like that, Ron. 
 

I know.  
 

Stop it there. So many more.  
 

[00:39:21] Marco Ciappelli: Really enjoyed it. Really, really enjoyed it.  
 

[00:39:23] Ron Bennatan: Thank you, me too.