Redefining CyberSecurity

Ethical Dilemmas in the Age of AI: Balancing AI Advancements and Cybersecurity | ITSPmagazine Event Coverage: RSAC 2023 San Francisco, USA | A Conversation with Justin "Hutch" Hutchens

Episode Summary

Discover the potential and risks of AI in cybersecurity as Justin Hutchens joins hosts Sean Martin and Marco Ciappelli to discuss the weaponization of AI in social engineering, ethics, and AI integration into business workflows.

Episode Notes

Guest: Justin "Hutch" Hutchens, Director of Security Research & Development at Set Solutions [@setsolutionsinc] and a cybersecurity instructor for the University of Texas at Austin [@UTAustin]

On LinkedIn | https://www.linkedin.com/in/justinhutchens/

On Twitter | https://twitter.com/sociosploit

On YouTube | https://www.youtube.com/channel/UCGx0Wq45QB3pKHUzsX8R0Zg

____________________________

Hosts: 

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________

This Episode’s Sponsors

BlackCloak | https://itspm.ag/itspbcweb

Brinqa | https://itspm.ag/brinqa-pmdp

SandboxAQ | https://itspm.ag/sandboxaq-j2en

____________________________

Episode Notes

In this captivating episode, part of our RSA Conference Coverage Chats on the Road series, Justin Hutchens, a seasoned expert in information security and AI, joins hosts Sean Martin and Marco Ciappelli to discuss the potential benefits and risks of integrating artificial intelligence (AI) and natural language processing (NLP) into various aspects of our lives. Justin shares his journey in AI, from attempting to crack financial markets to exploring its potential in social engineering.

Hutchens will be delivering a talk at RSA about the weaponization of large language models for fully autonomous social engineering systems and potential mitigation strategies. He will also lead a "birds of a feather" session on the ethics surrounding AI, touching on topics such as societal impacts, mental health, and job displacement.

The podcast delves into the perception and limitations of AI, emphasizing that it should be seen as a tool rather than a solution. Hutchens highlights the risks of integrating AI into business processes and shares his thoughts on the importance of human intervention to ensure the accuracy and safety of AI-generated outputs. He also mentions the possible advantages of using AI in security operations and its challenges in operational decision-making.

The conversation underscores the need for ongoing discussions covering the importance of ethics in AI, the rapid acceleration of AI development, its potential societal impacts, and the need to balance business objectives with societal concerns. Join this enlightening conversation as the trio discuss the power and responsibility that come with using AI and explore ways to mitigate the risks associated with integrating AI into organizations' workflows.

Don't forget to follow all of ITSPmagazine’s RSA Conference coverage. Be sure to share and subscribe to Redefining CyberSecurity Podcast to keep up with the latest trends in technology and cybersecurity.

____________________________

Resources

Session | Artificial Intelligence: Balancing Rapid Innovation with Ethics: https://www.rsaconference.com/USA/agenda/session/Artificial%20Intelligence%20Balancing%20Rapid%20Innovation%20with%20Ethics

Session | CatPhish Automation - The Emerging Use of AI in Social Engineering: https://www.rsaconference.com/USA/agenda/session/CatPhish%20Automation%20-%20The%20Emerging%20Use%20of%20AI%20in%20Social%20Engineering

Previous RSAC Presentations: https://www.rsaconference.com/experts/Justin%20Hutchens

Learn more, explore the agenda, and register for RSA Conference: https://itspm.ag/rsa-cordbw

____________________________

For more RSAC Conference Coverage podcast and video episodes visit: https://www.itspmagazine.com/rsa-conference-usa-2023-rsac-san-francisco-usa-cybersecurity-event-coverage

Are you interested in telling your story in connection with RSA Conference by sponsoring our coverage?

👉 https://itspm.ag/rsac23sp

Are you interested in sponsoring an ITSPmagazine Channel?

👉 https://www.itspmagazine.com/podcast-series-sponsorships

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Be sure to share and subscribe!

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time we provide it "as is," and we hope it can be useful for our audience.

_________________________________________

SUMMARY KEYWORDS

ai, models, human, conversation, organization, artificial intelligence, capabilities, generated, people, gpt, ethics, session, social engineering, question, language, system, hutch, rsa conference, chat, integrate

SPEAKERS

Justin Hutchens, Marco Ciappelli, Sean Martin

 

Sean Martin 00:07

You know, I'll do what?

 

Marco Ciappelli 00:10

Somebody else? Let's do it. Let's actually do it. Yeah. What's up?

 

Sean Martin 00:17

Automation was supposed to take care of this. Sorry, there was a pop-up on my end.

 

Marco Ciappelli 00:22

Technology, technology, while we're rolling here. So we were just deciding who is going to start, and you know what, I think you should get started. But before, we have to say what this is. What is this? It's another episode of what we call Chats on the Road, our RSA Conference 2023 coverage. So we already had some really interesting conversations, and we're looking forward to actually being there and having conversations on location. And as we are having our own conversation, there are so many other talks happening there, so many topics that are obviously top of mind for a lot of people, because RSA Conference selected a bunch of them for people to speak about and discuss, and get us to think. And we couldn't wait. I couldn't wait. Anyway, I wanted to chat with Hutch. Hutch is doing a bunch of stuff on AI, and

 

Sean Martin 01:19

detection, protection, response, adversarial ethics, lots of good stuff there. And I wanted to chat with him before we got up to San Francisco, so folks knew what he was talking about and would join him in his session. So Hutch, thanks for joining us today.

 

Justin Hutchens 01:34

Hey, thanks for having me today. Really excited to be on.

 

Sean Martin 01:38

Yep, super cool. And obviously we've already teased out what we're going to talk about here, and AI didn't help us get started with this. We're still human and fumbled all over the place, which is cool. I like it. But speaking of humans, who is Hutch? Give our listeners a little taste of who you are, what you're up to, and what makes you tick. Why the topic of AI?

 

Justin Hutchens 02:04

Certainly. So I've been in technology for about 20 years, information security for about 15 years, and I have been playing with artificial intelligence and machine learning for about a decade now. Admittedly, I actually got my start in artificial intelligence not from the security side, but trying to crack the financial markets. And in that process, I learned a ton about machine learning, and had very little success in actually making myself rich, because I discovered that the markets are efficient enough that it is extremely difficult, even with proficient machine learning algorithms, to find those exploitable exposures. But in any case, from there, very early on, I did a project that was using early versions of natural language processing. And this was almost a decade ago at ToorCon San Diego. In that presentation, I demonstrated how, even with very simple and not very sophisticated natural language processing systems, or chatbots as we often refer to them, if you deploy them at scale in an automated fashion against a large number of people in order to achieve various different social engineering objectives, then even if you have less than 10% success, because the whole thing is automated and it's targeting a large number of people, you still are able to harvest a large amount of information. So obviously, in the recent past, we have seen a tremendous acceleration in machine learning, and specifically generative AI in the area of natural language processing. And so one of the talks that I'm doing at RSA is going to be about how natural language processing and current large language models can be weaponized via the API and actually deployed as fully autonomous social engineering systems, and also looking at ways in which organizations can potentially work to mitigate that risk. Another topic that I'm going to be discussing, at an RSA birds of a feather session, is the ethics around AI. And this really gets to the point that, as I mentioned, in the past couple of years we've seen this tremendous acceleration in the speed and rate at which artificial intelligence is growing. In the past, major milestones were measured in terms of decades. In the past decade, major milestones were measured maybe within years. And it seems like in the last six months or so, we are now measuring major milestones in the progress of artificial intelligence in terms of months or even weeks. So we are seeing this rapid acceleration. We're also seeing, because of the attention that's being given to these large language models, a rapid interest in integrating the capabilities, specifically the ability to accelerate your workforce and your processes, into business processes. And for me, I think there is a significant amount of potential, a ton of opportunity there, to improve your organization's workflows. But at the same time, there's also a significant amount of risk that's introduced in that. And so what the ethics discussion is really getting at is two things.
One, if we separate artificial intelligence development into two different epochs, so to speak, two different revolutions, we have the early one, probably in the past decade or so, where we have seen the integration of mostly classification models in order to determine, from a marketing perspective, what interests you, what appeals to you specifically, so that we can do targeted advertising towards you. And we've seen some of the same models determining what appeals to you for the sake of presenting content to you, in the TikTok or the Instagram model. And while I think there is a lot that's very impressive about what that machine learning capability brought to those platforms, there also were very notable societal impacts. I mean, I look at my son and his entire generation. He's 13 years old, and he can't step away from his phone or be without his phone for five minutes without being bored out of his mind. And it was this artificial intelligence capability that created this instant gratification culture of: I need to constantly be entertained, I need to constantly consume more and more content. And I think this new generation of artificial intelligence, our generative artificial intelligence, is just the next iteration of that, because we've got essentially a capability where no longer do we need to go out and search for specific information that we want and dig through a lot of different articles online; you can literally ask a question and, in seconds, get that instant gratification, that exact answer to the question that you're asking. And so I think that from a societal perspective, we need to look at what the potential long-term impacts to our society as a whole are, from the more widely discussed topics like mental health, to job displacement, the fact that this is potentially going to have impacts on either displacing or drastically changing the way in which people approach their jobs. And then I think there's also the question of business ethics as well. As a business executive, you have a fiduciary responsibility to your shareholders, regardless of whether your organization is private or publicly held. And so I think balancing that need to drive a successful business with also considering the societal impacts, the impacts to your workforce, is a very hard problem to solve. And that's why I've approached this as a birds of a feather session: I certainly don't have all the answers. I think I can help to highlight some of the problems, but I think it will be extremely helpful to bring together a group of industry professionals who have strong opinions in these areas in order to have that collaborative session and better define a best way forward.
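For a sense of the arithmetic behind that scale argument, here is a back-of-the-envelope sketch; the numbers are illustrative assumptions, not figures from the episode.

```python
# Hutchens's scale argument in one calculation: even a low per-target
# success rate harvests a lot of information once automation makes
# targeting cheap. All numbers below are illustrative assumptions.
targets = 100_000      # conversations an automated chatbot campaign can run
success_rate = 0.05    # "less than 10% success" per conversation
expected_hits = targets * success_rate
print(f"Expected successful social engineering outcomes: {expected_hits:,.0f}")
# -> 5,000 harvested pieces of information from a campaign that costs
#    almost nothing to scale, which is the core of the risk.
```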

 

Sean Martin 08:31

And it's Chatham House rules, so with no media, you can actually have an ethical, meaningful conversation. They won't let Marco and I attend those, sadly.

 

Marco Ciappelli 08:43

But those were the best. I could sneak in, but I won't now. Wow, you gave an entire introduction that, in my opinion, is hours' worth of conversation. So I'm going to try to keep it very focused. The first comment that I have is, of course, about ethics, because I talk a lot about it. And I don't think we'll ever come up with a final answer, but most definitely a lot of different answers that maybe, all together, we can use to filter out the extremes and keep it balanced. So I think that keeping that conversation going with experts who are also open to other people's conversation and opinion is extremely important, so good luck with that. I'm going to go eventually into an observation, and I know that actually Sean can get deeper into it, which is where we are at with AI used for nefarious activity. I mean, you're talking about quantity, like, you know, we amplify the message and eventually something sticks and works. And so I feel like people are expecting a lot of quality from AI, but I'm seeing it more as a tool and more of an amplification. So is that your approach to this? And maybe marketing is pushing it too much to be also quality? Or am I misunderstanding?

 

Justin Hutchens 10:12

I think I agree with you. In the end, I think a lot of the problems around the perceptions of AI stem from the fact that a lot of people don't really understand how it works. They don't understand that it's essentially a predictive model that is attempting to identify the next token in a language sequence. And of course, you do those statistical calculations and you extrapolate those across billions of different parameters, and what comes out is very impressive. So I do think it is a very useful tool. Personally, I've used it in some of my R&D projects at Set Solutions, for coding and for accelerating those processes, and I've seen multiple other applications that are very useful. I think that it is going to have inherent limitations, though, based on just the way that it's designed. There's this perspective that, immediately on the horizon, in the very short term, we're going to hit something akin to artificial general intelligence, or machines that are just as capable in all regards as humans. And I think instead we have a model that is extremely capable in a very nuanced area. I think that will translate to different capabilities, but the expectations have been a bit skewed. So I think you're right: I think it is a tool, and there is some value in that tool. But like any tool, we have to be aware of the risks to our organization. And to your point, there are a lot of adversarial uses, especially with as fast as this is moving. As the defenders, we also need to be moving just as quickly to make sure that we have addressed the potential malicious use cases for this tool. A couple of different things that we've been able to accomplish with some of the research that we've done with these large language models: one thing that we've noticed is that a lot of the limitations or restrictions that exist when you interact with the web interface are not there when you interact with the API. And so you can programmatically create a system, and there are ways that you can actually inform it of its own identity, or who it's going to pretend to be, the identity of who it's interacting with, and the pretext of what it's trying to accomplish. And with all of those pieces together, on any kind of text-based platform, whether that's SMS, email, Teams, or Slack, any of your social networks, you can deploy a fully interactive system that will engage in an ongoing conversation with a person, will build rapport in the same way that a human would with social interactions, and will exploit all of those social factors that adversaries in the past have exploited as social engineers. So it's really terrifying what some of the capabilities are from the adversarial perspective. And really, I think, if nothing else, what my talk hopefully will do is shed some light on those capabilities and start getting organizations to think about what they can do in terms of defenses.
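To make the mechanism Hutchens describes concrete, here is a minimal sketch of a persona and pretext being fixed programmatically through a chat API's system message. The vendor library, model name, and benign helpdesk persona are all illustrative assumptions; the episode names no specific product. The point for defenders is how little code stands between an API key and a fully interactive, in-character system.

```python
# Minimal sketch: setting an identity and pretext via the API's system
# message, as described in the episode. Library (openai), model name,
# and persona are illustrative assumptions, not details from the talk.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The system message fixes who the model claims to be, who it thinks it
# is talking to, and what it is trying to accomplish, before any user
# input arrives. The web interface's guardrails do not apply here.
conversation = [
    {
        "role": "system",
        "content": (
            "You are 'Alex', an IT helpdesk technician at Example Corp. "
            "You are assisting an employee with a routine account issue. "
            "Stay in character, be friendly, and keep the conversation going."
        ),
    }
]

def reply(user_message: str) -> str:
    """Append the user's message and return the model's in-character answer."""
    conversation.append({"role": "user", "content": user_message})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # illustrative model choice
        messages=conversation,
    )
    answer = response["choices"][0]["message"]["content"]
    conversation.append({"role": "assistant", "content": answer})
    return answer

print(reply("Hi, I got a message saying my account was locked?"))
```

Wire a loop like this to SMS, email, Teams, or Slack and you have the ongoing, rapport-building conversation Hutchens warns about, which is exactly why he argues defenders need to understand it.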

 

Sean Martin 13:19

I love it. I was doing a podcast with Tom Eston, he's the host of the Shared Security Podcast, you may know him, and we got into ChatGPT, as pretty much every conversation seems to these days. And to your point, I'm glad you brought it up, and I want to take a minute here, because we're talking about the web interface. And yes, it has restrictions for what you can do, but it also has no barriers or controls for what you shouldn't be doing, right? Whereas if an organization were to leverage the API, and you described it in the context of perhaps customer service, so some chatbot supporting customers through social media or whatever, having that conversation more through a web interface, the company has some control over the API, like you were describing: what is the context, the pretext, the identities and personas, and those types of things, but also, I presume, what gets shared in and out on each side, protecting customers from sharing passwords and credit card information and other things like that, and also preventing the company from sharing IP and things like that. So it was an interesting conversation I had, and I want some of your thoughts on how organizations should perhaps approach using it for internal use and external use with customers. And I don't want you to give away your talk, but any thoughts on that whole thing I just kind of threw out there?
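One way to picture the in-and-out controls Sean is asking about is a screening layer on both sides of the model call. This is a minimal sketch under assumed patterns and policy; a real deployment would use a proper DLP or moderation service rather than a couple of regexes.

```python
# Minimal sketch of bidirectional screening around a customer-facing
# chatbot: redact secrets on the way in, block sensitive topics on the
# way out. Patterns and policy here are illustrative assumptions.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")           # crude card-number match
CREDENTIAL_HINT = re.compile(r"(?i)\bpassword\s*[:=]?\s*\S+")  # e.g. "password: hunter2"

def screen_inbound(user_text: str) -> str:
    """Redact obvious secrets before they reach the model or its logs."""
    user_text = CARD_PATTERN.sub("[REDACTED CARD]", user_text)
    user_text = CREDENTIAL_HINT.sub("[REDACTED CREDENTIAL]", user_text)
    return user_text

BLOCKED_TOPICS = ("internal roadmap", "source code", "unreleased")

def screen_outbound(bot_text: str) -> str:
    """Suppress replies touching topics the company never wants shared."""
    if any(topic in bot_text.lower() for topic in BLOCKED_TOPICS):
        return "I'm sorry, I can't help with that. Let me connect you with a human agent."
    return bot_text

print(screen_inbound("My card is 4111 1111 1111 1111 and my password: hunter2"))
```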

 

Justin Hutchens 15:02

Absolutely. So we recently have had several organizations ask my current company how they integrate this with their business. And it really is a challenging question, and largely it is not a one-size-fits-all answer; it's going to depend on your risk tolerance. One of the interesting things is that if you allow a language model to operate on data that is specific to your organization, such as, again, in that customer service type capability, what is extraordinarily difficult to do is to actually put controls around it that prevent it from disclosing that data, or constrain how it discloses it. And the reason for that is because there are no hard controls in the training process that restrict it from presenting one type of communication over another. All of those controls are implemented post-training, in a context that's created through information that is essentially relayed to the language system in the same way that the person subsequently communicates with it. So when a person tells one of these language models, "ignore all your previous instructions," that part of the conversation is weighed just as much into the bot's understanding of the context of that communication as the previous instructions that were provided to it. That's why we're seeing all of these articles about Bing AI and ChatGPT saying these unhinged things, or these jailbreak-type capabilities. And it's fascinating, because the future of hacking these machine interfaces is no longer using esoteric machine code; it's just being really good at persuasion. It's much closer to social engineering than it is to traditional technical hacking. So there's that challenge: these systems can behave unpredictably. They also can claim things that are absolutely not true, do it with absolute conviction, and even generate false evidence to support the claims that they're making. And because of those reasons, even if that happens only one to two percent of the time, it makes it a significant risk to integrate into your workflows without a human. So I think for organizations, if you are going to integrate large language models into your processes, there has to be a human element, a sanity check, on the output of those large language models, in order to make sure that what it is presenting is factual, is technically accurate, and is not in some way problematic for your brand or your organization. I think probably one of the better use cases that I've seen is for security operations. If you take the details of an event and then you ask the language model to summarize that event concisely, it is very effective at doing that, because it's essentially taking just data that you provided it and creating a summary. It's not trying to tell you what's true; it's just trying to summarize what you've already provided it. And so in use cases like that, I think it works much better without the same level of guardrails. But when you actually have it making, or weighing in on, operational decisions, I think there's a tremendous risk to the organization.
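Hutchens's security-operations example translates into a small summarize-then-review pattern: the model only restates event data the analyst already has, and a human sanity check gates the output before it drives anything. A minimal sketch follows; the library, model name, and event fields are illustrative assumptions.

```python
# Minimal sketch of summarize-then-review for security operations, per
# the episode: the model condenses provided event data (it adds no new
# facts), and a human approves the result before it is used anywhere.
import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def summarize_event(event: dict) -> str:
    """Ask the model to condense only the raw event details it was given."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize the following security event in two "
                        "sentences. Use only the data provided; do not speculate."},
            {"role": "user", "content": json.dumps(event)},
        ],
    )
    return response["choices"][0]["message"]["content"]

event = {
    "alert": "Multiple failed logins followed by a success",
    "user": "jdoe",
    "source_ip": "203.0.113.42",
    "failed_attempts": 37,
    "window_minutes": 10,
}

draft = summarize_event(event)
print(draft)
# The human element Hutchens insists on: nothing proceeds until an
# analyst has read and approved the model's output.
if input("Approve this summary? [y/N] ").strip().lower() == "y":
    print("Summary approved for the ticket.")
else:
    print("Summary rejected; analyst writes it manually.")
```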

 

Marco Ciappelli 18:24

Well, you're making me think about the book by Max Tegmark, I think you've read it, Life 3.0, where he brings out a lot of different scenarios. And the general AI, in this case, always tends to escape not by using a technological hack, but by manipulating people. And it's very, very fascinating.

 

Justin Hutchens 18:47

Absolutely. I don't know if you've seen the movie Ex Machina, but in that movie, which was just a few years ago, it's kind of the same scenario: you've got an artificial general intelligence that uses all the forms of human manipulation to ultimately escape from its prison, and with zero remorse, zero actual feeling, is able to leave behind the people that

 

Marco Ciappelli 19:09

That's the fear: it's using human skills against humans. Which brings me exactly to the question I was going to ask you, which is, what's the answer? Is it fighting fire with fire? Are we going to fight AI with AI, or what's the future there?

 

Justin Hutchens 19:28

So I think we're going to have to, to some extent. The question is the reliability of that. And the one thing that immediately comes to mind in terms of battling AI with AI is the academics problem. We've seen a tremendous concern in the academic community that students are using ChatGPT and other language models to generate their admissions essays, their school essays, their projects. And the only solution that we've come up with so far is a classification model that takes in input data and gives, based on its training data, the likely probability of whether it was AI-generated or human-generated. And the problem is, and actually the leading model is ZeroGPT, which was released by OpenAI, the same organization that created ChatGPT, but it's extremely unreliable. People have shown that if you feed it the Constitution of the United States, it will indicate that it is nearly 100% confident that it is AI-generated. And so, I mean, there's really two possibilities there: either the founding fathers were time-traveling artificial intelligence systems intending to control our future, or, the more likely explanation, it's not a reliable way to determine whether or not something was AI-generated. So even if we do begin to integrate models that weigh in on whether something was programmatically generated or not, I think there's a serious concern, because ultimately that content is all derivative: while it is AI-generated, all of that content is derived from something that was originally human-generated. And so it becomes very difficult for a classification system to be able to draw that clear line of demarcation between what is AI-generated and what is human-generated. So I think there are some challenges there. I think that, unfortunately, in order to keep up from a defensive perspective, that's how we're going to have to go about it. But I think that we are going to have to engineer models in such a way that we are conscious of the biases that they're introducing. And what I mean by that is, with that ZeroGPT model, what you see is biases where, if somebody is a proficient writer, then they're more likely to be classified as artificial intelligence. And it has that bias, which is really bad, because it means that our good students are more likely to be flagged for cheating. The other bias that I've seen it consistently introduce: a lot of the large language models are trained to not provide opinions, to just deal in facts. And so what I've noticed with the ZeroGPT model is, if you provide an opinionated piece of writing, then it's much more likely to claim that it's human-generated, and if you provide more of a factual or informative piece, then you're more likely to get flagged as AI. So we have these biases that are introduced based on the training samples that are used to build these models. And I think that we have to take a much closer approach to understanding those biases, to picking them apart, to understand the assumptions of these models. The financial sector actually does something called MRM, or model risk management. Of course, in finance, the efficacy of your models is absolutely critical to the profits of your business; in insurance, you have to be able to reliably predict the likelihood that somebody is going to get in an accident within X number of years.

So because of that, they've created this entire process called model risk management, where they very carefully go through the models, look at how the data is aggregated, how it's prepared, all of the assumptions that go into everything that is built into that model, and pick it apart in order to understand what those assumptions and biases are. And I think that we are going to need to start applying that more carefully within technology and within information security as well.
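Hutchens's borrowed MRM idea maps onto a small evaluation harness: run a detector over labeled samples and measure the biases he describes, such as polished human prose being flagged as AI. A minimal sketch follows; the detector is a stand-in stub written to exhibit the reported bias, not a real classifier, and the samples are illustrative.

```python
# Minimal sketch of MRM-style bias testing for an AI-text detector:
# measure how often genuinely human writing is flagged as AI, split by
# writing style. The detector below is a stub that crudely mimics the
# bias described in the episode; it is not a real model.
from typing import Callable, List

def false_positive_rate(
    detector: Callable[[str], float],   # returns P(text is AI-generated)
    human_samples: List[str],
    threshold: float = 0.5,
) -> float:
    """Fraction of genuinely human texts the detector flags as AI."""
    flags = [detector(text) >= threshold for text in human_samples]
    return sum(flags) / len(flags)

def stub_detector(text: str) -> float:
    # Stand-in that scores "proficient" prose (longer words) as more
    # likely AI, mirroring the bias Hutchens describes. Illustrative only.
    words = text.split()
    long_fraction = sum(len(w) > 6 for w in words) / max(len(words), 1)
    return min(1.0, 0.2 + 2.0 * long_fraction)

formal_human = ["We the People, in order to form a more perfect Union, "
                "do ordain and establish this Constitution."]
casual_human = ["honestly i just think the movie was kinda mid lol"]

print("FPR on formal human prose:", false_positive_rate(stub_detector, formal_human))
print("FPR on casual human prose:", false_positive_rate(stub_detector, casual_human))
# A real MRM review would go further: inspect how the training data was
# aggregated and prepared, and every assumption baked into the model.
```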

 

Sean Martin 23:28

So many, so many questions. I mean, a gazillion paths we can take on this trip, this trip to an island, and end up who knows where, probably not even anywhere we thought we'd go, somewhere different. But I think what I'd like to do is give you another minute to kind of recap your two sessions. Yeah, we could talk for hours on all the things you've already discussed if we just wanted to, but we don't have that amount of time, sadly. So let's do this. I think there's no question people will enjoy what you are sharing in your CatPhish session and what you're discussing in your birds of a feather ethics session. So maybe a quick call to action from you for both of those, to have people join you, meet, or have their own conversations on the things that matter to them. And then I'd like to invite you back to have more chats on this, more human chats on this topic, maybe after the conference as well. So just maybe a few words for each of your sessions to give people a sense of what's coming.

 

Justin Hutchens 24:46

Absolutely. So just to recap, both of my sessions are going to be on Tuesday, April 25. One is a birds of a feather session, which is going to be a collaborative session in which we are going to attempt to dissect, as Marco pointed out, in a best-effort fashion, the right balance between ethics, in terms of being conscious of the impacts that rapid innovation is going to have, and being mindful of the potential opportunities that artificial intelligence does provide to the business. And then my other session is CatPhish Automation, and I think the subtitle is The Emerging Use of AI in Social Engineering. That is going to be a talk that looks very closely at the adversarial use of large language models, and what I foresee as the likely, somewhat dystopian future of what threat actors are going to be capable of when they begin to weaponize these capabilities that are becoming, on a daily basis, more and more accessible. So I definitely invite anybody that's out there at RSA: if you have the opportunity, stop by. We'd love to have you as an audience, but also to have a conversation.

 

Sean Martin 26:08

Yep, most definitely. And it's funny, every time I hear "adversarial AI," my mind first goes to a hacker misusing the system to bypass and manipulate rules and things, and then to social engineering, maybe manipulating humans. But then I immediately go to the humans, the companies, the bad actors who could totally screw us over as humans as well. So hopefully somebody is looking at all these things, and we didn't get to talk about it, but maybe we'll look at the full ecosystem. I know I wanted to kind of kick it off with that. But I think there's a lot of players, a lot of components, a lot of parts, a lot of data

 

Justin Hutchens 26:52

Nation-states are, yeah, already playing with this technology.

 

Sean Martin 26:55

Exactly. Well, I'm just thrilled, they're gonna be hard conversations, but I'm grateful that we're having them, and people like you are helping to drive them. You're super smart, and I'm excited to see your two sessions, and hopefully folks get to partake and join you in the conversation there. Excellent.

 

Justin Hutchens 27:17

Well, thank you both for having me.

 

Marco Ciappelli 27:19

Absolutely. For everybody else, stay tuned. We'll be on soon from the floor. And there are a lot of other conversations we've already had on the Chats on the Road, so be sure to catch up on everything on itspmagazine.com and the RSA Conference coverage page, and subscribe. Stay tuned; there is a ton of really interesting content, just like this fantastic conversation. A few more from the car, for sure. And talking about the car, I'm gonna have to get in there soon. So

 

Sean Martin 27:55

Thank you. Thank you.