Redefining CyberSecurity

Exploitation of Humans by AI Assistants | A Conversation with Matthew Canham and Ben Sawyer | Las Vegas Black Hat 2023 Event Coverage | Redefining CyberSecurity Podcast With Sean Martin and Marco Ciappelli

Episode Summary

In the lead-up to Black Hat Las Vegas 2023, hosts Marco and Sean converse with experts Ben and Matthew about the intersection of AI, cybersecurity, and psychology, exploring how AI might exploit the perceived blurring of the line between machine and sentience to manipulate our cognitive biases, alter our perceptions of reality, and pose unprecedented risks in our increasingly digital world.

Episode Notes

Guests: 

Matthew Canham, CEO, Beyond Layer Seven, LLC

On Linkedin | https://www.linkedin.com/in/matthew-c-971855100/

Website | https://drmatthewcanham.com/

Ben Sawyer, Professor, University of Central Florida [@UCF]

On Linkedin | https://www.linkedin.com/in/bendsawyer/

On Twitter | https://twitter.com/bendsawyer

Website | https://www.bendsawyer.com/
____________________________

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast and Audio Signals Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________

This Episode’s Sponsors

Island.io | https://itspm.ag/island-io-6b5ffd

____________________________


Welcome to a fascinating new episode where we delve deep into the confluence of cybersecurity, psychology, and philosophy in the realm of artificial intelligence. In anticipation of their presentation at Black Hat Las Vegas 2023, our hosts Marco and Sean had an engaging conversation with Ben and Matthew, shedding light on the astonishingly rapid developments in AI and their cybersecurity implications.

Within the last few months, the GPT-4 and ChatGPT language models have captivated the world. There is a growing perception that the line between AI and sentience is becoming increasingly blurred, nudging us into uncharted territory. However, one must question whether this is genuinely the case, or merely what we want, or are predisposed, to perceive.

Ben and Matthew's research outlines the fundamental "cognitive levers" available to manipulate human users, a threat vector that is more nuanced and insidious than we ever imagined.

In their upcoming Black Hat talk, they aim to reveal how AI can exploit our cognitive biases and vulnerabilities, reshaping our perceptions and potentially causing harm. From social engineering to perceptual limitations, our digital realities face risks we have never seen before.

Listen in as Marco and Sean explore a captivating debate around the nature of reality in the context of our interaction with AI. What we think is real may not be real after all. How does that affect us as we continue to interact with increasingly sophisticated AI? In a world that often feels like a simulation, are we falling prey to AI's exploitation of our human cognitive operating rules?

Marco and Sean also introduce us to the masterminds behind this groundbreaking research, Ben Sawyer, with his background in Applied Experimental Psychology and Industrial Engineering, and Matthew Canham, whose work spans cognitive neuroscience and human interface design. Their combined expertise results in a comprehensive exploration of the intersection between humans and machines, particularly in the current digital age where AI's ability to emulate human-like interactions has advanced dramatically.

This thought-provoking episode is a must-listen for anyone interested in the philosophical, psychological, and cybersecurity implications of AI's evolution. The hosts challenge you to think about the consequences of human cognition manipulation by AI, encouraging you to contemplate this deep topic beyond the immediate conversation.

Don't miss out on this thrilling journey into the unexplored depths of human-AI interaction.

Subscribe to our podcast, share it with your network, and join us in pondering the questions this conversation raises. Be part of the ongoing dialogue around this pressing issue, and we invite you to stay tuned for further discussions in the future.

Stay tuned for all of our Black Hat USA 2023 coverage: https://www.itspmagazine.com/bhusa

____

Resources

Me and My Evil Digital Twin: The Psychology of Human Exploitation by AI Assistants: https://www.blackhat.com/us-23/briefings/schedule/index.html#me-and-my-evil-digital-twin-the-psychology-of-human-exploitation-by-ai-assistants-32661

For more Black Hat USA 2023 Event information, coverage, and podcast and video episodes, visit: https://www.itspmagazine.com/black-hat-usa-2023-cybersecurity-event-coverage-in-las-vegas

Are you interested in telling your story in connection with our Black Hat coverage? Book a briefing here:
👉 https://itspm.ag/bhusa23tsp

Want to connect your brand to our Black Hat coverage and also tell your company story? Explore the sponsorship bundle here:
👉 https://itspm.ag/bhusa23bndl

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/podcast-series-sponsorships

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time we provide it “as is” and we hope it can be useful for our audience.

_________________________________________
 

Sean Martin: Marco.  
 

Marco Ciappelli: Sean.  
 

Sean Martin: It's, it's interesting when what you think is real, is really not real.  
 

Marco Ciappelli: Or is it?  
 

Sean Martin: Or is it? Right? I don't know. Is it perspective? Is it, uh, is it what you want it to be? So what reality is, is what you think it is?  
 

Marco Ciappelli: I don't know. I'm convinced I live in a simulation to start with. So, we're already having some kind of a, some kind of an issue here. 
 

But, uh, you know, when we talk about these things, I still feel like, and I talk a lot about this on my show, but I still feel like we're talking about sci-fi. We're talking about the future. Then all of a sudden you realize the future is... And, uh, it's not a book, it's actually things that we're talking about, not a movie, not a movie. 
 

It's actually a talk, uh, that is about to happen at Black Hat USA 2023 in Las Vegas. So it's very much real. And, uh, and I'm so excited to have this conversation that I already booked another conversation after this, that's how much I trust this conversation. 
 

Sean Martin: The question is, are you going to have that? 
 

Marco Ciappelli: I'm going to have it. With my evil twin. It's going to happen. All right. Perfect. It's going to be much more fun. 
 

Sean Martin: I can't wait. Well, let's, uh, let's talk to, uh, these two evil twins, uh, partners in crime that put this presentation together for Black Hat. Uh, it's called Me and My Evil Digital Twin: The Psychology of Human Exploitation by AI Assistants, and, uh, there's a lot in there, I'm sure there's a lot. 
 

Outside of what you're presenting as well, leading up to, uh, the findings and the interesting topic that folks are probably just going to overspill the room trying to listen to on that day. Um, so I'm excited to get into, into the topic, but before we do that, I think there's a bit of background to cover first: who Ben and Matthew are, thanks for joining the show, and then, uh, how you met and what led you to this particular session. 
 

So Ben, I'm going to start with you a few, a few words about, uh, what you've been up to, where you are now, and then over to Matthew and, and then you can kind of both jump into, to how you came together for this.  
 

Ben Sawyer: Sure. So, my background is in Applied Experimental Psychology and Industrial Engineering. 
 

I'm, um, depending on which story you tell, either an engineer that walked across to talk to the psychologists or vice versa, which makes me a little weird to both tribes. Um, I've always liked building things. Uh, I did my master's in engineering, uh, but my doctorate working with human experimentation. 
 

And it puts me in a very small tribe of people who really care about the intersection of humans and machines, of which Matt is one. Um, Matt, uh, and I met at a conference for people who care about this intersection, uh, called the Human Factors and Ergonomics Society. And I was working or had just finished working for the Air Force's 711th Human Performance Wing. 
 

And I knew what the three-letter agencies looked like, because I met a lot of them in that role, and Matt was wearing a... a badge that said, uh, consultant, if memory serves, uh, nice generic suit, and that just looked like a lot of folks I'd met. And it was interesting to see somebody at that conference, you know, who would have that interest. 
 

That's, that's rare anywhere. And that started a conversation that I think has been going on nearly a decade, if not a little more now. Um, Matt and I have worked together on quite a number of projects in the past, at the intersection of neuroscience and cybersecurity, at the intersection of, um, humans and cybersecurity in many, many contexts. 
 

But this moment, I think, is a little different than all the others. We've both been waiting, I think, our whole careers, knowing that it was likely we would live through the moment where socially able technologies got so socially able that, uh, the game would change dramatically. And here we are. 
 

Sean Martin: Yeah, it's interesting. 
 

And I, I want to hear, uh, you Matthew, but just the, you mentioned neuroscience, and it's easy to forget that behind a lot of what we're going to be talking about is the science of the brain. As soon as you have an interface like ChatGPT, it's easy to forget that. Oh, yeah, there's human cognitive science in there. 
 

So Matthew, uh, your background and, uh, and, and what you're up to now at the university.  
 

Matthew Canham: Well, and I'm glad that you brought that point up about neuroscience, uh, because when I... So my Ph.D. is in cognitive neuroscience, and the very first semester of my program, um, you know, I was expecting something a little bit more like psychology, and instead we were building neural network models of vision systems, human vision systems, trying to model these systems. 
 

And so that was my first real exposure to, um, AI being used as a description of how human cognition works. And so that's really kind of how I got into the field. And, um, then, uh, my master's work, uh, focused on, uh, human interface design. And then, um, uh, my doctoral work focused on, um, collaboration in online teams. 
 

And, um, so after I finished my PhD, I, I did a stint with the U.S. government, and, uh, at the tail end of my tenure there, I managed a program in emerging technologies where I was exposed to technologies that were new and up-and-coming from across the spectrum of technologies that you can imagine, everything from biotechnology to wireless to AI. 
 

And, um, then I got an offer to join the faculty with the behavioral cybersecurity program at the University of Central Florida. So I decided to pivot back into academia, and I was there, um, until about two years ago, and now I consult full time, uh, in security, and most recently I'm starting to focus on AI security, which is how, uh, I relate to this talk. 
 

Marco Ciappelli: Okay, so I'm going to take it, but I'm going to refrain from going too social here, because I know that this is going to be focused on the cyber attack surface and techniques. But I do have to make a note, because it's kind of a reflection of what we've been doing at ITSPmagazine, where we started with the intersection of society, I mean, society and cybersecurity, then we added technology a few years later, because you simply cannot detach the three of them. So as you're describing all these things, you know, people really wanted the machine to start to be, uh, more human, even people embedded in these, they're declaring that: 
 

Oh, I think it's getting sentient. And they're like, yeah, I think you want that. Maybe it's not quite there yet. But, um, are we, are we really there? Or are we still in kind of an illusion that the machine is thinking? Uh, and then we can go on to cybersecurity. But I need to ask you this, both of you. 
 

Ben Sawyer: I'll start by saying that there are certainly people who are good at projecting the illusion that they're thinking. And that's not even saying that they're not sentient. I'm just saying that that's a common social skill. You know, we've all fallen for it. Okay. The other thing that's true is that these things are very good at playing human games. 
 

That doesn't mean they're sentient, but it definitely creates, uh, an opportunity for us to not even hope that they're sentient, but just feel deeply that they are. Um, humans are filled with, uh, you know, hooks and triggers and switches and levers, uh, that, that can be used for manipulation. And we use that with one another all the time. 
 

Um, our technology is increasingly able to access all of that. And what we're facing right now is a set of technologies that are startlingly able to do so. So while I don't think it is sentient, I think that, uh, the people who have staked their careers on saying it's sentient probably deeply believed that was true. 
 

And probably we're going to see a lot more of that. The question of whether something is sentient may be a very abstract, philosophical question. The question of whether you think you're talking to something sentient is something that you make a decision about, and I think a lot of people who interact with this technology are making the decision that it is. 
 

Sean Martin: Yeah, it's not, not a one or zero here, Matthew.  
 

Matthew Canham: Right. Well, so our, uh, cognitive, uh, architecture, uh, predisposes us to, um, impose sentience or impose, um, uh, autonomous or, or, um, I want to say, um, human-like traits on inanimate objects. And I can remember, uh, one time in particular that, uh, I had a vehicle, a car that died on me, and I was convinced that that car was deliberately picking the worst time to, uh, die on me and, and leave me stranded that it possibly could. 
 

Because it, it certainly optimized for it, but in retrospect, I don't think it did this, you know, in a sentient manner. Um, now that being said, when we're already predisposed this way and you start to add a few... levers to that, or you add a few, uh, characteristics that sort of nudge us in that direction, then you get a very powerful, uh, combination. 
 

And then the other thing I think that, um, shouldn't be ignored here is that for the last two decades, we've had an industry emerge that is tailoring content to us individually. And what we're seeing now with, um, these AI systems is that that has just ramped up by like 10 or 100x. And, um, when you look at, um, some of the extreme edge cases, there was a case of a gentleman in England who, uh, decided to try to assassinate the queen. 
 

There was, uh, someone else, uh, who, uh, committed suicide after their therapist convinced, their AI therapist convinced him it might be a good idea. Now, these individuals were probably already predisposed in those directions, but the, the interaction with that AI just sort of nudged them a little bit more in the direction they were already heading. 
 

Sean Martin: So when I think about the technology part of this, um, two things come to mind. One is scale, and time is the other one, where, if somebody, let's say a bad actor, wants to conduct a phishing campaign, they, they might just do a one-shot, blast it out to as many as they can, um, and see what they get. But with something like AI and machines, uh, one, we can reach more people, um, in a more targeted way, but perhaps, and maybe your thoughts on this, um, it doesn't have to get the action in the one 
 

thing that I'm going to deliver. I can build a relationship over time, and you may not act. You may not even think. It may be just triggering something in the back of your mind that later you then think about, and even later you then act on. So, I don't know, any thoughts on that? 
 

And obviously connected to your stuff here. 
 

Matthew Canham: Yeah. So about three years ago, I started running into bots, uh, that were initiating, um, gift card scams. And what we were seeing was real scripted attacks that would go, uh, about three to four interactions. And then there was this very distinct switch where you could tell that a human had taken over. 
 

I think, I mean, so what that does for the bad actors is that it allows them to filter out anyone who is not already predisposed to go along with the con, right? And what we're seeing now with, um, the emergence of the LLMs is exactly what you said: we can have, uh, almost an automated, uh, yeah, an automated catfishing scheme, where instead of, uh, three or four exchanges, you could talk about something going on for six, seven months before it actually gets to the meat of, of what the, the aim of that scam is. 
 

Sean Martin: And I can only, I can only connect this and Ben, I'll let you go in a second, but I can only connect this to another conversation we had also from Black Hat, uh, from yesterday, talking about cookies, where it's an identity and your, your session lives on with you for as long as that cookie exists. And in fact, that's, that's your key for, uh, the AI interaction, if I'm not mistaken. 
 

Ben Sawyer: I think it's going to get a lot stranger too. You know, we all lived, uh, I believe in this room, judging by overall levels of gray, we've lived through a period where the internet brought forms of fraud that were just previously unimaginable. Um, I watched my older relatives try to grapple with it. And it's interesting because, um, my beard is the same shade as theirs was when, uh, they were doing that. 
 

And, and I think there's going to be a whole new wave of things that were previously unimaginable. So, while this technology will be great at writing phishing emails, and it is, and there are some wonderful papers and, and blog articles showing how capable it is, it also is going to allow for the type of one-on-one fraud that is limited by the number of humans who perpetrate that type of fraud right now to become much more ubiquitous. 
 

I mean, right now, the number of grifters who could separate you from your life's savings... 
 

Um, but this technology will allow not only the creation of a huge number of human-machine relationships, but also the ability to spin up highly talented manipulators, virtual manipulators, in large numbers, the way you might send spam emails. It'll get weirder, too, because, you know, if you, uh, Marco, decided to turn on me and betray me, you and I have many, uh, cues, socially, that we share that might give me a fighting chance of knowing that. 
 

I mean, maybe not. I don't know your overall level of, you know, ability to betray. You might be excellent at it. He's pretty evil. I, I, Sean, I, I, I was going to ask, but I mean, regardless, we share a lineage, uh, and, and, and, and social cues and other things that would give me a chance. Um, with a virtual agent, you feel like you have that, that feeling of the, the social interaction with you, but that, that thing is not real. 
 

It's not what we are in any way. In fact, when you talk with them, it's interesting. I don't know who said it, but it was one of the journalists writing about them, who said it's like the mask slips a little bit and you see the, the insanity of what this thing actually is. I mean, what is it? It's a stochastic model, trying to figure out the world through relationships in our language. 
 

That's a crazy object. And, and what is true about it also is that it has none of the cues that it might use to tell you that it's been compromised, which can happen instantly. And in that instant, you don't need the spam email. The thing that you've built this human trust in, that's helping you, can very suddenly have motivations that are not your motivations and work against you. 
 

And it probably won't be like in Hollywood, where the screen will go bzzz and then, you know, its eyes will turn red to give the viewer a nice cue that it's turned evil. It will just very silently and instantly start working against you, even as it helps you in other capacities. So that's a much more interesting threat than a phishing email. 
 

And I think we're going to see some really, uh, terrifying things happen as people place a lot of trust in these things and learn exactly how much they can work for you and against you simultaneously. 
 

Matthew Canham: So, uh, of the two of us, I'm usually the pessimist in the room. I'm going to role reverse. 
 

Marco Ciappelli: Oh my God. Not now. 
 

I'm worried.  
 

Ben Sawyer: I must have gone a bit dark there for a second. 
 

Matthew Canham: Um, to, to come off of what, uh, Ben was saying about the weird, um... So something that's not talked about as much, but a project that Ben and I are working on, is social engineering active defense, or SEAD. And these things are not only going to be used by malicious actors, right? 
 

Because if I know that I'm getting phished by a certain source, I can turn that around and I can act like a willing patsy and just keep them going, you know, down the line and, um, you know, uh, string them along and waste their time. So I think that's something else that we're going to see, if we haven't already seen it, is sort of this AI-versus-AI, uh, interaction happening before it even gets to the human. 
 

Ben Sawyer: That's one reason we use the moniker digital twin. You know, it's a, it's a known phrase in technology. And really, these things get asked to be digital twins for us. You know, you sit down with ChatGPT. Very quickly, you ask it to write an email for you. 
 

You ask it to act on your behalf. Talk to me about something in my life. Help me make a decision. Um, what you're asking it to do is act as you. And in that, I think we're going to see a lot of, um, machines that are acting as us interacting with one another in the way that Matt describes. And, um, that's going to get pretty exciting at times. 
 

I mean, spam filters are a much more boring technology than what is to come. Um, which will probably be engaged in wasting one another's cycles to make it too expensive to attack people in terms of just computational time. 
 

Sean Martin: Yeah, I was just thinking about that. The, uh, the amount of time we're going to have to spend, well, assuming we even twig on it, the, the amount of spam that I delete, the emails that I delete, just one example, right? 
 

If it starts to interrupt our daily lives because we're trying to assess and analyze and get rid of junk that we don't want because we asked it to in the first place. Um, the other thing that comes to mind, and we probably don't need to go down this path, but just the idea of, um, multiple people and twins coming together. 
 

I know, like. Uh, I've heard stories about dating apps, using, using AI to write and communicate with others to, to be a better person in that, for example, uh, could be some interesting things we'll save a lot of that. I think for when Marco has you back, hopefully he'll invite me. I doubt it because he doesn't like me on some of those, but, uh, I want to get to your session because, uh, I think people. 
 

well, if they're, if they're already going to Black Hat, they've probably already bookmarked this to join you. If they're not, they might want to consider going to Vegas and, and, uh, catching this, this session. Give us a, an overview. Um, I know we've talked a lot about the topic, but give us an overview of what people 
 

can expect to hear and maybe even see when they, when they see you on stage there. 
 

Matthew Canham: Well, I think what we want to do is, we are going to talk about LLMs, and we do want to talk about some of the technology driving these, but very quickly we will also want to establish that we are not just talking about LLMs. 
 

I think the analogy here is that what we're seeing with LLMs, I'm sorry, large language models, is kind of like, for those of us who are old enough to remember the Internet before browsers, I think we're seeing kind of like what we saw with the Internet before browsers. And what the world, I think, is waiting for right now is the first Netscape Navigator 
 

to come online, which will be, uh, interactive video, interactive, uh, voice and sound, um, but powered by something like a GPT-5 in the background. And when we have that, um, that's going to obviously change everything, and that's, that's what we're going to be talking about in our, in our talk: what happens when we cross that threshold, and also what are the specific human vulnerabilities to that. Because we've already talked about sentience and the predispositions to ascribe sentient 
 

uh, attributes to something that obviously isn't, and, uh, and then we've got a few Easter eggs that we'll share that, uh, are probably going to be a little bit disturbing for a few people. 
 

Sean Martin: I'm picturing every one of us having our own media enterprise, where we have multiple channels creating videos and shows, some, some of it real with us acting, some with our 
 

evil twins and evil twin dogs and cats and our friends and family and... 
 

Ben Sawyer: I don't know. I mean Hollywood's striking about that right now. Right, yeah. The idea that you own your identity is, is an interesting idea that might not last our moment. There's all sorts of types of privacy that have been given up. 
 

in the last 20 to 30 years in ways that, you know, an 18-year-old can't even conceptualize the types of privacy that existed in the 1980s, for example. But I, I think, um, you know, a lot of what's wrapped up there is all of these assumptions, like, it'll be me and a system. Well, no, the system is a chunk of code that has many heads and does many things. 
 

Your therapist and your financial analyst and your girlfriend and your cybersecurity program all rest on the same code base. They're just wearing different faces for you. You take that back another step. Um, they serve your company. Everyone in your company has that relationship with that same code base. 
 

Take that back a step. They serve your country. US code bases will be different than other code bases. So this, this thing will feel very personal and at the same time will be as distributed as a website, you know. And, um, that I think leads to a world that's very difficult to conceptualize for many of us who've grown up with types of privacy that are about to vanish. 
 

Um, and they're already vanishing. I say "about to," but this is happening right now. The future, uh, I believe... 
 

Sean Martin: At this point, I'm not worried about privacy.  
 

Ben Sawyer: Yeah, I agree. But yeah, I think you might...  
 

Sean Martin: Something acting on my behalf is a little more disturbing, I think.  
 

Ben Sawyer: Well, but here you are, making a podcast with your voice and your face. 
 

Right now, you can't imagine a world where you were separated from them and they had intent beyond you. That would be a weird thing. 
 

Marco Ciappelli: I usually talk a lot more than this, but I'm thinking so much that I'm a little... Um, I mean, I'm watching a movie, many movies put together in my head, like the multiple faces, but underneath it's the same one acting. And, and I'm thinking this is like one central big computer controlling, you know, a big Jarvis controlling everybody. But without going there, is there a way that you think it can be used to not get there, into this dystopia, and actually harvest, you know, a little bit of a more utopian future for, for these? 
 

And I, I know it always goes on both sides, but when it comes to cybersecurity, again, I'm, I'm trying to start here: all this rush into putting AI in every single product you have in technology, powered by AI, you know, like powered by Intel, not paid for this, but I put it there, back in the old days. Um, regulation, legislation, uh, can it help? 
 

Do you, do you see a way out of an inevitable blue, dark, very cool future? 
 

Ben Sawyer: I'm generally the optimist.  
 

Marco Ciappelli: Oh, I can be, I can be very negative. So,  
 

Ben Sawyer: I don't think that this future has to be dystopian at all. Okay, good. And in fact, um, I wouldn't be working in this space if I thought it was only going to be dystopian. 
 

Um, I think, uh, at this moment we have a great opportunity to get ahead of this technology and make it less dystopian. And in fact, um, in some ways, uh, there are utopian possibilities. There are situations where things right now that are, uh, very unevenly distributed like education and, um, access to good information and expertise will become far more evenly distributed to, to everyone. 
 

And, um, I think what is important is to work, uh, in good faith to do that, first of all. There's a lot of snake oil and a lot of cash grabs going on right now, and that will be true. But we are very hopeful that, um, the community at large feels the sense of responsibility that we do. I think the other thing is it's important to create a conversation 
 

with the parts of the scientific community that can help. Um, the idea that algorithmic cyber exists has always been a little weird, you know. Um, the, the systems that are, uh, code watching code have always had these gray signals that humans had to go in and classify, and, and you always need somebody checking the firewall to check the exceptions. 
 

Right? So, so that sort of mentality gets a lot weirder when the, um, technologies themselves are intensely social, and we think that a lot of the possibility for a brighter future comes with working with the large body of knowledge we have in psychology, sociology, and the social sciences, but also with the sort of human-technology fields like UX. 
 

Um, UX and AI, it's gonna be a huge thing. Uh, and also, uh, some of the things that you might not normally think of, like counseling as a discipline, I think, has an enormous amount of relevance in a moment where the technology talks back. And you see that large companies are retaining people who professionally speak to other people, to learn how these things should be speaking to us and helping us. 
 

Sean Martin: Matthew, I want your, your thoughts on this as well. And I want, I want maybe, maybe a couple of points, but you can include them as you're, as you're describing your view on this. Um, we know we're not good at fixing things before they're a problem. Um, that's why we have a cybersecurity industry that's all about detecting and responding. 
 

So I'm wondering, do we, do we have, and/or do we need, something along those lines in this world? And then, to Ben's point, uh, on UX and, and what, what it's doing, do we, do we have and/or need something to promote transparency? Those, kind of those two things. 
 

Matthew Canham: So let me take those in reverse order. Um, first of all, with the UX side, um, I guess the first thing I would say is, is when people talk about this technology, the first thing that comes up is AGI, artificial general intelligence. 
 

And I don't really see that as being the most immediate threat. What I do see as being the most immediate threat, and I mean, I think this is within the next two or three years, is that these technologies are going to fundamentally change how we relate to information, and I don't think we are ready for that. 
 

Um, and I don't think there's any way to be, become ready for that, right? Because I don't think we understand how that relationship is going to change. Up to this point, the internet has been kind of the same relationship that we've had with technology, but just it's instantaneous, it's distributed around the world, it's asynchronous, so on and so forth. 
 

But the information itself is fundamentally kind of the same. Um, what's going to change now is that you're not going to, I'm trying to think of how to, it's, this is difficult to describe, because we don't really know what, what's coming, right? But right now, when you write a document and you interact with Word, Word is still a relatively passive partner in that relationship, and that's going to go away. 
 

And it's going to be much more like when I collaborate with Ben, and we, we sort of riff off each other's ideas, right? And so that's, that's coming. Um, now, in terms of the not being good about fixing things beforehand, I, I think it's sort of a fundamental law of the universe that you can't fix things before they happen. 
 

And so I think this is where a lot of the disruption is going to come from. Um, uh, somebody mentioned regulation earlier. Um, AI is going to disrupt government. I mean, at what point do we really need a human legislature anymore, if our digital twin legislature can represent our interests better than the humans? And maybe we have more faith in that because we know it can't be bribed. 
 

It can't be blackmailed because it slept with a prostitute when it shouldn't have, um, you know, a whole list of things. But those of us in the security world understand that these things could be compromised all the same, but that's not what I'm talking about. What I'm talking about is the perception of them. And so my, my concern in that regard is that this is going to be disruptive in ways that we are not going to anticipate. 
 

And in terms of an approach to that, I would argue that, to a certain extent, we should probably look to nature as a model. Nature is fundamentally antifragile. And I think that's the approach that we need to start taking: can we somehow limit the damage that this thing, these things, can cause, so that we can still experiment and let things fail, but it's not going to be catastrophic and take down the entire global web or something, for example? 
 

Ben Sawyer: Yeah, I also think it's really important to realize how crucial the cybersecurity community is going to be. You know, it's interesting that this is a community that was built around a revolution that was, at the time, as unimaginable as this one seems now. And you think back to moments like BlackBerrys 
 

taking over our government, that seemed weird and radical at the time and seems kind of quaint now. Um, when you consider the community that exists in, in the cybersecurity profession at large, uh, movements like DevOps, uh, and, and sort of the understanding of how to look across an organization, and larger organizations of humans, and look for the types of risk that exist there. 
 

To take technology that is unproven and, and red team it, and try to figure out how it's going to break people. I mean, it's really interesting and instructive to look at the teams like ours that are red teaming large language models to understand how they're dangerous. We sound like we're at the forefront, but we're using a playbook that's well understood in the cybersecurity community and is, I think, really, really vital to passing through to a future that's, that's not dystopian. 
 

And that's a future we really want. And I think a future that, um, I as an academic and, and, and Matt as an entrepreneur, I mean, this is going to be our life's work and the life's work of a lot of other really amazing people in this community who are going to help get us there. I do have faith in that. 
 

And it's going to be weird sometimes.  
 

Sean Martin: I'm just, I'm picturing the denial of service where, where the, the system goes down and you can't do something. I don't know. I don't know. I want to be positive. I'll, I'll, I'll join you. I'll join you in your positivity. I will be positive about this. Um, I'm grateful that you guys are 
 

joining forces and bringing this topic to bear at Black Hat, and sharing your thoughts and insight and research and other elements, Easter eggs and everything else. At Black Hat, it's Thursday, August 10th: Me and My Evil Digital Twin: The Psychology of Human Exploitation by AI Assistants. That's a mouthful. 
 

I'm sure you had help writing that title, uh, perhaps it's some ChatGPT. But, uh, no, I'm, I'm seriously, uh, thankful for what you guys are doing and, uh, happy to have you on the show. Looking forward to either hearing or being part of the conversation with Marco as we, as we blow this out, uh, psychologically and, uh, philosophically on the next one. But until then, uh, we'll include a link to your session, links to your profiles. 
 

Of course, everybody stay tuned. There's lots coming from, uh, Black Hat, including the others. We had another talk on LLMs and security. We had a talk on cookies, which I mentioned earlier. So a lot of cool stuff to, uh, to talk about here. And, and, uh, hopefully you get a chance to see Ben and Matthew. So thanks, guys. 
 

Marco Ciappelli: Thank you very much.  
 

Ben Sawyer: Thank you all.  
 

Matthew Canham: Yeah. Thank you.