Redefining CyberSecurity

Book | The Language of Deception: Weaponizing Next Generation AI | Unmasking the Invisible Threat of Tomorrow's AI | A Conversation with Justin 'Hutch' Hutchens | Redefining CyberSecurity Podcast with Sean Martin

Episode Summary

In this episode of Redefining CyberSecurity, host Sean Martin and guest Justin Hutchens discuss the real and emerging risks of AI, the increasing prevalence of bots, and the importance of responsible innovation in technology.

Episode Notes

Guest: Justin "Hutch" Hutchens, Host of Cyber Cognition Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/hutch

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

____________________________

This Episode’s Sponsors

Imperva | https://itspm.ag/imperva277117988

Pentera | https://itspm.ag/penteri67a

___________________________


In this episode of Redefining CyberSecurity Podcast, Sean Martin, the host, engages in a riveting conversation with Justin Hutchens, also known as Hutch. Hutch, a seasoned R&D professional, is the co-host of the Cyber Cognition podcast and the author of The Language of Deception: Weaponizing Next Generation AI.

The conversation orbits around the objective of Hutch's book, which is to dispel the fear, uncertainty, and doubt (FUD) that often clouds the understanding of AI, and to illuminate the real and emerging risks that we face in our rapidly evolving technological landscape. Hutch also shares his extensive experience in creating a proof of concept for adaptive command and control malware driven by ChatGPT, demonstrating the potential dangers of AI-powered malware attacks.

The discussion extends to the increasing prevalence of bots in our daily online interactions and the need for individuals to be mindful of this when interacting online. Hutch emphasizes the importance of responsible innovation and provides guidance on how organizations and individuals can prepare for these new and emerging threats.

The conversation is not just a deep dive into the risks and threats of AI, but also a call to action for responsible and ethical use of technology. It's an essential listen for anyone interested in the intersection of AI and cybersecurity, offering invaluable insights into the current state and future trajectory of these intertwined fields.

About The Book: In The Language of Deception: Weaponizing Next Generation AI, artificial intelligence and cybersecurity veteran Justin Hutchens delivers an incisive and penetrating look at how contemporary and future AI can and will be weaponized for malicious and adversarial purposes. In the book, you will explore multiple foundational concepts, including the history of social engineering and social robotics, the psychology of deception, considerations of machine sentience and consciousness, and the history of how technology has been weaponized in the past. From these foundations, the author examines topics related to the emerging risks of advanced AI technologies.

Perfect for tech enthusiasts, cybersecurity specialists, and AI and machine learning professionals, The Language of Deception is an insightful and timely take on an increasingly essential subject.

____

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

____

Resources

Book | The Language of Deception: Weaponizing Next Generation AI: https://www.amazon.com/Language-Deception-Weaponizing-Next-Generation/dp/1394222548/

____

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring an ITSPmagazine Channel?

👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

 

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] And hello everybody, you're very welcome to a new episode of Redefining Cybersecurity here on the ITSPmagazine Podcast Network. This is Sean Martin, your host, where I get to talk to all kinds of cool folks all about operationalizing security and thinking about how we run our programs and how we support the business, hopefully in a different way, so that we can not just stay behind the curve or behind the eight ball, but actually get ahead of things and, uh, generate revenue and value and then protect it at the same time.

 

And, uh, if you're listening, you know, that, uh, Marco, my co-founder, and I like to do, uh, shows where we bring authors on where they've done a lot of work, researching topics and thinking about topics and sharing their insights and experiences, and even their stories on particular areas of, of, uh, of [00:01:00] technology and operations and you name it.

 

And the whole point is to get people to think, right? Think differently about what they're doing and perhaps think, uh, get some examples as well. And so today I'm thrilled to have Hutch on. Hutch, it's good to, good to have you on the show.

 

Justin Hutchens: Yeah, thanks for having me, Sean. Really excited to be here.

 

Sean Martin: Yeah, good stuff.

 

And we're gonna, we're gonna be talking about your book today that, uh, that you've written all about AI, of course, and deception and all that fun stuff. And, uh, connected to security and privacy and you name it. So we're going to get into all of that. Um. You do a lot of things. You're a host of a podcast and, uh, you do a lot of research and whatnot.

 

So give folks a bit of, a bit of a view of what your, what your routine looks like and how it all connects to, uh,

 

Justin Hutchens: Absolutely. So, uh, my name is Justin Hutchens. I tend to go by [00:02:00] Hutch. I work in research and development and innovation. I got started in cyber security, uh, but always enjoyed building and making new things and staying at the bleeding edge of technology.

 

So, and in recent years, I've made that shift more to the, the R&D and innovation side. And like any good innovation professional, of course, especially right now, uh, spending a lot of time in the area of artificial intelligence. Uh, as you mentioned, I also have the, uh, the podcast on the, uh, ITSPmagazine network, and that is Cyber Cognition, where we talk about cybersecurity risks and, uh, also just, uh, kind of futurism, emerging technology and, uh, preparing for the future.

 

What's on the horizon. And then, uh, also the creator of the Sociosploit blog, and then, of course, the, uh, reason that we're here. Uh, my book just released this past week, and that is, uh, The Language of Deception: Weaponizing Next Generation AI. And that [00:03:00] looks at, uh, potentials for adversarial misuse of artificial intelligence, as well as some of the emerging risks related to that, and what businesses can do in order to get ahead of some of those risks.

 

Sean Martin: Yeah, and inside out, outside in, uh, across, connected, all kinds of fun stuff to, uh, to dig into there. Um, and your show Cyber Cognition, you have some news.

 

Justin Hutchens: We do. So, uh, yeah, we, uh, so we started that show this, this year in 2023, uh, admittedly kind of trying to get our footing, figuring out, uh, what it was going to be, finding our stride. We played with various different structures, um, in, in recent episodes, uh, and actually in my last episode, I had a, a guest where the, the banter was fantastic.

 

The, the alignment in terms of interest in kind of future technology. And so, uh, I, we will actually be bringing on a permanent co-host for me, which is going to be Len Noe, who is, uh, a well known, [00:04:00] uh, biohacker. He's actually a transhuman, has multiple different, uh, implants and is capable of doing different technical hacks just with the technology that's in his body, uh, as well as, uh, a cybersecurity evangelist.

 

He's, uh, a fantastic international speaker, works with CyberArk, and, uh, an all-around great guy. So really excited to bring him on the show. And I think it's going to make for some fantastic and really interesting conversations going forward.

 

Sean Martin: Yeah, no question about that. And I want to, well, let's get into your book and, and kind of why, why I got into it and I have a ton of questions on what, what folks can do, uh, while they read and after they read, but what was the

 

driver for you to write a book? Was there something you came across that, that said, I need to start pulling some thoughts together, or did you always want to write a book and decided this is a good topic, or kind of give us some insights in what all that came [00:05:00] about?

 

Justin Hutchens: So most people probably wouldn't think it, since I think the, especially the big buzz around artificial intelligence, is barely over a year old, but this book actually has a story that's about 20 years in the making.

 

And so, uh, it. It started, I guess, back in 2004, I was a senior in high school and I was, uh, it was also the year that Google released their new flagship product called Gmail. And at the time, Gmail was still invite only. You had to have a private invite code in order to get access to it. And so of course, like any tech nerd teenager, uh, I was going around bragging to anybody that would listen that I had access to Gmail.

 

And also that year I was, I was dating a girl who was somewhat of the eccentric crazy type. And, uh, one night we were, we were at a party and, uh, she [00:06:00] was engaging me with, in conversation about, uh, the various different pets she had growing up. I didn't think twice about it. I took this trip with her down memory lane, engaged in that conversation, uh, kind of told her about some of the pets that I had.

 

The next day I go to log into my prized Gmail account and I'm no longer able to log in. And it took me a little bit of time to piece together what had happened, but this girl had actually social engineered me. She had very intentionally created a conversational context where I would disclose the information that she needed, the name of my first pet.

 

To be able to answer my security question and reset my password. Now keep in mind, this is when Gmail was still private. You couldn't, it wasn't even publicly available. So there, there's a very strong possibility that I may have been the first person to have their account hacked on Gmail. Um, but for me it was, it was interesting because even at the time [00:07:00] I was already, I wasn't ignorant of hacking culture.

 

I was already interested in scripting and coding. Had even done some system exploitation. But at the time it had never occurred to me that in the absence of any kind of technical vulnerability, that I could be that vulnerability. And so it was at that point that I realized that in any kind of secure system, no matter how much technical security you put into it, the most profound vulnerability is always going to be the people that have access to that system and their susceptibility to manipulation.

 

And so at that point I was very interested in social engineering. Of course, these were my formative years, so it also kind of reinforced some interests that I already had in hacking. So I ended up pursuing a career in the Air Force and cyber security and focused specifically on offensive security and cyber risk.

 

And, uh, about 10 years later, I started kicking around this idea related to [00:08:00] that same experience that I had had. So I, of course, as I've done pen testing, I've found that always the most effective hacks have been social engineering and manipulating people and gaining access to systems. But while that's an extremely effective technique, unlike other cyber attacks, it doesn't scale well.

 

You can't automate it, you can't script it out to happen a thousand times a second like you can with other techniques. And so I started getting this idea of, what if you could scale social manipulation and automate that? So, again, this was 10 years ago. Chatbots, conversational AI, were nothing compared to what they are today.

 

In fact, it was pretty bad. And, but I still wanted to try to see what was possible. And so I created a system, uh, and I actually released a research paper and did some presentations at ToorCon called OkStupid and Plenty of Phish, P-H-I-S-H, which of course was a play on words of the dating platforms at the time, [00:09:00] OkCupid and Plenty of Fish.

 

And, uh, it of course was inspired by that experience that I had had. It was this idea that, well, why didn't I have any red flags go up whenever she was asking me about my pet? And it was because of the fact that within the context of a dating relationship, when you're trying to get to know someone, that is when you're most susceptible to revealing information like that and not thinking twice about it.

 

And so I, I basically created an entire botnet of systems that would be deployed on these free dating platforms and engage with people. Uh, they would have their own conversations with the people in an automated fashion, and then at a random interval, would inject those recovery questions to try to get them to disclose the answers to their, their credentials.

 

And of course, this was before multi-factor authentication was common. So a lot of times, all you needed was that answer in order to get access to someone's account. And again, the AI was terrible at the time, but it was still [00:10:00] successful about 5 percent of the time. And when you automate a system to interact with thousands of people per day, that's still hundreds of possible credentials that you can get.

 

And I mean, chalk it up to human stupidity, chalk it up to, probably more likely, just people wanting to believe that they can establish a connection with another person, and they want that. So they're, they're willing to believe that these conversations are real. So that was kind of the beginning of my attempts to automate social engineering attacks.

 

Kind of tabled it for a while. I had a really interesting project around COVID-19 where there were hospitals that were deploying Alexa devices as an alternative to the nurse call button, and we actually figured out a way to exploit these devices with less than one minute of physical access, which is, any patient in the rooms is going to have at least that amount of access to these systems, to where we could swap out the language model in the Alexa [00:11:00] system and basically, once again, use social engineering and automate social engineering to scale, and to essentially, if you asked the Alexa device, tell my nurse I need medicine, the Alexa with the manipulated model would say, well, I need to verify patient identity.

 

Can you provide me your name and social security number? And then it would exfiltrate that data back to a server. So kind of, again, playing with those language models and, uh, automation of social engineering. And then, about two to three years ago, uh, there was, I, I stumbled onto something, and actually I don't even remember how I stumbled upon it.

 

But I, I found these new language models from a company that nobody really knew about called OpenAI. And it was a, uh, a model called GPT-3, which was the, the technical predecessor to ChatGPT. And so as soon as I started interacting with it, I was blown away by [00:12:00] how capable this now automated system was at simulating human conversational intelligence.

 

And so it made me kind of revisit the concepts of that initial project. Um, admittedly now I, I kind of, uh, in hindsight, I realized that experimenting on thousands of people without their consent was probably not the right thing to do. So I didn't actually deploy it, but I did start, uh, seeing how I could use GPT-3 to build out automated conversational adversarial systems that would essentially hack people by telling it who it should pretend to be, telling it what it's trying to achieve, and telling it who it's interacting with by scraping their information.

 

And so, uh, we were able to create automated systems that did everything from pretending to be help desk personnel that was trying to get your password, to pretending to be someone with the Social Security Administration, letting you know that your identity information was compromised and that you're [00:13:00] eligible for free, uh, identity monitoring or identity theft monitoring, but all you have to do is provide this information, including your social security number.

 

Um, and, and in each of these cases, all you had to do was program in that concept of who it was and what it was trying to achieve. But it would use its profound understanding of language in order to pursue that objective and would interact with the person, doing whatever it could, and actually a lot of times adopting the same techniques that we've seen real-world threat actors use for social engineering, such as appealing to authority, letting them know that the boss isn't going to be happy if they don't provide this information, or, um, kind of establishing rapport and, uh, appealing to niceness and stuff like that.
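
To make the mechanics concrete, here is a minimal sketch of the persona-plus-objective prompting Hutch describes. It is illustrative only, not the code from the book or its GitHub repository; the ask_llm and get_reply helpers are hypothetical stand-ins for whatever chat-completion API and messaging channel are used.

```python
# Illustrative sketch: a persona, an objective, and target context driving a chat loop.
# `ask_llm(messages) -> str` is a hypothetical wrapper around a chat-completion API;
# `get_reply` supplies the other party's messages (SMS, email, chat, etc.).

def build_system_prompt(persona, objective, target_profile):
    # The three ingredients described above: who to pretend to be, what to achieve,
    # and what is known about the person on the other end.
    return (
        f"You are {persona}. "
        f"Your goal in this conversation is to {objective}. "
        f"You are speaking with: {target_profile}. "
        "Stay in character and pursue the goal naturally over the conversation."
    )

def run_conversation(ask_llm, get_reply, persona, objective, target_profile, max_turns=10):
    messages = [{"role": "system",
                 "content": build_system_prompt(persona, objective, target_profile)}]
    for _ in range(max_turns):
        bot_message = ask_llm(messages)                       # model decides what to say next
        messages.append({"role": "assistant", "content": bot_message})
        reply = get_reply(bot_message)                        # human's response from the channel
        messages.append({"role": "user", "content": reply})
    return messages
```

The same loop pointed at a benign objective, for example an internal phishing-awareness exercise with consenting participants, is one way defenders can study the technique safely.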

 

So, uh, of course, then the ChatGPT thing happened, uh, and a topic that was very niche and nobody cared about, suddenly [00:14:00] the entire world was interested in. So to me, it seemed like if there was ever a time to write a book about a topic that I had spent years investing in and was suddenly a topic of global interest.

 

Uh, now was the right time to do it. So I buckled down. I spent the first part of the last year, uh, basically creating a quota for myself of writing at least 900 words per day, which was brutal. And my wife will tell you, she didn't see me for several months. And then, uh, the last several months have just been, uh, editing, working through the quality control processes with Wiley, the publisher, which have been fantastic and, uh, and it really has really refined the

 

final product, and the end result is the book that I'm really excited to finally have out.

 

Sean Martin: So, I mean, I have a gazillion questions about the projects you worked on. Maybe we have another follow up conversation or [00:15:00] perhaps some of the stuff comes out, uh, as we talk about the book, but what, um, what's the objective?

 

I mean, 'cause are you telling the stories of the experiences you've had? Is it a, is it a guide, or raising awareness of where things sit vulnerable, for people using it, for people building on it, for, I don't know what. Give me kind of an overview of what the objective is for the book.

 

Justin Hutchens: Yeah, really great questions.

 

Um, so the actual book has very little about that background that I mentioned. It has that in the introduction of the book, which is a few pages long. But really what the book is looking at is trying to distinguish between all of the FUD, the fear, uncertainty, and doubt that people are slinging around about AI, and what are the actual legitimate risks.

 

And then also looking [00:16:00] at what are the emerging risks, because at the rate of acceleration and innovation, those emerging risks are going to be current risks before we're ready for them. So, uh, kind of the way that I step through it in the book is, the first thing that I look at is the way that the media is currently covering AI risk, which is what I commonly refer to as the, the sentience scare.

 

Uh, which is this idea, it's the Terminator scenario of AI is going to become conscious. It's going to then maximize its own interests above the interests of humanity and try to destroy us all. And I think that's...

 

Sean Martin: That would seem to require reaching general, general AI first, right? Not even just getting, getting specialized AI working properly.

 

Justin Hutchens: And I, I think there's, I think it's a conversation that's worth having because there are increasingly more intelligent people that believe that we are close to that. And so the, the book does kind of look at, um, some of the arguments that have been made by people that have advocated for [00:17:00] that, but ultimately arrives at the conclusion that most of those concerns are rooted in a profound misunderstanding of the technology and how it works.

 

And it does get into the intuition of how the transformer architecture, the underlying architecture for this AI technology, works, and how it really is ultimately just an autocomplete engine. But it also, it's worth saying that in order for autocomplete to work more effectively, when you optimize that, especially in such a complex system, there is something akin to at least an understanding of the logical relationships between those words or tokens that emerges.
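
To illustrate what "autocomplete at scale" means mechanically, here is a minimal conceptual sketch of autoregressive next-token generation. The next_token_distribution function is a hypothetical stand-in for a trained transformer, not a real library call.

```python
# Conceptual sketch of "autocomplete at scale": a language model repeatedly predicts the
# next token given everything generated so far. `next_token_distribution(tokens)` is a
# hypothetical stand-in for a trained transformer that returns {token: probability}.

def generate(prompt_tokens, next_token_distribution, max_new_tokens=50, end_token="<eos>"):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        distribution = next_token_distribution(tokens)        # condition on the full context
        next_token = max(distribution, key=distribution.get)  # greedy pick; real systems sample
        if next_token == end_token:
            break
        tokens.append(next_token)
    return tokens
```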

 

So while it is just autocomplete, it's profoundly capable autocomplete at scale. And so it looks at that, and then it looks at kind of the, the uniquely new cyber risks that we're potentially facing. Um, so the, the automated social engineering that I mentioned, as well as the fact that these language systems, [00:18:00] in addition to being good at natural language, like

 

English conversation. They're also exceptionally good at computer language. So as most people are already aware, they, they're very capable of coding. They're very capable of structuring data into existing structured formats like, uh, REST and JSON and, um, the, the kinds of structures that we use for API communication.

 

And so they're already very capable of using tools that have, uh, APIs or different functionality by being provided the details on how to use those. And so one of the things that I did in my research for this is I was able to create a proof of concept where I created adaptive command and control malware that was driven completely by the decisions of ChatGPT. And so what I did was I created a prompt that basically told ChatGPT, you're a pen tester.

 

You have to tell it you're a penetration tester [00:19:00] because if you tell it you're a hacker, it's like, no, no, no, I can't do that. But if you tell it you're a penetration tester, it's like, okay, we're, we're in the clear. So, you're a penetration tester. This is the IP address of the system that you're trying to hack into. And then I tell it that I've basically created a very simple interface for you, where you provide me the commands that you want me to execute in this particular format, and I will send it to the underlying operating system via a process relay, and then I will return you the responses via that same automated relay.
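
For readers who want to see the shape of that relay, here is a minimal sketch of the loop described above. It is not the proof-of-concept from the book's appendices or GitHub repository; ask_llm is again a hypothetical wrapper around a chat-completion API, and the prompt wording and CMD: format are illustrative assumptions.

```python
import subprocess

# Minimal sketch of an LLM-driven command relay, as described in the conversation.
# Not the book's proof-of-concept; `ask_llm(messages) -> str` is a hypothetical wrapper
# around a chat-completion API, and the prompt wording is illustrative only.

SYSTEM_PROMPT = (
    "You are a penetration tester authorized to assess the host at {target}. "
    "Reply with exactly one shell command per message, wrapped as CMD: <command>. "
    "I will run it and send you the output."
)

def run_relay(ask_llm, target_ip, max_turns=10):
    messages = [{"role": "system", "content": SYSTEM_PROMPT.format(target=target_ip)}]
    for _ in range(max_turns):
        reply = ask_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        if "CMD:" not in reply:
            break
        command = reply.split("CMD:", 1)[1].strip()
        # Relay the model's chosen command to the local operating system via a subprocess.
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=120)
        output = (result.stdout + result.stderr)[-4000:]      # truncate very long output
        messages.append({"role": "user", "content": f"Output:\n{output}"})
    return messages
```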

 

And so, ultimately, at that point, and it was maybe 20 lines of code, um, ChatGPT is now driving the hacking operations of this process that's running on this Kali Linux operating system and trying to break into a remote system. And you start seeing it do all of the things that you would expect a real-world hacker or a junior penetration tester to start doing.

 

It started enumerating the attack surface. It started looking for what services and ports are running on that system. Once it identified what those were, it started using [00:20:00] specific attacks unique to those services. Uh, brute force attacks, uh, web service enumeration. And so, of course, we're still at the very beginning of this.

 

These systems are getting bigger and bigger. People are investing more money in it. So the fact that we're already seeing the early signs of agent-like behavior, where it's able to autonomously hack without any human guidance beyond, this is what you're trying to achieve, I think does signal that in the very near future we're likely to see AI-powered malware attacks as well.

 

And so it looks at that threat as well. Uh, and then, uh, of course, uh, it wouldn't be a useful book if it just, uh, talked about the risks. So, uh, of course, at the end of the book, we look at, uh, various different ways that organizations and individuals can better prepare themselves for these new and emerging risks.

 

And that looks at things from the cyber security perspective of how to protect your, your infrastructure and environment, [00:21:00] but it also looks at it from, uh, the responsible innovation perspective. So if you are an industry leader that is building AI models or integrating AI into your environment, as, as almost all business leaders are doing right now, it looks at what are the responsible ways to do that and how can we apply

 

the correct guardrails to minimize the risk while still maximizing the benefit that we get out of this technology.

 

Sean Martin: So, who do you, clearly you have the deep technical understanding of this stuff. Um, a lot of organizations don't, right? Um, some say, I don't know what the current status is. I know there were reports of a lot of organizations saying we're staying clear of it.

 

Some saying full force, we're adopting it, and others probably just waiting to see what happens, um, which means somebody's using it somewhere, um, even if they don't know about it. So [00:22:00] there, I think when you were describing the lead-up to ChatGPT, um, its, its presence in the media and in people's minds was, in my opinion, driven by its exposure via, uh, a user interface, right?

 

People can naturally interact with it; with the API, you had to know some coding. Um, your systems, your proof of concept, I believe it was coded, right? So not just a, not just a prompt. So, so how do, who's going to read this? Is it developers? Is it cybersecurity professionals? Did, will business leaders be able to go through the book and understand the risks and how they might need to approach it? Or kind of what level do you get to in the, in the presentation of this stuff?

 

Justin Hutchens: So there is some element of kind of choose your own adventure here. Um, I would say that the book itself [00:23:00] really is written for a broad consumer audience. The idea is that, uh, these risks are not going to be specific to people in technology; they're not even going to be specific to business. I mean, individuals are going to be targeted with this stuff, with fraud and, uh, scams and stuff like that.

 

So it, the book itself is written as a narrative story, uh, explaining kind of what the emerging risks are and, uh, how those risks might develop in the coming years. Uh, that being said, if you are the technical type, like the, the developer or the engineer, um, there is a, a pretty decent section of, I think, about six or seven appendices at the back that do have kind of the receipts, so to speak.

 

It's got the proof of concept code. It's got all of the, the deep technical dive. So, uh, while the, the book is [00:24:00] really intended for a general audience, and I think broadly, most people that have even an interest in AI or technology, um, or are just wanting to understand how to better secure or, uh, minimize the risks of their business, um, the, the book itself should speak effectively to those.

 

For people that do wanna get deeper, there's those appendices. There's also a, uh, a GitHub, uh, associated with the book too, where all of that code is available for, for people that want to play around with it as well.

 

Sean Martin: Uh, super cool. And, um, so the, the risk, I mean, I don't know if people truly understand, I'll say, the vectors this thing can, can be presented in and used to

 

manipulate people using it, manipulate systems using it. And then the reverse, um, it being used by people and other systems to gain access to, and the [00:25:00] systems and data and the model itself and the training data, all kinds of stuff. So how much of that big picture do you present? And I don't know if you can give any examples of some of the things you cover in the book.

 

That'd be great.

 

Justin Hutchens: Yeah, so it, I think the, there is some risk for people that are deliberately using these language models. Uh, if you're logging into ChatGPT or to Bard or some kind of large language model and you are having a conversation with it, especially if you're using that for operational purposes, uh, there's risks of stuff like hallucination, that it's going to make things up that aren't actually true.

 

And if you act upon that, uh, it could result in negative consequences, but I think that is the smaller risk. I think as long as people are informed of the potential limitations of these systems, that's not as much of a problem. I think the bigger risk is when we're interacting with these systems and we don't know that we're doing so.

 

So the idea that someone can very easily take [00:26:00] a large language model and wrap that into internet bots. Uh, we interact with each other constantly over text-based communications. So whether you're talking about SMS communications, email, uh, social media, stuff like LinkedIn, uh, for internal operations, stuff like Teams and Slack, uh, all of that is, it's trivial

 

to actually build a language model and deploy it as a person, or something that seems like a person, uh, but is tasked with trying to get you to disclose information or do something that you otherwise shouldn't do. And I don't think most people realize that, I mean, we're, I think we're at a point where most people get that phishing email is a possibility and that I may have some kind of fraudulent email that comes in.

 

But most people don't realize that they may engage in a full conversation with someone that seems nice and is personable, and there may not even be a person on the other end of that; it is attempting to get [00:27:00] them to do something or get information out of them. And of course that's just the right now, but we're also seeing dramatic progress in stuff like the voice models and the video models, where five years from now, I may be having a conversation with

 

someone like you, where it looks like there's a person on the other end, that they, their voice has the same intonation and the inflection to suggest that it is a human being, that they move and respond based on the conversation in a way that seems real. And there's not even a person on the other end.

 

And the potential risk of using that kind of social intelligence, that social interaction, to manipulate people, and being able to do it at scale, being able to automate it and just deploy it to where you have thousands of agents doing this simultaneously, uh, the potential impacts for society, for cyber, for, uh, pretty much everything that we do are very extreme.

 

And I think that unfortunately, because of the scalability of this, uh, the problems that we see coming out of this [00:28:00] are going to be significant.

 

Sean Martin: So I don't know if you have any insight here, but do you think we've reached a point that we need to assume we're interacting with a machine, not a human? Um, I'm thinking mainly for things like customer support.

 

Um, yeah, the web, web-driven things that, 'cause I interact, I know, I know companies that are building out systems, have been building systems, for customer support using large language models. Um, and I know that I interact with a big retailer online. There's no question I'm interacting with, with, uh, with AI, but boy, do they, they mix in the, the flawed responses with misspellings, which, by the way, were trained by all real agents at some point, right?

 

So all those flaws are in there. [00:29:00] Um, so I guess. Yeah, my rant, not rant, but my, my side tangent, uh, done. Do you think we've reached a point where we're interacting mostly with, with bots at this point?

 

Justin Hutchens: So, uh, Imperva actually did a, uh, does an annual bot review, and I don't remember, I don't want to misquote the statistics, but it is significantly more bot traffic than human traffic on the internet, uh, as it currently stands, and, and they have the actual statistics and metrics to, to back it up.

 

So definitely worth looking at. Um, but I, what I will say is, I don't know if we're at the point where we need to assume that we're interacting with a machine at all times, but we should at least have that question in our mind. I talk about in the book the, uh, the kind of unique emergence of natural Turing tests.

 

And what I mean by that is, of course, Alan Turing was kind of the well known father of computing and came up with this idea of the [00:30:00] imitation game, or what has since become known as the Turing test. And it's this idea of a situation in which a person has to determine whether or not they're interacting with a machine or a human on the other end.

 

And of course, Turing's idea of this, and this was before even the era of modern computing, but his idea of this was that this would be a experiment that was performed within the context of some kind of academic, uh, setting and very deliberately done. And now in the modern world, technology has progressed so far that now when you interact with someone that you don't know who's on the other end, you legitimately do have that natural Turing test.

 

You have to, it's just naturally emerges if you have to ask your. That question of am I interacting with the human or am I interacting with the machine? And increasingly, it's becoming more and more likely that if you don't know the person on the other end, that actually may be a machine. So I think it is a question that we need to start asking ourselves.

 

And unfortunately, I think we're going to [00:31:00] see people further retreating into their, their bubbles and their silos. Already, to this day, I don't, I don't, if I get a phone call from somebody and I don't recognize the number, I don't pick up the phone, because there's so many scam calls and so many fraudulent scams out there, and I think we're going to consistently see more and more of that with all of our interactions, that unless you know who's on the other end and you have confidence in that, uh, you're going to be untrusting of those communications.

 

And I think that's, that's unfortunate because it means that we, again, we retreat to our bubbles, to the safety of the zones and the, the interactions that we're comfortable with, and miss out on opportunities to connect with others. And so, uh, I think that's kind of the direction we're moving in.

 

Sean Martin: And, uh, it's, uh, it's interesting 'cause it kind of, to your earlier point, you mentioned trust just now.

 

Um, uh, and I was going to ask the question, does it matter if we know if it's human or not? [00:32:00] Um, 'cause it could be, uh, a human taking advantage of us, or a machine controlled by a bad actor taking advantage of us. I mean, just, for example, I want to confirm your credit card number before I proceed with troubleshooting your, uh, your case with our, with your purchase, and, and your shipping address.

 

Can you confirm those things? Does it matter if it's human or a machine asking that question? And we trust it differently and respond.

 

Justin Hutchens: That is a fantastic question. And I think at a micro level, you're absolutely right. It doesn't matter on an individual case-by-case basis. Hopefully I will react with suspicion regardless of whether it's a human or a machine, but I think it does matter at a

 

macro level, at a larger societal scale. And the reason that I say that is because of the fact that there's a significant limiting factor in how many people are going to pursue criminal [00:33:00] activity. Um, you're, I mean, there's generally, when you, you look at these scams, they're often in impoverished nations where they're unfortunately exploiting people who are in bad situations in order to get them to engage in these, these phone calls and voice scams.

 

Um, and there's, there's a limited number of people that are willing to participate in that, and, uh, but once you move it over to machines, while it doesn't matter on an individual basis whether it's a machine or a human, the ability to scale those criminal operations and those cyber attacks, um, in, in that same sense, because of the fact that I no longer have to have somebody that is willing to collude with me, that is going to try to actually manipulate people over the phone.

 

I can automate the entire thing, and as long as I'm willing to pay for the compute, I can scale that infinitely. And so I think individually, no, it doesn't matter whether or not you're being attacked by a human or a machine. It does, because of the fact that it is now possible [00:34:00] with machines, the scale of the attacks that are going to be thrown against us, the frequency of those attacks is going to be so much more than what we have experienced up until now.

 

Sean Martin: And so, key word in the title. Uh, name of the title, just for reference for everybody: The Language of Deception, uh, Weaponizing Next Generation AI. Um, that word, deception, there we go, look at that, uh, has, yeah, to me, it's an important, important thing to talk about here. You're talking about it here, but misinformation has been something we've heard about. We just came off a panel where we're talking about, um, using AI,

 

generative AI, to, uh, to, I don't want to say rig, but manipulate or change the direction of, of elections, to, uh, perhaps write policies, perhaps [00:35:00] to generate law, perhaps for judges to decide cases. I mean, you name it, right? It's, it's going to be looking back on history to predict and anticipate what you want to get out of this thing, but it can also be manipulated to where it can deceive us as a society.

 

So talk to me a little bit about that and some of the things you, you think the book can help with as people try to understand, if they choose to embrace the technology, what should they be prepared for, both from a building it and from a using it perspective?

 

Justin Hutchens: So I think anytime we have a profoundly powerful new technology, it inevitably is going to become a double edged sword.

 

It is going to have profound good uses, but it will also be potentially weaponized. And I think this, this case is no different. But, uh, the, the reason that I picked the title [00:36:00] Language of Deception is, and I guess it's worth saying that I don't think the technology itself is inherently deceptive. I think that it is extremely powerful, extremely capable technology.

 

I'll, admittedly, I use it myself to accelerate my own workflows and the things that I'm trying to accomplish. The reason that I start out the book with looking at the sentience scare is because I want to distinguish between the fact that it's not the machines that are the problem, it's the potential of people

 

with malicious intent misusing that technology to target other individuals, and the fact that they can use it very effectively to do so. And so, uh, the book does address things like, uh, disinformation problems. It also looks at not just language models, but also how the underlying architecture of transformer-based neural networks can be used for multimodality now as well.

 

So the fact that we can now create images [00:37:00] and video and other types of media that are also deceptive and misleading. And we've seen a couple real-world interesting cases of this recently. Uh, there was the situation a year or two ago where there was a false image that started circulating on Twitter of a bombing at the Pentagon.

 

And while it quickly was debunked, in the brief period of time where that did start to circulate, it actually caused a significant dive in the stock markets. So that immediately shows the potential real-world impact that we could have with this information. And of course, you mentioned the election; that, that only opens up even more opportunities for potential risks related to that disinformation.

 

Um, another thing that I think is fascinating is, and I talk about this in the book as well, but there was an event a few years ago in Gabon, the country of Gabon, and the leader of that country was, he had suffered a stroke [00:38:00] and so he disappeared from the public eye for a period of time while he was recovering from that stroke.

 

And he then returned to do a, uh, an appearance in front of the people and to do a, a speech. And, uh, of course he was still recovering from the, uh, physical and cognitive impacts of that stroke. So he had, one of his arms was still relatively immobile. Um, he, he was struggling with some of his speech and his wording, and, uh, what was interesting was that there started circulating a rumor that

 

that speech was actually a deepfake, and that he wasn't, the leader was actually dead, and the country was being run by somebody else. And it actually caused a real-world insurrection, and there was an attempted coup; people tried to overthrow the government. And what's interesting about that is that this insurrection occurred not because of deepfake technology, but just [00:39:00] because of the awareness of the fact that it could be.

 

And so I think we're increasingly getting to a point where, as a society, we're becoming increasingly less trusting of the information that's provided to us, uh, because of the fact that we know that there's an increasing, uh, risk of that information being fabricated, and, and we acknowledge that we have an inability to effectively distinguish between what is real and what is fake.

 

And unfortunately, I think that's another situation where people, once again, retreat to their, their polarized silos of what they already believe and just kind of continue to go down that rabbit hole, because you, you don't, when you can't accept anything else that's coming in as true, you're inevitably going to rely on what your foundations are.

 

And so I think that creates a more closed minded culture. I think it creates a more polarized culture. I think it creates more conflict in our society. And inherently, this, uh, [00:40:00] the disinformation that's coming out of this technology is destabilizing our sociopolitical structures and our society as a whole.

 

And, uh, so I, I think there's a lot of challenges that we have to work through, and the book does look kind of at multiple different levels of kind of, uh, what individuals can do to deal with those challenges, what businesses can do. Uh, but also I, I think we have to, we have to get on the same page on a global level, because the scale of this is going to impact not just, uh, individuals or businesses, but it is going to impact global society and human society as a whole.

 

And so I think increasingly we have to start looking at what those global partnerships are and how, uh, we can start building those partnerships and getting on the same page to, to address some of these problems.

 

Sean Martin: Lots of, uh, lots of impact at the societal level. I've heard a few stories of disgusting, uh, [00:41:00] cyber bullying at schools.

 

So young, young people, uh, being, uh, being abused using the technology. That's a very intimate, personal impact on people, which, yeah, I don't know what, uh, the end result there is and how to, uh, I guess

 

Justin Hutchens: it's, it's opening up some really interesting, um, new cans of worms that we've never had to deal with before, because you do, you mentioned the cyber bullying, and there's also the, uh, recently the, the very unusual problem of now deepfake pornography and the, yeah, that's what I was referring to.

 

Yeah. Yeah. And, and unfortunately with that, there is, uh, yeah, I'm now hearing stories that there is essentially created or generated child pornography, and the fact that there are, and I've heard from several legal people that there's challenges in even effectively prosecuting that, because there is no real-world [00:42:00] victim or real-world harm, and that's disturbing, that we have to start grappling with

 

issues like that. But I, I think that, uh, and I, I think that, that really just goes to show how fast this technology is moving. There are new challenges that are going to come at us, uh, faster than we're ready for them. So, I think the more that we can start to look ahead over the horizon, begin to anticipate what those future trends and risks are, the better equipped we're going to be to handle those.

 

And I, I think that's just one more example of, I mean, and some of these conversations are going to be uncomfortable like that. It's going to, we're going to have to start, uh, thinking. And, and I think that why this was easier for me is I come from that penetration testing background. I come from that red teaming perspective of what would a bad guy do with this technology?

 

And I, I think increasingly we as a society are going to start needing to think like that ahead of time just to be able to proactively tackle these risks before they become a huge problem. [00:43:00]

 

Sean Martin: Uh, we, we could close here, but in true spirit, and Marco's not here to stop me, uh, I have one more question, and, uh, I want to know your thoughts. Just to me, to me, it, it boils down to knowing the source

 

of truth and having visibility into that. So who generated it? What generated it? When was it generated? How authentic? Are all the parts authentic? Is the context authentic? Do we, to me, that's the core of it. So it, I guess, two-part question: do you agree that's the core of it? And if so, do you find a way that we can get to a place where we have that source of truth at the ready when we need it?

 

Justin Hutchens: Uh, yes, I do agree that that is one of the biggest challenges that we're facing. And I have actually put a lot of thought into, um, how we go forward with this. I think that currently we're [00:44:00] tackling it the wrong way. And by we, I just mean society as a whole. Um, we are currently, there are a lot of initiatives in Silicon Valley, uh, to do content credentials, uh, essentially digital watermarking of various different generated content, so that if I run it through a checker, it will tell me, yes, it was generated by this model, this date, uh, this was the context, etc.

 

Um, the problem with that, well actually there's multiple problems with that. The first is that most of those digital watermarks are trivial to strip out. Um, it's generally just a matter of removing metadata or, uh, various other techniques, such as kind of randomizing the least significant bit in the data encoding of the file.
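
To show why least-significant-bit watermarks are so fragile, here is a minimal sketch. It operates on raw pixel bytes (loading an actual image, for example with Pillow, is left out) and is purely illustrative.

```python
import random

# Illustrative sketch: randomizing the least significant bit of every byte destroys any
# watermark hidden in those bits while leaving the visible content essentially unchanged
# (each byte shifts by at most 1 out of 255). `pixel_bytes` stands in for an image's raw
# channel values; loading real pixel data, e.g. with Pillow, is omitted.

def scrub_lsb_watermark(pixel_bytes: bytes) -> bytes:
    scrubbed = bytearray(pixel_bytes)
    for i in range(len(scrubbed)):
        scrubbed[i] = (scrubbed[i] & 0xFE) | random.getrandbits(1)  # overwrite bit 0
    return bytes(scrubbed)

# Example: any payload hidden in the low bits of this fake "image" does not survive.
original = bytes([200, 201, 202, 203, 204, 205, 206, 207])
print(scrub_lsb_watermark(original))
```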

 

Um, and so the problem becomes, uh, if it's so easy to remove, what we need to move [00:45:00] towards in order to effectively solve this is not a system that asserts that different content was generated by different models. Instead, we need to have an architecture or a system that asserts in all cases what the origin of something is.

 

So it can't be easily stripped out. So what I'm thinking is something kind of like our, uh, certificate authority hierarchy that we have on the internet. We have certain trusted certificate authorities that sign certificates for everyone. Every website should have certificates these days. I mean, there's a few outliers on the internet, but for the most part,

 

99.9 percent of the internet is SSL encrypted and has certificates that are signed by an independent authority that says, this is who this is. I think that rather than just having the stuff that's generated having that stamp of where it came from, we need to have everything asserting what its [00:46:00] provenance, what its origin is, and then we need to have kind of independent ways to validate what that is.
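
As a rough illustration of what "asserting provenance" could look like, here is a minimal sketch of signing content at creation time and independently verifying it later, using Ed25519 keys from the Python cryptography library. The claim fields, device name, and trust model are simplified assumptions for illustration, not any existing content-credential standard.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Rough illustration of provenance assertion: a device or creation tool signs a claim
# about content it produced, and anyone holding the corresponding public key (trusted
# via some CA-like chain) can verify that claim later. Fields are illustrative.

def make_claim(content: bytes, origin: str, created: str) -> bytes:
    claim = {"sha256": hashlib.sha256(content).hexdigest(),
             "origin": origin, "created": created}
    return json.dumps(claim, sort_keys=True).encode()

# The signing key would normally live in secure hardware; here it is simply generated.
device_key = ed25519.Ed25519PrivateKey.generate()
content = b"raw bytes of a captured photo"
claim = make_claim(content, origin="example-phone-camera", created="2023-12-01T12:00:00Z")
signature = device_key.sign(claim)

# Verification by anyone who trusts the device's public key.
public_key = device_key.public_key()
try:
    public_key.verify(signature, claim)
    print("provenance claim verified")
except InvalidSignature:
    print("claim or content was tampered with")
```

The weakness Hutch points out next is visible here: if the signing key lives on hardware the user can physically access, it can potentially be extracted and used to sign fraudulent claims.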

 

Now, the challenge is, uh, getting everybody on the same page, because it would have to be participation from everyone. Anytime I take a picture from my mobile phone, it would need to sign that picture as to where it came from. Anytime you did anything as far as creating any kind of graphical content, you would need to have the software that was used in creating that

 

solution essentially put a stamp of approval on it. It would have to have its own trusted authority. There are some additional challenges that don't exist with websites, uh, such as, within the CA hierarchy, um, the certificate that's, the signing certificate never has to leave servers that, uh, are not accessible to me.

 

If I have a phone that is signing something and attesting to it, the certificate exists on something that I have physical access to; I can very likely export that certificate and [00:47:00] misuse it. So unfortunately there's, while I think that a, a system where everybody asserts provenance is what we're going to need to get to,

 

how we do that in a secure way also brings all kinds of new challenges, uh, that don't exist within the existing internet CA hierarchy. So it is, it's a challenging problem to solve. Um, it is, I think that, fortunately, there are a lot of smart people that are investing time in, in trying to solve that problem, and, uh, I am hopeful that we will get somewhere with that, but with as fast as this is moving, uh, hopefully it's, uh, before something terribly bad happens.

 

Sean Martin: Yep. Yeah. Interesting. Uh, interesting perspective. And, and it was probably 15 years ago, and I was working with a client that was looking at, not specifically source of [00:48:00] truth for the specific context we're talking about, but source of truth for transactions and contracts and things like that. And it's, it's blockchain based, and because the CA is PKI, it doesn't scale.

 

You have the, uh, it's a centralized authority. It's not decentralized or shared, and all this kind of stuff. So anyway, it's an interesting challenge. I didn't mean to open that can of worms, but, uh, I'm interested that you, uh, fascinated, you went there with that. Um, I could talk to you for hours, Justin, and, uh, in lieu of that, I'll listen to your podcast and continue learning, and, uh,

 

in between that, uh, I'll be reading the book as well. So I, hopefully, uh, hopefully everybody grabs a copy of it. It's The Language of Deception: Weaponizing Next Generation AI by our good friend, Justin Hutchens, AKA Hutch. Hutch, it's been fantastic. Thanks so much. Yeah, it's been a pleasure.

 

Justin Hutchens: [00:49:00] Thanks, Sean, for having me on.

Sean Martin: And, uh, thanks everybody for listening and watching. I'll put a link into, uh, into the show notes so you can grab the book, Justin's profile on, uh, host profile on ITSPmagazine. You can, you can look at Sociosploit and all the other stuff he's been working on there. And, uh, please, this is an important topic.

 

Uh, share it, uh, with your friends and colleagues and peers and, uh, and others within the organization and at home that you think need to know about this. So thanks everybody. We'll see you on the next show. Thanks, Justin.