Redefining CyberSecurity

From Deception to Connection: Exploring the Ethical Dimensions of Cybersecurity | A Conversation About Cyber Deception and the Cyber 9/12 Strategy Challenge with Rob Black and Marco Ciappelli | Redefining CyberSecurity with Sean Martin

Episode Summary

In this thought-provoking episode of the Redefining CyberSecurity podcast, host Sean Martin engages in a deep conversation with guests Rob Black and Marco Ciappelli about the challenges and complexities of cybersecurity, exploring responsible cyber actor behavior, the use of deception, and the existential threat posed by AI and the metaverse.

Episode Notes

Guests: 

Rob Black, Director at UK Cyber 9/12 Strategy Challenge [@Cyber912_UK]

On LinkedIn | https://www.linkedin.com/in/rob-black-30440819/

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin
____________________________

This Episode’s Sponsors

Imperva | https://itspm.ag/imperva277117988

Pentera | https://itspm.ag/penteri67a

___________________________

In this thought-provoking episode of the Redefining CyberSecurity podcast, host Sean Martin engages in a deep conversation with guests Rob Black and Marco Ciappelli about the challenges and complexities of cybersecurity. The discussion revolves around the need to define the ultimate goal of cybersecurity and the potential impact on society, privacy, and human connection. They raise important questions about what it means to be a responsible cyber actor, exploring the clash between freedom of speech and content control.

The trio discuss the difficulty of finding a balance between preventing harm and protecting fundamental rights.

Deception emerges as a fascinating topic, with the conversation digging into the potential of using deceptive tactics to deter and disrupt cyber attackers. They ponder the ways in which attackers' decision-making can be influenced and their experiences manipulated to make it more challenging for them to succeed.

The conversation also takes a philosophical turn, contemplating the existential threat posed by AI and the metaverse. They explore the potential loss of authentic human connection in a virtual world and the implications for society.

Throughout the episode, they emphasize the importance of taking a comprehensive and strategic approach to cybersecurity, going beyond technology and considering psychological, social, and ethical factors. This conversation challenges conventional notions of cybersecurity and urges listeners to consider the broader implications and ethical dilemmas inherent in the digital realm.

Get ready for some thought-provoking insights that will surely encourage you to further explore the complexities of cybersecurity and its impact on society.

____

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

____

Resources

UK Cyber 9/12 Strategy Challenge (Website): ukcyber912.co.uk

The Tularosa study: An Experimental Design and Implementation to Quantify the Effectiveness of Cyber Deception (2019) Ferguson-Walter et al, Proceedings of the 52nd Hawaii International Conference on System Sciences 2019: https://hdl.handle.net/10125/60164

Friend or Faux: Deception for Cyber Defence, (2017) Ferguson-Walter K, LaFon D, Shade T in Journal of Information Warfare (2017) 16.2 28-42: https://www.jinfowar.com/journal/volume-16-issue-2/friend-or-faux-deception-cyber-defense

Design Thinking for Cyber Deception (2021) - Ashenden D, Black R, Reid I and Henderson S, Proceedings of the 54th Hawaii International Conference on System Sciences 2021: https://hdl.handle.net/10125/70853

Cyber Security: Using Cyber Deception to Fight Off Our Attackers — Who is Our End of Level Boss? (Article): https://medium.com/@rob_black/cyber-security-using-cyber-deception-to-fight-off-our-attackers-who-is-our-end-of-level-boss-c6d2697eada

____

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring an ITSPmagazine Channel?

👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

 

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Welcome to the intersection of technology, cybersecurity, and society. Welcome to ITSPmagazine. You're listening to a new Redefining CyberSecurity podcast. Have you ever thought that we are selling cybersecurity insincerely, buying it indiscriminately, and deploying it ineffectively? Perhaps we are. So let's look at how we can organize a successful InfoSec program that integrates people, process, technology, and culture to drive growth and protect business value. Knowledge is power, now more than ever. Imperva is the cybersecurity leader whose mission is to protect data and all paths to it with a suite of integrated application and data security solutions. Learn more at imperva.com. Pentera, the leader in automated security validation, allows organizations to continuously test the integrity of all cybersecurity layers by emulating real-world attacks at scale to pinpoint the exploitable vulnerabilities and prioritize remediation towards business impact. Learn more at pentera.io. Marco. Sean. Can you hear the seagulls? No, I think we should open the window, but another thing I would like to hear is some tractors. There are plenty of tractors and some seagulls if you listen closely. Now, are you going to go with special effects, sound effects, or are you actually bringing the real deal? I'm going to make people listen really, really closely to see if they can hear, so much so they won't even pay attention to this conversation. Oh, wow. Oh, wow. No, no, no. Let's have a good conversation, not disturbed by that. Well, I was having a little joke here, because I think the topic we're going to talk about, which is around deception and a multidisciplinary approach to cyber, I can see a role for the seagull in our current approach, where we're just constantly squawking and saying something. The loudest one gets the food. I don't know, I might be going a little off track here, but the point is there's a lot of noise and we seem to keep doing the same thing over and over and over. 
We're just squawking at each other, all us techies who know how to sling code and deploy products and respond to attacks that we come across. But there's much more to it than that. There's a strategy, there's communication, there's planning, there's understanding the impact to the business, which goes well beyond just squawking for a piece of bread on the beach. So I'm grateful for you to join me, Marco, to have this, hopefully it will be a fun conversation. For those watching, they see we have a guest joining us as well, Rob Black, who was on the show not too long ago. We wanted to have you back. Rob, it's good to have you. Good to be back and looking forward to a good conversation again. Yes, it's going to be fun. We had a really good one at InfoSec EU and I'm excited to keep the conversation going. For those that haven't heard that, first off, go listen to that conversation. Wait, wait, wait, not now, after this. But in the meantime, a quick word, Rob, to refresh folks on who Rob is, what you're up to, and why this conversation about multidisciplinary approaches and deception? Hi, everyone. Yeah, I'll just give a bit of background about myself. I'm a lecturer down at the UK Defence Academy, where I teach our British military to think about integrating cyber into military operations. Formerly, I was also Deputy Director of the UK's National Cyber Deception Laboratory, where we looked at the role of deception and how we could use deception effectively to be much more proactive in the way we defend our networks and defend our institutions in the cyber and virtual domain. Alongside that, I'm also the Director of the UK Cyber 9/12 Strategy Challenge, which is very much about focusing on upskilling and delivering the next generation of cybersecurity leaders who are skilled across not just technology, but strategy and policy. And for me, it's fundamentally about having this multidisciplinary approach to cyber. 
And I think too often, we've almost defaulted to the technology-centric excitement, the shiny toy that cyber brings. And most of my career, I'm saying this as a non-technical person. I can't fix computers. I can't code. I can't chase bad guys in the networks directly because I can't do that. But I kind of feel like I'm an imposter because I haven't got the technical qualifications or accreditation. But yet I've made a valuable contribution to cyber. And if you look at the evolution of the thinking around cyber policy and cyber thinking, there is so much more to cyber than just the technical aspects. And it's about working with those skills, understanding that language, and bringing that language to bear, whether it be in the boardroom, in companies, at governmental levels. It has to take a whole-of-society, multidisciplinary approach to cyber. And I think too often, we've just been slightly too easily falling back on the technology side. So from a career where I've spent the last 10, 15 years in cyber, I felt an imposter because I'm not a techie. And yet I felt like I've made a valuable contribution at each stage. So I'm hoping that through this conversation, we can talk about exactly those challenges. We might even mention a few seagulls and how they do come back into cyber deception, quite literally, because it's all about playing with the understanding that the attacker is having and what we can do about it and how we can manipulate that. So I hope that gives a bit of a background for you. Yeah, it certainly does. And before people shut this episode off, because they're thinking, I have a strategy, I have a plan, I've written policies that drive my controls to determine how my tech works at the hands of my staff. Maybe a definition of what strategy really is, what you're talking about, because I think there may be a slight disconnect between what people think they're doing and what they're actually doing. I don't know. I mean, I think that's a really good question. 
And I think a few of us can point to a few different definitions of strategy and all come up with different interpretations. And if you think about perhaps IT security strategy, if you think about procurement strategy for your IT assets, you can lead yourself in a variety of different directions. But for me, I think focusing on strategic thinking, strategic considerations around cyber often means we have to lift our heads up from the technical dimensions of cyber. And I'm being very deliberately nonspecific in a definition of strategy, because I think actually all I'm encouraging people to do is lift their heads up away from just thinking on the technical solutions. And if we are focusing on comparing and contrasting technical solution A or B for whatever gap we've got, we're not necessarily thinking as holistically and broadly as we need to in terms of dealing with the situation. If I reflect on some of the business continuity work I used to do in the MOD, the British Ministry of Defence, before I moved across into academia, we talked about business continuity. It didn't matter whether it was a terrorist bomb or a JCB digger or even a tractor, to bring a tractor into the conversation. It didn't matter what the problem was that caused the cutting of the comms line. The reality was you had to deal with the reality of a lack of comms and, in dealing with that, ensure business survival, that business activity continued, or the resilience of your systems to bounce back as quickly as possible. So at that point, the strategy and the thinking around that was much greater than just dealing with a technological incident or the aspects at hand. And that's what I would encourage, that thinking right there. Well, I'm going to jump in because it's like, I feel like we've been knocking on this door and singing this song, Sean and I, since we started ITSPmagazine. As a matter of fact, our original focus was on cybersecurity and society. 
And when you start putting the two together, and then we added technology just to make it even more, more fun, you have to open your mind, right? So your interaction with my post on LinkedIn, the one that, you know, connects to this conversation, was about the fact that I was saying that we never looked into philosophy and ethics in technology like we're doing nowadays with artificial intelligence. But it's a way to look inside ourselves and realize that everything is connected. And your comment was right on that. We need everybody, psychologists, sociologists, anthropologists, thinkers that are just not thinking in that way. Because I feel like the bad guys, to come back to cybersecurity, come in where your strategy fails. I mean, you can put up four walls, but if you have no roof, what are you going to do? That's where they're going to come in. Absolutely. And I think even more fundamentally to that, the virtual domain, the first ever man-made virtual domain, the artificial domain that is cyberspace, really does challenge our entire societal existence. It causes us to question our relationship with government and our government's responsibility to protect us. The fundamental theories of government from, I think, the 1700s, and I'll question my history as to whether I'm accurate. What is the role of government in protecting the citizen? Well, we all know it's about protecting their security and allowing them the ability to have economic activity and be safe and secure. Cyberspace really challenges those assumptions. What is the responsibility of government in cyberspace to protect the individual? What is the responsibility of governments to protect individuals and multinational organizations? Should the US government or the British government be responsible for just the British element of a multinational organization? How should they deal with it? Let's take the standard cybersecurity issue. 
We have organizations, if I pick the UK or the US, who are being attacked by cyber criminals in other nations. They could also be attacked by hostile nation-state threats. In any other domain, terrorism, warfare, you would be expecting an armed response by either the police or the military. In cyberspace, what response are we expecting the government to provide? That's a really difficult question to answer. That's not me calling the governments out and criticizing them. It's actually asking us to really fundamentally revisit our philosophical assumptions about the role of the state in protecting the citizen. That's where you bring the philosophers in, right there, straight away. Then if you think about it in more detail, the traditional challenge in cyberspace is what does violence look like? What does violence mean in cyber? Then we need to revisit the concept of informational violence, informational harm. Is it the same as property violence? Do we capture it in the same way? If I delete your hard drive, is it like I've stolen your front door? Is it like I've stolen your filing cabinet? Well, it's slightly different because I can temporarily delete it. I can remove it. I can replace it. Is that theft? Actually, if I look at it and look at the information contained in a business, it's an existential threat to the survival of the business. Does it feel right that we just classify it as a property-related issue or data as property? No, it doesn't fit in those categories that we traditionally use. We have to revisit those assumptions right there, right again. It causes us to revisit everything. I think for me, the really exciting bit is in understanding human relationships and interactions. This is where I see the psychologists and the anthropologists as absolutely key. And if you think about traditional research in this space, it used to be called computer-mediated communications. 
That's the research that was done on traditional cyber aspects many years ago. That is exactly what it is. In today's modern age, look at us now, we're interacting through technology. We're interacting from different parts of the globe without any physical connection between us. We are having this conversation mediated through these technological channels. What does that mean in terms of the concepts of trust? How do I know you're there, Marco? How do I know it's really you? Same with you, Sean. How do I know? How do I check that? Traditionally, we would shake hands. I trust you that way. We'd have a laugh and a joke, build a bit of rapport. We might even go for a beer. That would be how we do it. How do we do that in cyberspace? How do we do that for business in cyberspace? Then more in the cybersecurity space. How do we understand who's trusted, who's not trusted? How do we engage with someone? How do we make sure they know that they shouldn't be in our networks? How do we communicate? It causes us to revisit everything. All of the traditional psychological experiments and theories that we've heard about, compliance, how we win friends and influence people, all the traditional experiments need to be revisited to look at how it happens now that it's been shaped and mediated through technology and computers. It's so exciting for the psychologists to come to the fore in this world. Where I work in the military, thinking about offensive cyber and the application of cyber as a tool of statecraft, there is almost no way of causing real physical harm. Yes, we can point to examples like Stuxnet, where there might be some ability to cause physical disruption, but that's a one-off or an outlier. The ability to have an effect on the decision-making of an attacker or on a defender is through manipulating their understanding of what's going on, and that is where deception comes to the fore. 
Here, we've got a whole role that psychologists can play in understanding how individuals make sense of the situation, how they interpret what's going on around them, and then how that can be exploited to our advantage. As defenders in cyberspace, if we're just focusing on technology, we're missing the trick, absolutely missing the trick. Before we dig into the deception piece, I want to maybe hone in on something you said. What do we expect government to do? I feel we can only answer that question if we know what we want the outcome to be. What are we trying to accomplish? Where are the boundaries that we're okay operating within, and what do we do when something happens that pushes us out, or we break out, or something happens where we're not in that space that we defined as good? Regardless of who ... Who helps us get back to that known good state? Government or whomever. Do we even know what the good state is that we're trying to reach in the first place? Because I feel to your point we're chasing the ransomware, we're chasing the IP theft, we're chasing the online fraud, stealing, buying shoes and reselling them for ungodly amounts of money. Those are the things we're chasing, kind of missing the bigger picture. And each of those might have an individual response. But then we miss the big picture of what's the ultimate outcome for society that we're trying to accomplish, which might then define how we operate our programs. Well, I guess I think that's a really interesting question. And the UK is certainly grappling with that at the state level and with the international community. We recently released the National Cyber Strategy, I think, last year, possibly the year before. Post pandemic, my timeframes have all changed. But they talk about the UK being a responsible cyber actor. What does a responsible cyber actor look like? What is acting responsibly in cyberspace? Is it about helping others build their capabilities up? 
Is it about conveying a series of principles of good practice? And this is, again, a conversation that's coming up with the use of AI. How do we ensure that we are operating in a way that allows us to act responsibly and uphold those values in society that we want to see? And I think this is where it becomes a real challenge, because you have some real clashing of freedoms here. If I look at the information domain, the responsibility of a nation to allow the freedom of speech and the freedom of thought, as it were, versus the restriction of free speech in order to prevent harm and prevent the sharing of harmful material and harmful content. If that isn't at the heart of the social media conversations we've been having across our nations over the last few years, I don't know what is. And that shows how much of a challenge it is. As soon as the nation state makes a decision to say, actually, we need to control this content or manage this content, how does that sit against the nation state wanting to encourage freedom of speech or freedom of thinking, freedom of democracy, and so on? And actually, that is a really difficult situation to find ourselves in, particularly as we find more challenging scenarios and situations. And then we turn to other nations who might not be acting in the same way and might not be encouraging that responsible situation, exploiting that vulnerability in the West, our commitment to protect and uphold free speech and freedom of association in a virtual way. So it really does become a challenge for us across those areas. You know, what is crazy is that we want something and, Sean, you said, you know, what is our goal? I feel like we don't know. We could go to the legislator and say, yeah, this is our goal, because this is our vision. This is our ethics. This is how we want cyberspace to be. I think right now we are in a phase where cyberspace is just too big for us. 
It's too hard to even grasp for, you know, not only the regular person, but even the legislator, even the technologists, which is why they want to keep it in the technology realm, right? Because we still don't know what it is. It's changing. It's been changing us. We need a new way to define ourselves. And then we can say, okay, now these are the rules that we need. But you mentioned privacy. I mean, many times I make the joke, privacy is not, you know, it's not what it used to be. It's like nostalgia. It's not what it used to be, right? It's kind of funny said like that, but we don't really know. We just don't know. So we need to look inside and see how we approach this new media or new space. I mean, we talk about the metaverse, and you ask 10 people, they'll give you 10 different definitions of the metaverse. So how am I going to regulate that if I don't know what it is? And I think the challenge there is not just how you regulate that or know what the challenge is, but, when it is so comprehensively affecting the human experience, how can you even start regulating it? Are we regulating it for online harm? Are we regulating it for technological abuse? Are we regulating it for privacy concerns? Each silo whose perspective we take will have a different way of bounding it. But yet, it needs to be a comprehensive and comprehensively wide interpretation in order for us to get it right. And that's almost impossible. And it's certainly impossible if we're only bounding it by bringing the technologist in to focus on a technology perspective. Yeah, I find AI and the concept of the metaverse and its virtual existence fascinating. The speed at which adversarial learning is occurring and the speed at which we're seeing these developments occurring means that very quickly, the essence of humanity, the essence of being human, is going to be really called into question. 
And actually, that does feel like there's a sense that we need to protect that and show that, because if I can go and watch a concert tomorrow with an AI-generated version of my favorite singer, and I still enjoy it, and I still have a good time, does it matter that I'm not seeing that singer in person? Or am I going to have to pay a premium to have the real, authentic human experience? And actually, if I want to see that human performance, but I can see it in every town hall in every city around the world, all through AI-generated content, how much have I lost out? Am I having a lesser experience? But if I'm enjoying it and getting the buzz from it in exactly the same way, does it matter? But then that might apply in terms of a pop star, but how does that apply in your personal life? If I've got an authentic artificial intelligence that's more intelligent than a human because of the speed at which it can learn, potentially even more emotionally intelligent than another human, and I'm remarkably lonely in my life, do I need to find another life partner? Or can I just use AI? And could I replace that? In which case, there's a real fundamental existential threat to human society here. And we're not even talking about the rise of the robots killing us. It's just talking about the fact we're replacing the need to have authentic human connection. And that's quite scary, if nothing else. Or exciting. Exciting. I'm waiting for Marco to train GPT-3, our robot, to replace me. So I can just relax. I already did, dude. There we go. You said you're GPT-3. How would I know? That's the question. How would I genuinely know? Exactly. So let's talk about the deception, because as with everything, when we're talking about anything, there are ways, different ways you can slice things, right? So you can use technology for good, or you can use it for nefarious purposes, or just exist, or whatever, some sliding in between. 
So deception, I think we want to talk about it in terms of using it as a means to make it harder for a bad actor to succeed in their journey, to take them out of their comfort zone. Right? Absolutely. And actually, I think that's the critical consideration. What are we doing to make it more challenging for our attackers? And if I think of the definition of threat, threat is often defined as capability and intent. So what are we doing to disrupt their capability? What are we doing to disrupt their intent? Most of the work we're focusing on is disrupting their capability, the resources they're using, the tools they're using, and blocking them or stopping them from doing it. But what are we really doing to mess with their decision-making or understand their decision-making? I was listening to a podcast the other day, Charl van der Walt was talking about this with the ransomware actors, how they've evolved, how their tactics are changing, but also how they're having to learn about the victims. And they're picking victims based on their ability to understand the victims, mainly through language, so they know what data is valuable when they're stealing it. So actually there's a series of decision points that the actor is taking, not only in gaining access to the network, but also traversing through the network, then discovering information, discovering the crown jewels which they might be stealing, and they have to make a decision about how much time they spend in the network, what are they going after, what information should they take back with them and why. So these decision points are happening all the time. How are we as cyber defenders interacting or being conscious of those decision points and manipulating the attacker's experience? What could we be doing? It doesn't even need a technical solution. 
The NSA did a brilliant piece of research, and I can point you to it, where they told a series of pen testers that there was deception deployed on the network, and they told another series of pen testers absolutely nothing. And you know what? The pen testers who were told that deception was deployed on the network did two things. They progressed more slowly through the network because they questioned every single part of the network, because if it looked too good to be true, that must be where the deception was placed. If it looked as though it wasn't in the right place, that must be a deceptive asset. So they moved through the network much more slowly, they didn't achieve their goals, and they spent more time and effort than the other pen testers who weren't told that there was deception in the network. We talked about the seagulls. Exactly, if I just sow the seed of the concept of seagulls, all of your listeners are now listening out for that seagull. I don't need to do anything more than that. So what is the equivalent of doing that in cyberspace? Yeah, I might put a press release out that I'm doing this big business deal and investing in this cybersecurity company. I might not be specific about which tools I'm using, so why not put a press release out that I'm using deceptive technology, or that I'm partnering with these organizations who are using deceptive technology? Anyone who approaches your network after that, who's done their open source research, and we know they do their open source research, will now be thinking, what can I trust? And what can I not trust? And at that point, you're already influencing the decision-making like you've not done before. And that is putting us on a more forward footing than we currently are. But I think there's other examples where we can look across other domains. 
I'm sure all of you have watched a Hollywood film and been scared, or frightened, or panicked, or crawled behind the sofa because it was a bit scary. I remember being scared by ET, that shows my failings in life. But you know, I was petrified. It was a 2D film in front of me. And I thought it was so realistic. I thought there were aliens out there who might be scary. What is the equivalent in our virtual man-made networks? What is the equivalent that we can do to make our attackers scared? Liam Neeson famously does that on the phone in Taken; he tells them that he will find them and he will kill them. What is the equivalent of doing that? Could we, you know, if we've got an ongoing investigation into a cyber compromise, it could be there for 20, 30, 60, 80, 100 days, we could be doing an investigation and finding out who they are. Why couldn't we sow some data back into our networks which just lets them know that we know who they are? Could you imagine being an attacker inside someone else's network looking for interesting stuff, and you come across a nice shiny document. And in that document, it had your home address, the names of your kids, the school that they went to. Now, that might be pushing the limits in terms of what we're ethically allowed to do. But just think about the effect on their decision-making. Are they going to be focusing on what they're going to be doing in the network? Or are they going to be focusing on, I need to find out if my wife and kids are all right. And there was an attack on US CENTCOM a few years ago on Twitter. The Syrian Electronic Army posted a load of details of US servicemen who were operating in the Middle East. Do you not think those servicemen were now thinking, okay, my home address has been leaked online by some evil bad guys; I want to make sure my government, my military, is making sure my family is safe, making sure my kids are safe. 
Do you think for one moment they weren't doubting their priorities, and whether they could focus on their day job? I'm pretty sure the first thing in their mind would be: I need to make sure my family is safe. So what would be the equivalent in cyberspace? How do we make our attackers question their own efforts, question whether they should be carrying on with this activity? I think there's a wealth of activity we could be doing there. Wow. You know, you didn't mention technology a single time. And I'm thinking of a confrontation on an old battlefield, the Roman Empire versus, you know, the Turkish Empire, whatever it is. It all comes down, in my opinion, and tell me if I'm wrong, to this: if you only think technology, everything comes down to the moment you clash on the field. But there is so much you do before you actually get to that point. You can scare your enemy; you can deceive your enemy by making one move instead of another. Or you can do like the Maori haka, where they really don't want to go into the fight, so they try to scare the enemy with yelling and dancing, saying: we're very strong, but the truth is, we don't want to get into this fight. So it's amazing how it seems we go back in time, where if you just think technology, it comes down to who has the biggest weapon on the battlefield, and it's not that. And you know, deception has been a core part of strategy for centuries. I can go back to Sun Tzu; I can go back to the Chinese stratagems. It's about achieving the effect without necessarily having to go to the full level of force: getting the attacker to erroneously make sense of what's going on, and then to decide to do something different from what they were otherwise going to do, with that being more favorable to us.
So there's a whole wealth of strategy, thinking, and experience that we can bring to the fight here that, again, is so much more focused on the decision making of the attacker, and not just on the technology at hand. Think of nuclear deterrence, which is often compared to cybersecurity, even if the comparison isn't quite right. Nuclear deterrence is all about having a psychological effect on your attacker, because they know you've got a capability you might use in defense. The moment you have to use your nuclear weapon, the deterrent effect has failed, because the other side carried on anyway and you've had to respond. But before then, it's all about playing with their understanding of what's going on, making them question whether, if they push you too hard, you're going to respond, and whether the threat is worth it to them. So again, deterrence is another example of playing on the understanding and decision making of the attacker. I think there's so much we can be doing in this space, and it does involve that broader, more holistic approach. It involves bringing the psychologists in; they should be on the front line of this cyber fight, not just the technologists. They have to work with the technologists, they have to understand the technology, but they have to contribute, because we have to think about our attackers as people who are thinking and making decisions that we need to be exploiting and engaging with. That's absolutely key. It's no different from the burglar alarm at your home. The burglar alarm is communicating to your attacker, your burglar, letting them make a decision: do they want to burgle your house? If they choose to burgle your house, they know there's an alarm there, and guess what, they spend less time in your house, because they're nervous the alarm is there, and that there might be a silent alarm too.
So that burglar alarm is working in multiple different ways on the decision making of the attacker. I love all of this. Over the years, we've heard: think like a hacker, know thy enemy, and then the next step is, so you can put controls in place. Yeah, right. Or even worse, and I'm going to call out the industry here, we draw a funny cartoon of them, call them a particular type of animal, and say they're the bad guys. We dismiss the human element of them. I understand why, because of the geopolitical aspects and considerations of attribution, and whether individual companies should be calling out certain nation states or whatever it might be. But we dehumanize our attackers. If you think of the attacker, what's your first image? A shadowy figure with a hoodie on. That does not bring consideration of the human back into this, and it's absolutely critical. So how do we do that? I love the psychologist point. I look back to my early days at Symantec, where we were building a ton of products, and they did a brilliant job at protecting against the newest DOS virus of the day. But eventually, users had to interact with those applications: one, to know that they were actually doing what they were supposed to do; two, to help them respond when the application couldn't take control on its own; and three, to alert them if they had to take some other action because things had gone off the rails completely, or the network had been infected, whatever the case may be. My point is, at one point we realized there are humans interacting with the technology, and we needed to bring in people who weren't just technologists to help us connect with that human. Usability engineers is what I'm talking about, and people who did studies on how people thought: what do you feel when you see this screen, or see this alert, or don't see it?
And that psychology, that understanding, helped us build what I thought was a better product at the time. So talk to me a little bit about what you're doing. Maybe this is a good place for the Cyber 9/12 Strategy Challenge to come in: how we bring different folks together, multiple disciplines, to have a view that goes well beyond just, I have this view of the attacker, let's put some technology in there. I mean, I think the ultimate goal, and you're absolutely right, is that you've talked about the usability challenge, and you brought in experts on usability, and it helped improve the product. Now we need to apply that offensively. Quite literally, what are the alerts the attackers are seeing? How can we overload them so they don't know which alert to deal with first? How do we manipulate their tooling? There's a great example of a chap who was so annoyed at playing a particular computer game online, because of all the people who were cheating and downloading cracked weapon packs to get all the weapons. So what did he do? He designed and developed a poisoned cracked weapon pack. The people who downloaded the cheat pack got a set of weapons that didn't do what they were supposed to do, and they blew up in their faces. They would load up a knife or a sniper rifle, and whoever ran in front of the crosshairs got shot: it didn't matter if they were a civilian, on your own team, or the enemy, it instantly shot and killed them. How can you work effectively in a team if you're shooting your own teammates? Those players were very quickly disowned. Or another tool: you throw the grenade, and it doesn't go far; it literally drops in front of you and explodes, instantly killing you and turning you into a burning man. Very clever, effective manipulation of the technology to have an effect unintended by the user.
But knowing that the user is expecting it, that the computer never lies, expecting it to act in a certain way, and then fundamentally undermining that. So that whole usability thing needs to be applied defensively, and I think that's a great example of it. But ultimately, this is about bringing diversity of thinking into cybersecurity. Your podcast's name, Redefining CyberSecurity: that's exactly it. How do we bring that competitive advantage? How do we bring that diversity of thought and diversity of thinking? Well, the one answer that is definitely wrong is fishing in the same pool of people all the time. That does not work. So how do we enable conversations? How do we bring in experts from the arts, experts from science? How do we bring in the Hollywood filmmakers who are designing exciting new ways of shocking and stimulating an audience? How do we bring in the theme park ride engineers to talk about the experience passengers have on their rides, and apply that to thinking about the experience attackers are having on our networks? And then likewise, how do we reach out to the non-traditional areas of academia, to people with non-traditional educations, to say: hey, cybersecurity is cool, it is interesting, and you might not have the traditional technical skills, but you might have other skills that are equally relevant. They're complementary skills, but they're missing in industry. One of the UK government departments did a review of the skills gap and the skills shortage, and highlighted that technical skills were identified by, I think, almost 50% of organizations as lacking and missing.
But shortly behind that, they also recognized skills that are often relevant in cybersecurity but not seen as technical: leadership skills, communication skills, sales skills, marketing skills. Absolutely relevant, but missing, because people hadn't moved across into cybersecurity. So how do we illuminate and educate someone who's doing a degree in geography to recognize that they can have an interesting and successful career in cyber, and can end up being a cyber warrior of the future on the front line? How do we get psychologists to move away from doing perhaps rather traditional psychology experiments, understanding how we build trust between people in a non-online age? How do we get them excited about the idea of building trust with a smart device? How do we get them excited about building trust through virtual telecommunication systems or online meetings? How do we get them to see that all of their expertise and experience is relevant and valuable in this new virtual age? I think that means going out broader and wider, and that's one of the things I love about the UK Cyber 9/12 Strategy Challenge: we reach out to a range of different disciplines in different universities, and we allow teams to form with whatever student backgrounds they want. And we find that the most successful teams in the competition are often those with one, possibly two techies, but also one or two people from other fields, whether it be politics, international relations, even classics. They come in and look at the problem they're dealing with from a different perspective; they offer critical thinking from a different angle, that diversity of thinking, and it helps them come up with more effective solutions, because they have to work with the techies, they have to engage with each other, they all have to understand the problem, and they all have to think about a collective solution.
And that is a really unique and exciting position to be in, in this new field of cybersecurity. That's what we need to do; otherwise, we're just relying on the same production line of people coming through, and we're not necessarily going to move the thinking forward. And that's not me being critical of those people who are qualified and experienced. We absolutely need them. But it'd be like building a football team of just defenders or just attackers: we need both, we need them to work together and understand when they need to be in this position over here, or supporting over there. How do we enable that in cybersecurity? So there's a funny story. Sean's show is called Redefining Cybersecurity, and mine is called Redefining Society. Don't ask how that happened. But, you know, we're kind of talking about it, and I think we go back to the beginning, and I think we're close to wrapping here, though that's a Sean decision. Realizing that our online interaction, our social media, our avatar, watching a show, or a hologram in a city square that is not really there, whatever you were saying before, has become part of being human. It's a new society. And there's not a line that says: this is real, this is cyber. No, we are living our lives in both places. And if we're living our lives in both places, our psychology applies online too. The moment we realize that, that's when it gets, and it already is, exciting. I know psychiatrists who are studying social media interaction and our relationship with our devices, and I've had those conversations, but I don't know why we're still resisting that understanding when it comes to cybersecurity. It's almost like the shaman who wants to keep his power by telling you: you don't understand this, it is my power. Now we need everybody, because this has become our life. Absolutely.
I mean, as an example of that in society: if you wanted to understand me, it's so much easier now if you have access to my online data. You just need to look at my Spotify playlists and my podcast listening, and you can see what I'm interested in, what motivates me, what I'm watching on Netflix, and you would understand me so much better than by observing me walking down the street. Our online digital footprint is providing so much more of our identity. Look at online dating: 10 or 20 years ago, it was strange if you did online dating. Now, the first thing you do if you meet someone in the physical world is check their online profile to make sure they're real. It's a complete flip from where we were 10 or 20 years ago. We are fully integrated; there is no going back. This isn't scaremongering; this is the reality we're in. And now we need to look at these new skills, these new approaches, these new disciplines of big-data-enabled human science research to understand people. We need to think about how our online presence differs from our offline presence. How do we present online? How do I present to you in this meeting, and how consistent is that with how I present offline? Is there any difference? Where is that difference, and why is it there? Is there a difference because of the technology, or am I presenting differently because the technology lets me do something differently? All of these questions need to be answered. We need to revisit all of our psychological theories and experiments to bring this computer-mediated understanding of self back into the fight, or back into our understanding of society. And that's why I think it's a really exciting cutting edge to be on. Because everything is new, we can revisit everything, and when else are we going to be able to do that? It's a new experience for us all. Yeah.
And I think for me, as we wrap here, I love all of this, but I'm afraid to leave the real world behind, because we've done so much there. I mean, a hacker or a bad actor is a pest in the cyber world, and we deal with pests in lots of other places. Agriculture: Marco and I did a podcast on weeds overtaking lakes and killing fish. That's not a cyber situation; that's a real-world situation that we deal with using technology, with humans helping. I love that we have to think differently, but I also think we have an opportunity to look back at how we've done things over the years, decades, centuries, and say: it's not that different, really. We try to make it different; we try to make it so unique and mysterious. I think what's really interesting, and I think you're absolutely right, is that our brains haven't evolved. Look at how slowly our brains evolve. The challenge we have is that, as human beings, our brains haven't caught up with technological developments. I still get stimulated in the same way as I did 100 years ago, 200 years ago, even 1,000 years ago, perhaps. The problem is we've got these virtual ways of being stimulated. So how does my interaction with you virtually differ from my interaction with you in person? Again, how am I being stimulated if I watch something on TV? You see crowds of people crying at a pop group splitting up; they're having a real lived experience, even though they've never met them. They've watched them on TV, probably seen them in a film, might have seen them in concert. So in the virtual age, what does that ability to have that effect on a person, at scale, at range, actually mean? Because the brain hasn't caught up. And another good example is the work Johann Hari has been doing on focus. We are really bad at multitasking; our brains can't multitask, we context switch, and we context switch really poorly.
But with a mobile phone sitting in our pocket, constantly alerting us, we're always micro-switching or context switching, and it's destroying our ability to concentrate, because our brains haven't evolved for this multi-pronged assault of technology. So again, you have to bring it back to the physical world, because fundamentally we're humans, and we have to think about our interactions. But, Sean, I want to say one last thing, a comment on that. I've been noticing how we always reach for a metaphor when we explain the cyber world, right? We talk about the seatbelt, we talk about helmets, we talk about security; you also reached for an example from the real world. And my point here, and maybe I'll leave this for the audience to think about, is that, as Rob was saying, we kind of need to understand that this is a new way to interact with each other. Maybe there isn't an example in our real world, because the metaverse experience, the virtual reality experience, social media, the way we interact with each other, there just isn't a parallel in the real world. And we need to open up our minds. One doesn't exclude the other, don't get me wrong; we're not leaving one planet to go to another. But my thought is, things happen differently in the cyber world, and we need to be conscious of that. I would say technology hasn't caught up with the physical world yet. Our being, our world here on Earth, the planet's ecosystem: technology can't even touch it. I think it's way more complex. We don't know enough about ourselves, the planet, and the solar system to even come close. I think if we explored that, we'd understand the technology a bit more as well. You need both. Yeah. What's the point? Oh, with me, you're always gonna be fine.
Well, I enjoyed this, Rob. I love all of this. But I think that's why it's so exciting, because we've talked about technology not catching up with society, and society not catching up with technology. This revisits everything. It really does make us question our assumptions about life, and how exciting is that? We need a new social contract, too. That's right. Absolutely. Fundamentally, yeah. Signed by seagulls. Can you let the seagulls in now, Rob? I'll get you a squirrel here in my garden. All right. Well, listen, I have a feeling we made people think far more than we gave them answers to take back to their programs. But I think the point is, we need to think bigger, and differently, with each other. So to your very first point, Rob, if our heads are down, even if they're down with each other in our own little teams, we're going to miss the point. So pull your head up, think differently, interact on a broader scale, bring in other people who aren't on your team, and change your thinking, change your assumptions. You mentioned a couple of things: there was the NSA research, and whatever links we can get from you, Rob, we'll include in the show notes, along with your Cyber 9/12 Strategy Challenge information and, of course, your profile, so people can reach out to you and keep this conversation going if they wish. So thanks, everybody, for listening. Thanks, Rob, for joining us. Marco, good to have you on. We got philosophical today. Yeah, me or my digital twin. That's right. So thanks, everybody. Keep redefining cybersecurity.
Pentera, the leader in automated security validation, allows organizations to continuously test the integrity of all cybersecurity layers by emulating real-world attacks at scale to pinpoint the exploitable vulnerabilities and prioritize remediation towards business impact. Learn more at Pentera.io. Imperva is the cybersecurity leader whose mission is to protect data and all paths to it with a suite of integrated application and data security solutions. Learn more at Imperva.com. We hope you enjoyed this episode of the Redefining CyberSecurity podcast. If you learned something new and this podcast made you think, then share itspmagazine.com with your friends, family, and colleagues. If you represent a company and wish to associate your brand with our conversations, sponsor one or more of our podcast channels. We hope you will come back for more stories and follow us on our journey. You can always find us at the intersection of technology, cybersecurity, and society.