Redefining CyberSecurity

The Pathway to Innovation: Understanding and Embracing Cascading Risk for Technological Progress | A Conversation With Trond Arne Undheim | Redefining CyberSecurity Podcast with Sean Martin

Episode Summary

Join host Sean Martin and guest Trond Arne Undheim in a thought-provoking episode of the Redefining CyberSecurity podcast as they explore the intersection of technology, innovation, and risk management.

Episode Notes

Guest: Trond Arne Undheim, Founder of Yegii [@Yegii_Insight] and Research Scholar in Global Systemic Risk, Innovation, and Policy at Stanford University [@Stanford].

On LinkedIn | https://www.linkedin.com/in/undheim/

On Twitter | https://twitter.com/trondau

Website | https://trondundheim.com/

On Facebook | https://www.facebook.com/trond.undheim/

On Instagram | https://www.instagram.com/trondundheim/?hl=en

On YouTube | https://www.youtube.com/channel/UCI4EpjuQzb58EiawzElwvYQ

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

____________________________

This Episode’s Sponsors

Imperva | https://itspm.ag/imperva277117988

Pentera | https://itspm.ag/penteri67a

___________________________

In this thought-provoking episode of the Redefining CyberSecurity podcast, host Sean Martin is joined by futurist Trond Arne Undheim for a deep conversation about the intersection of technology, innovation, and risk management. Trond offers keen insights into the world of risk and the need for new paradigms to address emerging challenges.

The conversation starts with a discussion on the importance of systematic feedback and validation-driven strategies in fostering innovation. Sean and Trond highlight the positive aspects of risk information, emphasizing that it can help save resources by redirecting efforts towards more viable avenues.

Sean and Trond explore the notion of systems thinking and the challenges it presents. They explain that when we describe something as a "system," it implies that it is something we cannot fully control, but rather something we are amidst. They also touch on the concept of cascading risks, highlighting the potential dangers of multiple risks working together.

The conversation shifts to the role of organizations in managing risk. Sean and Trond acknowledge the complexity and short-term focus of many risk management approaches and express the need for new institutions (non-profit, government, etc.) and companies (commercial product/service providers, for example) to address this gap. They mention the rise of industries focused on specific risk areas, such as cybersecurity and ESG risk, and predict that more industries will emerge to provide risk management services. Sean and Trond also explore the idea that a higher level of risk can spur innovation, but caution against irresponsible risk-taking. They stress the importance of finding a balance between risk and innovation.

Join Sean and Trond for an engaging conversation rooted in philosophical discussion about the future of technology, the potential risks posed by emerging technologies like AI and bio-risks, and the impact of risk management on society. This episode of Redefining CyberSecurity Podcast helps to navigate the challenging landscape of technology and risk. We hope you enjoy it!

____________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

____________________________

Resources

Yegii | https://yegii.org/blog/

____________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring an ITSPmagazine Channel?

👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________
 

[00:00:00] Sean Martin: And hello, everybody. This is Sean Martin, your host of the Redefining Cybersecurity podcast here on the ITSPmagazine Podcast Network, where, as you know, if you listen to the show, I try to unpack, uh, security risk and policy and all kinds of fun stuff as it impacts our business and hopefully doing it in a way that we can enable business in a secure fashion.
 

So get ahead of yourself. Get ahead of the threat. Reduce the exposure. Make some good decisions in the beginning, uh, to, to really, uh, drive that revenue and protect it on the other side. And as you know, or probably kind of guess, uh, technology is in no way, shape or form slowing down. Uh, we see it in the form of AI, uh, most recently, but there are a number of things that are hitting businesses, giving them an opportunity to expand and grow and reach new customers and give them new. 
 

Whiz-bang, shiny, flashy things. And, uh, with that, uh, comes networking and connections and communications and data and, and all that to me, uh, says exposure and risk. So we're going to kind of pull all those things together and, uh, I'm going to do that with, uh, somebody who's been on the show before with my co-founder, Marco, and they got into a more, uh, I'll say philosophical discussion around society and technology.
 

And, uh, and I believe there's a little bit of cyber in there as well. We're going to take a look at this, uh, this topic a little more from a business perspective. And Trond, great to have you on the show. Thanks. Excited to be here. It's going to be fun. It's going to be fun. And, uh, it's enough rambling for me.
 

I want to hear from you all the fun things you're involved with. And before we do that, of course, I'll, I'll put a link into the other episode that you had with Marco. So, so folks can listen to that, but, uh, I want people listening now to know a little bit more about who Trond is and, uh, what you're up to.
 

So a few words from you, if you wouldn't mind, to kick us off.
 

[00:02:04] Trond Arne Undheim: Sure, Sean, I'm a futurist. I work on the future of technology and society more broadly, but also very specifically on risk issues, uh, significant risk issues. I work on the biggest issues there are that are threatening our civilization. But I also, to your point and to your show, I, uh, you know, I'm a businessman. 
 

So I work on the, uh, issues that enterprises can, uh, protect themselves with in the here and now. You know, whenever you talk about risk, it's important not to get stuck in the future. We, we all know very little about the future. That's the challenge, right? So it's very important to figure out how to get there and how to get there safely. 
 

And, uh, you know, uh, I work at Stanford as a research scholar working on risk and innovation and also policy issues and mitigating risk, very importantly, right? It's not just about describing what the problems are. It's about coming up with business and governance strategies. So that's what I do. And then obviously, uh, do some other things. 
 

I have a podcast, Futurized. Um, I, uh, run, uh, you know, some consulting. And I do a lot of speaking, uh, all around the world. So  
 

[00:03:14] Sean Martin: I love it. Well, it's an, it's an honor to have you on, and I can't help noticing, as I was saying my stuff and as you were describing some of the work that you do, just how vast and broad, uh, the topic of technology is and the impact it can have on, on society and all the things that make society work, from government to commercial enterprise to us as individuals. And you have a book behind you, Augmented Lean, which is to me the opposite, right?
 

How do we trim all this stuff? Um, which may in some sense be part of the answer to the problem. 'Cause you also said it's hard to predict the future, but we're creating it, so I personally have a hard time believing that we can't predict the future if we're actually in control of where it's heading.
 

Unless you know something I don't know, that we're, that a path is being put forth before us without our involvement.
 

[00:04:15] Trond Arne Undheim: No, Sean, I wasn't implying that. I was simply trying to be modest. You know, futurist is already a very immodest job title. Uh, but it's somewhat descriptive of what I do, which is trying to think about the forces that are shaping the future. 
 

And we do know a lot about those forces. We don't have the exact data, and we certainly don't know the future, but we can map what futurists will describe as plausible futures. And we can map them pretty accurately as scenarios. So, so that's what I'm up to doing. Beyond that, you know, you get the occasional ask for predictions and then sometimes we offer those as well with usual caveats. 
 

But they're, you know, they can be kind of fun. But it's a very serious endeavor. And incidentally, the fun thing, the good thing about cyber security is that in my context, in my world, looking at all these other risks that no one cares about, or that no one cares about in general or in the right sort of combination, we can get into that.
 

I work on cascading risks. So the idea that you cannot just focus on one risk because many of them are very connected. You mentioned AI. Well, AI connects with other digital technologies. It connects with social trends and stuff. But cyber security is super interesting in that it is actually a defined area where business has said there is a risk.
 

We're taking it very seriously. We are going to establish a whole line of business around it, and everybody is going to be doing this. So, you know, there are surprises in cyber security as well, for sure, right? There are things coming in from left field always. But it is one of those fields where I am confident that there is a whole industry behind it, and there's also a lot of demand for it already.
 

So even if the risks are increasing, they are at least in some measured way already being handled, and people are thinking about the emerging problems or, uh, you know, opportunities obviously also, uh, for technologies. There are many other areas of technology, and certainly outside technology, where no one is thinking.
 

But these are connected risks. So in one sense, it's a very fortunate situation to be talking about cybersecurity. We know some of the risks, we have seen them, they play out on a minute-by-minute basis. And that for me is good news because there's so much data on the risk picture. There are other areas of risk where the data is more scant.
 

And that is far more challenging.  
 

[00:06:42] Sean Martin: And that might be interesting to touch on, because I often look for parallels where we do something well, and we kind of miss the mark in other areas. And then we might have some learnings cross a border there. I'm interested to know, are there any areas of the future that really excite you?
 

And this could be from a societal perspective, uh, or from a commercial perspective.  
 

[00:07:11] Trond Arne Undheim: I am definitely very excited about how technologies, uh, will take a role in, in reshaping our world, uh, which a lot of people think is, you know, a, a, a world of, uh, degrowth. And I kind of tend to agree with that. We are going into a world where we're going to have to take a bit more care of our surrounding environment.
 

And that is, without technology, perhaps a sad prospect of sort of, like, you know, slowing down growth. But I think with a lot of these technological opportunities, this becomes something where there's so many business opportunities to come in and actually, what I would say is, completing the innovation process.
 

I think previously innovation was like, Oh, you know, I'm a startup founder. I'm going to come up with something. Oh, I'm a business. I'm going to have this nice new business model without thinking about the long term effects of that business model. And it's also government's fault and everybody else's fault for sort of not saying, wait a second, you're not done with your innovation. 
 

What I'm very excited about is this new phase of innovation where we take a little bit of a longer view so that when you have innovated, meaning you have something that's either new or exciting to people or is, you know, solving some interesting issue, you have to also think about what might happen 10 years after, when this is perhaps successful, when this is perhaps a platform, maybe it's widespread in society.
 

Think about if someone did that around social media, we would have had a different decade right now.  
 

[00:08:44] Sean Martin: So is that, and this is fascinating to me because, before we started, I, I told you that everything looks like a project to me. It looks like that in my head. And a project has a start and a finish, but when you're innovating, uh, it's a, it's effectively a never-ending cycle.
 

Yes. There may be points along the way or milestones, but in my experience, those, those points and milestones, when you're, you're in pure innovation mode. Not delivery mode, but innovation mode. Um, you're trying new things, and throwing spaghetti against a wall to see if it sticks, and, and trying different sauces and whatever, I'm using a food analogy here, but we, we tend to not constrain ourselves with an end point.
 

And we trend, tend to not constrain ourselves with... This business model. We'll try this business model with this product or this innovation. We'll try another one with this, the same product and innovation. So my point is we, we like to try things and, and iterate on those trials. And therefore it's hard to kind of see where things might. 
 

And, uh, I don't know if you have any insight into that or if 
 

[00:09:59] Trond Arne Undheim: I know, I agree. And, you know, I'm not going to take away the innovators' or founders' desire to, to innovate, or any, uh, anybody else who innovates; that's a great endeavor. I've been very supportive of it. I've worked with thousands of startups, in fact, uh, to help them connect with business and, uh, and clients and other things.
 

So when I was at MIT and other places, and as a, as an investor. But, uh, I, I think that sometimes it's different people that just have to have that role. What I'm just saying is the innovation process is much longer than we previously thought about it. And what you're talking about, the spaghetti at the wall kind of thing, that is one aspect.
 

It's actually also not the only way to innovate. There are more systematic ways to, uh, to innovate. There's, uh, basically validation-based innovation now that, you know, is practiced in the best large companies. And what that means is, instead of sort of having this, like, uh, you know, finger up in the air, looking at where the wind is blowing and then trying to innovate, uh, that way, you, you go out to startups and to other sources of information and you direct your strategy, whether it's innovation or, or just new product development, you know, more mundane, incremental stuff, but you, you gather data from the environment, right?
 

From the outside world, you need to know what stakeholders are saying about this. And we have so many more ways of gathering that, you know, uh, cyber data being one of them, you know, what is the security environment going to look like if this product is out there, right? There are so many, uh, examples of, of new products that break security. 
 

You know, we don't even have to get to quantum. You can go to much simpler stuff. And, uh, that's just an example, right? So the innovation picture is a systems picture. You can't just escape that. You can throw spaghetti all you want. But if you want to serve a successful dish night after night, you know, and be a successful restaurant owner in this technology space, you have to cater to more than just the spaghetti and the oil.
 

You actually have to cater to the meal. 
 

[00:12:01] Sean Martin: So I guess the, I don't know if I'm going to stick with the analogy or not, but I guess the, the point I want to kind of have a discussion around is the, the concept of the data, right? So we're, we're blessed and maybe cursed with having a ton of, ton of data. So I love that you said innovation, if it's driven by data and systematically pursued and has a more formal stance to it, can actually pull in risk information, including, including cyber risk, including ESG risk and other things that organizations are faced with these days.
 

So do you have experience with organizations? You said some of the larger ones are doing this. Do you have an experience you can share where companies are innovating secure by design, if I could call it that?
 

[00:13:00] Trond Arne Undheim: Well, I think Bosch is one of those examples. A colleague of mine, we're actually working on a project together, uh, you know, to describe that.
 

I have a longstanding, uh, relationship with them and have worked with them, and, and there are other companies, but just to take one. So, you know, they're a German technology company, and, and what they do really is, uh, they have realized that in order to spin out startups or, or ideas, really, sometimes they're not startups, but they're innovation concepts.
 

They need the systematic feedback that they can get from dealing with, uh, other innovators. So they have put that into a system. I guess my colleague calls it innovation and validation driven strategy, and it's information about the risks. And I think, you know, very often we think of risks as a negative thing. 
 

It's like something that stops us from doing something, but actually risk information can be really, really positive. You can save an enormous amount of resources if you know that you need to foreclose one avenue or one type of product and go for something else. So Bosch, I guess, is an example of a company that realizes that at their size. 
 

It makes much more sense to validate and test and, you know, kill certain ideas at earlier stages, uh, than it does to, to use the spaghetti at the wall model of innovation. So I guess the larger the company, the more that makes sense, but you still have to innovate. So, you know, you do have to change your culture. 
 

This is not saying innovation is not going to happen. And you can't mandate this sort of stuff. So you have to allow for a lot of experimentation. Uh, people are innovators. I mean, you know, the model at Google X where they kind of are still sort of employees of Google, even if they are working inside of X all the way until somebody says, okay, now we are a startup. 
 

So they are sort of in this like isolated innovation space and it's important to protect that space. So, I don't discount that, and I think, you know, Bosch and other large companies that understand innovation, they do both. They let there be a time for experimentation, but they also realize that within the bounds of what makes sense for us, we need to test it against the market. 
 

Test the demand, test the risks, and we need to gather that evidence and then make a validated opinion. Does it make sense for us to go forward? Or maybe it makes sense, but not now. We need to test more. So I think again, you know, it's innovation is both fast and slow. And if you're really doing radical innovation, you can't just go fast and break things. 
 

That is, I think, an outdated view of innovation. I think, one that, tested through time, I think we will realize that many of those, uh, innovation modes that were so typical in the early digital era, they don't work, for one, when you have hardware involved; they don't work when you have people involved. And, and, and you bet there's a lot of people and hardware out there in business, right?
 

So you can't just run around and break things, you know, in a garage, you're dealing with real things. So this book Augmented Lean back there, you know, that's about manufacturing innovation. And there the specter is, you know, robots taking over. Is that good or bad? Well, it's good for efficiency, but you know, it's not all that great. 
 

Unless you are doing exactly what the robots do best, and then you are augmenting, to that point, you're augmenting the humans in the process. So, so much of digital innovation in manufacturing has actually to do with, uh, being lean. And that means augmenting the people working, uh, you know, on, on problems with digital technologies. 
 

Yeah, and sometimes it's, uh, coordinating things with robots and, and other technologies, but not always. And, and there, you know, the human in the loop is, I think, going to be pretty interesting. Even in the next decade. Plus, it's safer. It's, it's, humans have our, we have our problems, but... We do have our problems.
 

Generally, yeah.  
 

[00:17:08] Sean Martin: I think a nice balance there is, is a good strategy. I don't know what the ratio is yet, but, uh, maybe that changes over time. And perhaps we'll come back to that, because I'm interested in, in the, the role of technology in, in, in identifying and mitigating risk, uh, specifically cyber and, and security operations. 
 

Um, but I want to... Maybe stick with one point on this innovation first, because you talked, and I'm going to look at this in the context of risk. So you spoke to cascading risk, and you described Bosch basically having a system view, if I'm paraphrasing this correctly, a system view where there's innovation and with the goal of feeding the broader. 
 

System. And to me, that says there, and we know this with microservices and everything's API-driven and, and, uh, we're building systems of systems. And a lot of the innovation comes in, in how multiple parts come together to achieve things that are impossible with individual parts on their own. So to me, there's complexity or opportunity with systems of systems, but then complexity, which adds.
 

Risk, who's looking at the connected system of systems to see where things might fail. And then you have, uh, I don't know if it's up and side or back down again, the, the cascading risk that you alluded to earlier. So there's kind of three parts here, and I don't know, I mean, this, each one could be its own podcast, I presume, but any thoughts on those three things? 
 

[00:18:45] Trond Arne Undheim: Well, I certainly have, uh, some thoughts on systems. I think the challenge with the systems metaphor, because that's really what it is, right, is that the moment you start saying the word system, you have kind of admitted that you know less than you really did at the outset because it's, and then so the word complexity comes in because it's essentially when you say system, you are kind of implying something that I don't fully understand that somehow runs automatically. 
 

And autonomously from me. So you're sort of saying we are part of it, but we can't fully control it. We just, it's more of a statement of realizing we are in the midst of something moving. That's essentially what you mean when you say system. And then you try to describe it, obviously with simplified models.
 

Um, but, uh, cascading risk refers to this idea that, if you think about risk as, like, single-factor issues that you have to worry about, and you know, historically for me, right, it would be nuclear risk, right? The biggest risk of all. Post-war, we're worried that the world will end because someone's going to push the wrong button.
 

And you know, the world probably almost ended. So this was real. Um, nowadays, however, there's many more risks that deserve that status. There's at least three or four that deserve to be on almost equal, uh, status with, with nuclear. And bio risks: think about, you know, pandemics, but also, even worse, you know, synthetically created pandemics that are potential, uh, future developments here, you know, lab leaks and things, or, or even purposeful stuff, which gets into more kind of the cyber area of, you know, bad, bad actors that are exploiting vulnerabilities.
 

But also, of course, the specter of AI as such. And whether AI in and of itself and not so much, I think it's more, again, you know, bad actors plus AI and bad actors could be states and whatnot. The thing is even AI or virus or nuclear in and of themselves, they're just single inputs. The cascading part is the fact that even those three working together is super scary, but think about the other 140 risks that I certainly have identified and I work with. 
 

They may not look like much, like some of them are kind of societal phenomena, other things are, you know, business issues or the collapse of sort of governance systems. Right now I'm really worried about that: the collapse of formerly advanced nation states, for example. Think about all the technology that's embedded in, whether it be Israel, you know, Russia now, even the US, which is showing, you know, signs of cracking, you know, in the foundations.
 

What if the US were to split apart? Right? Where would all that technology go? You know, there are many, many things right now where if your perspective shifts from a decade to 25 years to 50 years, my perspective right now is 50 years, 2075. Right. What is the world going to look like then? And if it looks different, where did all the technology go? 
 

Who owns it? Who stole it? Who has it? Which institutions are in charge? Are there companies in charge? Are those good companies? You know, is there goodness, you know, or are we just looking at a bunch of malignant actors? And that's when cascades become dangerous because here you're looking at a set of factors. 
 

You don't know how many. Some of them perhaps are just small risks, but together they're a little bit like the river Amazon, right? They form a delta and they have these effects. When they're good effects, they're bountiful and beautiful like a river delta. When they're bad effects, they are floods and they just wash over everything and destroy. 
 

And that's the metaphor of a cascade. So the question is, now you have that metaphor in your head, how do you mitigate a cascade? You certainly would have to think more redirection than stopping. Like it makes no sense at least later in a cascade. You can't stop the Amazon river, right? Where do you begin? 
 

So you, you can maybe redirect some things early on, but once the cascade is in motion, I mean, you can only flee.  
 

[00:23:03] Sean Martin: Yeah, or don't be in front of it in the first place.  
 

[00:23:06] Trond Arne Undheim: Right, and that's again to the point with risk is there are so many responses to risk and I think you in the cyber security community know this. 
 

There are ways to even thrive on risk and that's a super exciting thought, huh? Think about this, the world with a higher level of risk can actually spur more innovation. So you asked me what I'm excited about. I'm excited about communicating how the threat level can spur even more innovation without taking irresponsible risks doing that innovation. 
 

[00:23:38] Sean Martin: So do you feel we're doing enough innovation in the field of risk management? Or is it an afterthought?  
 

[00:23:50] Trond Arne Undheim: I think risk management is, to your point, it's operational, it's short termist, and it's good for what it is. But when faced with kind of disruptive or bigger variables changing, we need new paradigms. 
 

So then you get into the systems thinking again, and then you're a little bit stuck in a loop because, you know, who do you know that's a super expert on systems? Right. It becomes very abstract, really fast, plus academia is of no help. Well, that's a little bit of an overstatement, but you know what I'm saying?
 

Academia is geared towards expertise. Systems is almost the opposite. Systems is geared towards a hundred different types of expertise. That's a whole new model. Transdisciplinarity is, is really not very established and that's what we need.  
 

[00:24:43] Sean Martin: So this, this brings me to a point that I think of often, maybe not in this particular way all the time, but just the, whether it be operational, short term, longer term, looking at the big picture, 20, 30, 50 years ahead. 
 

Um, organizations, I don't think, unless they're super big like a Bosch and, and realize that 50 years from now, in order to survive, they have to be prepared for that. And so they make the investments along the way in, in some kind of mitigation as well as innovation. And maybe the two come together and, and spin nicely, as you described it earlier.
 

But I don't think a lot of companies have the wherewithal to even, short term, deal with risk. I think, I think they have, they have, they've set aside funds to put teams in place and buy some technologies and put some controls and hopefully reduce some exposure in the first place so that they're not overwhelming the tools and then the people in the processes.
 

Um, but I don't know that an individual organization has everything they need to adequately identify and mitigate, mitigate the risks they have, cyber or otherwise for that matter. And I'm wondering, do we need, do we need... I mean, I can look at the, the insurance space, where they have data and they, and they use this information to help drive their business, uh, for insurance, reinsurance and whatnot.
 

Um. Those are people who are experts in, in that data and knowing what to do with it for their business. Do we need something like that or, or are we going to continue to expect companies to have that expertise and wherewithal?  
 

[00:26:40] Trond Arne Undheim: I think you're pointing to something important. We're going to need new institutions. 
 

We're also probably going to need a whole bunch of new companies that are providing that risk function. So, you know, you mentioned earlier in passing, you know, the ESG risk. And there are plenty of ESG jobs on the market now, and they say, you know, asking for people with previous ESG experience, which is... 
 

Kind of a pipe dream, right? Cause those people don't exist. Uh,  
 

[00:27:09] Sean Martin: It strikes me that, that there's a whole industry building around that and not, not cyber risk
 

[00:27:17] Trond Arne Undheim: yet. Yeah, exactly. So, but cyber risk, ESG risk, these are discrete, you know, distinct sort of risk areas where there is at least enough of an attention around the topic that there is an industry building slowly.
 

And I think there are many, many more risk areas like that. I think every business will have its risk officer, even very small businesses, because they will be either responding to other people's, other company's risk, or govern, governance risk, or they will be, uh, you know, faced with their own risks. My point, though, is that that is not necessarily as scary and sad a world as some people want it to be. 
 

And you know this from cyber security. It just, it becomes its own industry and it has its own advantages. You're, you build up a market for it and you build up expertise. Now, will every small company solve all of its long-term problems this way? I don't think so. But it's, I think, the ecosystem approach here is the right one, right?
 

You just have to be kind of in the, in the flow of information, and then eventually you will pick up or acquire the assets needed to deal with your particular risks. And they will be shifting faster than before. So you just, you need, uh, monitoring. You need digital monitoring tools, sensors, all that kind of stuff, right?
 

Uh, and we need a better system to distribute those kinds of sensor systems. Some of it is very, very digital. Other things, you know, it's human intelligence. I don't think we can get rid of that, certainly not in the next few decades. 
 

[00:28:50] Sean Martin: So I like this concept and, and, uh, especially when we start looking at systems again, um, 'cause an organization will certainly be enabled by systems which are updated constantly by third parties and nth parties as those systems get built and augmented. And so I'm wondering, in this, in this world where we have entities and institutions that are responsible, or at least helping, um, let's go back to your Augmented Lean example, where technology has a role, maybe not a replacement, uh, to solve the problem.
 

But your view on... The relationship between technology, call it robots, or AI, or whatever it might be, um, in this space. How, how do organizations, or how do these institutions connect?  
 

[00:29:53] Trond Arne Undheim: I think the best manufacturing firms certainly, uh, they start with the people they already have. So they already presumably have an approach to, to lean, meaning they have simplified and clarified the way that, uh, workers, uh, from the shop floor up are working together, and they realize where some of the inefficiencies are. And when they're trying to fix things, they would never start with the idea that, oh, there is some new shiny technology out there, or even, there is a new shiny risk out there, whether that be AI or some hacker group or some attack that just happened. You just don't take only an event-based approach to it; to take a systemic approach, you say, what is this reflective of long term?
 

And then you develop a systematic approach. And technology is almost the last part of your answer. There obviously are efficiencies in things like digital apps on the manufacturing floor. But some of those efficiencies can be reaped by just installing a couple of sensors, maybe at each workstation, and digitally just making the information much more available without disrupting the original workflow of workers. 
 

So at the, at the base level of an organization, that's really what technology can do and it translates all the way up to knowledge workers, you know, wherever we are in an organization. Whatever you do when you're implementing new routines, don't implement new routines. Just, you know, or don't just give out gadgets. 
 

Try to make sure that you respect the way people already were doing things because... An individual is always part of a team. That team has a way of doing things too. So, when you think you're improving something by putting a device on the table and saying, you all use this, you may destroy something else that was even more valuable. 
 

And I think that goes for introducing any kind of technology. And that is augmented lean. You ask the people on the shop floor or knowledge workers, you say, what exactly is your issue? And then you fix that. And if there's a technology fix for it, you put in place a technology fix, but then you monitor and you observe and you check, did I destroy something in the process? 
 

If I did, let's pull it back out. So that's what the best organizations do. It's as little disruption and as little leadership as needed,  
 

[00:32:23] Sean Martin: Yeah, so I think that's, uh, I think there's an interesting, interesting point, and what I'd like to do if we can, Trond, is maybe bring that home for security leadership teams listening to this and maybe their business leader partners, um, I mean, because we're, we're talking very abstract, uh, scenarios for very large companies, and not every organization has the, has the resources to, uh, to go as wide and big as some of the things we've talked about here.
 

So how, how can a security team innovate their security program, let's say? So security leaders, what, any thoughts for how they can take some initial steps to, to apply some of the things that you've learned and experienced over time in this space?
 

[00:33:16] Trond Arne Undheim: Yeah, we can just pick back up on Augmented Lean. I think, you know, anytime you're either looking at trends, technology trends or otherwise, or you're looking at, uh, picking up technology that's going to give you more efficiency, just stop for a second and, and realize that if you're chasing the technology in and of itself, that's the wrong thing to chase.
 

There's always going to be a plethora of possible technologies you could invest in. They, they might all give you an incremental advantage. However, most of them, if you don't do it right, will actually just give you trouble, bureaucratic trouble. So, you know, think about what the problem is you're trying to solve.
 

And if you cannot do that with your own resources, then, you know, look widely at what other peers in the industry or, or, or indeed at universities, what's coming down the pike, and then test it out in small portions. There's a reason why we pilot things, right? So I think this goes, whether it is implementing advanced AI algorithms in cyber security programs or inside of your protocols in your little company, that could be a great idea. 
 

But it might also be that not being the first adopter of these kinds of things is an even wiser move.  
 

[00:34:39] Sean Martin: Interesting. Any thoughts on having existing technologies and using what you know to make a... different or perhaps better system?  
 

[00:34:55] Trond Arne Undheim: Well, any system can always be simplified. That's almost like a, an axiom of systems studies, right? 
 

Because a system only looks advanced, but the, the whole idea of a system is that there are rules. And once you discover those and discover how the system really works... Most efficient systems, they are efficient for a reason. It's because they have found, the system itself has found, the most useful way to operate in order to carry out the functions it needs to, to grow and thrive.
 

So if you've discovered the true rules of a system, they are actually not as complicated as, kind of, systems theory wants them to be. So my, um, recommendation really: in chasing, you know, don't, don't chase complexity; chase, uh, chase the, the simplicities and embrace those opportunities to make little incremental improvements, because those are probably the ones.
 

That will benefit your company the most in the long term.  
 

[00:36:00] Sean Martin: Yeah, especially as these systems have been built over years, decades even.  
 

[00:36:07] Trond Arne Undheim: Some of them have some legacy to them, right? And there's a, there's a benefit to getting rid of legacy. Don't get me wrong. But the problem is when you're in the middle of it, you never think of it as legacy. 
 

And that's a real warning sign. And if you are an older company, you know this. In the 80s, what do you think they thought about mainframes? They were fantastic. They were thought of as, you know, this is obviously massive innovation. The fact that we can start having them in-house, we used to have to go somewhere and buy compute power from some institution, and now suddenly you can have, you know, a big computer in your company's basement.
 

That was an improvement. So any technology at any given point that is kind of state of the art, it obviously doesn't start out as legacy. However, even what we think of today as state of the art, uh, you know, cyber algorithms and programs and vendors, you know, they are going to have to change faster than before and they will be legacy. 
 

So there are some insurance policies against that, right? Standardization, openness, interoperability, um, not putting all your eggs in one basket, working with a, you know, a broad set of vendors, all that kind of stuff. But eventually it will be legacy, no matter how advanced we think it is.  
 

[00:37:24] Sean Martin: Even if it's new today, it's legacy tomorrow. 
 

And, uh, I'm sorry, Trond, but I, I'm going to take full advantage of having you on the show. I have another thought that's crossing my mind here. It's related, of course, but you spoke to setting, you said exactly, time, but things, maybe time, money, resources, people, thought processes, energy, aside to innovate and experiment.
 

And I'm wondering, my perspective is that some organizations do that through threat intelligence and threat hunting and, and threat research and things like that. And I, I know a lot of the SOC analysts do use tools and do some coding to maybe help improve some of the processes in there, reduce some of the MTTX measurements that they're held accountable to.
 

But I'm wondering if you have any thoughts on how cyber innovation might look. That is, not too big to bite off, but also not so focused that you're not really solving anything.
 

[00:38:45] Trond Arne Undheim: Well, I think cyber innovation, like all innovation, is a process where you have to be in touch with some... You have to have some data to base it on. You can't just, like we said initially, you can't just kind of dream up scenarios. So you stay pretty close to where the data sources are that you have access to.
 

Um, so I, I think that's just the number one rule. Um, and then I, I guess, you know, the cyber field, like, like other fields, we were talking about AI before, and it's very, very easy to, to jump to that domain and say, well, you know, all, everything that's innovation is going to come from that source. I think that is too bold a bet.
 

So I guess my advice would be, whenever there is a leading contender to a new sort of platform technology that's going to take over everything, that may be true. And, and if you ignore it, you're obviously in trouble. But what if it isn't true? And, and look at the past: so many technologies that were viewed as very promising, there was at least a big spin on them.
 

And if you were the first adopter, you may have learned a lot experimenting with it, so you innovated, there's nothing wrong, right? You always learn when you innovate. And as my book over there, Disruption Games, basically, you know, failure when properly understood has so many lessons. So I'm by no means saying don't innovate, but when you innovate and you truly accept the risk of failure. 
 

If you then do fail, make sure that you don't step too quickly aside from that failure. I think there's a lot more to be learned from failure, uh, than, than we realize. And then we want that more than that's comfortable also. So innovation is about not embracing the failure, but once you have failed, figuring out very, very deeply doing a postmortem analysis of what went wrong. 
 

Why did it go wrong? What can we do here the next time? Not so that you will never fail again, but so that you can figure out, um, some lessons, and it's not just about speed. Innovation and speed seem to be super closely related, but I find that innovation and speed are not as closely related, and there are many, many other aspects to innovation that, we will see in the coming decades, have a lot more to do with innovation: data.
 

Uh, people, fit with your existing system, products, right? These are much more innovation than, uh, uh, and, and much more important to innovation than speed, speed to market or speed to crazy new idea.
 

[00:41:51] Sean Martin: And, um, do you, do you think there's a role for non-profits and government in this? Or do you think it's going to be completely driven by commercial?
 

[00:42:04] Trond Arne Undheim: Oh, no, I'm a big believer in governance, but I don't think governance is something that only governments should be doing. Right. I'm a, I worked for several years in standardization in Europe, uh, and, uh, at Oracle, at an IT company. And I think standardizing your interfaces is an insurance policy for longevity, and that's something, it's a governance function.
 

Governments should do it by all means. National standards, extremely important for security and other things, but private sector in the digital space, certainly, and also in the hardware, uh, space specifically. Good standards, like think about the container standard, right? Without that, what, you know, we already have a supply chain crisis. 
 

Can you imagine if we didn't have standardized containers? You know, the Suez and Panama Canals would be even more of a chaos than they are right now. And that translates into the digital space. Right? The new Panama Canal is something to do with, uh, secure transmission of information.
 

[00:43:10] Sean Martin: I immediately went to Kubernetes containers when you said that.
 

Yeah. I realize, I realize obviously you're talking about the, the physical, uh, water.  
 

[00:43:18] Trond Arne Undheim: Yeah, but it could be those containers too. I mean, the metaphor is good enough. It's, uh, you know, it's, uh, store, storage containers of some, some sort.  
 

[00:43:27] Sean Martin: Yeah. Well, Trond, I, I clearly could, uh, chat with you forever. Um. And, and perhaps you, you will join me again. 
 

We can, we can pick another topic or the same one and go deeper. Sure. Um, we'll see what, uh, see what folks have to say or if there's something on your mind, you're, you're welcome back anytime. Um, fantastic conversation. I know we got people thinking today, that's for sure.  
 

[00:43:53] Trond Arne Undheim: Oh, I'm very happy to be on your show. 
 

[00:43:56] Sean Martin: Thanks a million. And thanks everybody for listening. We'll include links in the show notes to connect with Trond and links to his books as well. I think there are some interesting topics and certainly some inspirational and relevant insights in those books to help teams lead their cybersecurity practice. 
 

Forward with innovation at the heart and with risk in mind. So Trond, uh, thanks again for joining and thanks everybody for, for listening. Thank you.