Redefining CyberSecurity

Leaning in on ISO 5338, ISO 27090/27091, and the OWASP AI Exchange to Build Secure and Responsible AI Systems: Balancing Innovation and Ethical Boundaries | A Conversation with Rob van der Veer | Redefining CyberSecurity Podcast with Sean Martin

Episode Summary

In this episode of the Redefining CyberSecurity podcast, host Sean Martin and AI expert Rob van der Veer dive into the intersection of engineering AI systems and security, discussing the risks, regulations, and ethical considerations in leveraging AI for business growth and data protection.

Episode Notes

Guest: Rob van der Veer, Senior Director at Software Improvement Group [@sig_eu]

On LinkedIn | https://www.linkedin.com/in/robvanderveer/

On Twitter | https://twitter.com/robvanderveer

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

____________________________

This Episode’s Sponsors

Imperva | https://itspm.ag/imperva277117988

Devo | https://itspm.ag/itspdvweb

___________________________

Episode Notes

In this episode of the Redefining Cybersecurity podcast, host Sean Martin welcomes Rob van der Veer to discuss the intersection of engineering AI systems and security. The conversation revolves around the potential risks and impacts of leveraging AI, particularly generative AI, on business growth and data security.

Rob, an AI expert with extensive industry experience, shares insights on the advancements, challenges, and regulatory frameworks in the AI landscape. He highlights the importance of recognizing ethical and moral considerations when applying AI algorithms and stresses the need for governance, risk, and compliance roles, as well as security officers, to be involved in AI initiatives. He also underscores the significance of maintaining ethical boundaries and complying with regulations, such as the European AI Act, to prevent potential harm to individuals and society.

Sean and Rob discuss the evolving nature of AI regulations, with governments setting boundaries to ensure responsible AI usage. Rob also mentions the OWASP AI Exchange, an open-source platform promoting collaboration and knowledge sharing among experts in AI security, and the need for alignment among various frameworks and standards.

The discussion also touches on the role of data scientists and the importance of collaboration with software engineers to ensure the development of secure, maintainable, and transferable AI systems. Platform engineering is identified as the future of AI security and quality, enabling organizations to cover a wide range of requirements, including security, explainability, and unbiased decision-making.

Overall, this episode provides valuable insights into the complex landscape of AI engineering, security, and ethics, highlighting the need for multidisciplinary collaboration, adherence to regulations, and continuous improvement in AI practices.

Key Insights:

___________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

___________________________

Resources

Inspiring LinkedIn post: https://www.linkedin.com/posts/robvanderveer_ai-aisecurity-activity-7139372087177068544-EUNg/

Member states and MEPs strike deal on EU AI Act after drawn-out, intense talks: https://www.euronews.com/my-europe/2023/12/08/eu-countries-and-meps-strike-deal-on-artificial-intelligence-act-after-drawn-out-intense-t

Artificial intelligence (European Council, Council of the EU): https://www.consilium.europa.eu/en/policies/artificial-intelligence/

Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world: https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/

OpenCRE interactive content linking platform for uniting security standards: https://opencre.org

OWASP AI Exchange: https://owaspai.org

OpenCRE-chat, the world's first security chatbot: https://www.opencre.org/chatbot

ISO/IEC 5338: Get to know the global standard on AI systems: https://www.softwareimprovementgroup.com/iso-5338-get-to-know-the-global-standard-on-ai-systems/

___________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring an ITSPmagazine Channel?

👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network

Episode Transcription

Leaning in on ISO 5338, ISO 27090/27091, and the OWASP AI Exchange to Build Secure and Responsible AI Systems: Balancing Innovation and Ethical Boundaries | A Conversation with Rob van der Veer | Redefining CyberSecurity Podcast with Sean Martin

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] Hello, everybody. You're very welcome to a new episode of Redefining Cybersecurity here on ITSP Magazine. And, uh, as you know, listening to my show that, uh, I'm all about taking technology and operationalizing it in a way that helps the business grow and protects that growth as, uh, as we move along. And of course, uh, AI is a, is a hot topic for, for a lot of ways to generate growth. 
 

Um, Which is rooted in data, of course, and using data to drive business isn't new. But I think some of the ways that AI is being leveraged, certainly generative AI, uh, taps into a lot of public data and potentially even some business data that maybe in the future we might go or say, I wish we hadn't done it. 
 

At least not like that. And, uh, this conversation today, I'm thrilled to have Rob van der Veer. Rob, thanks for joining me. [00:01:00] Thank you, Sean, for inviting me. It's going to be a fun conversation, and it's spawned in the spirit of a post that you wrote. And it starts off with, sometimes the things we do with technology are like smoking on an airplane. 
 

It seemed like a good idea at the time. And you reference a few scenarios from the past in this post that, uh, you can look at them and say, yeah, I can see where that seemed like a good idea. We, we did some good things for society. We took the bad people off the streets and, and all the other stuff that you put in there and others that we'll probably touch on today. 
 

Um, but then hindsight's 20/20, when we look back, we might say that we should have done it differently, perhaps. So I'm excited to get into that conversation with you before we do though, Rob, a few words about, uh, yourself, some of the things you've done in the industry, some of the work you've done, and then we'll get [00:02:00] into why that post. 
 

Rob van der Veer: Absolutely. So I'm, uh, you could say, an AI veteran, in the industry since 1992, after I finished, uh, uh, my computer science education specializing in AI. I was a programmer and a data scientist, and we, we built AI models for everything. And our clients didn't want to hear it was artificial intelligence. 
 

They found that to be quite scary. So we called it data mining, like everybody in the field, uh, in, uh, in that day, or pattern recognition. I was a researcher, a data scientist, a hacker, uh, CTO, and also ran, uh, AI companies as a CEO for about, uh, nine years. And after being in, you know, the software product industry, uh, for long, I joined the Software Improvement Group 12 years ago and made the move from making programs to, uh, analyzing programs, [00:03:00] uh, and helping 
 

our clients to build better software, because in my career I had made so many mistakes that I wanted, uh, other organizations, uh, uh, yeah, some help, uh, to offer, offer them some help to prevent those mistakes. And we help clients with building maintainable software, good architecture, secure, privacy-preserving. 
 

And in that period, I set up practices for security and privacy and for artificial intelligence, because increasingly we, we had clients, uh, yeah, that were building AI systems and they needed help. And, uh, we started developing our own platform. Um, and after 12 years, now we are mainly, uh, a software vendor, because we provide a platform with additional, uh, you know, um, advisory services. 
 

So I'm back in the, in the software industry, and, uh, I do innovation projects for SIG, client projects all around the world. Um, I try to make a [00:04:00] difference by doing some, what we call, thought leadership, sharing insights and expertise from our own research and our observations, and we do this through publications and taking part in standardization. 
 

We do this for the European AI Act, for ISO. So I was the lead author of ISO/IEC 5338 on the AI lifecycle, which has just been released. We contribute to OWASP projects like SAMM, which is a secure software development framework. We do research for ENISA. Um, I'm also in ISO 27090 in the working group for AI security, uh, the 27091 for AI privacy. 
 

Uh, and I lead two OWASP projects. One is OpenCRE, which is a catalog integrating all security standards, and the other is the AI Exchange, uh, which is basically open-sourcing the AI security discussion. And especially these days when everything is about [00:05:00] AI, uh, there's never a dull moment, so this is probably the most busy period of my career. 
 

And, uh, I, I try to stay healthy. That's, that's a challenge. Uh, but I'm loving every moment of it.  
 

Sean Martin: Yeah. And I think, I mean, so much, uh, so much work there and. What strikes me are just the sheer number of things that you're involved with, uh, either contributing to or leading and I'm, I'm thinking about that list and there, there are more, I think there's another, another ISO. 
 

I think both of those touch AI to some degree. Anyway, my point is a huge list of guides, frameworks, standards, certainly regulations. And as a, as a business, how does one begin to even tackle understanding what those are? Um, as they're also trying to [00:06:00] tackle, well, what does AI mean 
 

for our business? What data do we have? What data can we get access to? What systems can we build? What applications should we build? What open source should we leverage to create something cool that drives value and creates growth? That's a big effort all on its own. And then you add the security and privacy stuff that you just listed. 
 

Can get pretty overwhelming. So any thoughts on, on that as we get  
 

started? 
 

Rob van der Veer: Yeah, I agree. It's overwhelming. And what is good is that, um, uh, governments are, you know, starting with setting boundaries, uh, just to prevent things getting, getting out of hand, like in the U.S. and like, uh, in, uh, in Europe with the, uh, AI Act, and there's a lot of activity in trying to make things more simple, like for example, the OWASP LLM Top 10, uh, the work from BSI, from ENISA, from NIST, from MITRE, from ETSI, everybody's doing their best 
 

uh, to try to make [00:07:00] things simple and create frameworks. At the same time, it is impossible for all these initiatives to try to stay aligned, because you just can't do that. So there's the pressure of the pace of the industry currently, and there is the proliferation of the frameworks and the terminology. 
 

So this alignment problem is now an issue, because despite all this great intent, um, thing is that, uh, there is, uh, uh, there are discrepancies between these frameworks, which makes it harder for people to, um, to deal with this. And we've seen this with security standards as well, which is what we're trying to solve with OpenCRE by creating 
 

a catalog of common requirements and linking from every common requirement to how it is being covered in the various standards. And we're working on doing the same thing for, uh, for AI, to try to align those things. And the initiative I mentioned, [00:08:00] the AI Exchange, is also open-sourcing that discussion, because we saw that from doing the work for the AI Act. 
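
To give readers a feel for the OpenCRE idea described here, a linked catalog of common requirements, the sketch below shows one way such a mapping could be represented in code. It is purely illustrative: the requirement ID, standard names, and section references are made-up placeholders, not actual OpenCRE content.

```python
# Purely illustrative sketch of a "common requirements" catalog in the spirit
# of OpenCRE: one common requirement, linked to where various standards cover
# it. The ID, standard names, and section references are made-up placeholders.
catalog = {
    "CRE-EXAMPLE-001": {
        "name": "Validate and sanitize model input",
        "covered_in": {
            "ISO/IEC 27002": "section X (placeholder)",
            "OWASP ASVS": "section Y (placeholder)",
            "NIST SSDF": "practice Z (placeholder)",
        },
    },
}

def standards_covering(cre_id: str) -> list[str]:
    """Return the standards that cover a given common requirement."""
    return sorted(catalog[cre_id]["covered_in"])

print(standards_covering("CRE-EXAMPLE-001"))
```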
 

You need a large set of experts, multidisciplinary, to deal with this complex, fast-moving topic. And you, you can't have all those in one specific committee for CEN-CENELEC. You need to, uh, use the wisdom of the crowd, uh, in order to get the best overview, which is why we set up the AI Exchange and why we made it completely copyright-free. 
 

So every, uh, standard maker out there, every framework maker can, you know, eavesdrop into the AI Exchange, which is a very comprehensive overview of all the AI security threats and controls, and make use of it. They don't have to attribute us. It's free for everybody to use, because we believe that only through this sort of altruistic approach, 
 

and sharing all those insights from experts around [00:09:00] the world who can join in on this open source, we can create the alignment that will eventually help to make things more simple for every organization.  
 

Sean Martin: I just want to have to, uh, have to take a look at that AI exchange, perhaps even have a deeper, deeper chat on that, uh, in a separate episode. 
 

Rob van der Veer: I'm sorry to interrupt, Sean. I forgot to mention that you can find it at owaspai.org.  
 

Sean Martin: Very good. Very good. I want to go to the article because I, I think for me, it made me, made me pause and think, and I suspect it's going to make, uh, the audience listening and watching do the same. What was the catalyst for writing that piece? What prompted you to do that?  
 

Rob van der Veer: It was a short piece, uh, like I often do on LinkedIn, share some insights that I hope help people. 
 

Uh, so in my [00:10:00] work, I get the question a lot, uh, will AI regulation stifle innovation? And there are two ways that this is taking place. The first one is you need to do some administration regarding your risks and look into them, do some assessment with regards to security and with regards to responsible AI just to make sure that your AI system is not doing any harm. 
 

I think this is a good idea, and recent research has shown that this is a relatively small effort that is, um, also beneficial to the business itself, because it not only makes you comply with regulation, it also prevents you, uh, from having incidents that will eventually harm your business. So that's the sort of, uh, doing your homework, doing your risk analysis efforts that is required 
 

uh, and beneficial for, uh, the [00:11:00] organization. The other way that regulations can stifle innovation is by simply saying: what you're doing, you can't do that, because it's simply too high risk. Uh, and the European AI Act identifies a number of applications, uh, that, that will not, uh, be allowed anymore, uh, like for example, uh, large-scale biometrics, like for example, uh, face recognition, uh, in, in public space, and, uh, criminal profiling in the sense of using, uh, properties of persons, of individuals, uh, as input to, uh, predicting their, uh, behavior and then acting on it, uh, from within the police. 
 

So my answer, when I get this question, would always be, well, yeah, it will stifle some innovation and prevent some innovation, because we now, by now, believe as a society, or at least in Europe, that certain things are [00:12:00] wrong. Um, they didn't use to be wrong. And you always see that with technology. 
 

Technology comes, people start applying it for everything, and then suddenly there's a realization, whoops, it seemed like a good idea at the time, but maybe we should do that a little bit less. That was the thought that I wanted to put in that piece. And I was also referring to, um, well, basically data analysis, business intelligence, IT in general, allowing us to connect all those databases. 
 

And we first had the idea, well, data is the new oil, but we came back from that idea a little bit, because we saw, uh, the harm and the potential harm that it could do to individuals when their personal data would leak, for example, uh, and, well, uh, harm their, uh, their, uh, their human rights, basically their fundamental rights. 
 

And we came up with privacy regulation to, uh, protect [00:13:00] people. And in the same way, we're now coming up with AI regulation to, uh, act on our realizations of things that we shouldn't be doing. And I mentioned one particular, uh, example of my own, uh, which is helping the police back in the nineties to, uh, reduce crime by giving, um, for example, juvenile delinquents, uh, attention depending on their profile. 
 

So based on their age and the type of things that, uh, that they had been doing, giving them proper attention and assessing, uh, their level of, for example, uh, repeated offense. And this was at the time carefully analyzed, uh, by privacy officers, by lawyers. Uh, everybody involved said, well, this is a good balance between, uh, safety of our society and the rights of the individual, which in [00:14:00] this case, uh, the delinquent. 
 

Now, uh, that was then. Now we would, we would not do that anymore. Uh, so this is what I shared in the article, and it got some great positive responses, but also some quite negative responses. So that was interesting. Yeah.  
 

Sean Martin: Yeah. It seems, uh, it seems the binary is growing, growing stronger and more polarized on a number of things. 
 

Of course. Um, I'm, I'm curious, let's spend a little time on this, because I believe, I guess what I'm, what I want to question is there's the, I think you said you wouldn't do that now, and I'm wondering, is it, is it because of new regulations that that wouldn't happen anymore, that those, those procedures wouldn't be viable or ethical, or, or is it a matter of [00:15:00]  
 

We just don't think society will, will allow it. Um, 'cause I know it's probably 10 years ago, we did, uh, we did a podcast, uh, looking at a ride-share company that collected tons of data. And, and there were laws in California that prevented them from storing that data and sharing it, with the exception of, I think, counties in California that 
 

that had an agreement with them. So I guess my point is, government entities can kind of be exempt from some of these laws. So why, why would, why would we not do that? Is it a societal thing? Is it a legal thing? What's the catalyst?  
 

Rob van der Veer: It's, so you could say that, um, specific values and ethical principles are static throughout time. 
 

But when you draw the balance with ethical dilemmas, in this case, the rights of the [00:16:00] individual versus, uh, safety, uh, safety in society, this can change over time. Um, so it's actually, uh, the, the, the values that we have today with regards to, for example, uh, uh, protecting, um, uh, equal treatment, allowing people to get equally treated, uh, independent of their ethnicity. 
 

Let me give you a very specific example. So back in the day, we used, uh, frequency, uh, of offense as input to our model to predict whether people would, you know, offend again, uh, as a repeat offender. And the police used that. Now, um, that model didn't use ethnicity at all as, as an input, but it turns out, uh, that, um, certain ethnicities, [00:17:00] uh, have a habit, uh, through all kinds of societal, uh, and neighborhood reasons of, um, doing more frequent offenses. 
 

And this caused that our model back then had a bias towards ethnicity through this proxy, which is frequency. Back in the day, we said, okay, we have some bias, but it's indirect. Uh, and we believe that if we don't do anything, we cannot give the right attention, uh, to, uh, to, to certain people. And then we have to keep everybody just as long, um, uh, kept in, uh, uh, the, uh, the police office. 
 

And we drew the line beyond this. Now we would say, well, let's analyze the performance of this AI model. Wait a minute. There's way too much bias towards ethnicity. We don't want that. And that, that's a change, that's a moral change. 
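
To illustrate the proxy effect Rob describes, here is a small, hypothetical sketch of the kind of after-the-fact check one might run: a model that never sees ethnicity can still flag one group far more often when a correlated feature such as offense frequency drives its decisions. All data, group labels, and thresholds below are invented for illustration.

```python
# Hypothetical illustration: a model trained only on "offense frequency"
# can still be biased by group membership if frequency correlates with it.
# The data, the group labels, and the fairness threshold are all made up.
import numpy as np

rng = np.random.default_rng(0)

n = 1000
group = rng.integers(0, 2, n)                 # sensitive attribute, NOT a model input
# Frequency is (artificially) correlated with the group -> proxy effect.
frequency = rng.poisson(lam=2 + 2 * group)

# "Model": a simple threshold on frequency, learned without ever seeing `group`.
predictions = frequency > 3

# Fairness check: compare the rate of positive predictions per group
# (demographic parity difference).
rate_g0 = predictions[group == 0].mean()
rate_g1 = predictions[group == 1].mean()
print(f"flag rate group 0: {rate_g0:.2f}, group 1: {rate_g1:.2f}")
if abs(rate_g0 - rate_g1) > 0.1:              # illustrative threshold
    print("Warning: large disparity -> likely proxy bias via frequency")
```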
 

Sean Martin: And [00:18:00] so let's look at this from, from the business perspective. Now, what, what do you think, or what have you seen, uh, if you have seen things that you can share, where organizations are doing things that might be regrettable, either now, with the AI acts and regulations coming out, or in the future, as we realize. 
 

Rob van der Veer: Yeah, absolutely. So when we look, uh, when we assess systems, we look really close, and we want to know what you're doing there, what you're doing there. And then we find certain, uh, decisions often done by, uh, made by engineers with no, uh, bad intent, but with large consequences, like for example, uh, a video recognition system that, uh, is supposed to use blurred images, [00:19:00] but, uh, for some part of the task, uh, isn't working, uh, well with those blurred images. 
 

So they decide, well, let's unblur the images. Uh, let's, let's use the original images because it performs better. Um, and by doing so, uh, violating, you know, that intent of, uh, um, anonymizing the video material. Uh, it's so easy to do because data scientists, AI engineers are so much focused on creating a working model. 
 

And if the model doesn't work, then there's no business advantage. Uh, so, yeah, you see some sacrifice, um, on purpose sometimes, but mostly, um, without bad intent, of responsible AI, just to get these working models. So it's good to keep an eye on it, use, using, uh, testing frameworks, uh, doing self [00:20:00] assessment or letting somebody else have, uh, have a close look before it turns into a trial and before it turns into a liability issue and, uh, loss, loss of business or, uh, maybe, yeah, completely going bankrupt as a, as a company. 
 

Sean Martin: And of course, uh, you mentioned the, uh, I'll use the video example. Um, this can be audio, it can be video, it could be biometrics, fingerprints, eye scans, voice scans, uh, clearly other types of data as well. Is, is the issue, I think I know the answer, but is the issue related to any one of those, or does it become even bigger when you start to combine them? 
 

Um, how does, uh, does, do we get to a point where you just shouldn't do that anyway? Because there will almost likely be a scenario that you'll regret later. If you can, like the old, [00:21:00] the old privacy conversation, if you don't collect the data in the first place, you don't have to secure it and you don't risk losing it or exposing it. 
 

Therefore,  
 

Rob van der Veer: Again, there's, there's a good analogy with, with privacy indeed, and also, um, if you look at the European privacy regulation, it, um, it says a lot about, for example, the purpose that you apply, that you use the data for needs to be the purpose that you collected it for. That prevents a lot of the AI, uh, uh, purposes already. 
 

Um, so understanding those, uh, and applying those, not just from the, you know, the privacy officer, uh, point of view, but also from the engineers, is important. And it's not that difficult to make engineers more aware of, of, of privacy aspects, of AI aspects. I always [00:22:00] say to engineering teams that you need to treat personal data like it's radioactive gold. 
 

So it's valuable, but you don't want to, you know, have it, you know, hanging around. You want to know where it is. You don't want to keep it too long. You want to minimize it, because, um, for a system, uh, to reduce risk, um, especially in zero trust environments, someday something is going to happen. And then you want to have as little data as possible, uh, in place. 
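
As a concrete, simplified take on treating personal data like radioactive gold, the hypothetical sketch below minimizes a record to only the fields a stated purpose needs and attaches an expiry date. The field names, purpose, and retention period are assumptions made for illustration.

```python
# Hypothetical sketch of data minimization before data enters an ML pipeline:
# keep only the fields the stated purpose needs, and attach an expiry date.
# Field names, the purpose, and the retention period are illustrative assumptions.
from datetime import datetime, timedelta, timezone

ALLOWED_FIELDS_BY_PURPOSE = {
    # purpose -> fields that purpose actually needs (assumed example)
    "churn_prediction": {"customer_id", "tenure_months", "monthly_usage"},
}
RETENTION = timedelta(days=90)  # assumed retention window


def minimize(record: dict, purpose: str) -> dict:
    allowed = ALLOWED_FIELDS_BY_PURPOSE[purpose]
    kept = {k: v for k, v in record.items() if k in allowed}
    kept["_expires_at"] = (datetime.now(timezone.utc) + RETENTION).isoformat()
    return kept


raw = {
    "customer_id": "c-42",
    "name": "Jane Doe",           # not needed for the purpose -> dropped
    "email": "jane@example.com",  # not needed for the purpose -> dropped
    "tenure_months": 18,
    "monthly_usage": 42.5,
}
print(minimize(raw, "churn_prediction"))
```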
 

Sean Martin: This is a question that I've, it's been in my mind and, funny enough, I've not asked it yet on any, in any conversation, which is: there are ethics, morals, laws, standards we can all follow. And those are, those are there for those who want to abide by them, right? And then, and then there's the [00:23:00] nefarious, which can tap into public data, um, stolen data that maybe folks aren't aware that it's been exposed yet, and use it. 
 

Maybe not for the most complete, perfect business case, business model, tried and tested by an engineer, right? Working well enough to conduct some activity that, that isn't lawful or could cause harm to somebody. So how, what are your thoughts on how we tackle that? If, uh, hopefully you have, have some ideas there. 
 

Rob van der Veer: There is very little record of it, uh, apart from the obviously, uh, malicious applications of AI for, uh, for example, creating phishing emails. Uh, but other than that, um, there's very little. There's, there's neglect, definitely, [00:24:00] but mal intent, we see that, we see that very little. 
 

Sean Martin: Yeah. And yeah, I guess I always think a little, a little cautiously about some of these things and perhaps slightly pessimistically just looking for those holes. Um, but always, I want to circle back now kind of with, with the aim to understand that risk so we can then mitigate it. So I want to go back to some of the work you've been doing and, and look at this. 
 

Look at that work in the context of how an organization would approach applying some of the, some of the models and the frameworks and the tools that have been developed and the knowledge that exists, like in the AI Exchange. Who, who, who leads that charge? Is that, uh, is it engineering, is it security, is it a risk team? Who's responsible, in the context of AI, for kind of [00:25:00] grabbing a hold of all that stuff and saying, here's how we navigate? 
 

Great. Yes.  
 

Rob van der Veer: Yeah. Multiple, multiple roles are involved. Um, one role is, let's call it, the governance, risk, and compliance role, uh, what the AI Exchange refers to as having an AI program, uh, meaning that you need to be aware of where you are applying machine learning algorithms in your organization, for what purpose, with what data, and what the risk, what the risk category is, and what the risks 
 

are, just to be able to act upon those and maybe to decide, well, maybe we should not go ahead with this, uh, this initiative. That's the governance, risk, and compliance. The other angle, uh, closely related, is the security angle, let's say the security, uh, officer. There are a number of things that, uh, this [00:26:00] role needs to be aware of when it comes to AI. 
 

There are new assets. There are new types of risks, supply chain risks, uh, all kinds of particularities for AI that, uh, the, uh, the CISO needs to be aware of. And they're, they're documented in the AI Exchange and in many other, uh, publications. Interesting is that, um, half of the controls against, um, machine learning attacks are data science controls, which means that these are things that data scientists need to do. 
 

Normally, um, uh, CISOs are working with, uh, security professionals and application security specialists and network security specialists. Now they also need to work with data scientists that need to build pattern detections against certain attacks, that need to add more noise to the training data in order to prevent certain attacks taking place.[00:27:00]  
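
As a rough illustration of one of the data science controls mentioned here, adding noise to training data so a model memorizes less about individual records, the sketch below perturbs a toy feature matrix before training. It is a simplified, hypothetical example: formal techniques such as differential privacy provide actual guarantees, and the noise scale used is an arbitrary assumption.

```python
# Hypothetical illustration of one data science control: adding noise to
# training data so the model memorizes less about any single record.
# This is a crude mitigation sketch, not formal differential privacy;
# the data and the noise scale are made up.
import numpy as np

rng = np.random.default_rng(42)

# Toy training set: 200 records, 5 numeric features.
X_train = rng.normal(size=(200, 5))

def add_training_noise(X: np.ndarray, scale: float = 0.1) -> np.ndarray:
    """Return a noisy copy of the features; `scale` trades protection vs accuracy."""
    return X + rng.normal(loc=0.0, scale=scale, size=X.shape)

X_noisy = add_training_noise(X_train, scale=0.1)
# The noisy copy, not the raw data, would then be fed to model training.
print("mean absolute perturbation:", np.abs(X_noisy - X_train).mean().round(3))
```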
 

Uh, so it becomes much more multidisciplinary. And the same goes for the third role, which is the development manager or the, uh, chief information officer that has a practice in software. And what we often see in organizations is that, uh, the, um, AI engineering is taking place in, uh, virtually a different place, a different room. 
 

Uh, where apparently different rules apply, mostly the rule is, uh, get me a working model. Uh, I don't care how you do it. We want the working model. And that's because, um, AI engineering is new to organizations and it's also because data scientists have been educated that way and are driven that way. Yes, they are also often managed that way. 
 

Get me a working model. But then the model works and needs to go into production and [00:28:00] maybe needs to be transferred to another team. Then often it turns out that it's quite hard to change, uh, because it has been put together in sort of a lab mode. It needs to work. So let's copy and paste some code and let's not worry too much about maintainability or testing. 
 

It needs to work. The problem is if it needs to go in production and provide, really provide that business value. It's actually too late because a new team can't understand what you've been doing, doesn't know about your experiments, all the things that you tried and that failed because you didn't document them. 
 

Um, now this sounds like really, uh, sort of, uh, me bashing data scientists. Not at all. These are, I mean, uh, valued assets to organizations and, and, and increasingly rare and very important and essential to have. And what they require in practice is [00:29:00] guidance when it comes to creating future-proof software that is transferable to other teams. It slows them down. 
 

It will slow them down, definitely, because they will need to document their experiments, but it's for a good cause. And, uh, some things may seem like slowing down, but maintainable code that I write in the morning will help them in the afternoon. And this is also what we see with data science teams that we work with. 
 

Uh, when you show them how they can do abstractions, how they can create unit, uh, uh, uh, unit testing, um, how they can set up a good architecture, they embrace it. But often this is missing from their education, and this will be, uh, this is an attention point for, for, for AI education, uh, making it more of a software education. And what organizations also can do is combining data scientists with software engineers in [00:30:00] teams. 
 

And of course, measuring the test code, measuring the maintainability, creating that feedback loop and coaching these data scientists in creating better systems, because this is a, this is a big worry. And if you look at these three roles, uh, they, they all three suffer from AI often being overlooked and treated in, in, uh, in isolation, whereas it just should be included in the standard security practice, in the standard software development best practice, and in the compliance practice. And the AI Exchange discusses how to do this for, for security. 
 

And the 5338 standard discusses how to do this for, um, for software engineering, and the new standard 42001, which is an AI governance standard, discusses how to do this for the governance, risk, and compliance role. That was a long answer. But, uh,  
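
To make the software engineering practices Rob advocates for data scientists more tangible, here is a small, hypothetical example: a preprocessing step pulled out of a notebook into a named, abstracted function and covered by a unit test. The function and the expected values are invented for illustration.

```python
# Hypothetical example of the practices discussed above: pull preprocessing
# out of the notebook into a small, named function and cover it with a unit
# test. The function and the expected values are invented for illustration.
import unittest

def normalize(values: list[float]) -> list[float]:
    """Scale values to the 0-1 range; a constant series maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]


class NormalizeTest(unittest.TestCase):
    def test_scales_to_unit_range(self):
        self.assertEqual(normalize([0.0, 5.0, 10.0]), [0.0, 0.5, 1.0])

    def test_constant_series(self):
        self.assertEqual(normalize([3.0, 3.0]), [0.0, 0.0])


if __name__ == "__main__":
    unittest.main()
```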
 

Sean Martin: I love it. [00:31:00] I love it. And, um, as we wrap here, uh, I want to throw you for another loop. 
 

But what you're describing to me sounds a lot like a platform engineering model, where there's a team building something that would be used by others within the organization. One or more applications feeding security programs, feeding policies, feeding, yeah, compliance coming in first. How much, in, in terms of looking at it from a security perspective, somebody presumably looks at it from a quality perspective as well. 
 

So, are there any lines you can draw, or parallels from quality assurance to security, or perhaps a, a platform engineering perspective, for organizations that leverage that type of,  
 

Rob van der Veer: Uh, could you expand on what you refer to when you say [00:32:00] platform engineering? 
 

Sean Martin: A shared service where, in, in essence, an organ, a part of the organization builds something that then is used by the organization. Component, component development, or some, some platform elements, something along those lines.  
 

Rob van der Veer: Yes, you don't see that a lot currently with, uh, with, with, with AI engineering. Of course, this, this whole DevOps and platform engineering idea 
 

is, uh, is the future for, for AI, and we see an increasing number of, of, of products and frameworks, uh, coming into place to, uh, to make that happen. And just like with, um, regular software engineering, you want to, um, uh, embed and cover as, as much of the requirements as possible in those, uh, in those frameworks, because there's so many things that you need to take [00:33:00] into account that, in fact, the platform engineering is the answer to get all those covered. 
 

Uh, otherwise it's just, you just can't manage it. If you look at the AI Exchange, uh, I think we're now at, let me do an estimate, that's 60, 60 controls that need to be in place for, for AI security. You don't want to bother engineers with having those controls in mind with everything that they do. You want to cover as many as possible in the, um, in the platform. 
 

Yeah. So platform engineering is the future for AI security and quality, definitely, including, uh, by the way, uh, taking care of, uh, other aspects than security, such as, uh, explainability, um, and, uh, unwanted bias. You want to have those covered as well.  
 

Sean Martin: Absolutely. Fantastic. Well, Rob, I, I think we could probably take any one [00:34:00] of these points in any one of the, uh, items that you mentioned and spend another hour on each one. 
 

But, uh, for, for today, we'll, we'll kind of wrap, and I'll ask you to send, I know I have your posts and, uh, I have the AI Exchange and the OWASP one you mentioned earlier, but I'll ask you to share the others with me as well, so we can make them available to everybody who's watching and listening here. And certainly happy to have you on again. 
 

And if, if folks want, want us to dig deeper into any particular area, just let me know and, uh, hopefully Rob will come back and join me and, and, uh, maybe some of the others from the team that helps put some of these things together. That'd be great. So thanks again, Rob. Any, any final words before we go? 
 

Rob van der Veer: Thank you for the great questions. I loved it. Uh, uh, and, uh, uh, I like your relaxed style. So this was a pleasant conversation.  
 

Sean Martin: Thank you. I am relaxed. I am pessimistic and relaxed. [00:35:00] I don't know. Nah. Nah, it's very good. I appreciate your time, Rob, and all the work you're doing. Thanks for doing that for the community and society at large. 
 

And thanks everybody for listening and watching today. Share with your friends and your enemies, subscribe, stay tuned for more. I have a lot, uh, a lot already recorded and a lot planned for the next few months. So hopefully you'll stick with me as we continue to redefine cybersecurity here on ITSP Magazine. 
 

Thanks everybody. Thanks, Rob. Thank you.