Redefining CyberSecurity

Security, Laws, and Vulnerabilities: Unpacking the Disclosure Process to Understand the Intersection of CFAA, DMCA, and Coordinated Vulnerability Disclosure | A Conversation with Katie Noble and Harley Geiger | Redefining CyberSecurity with Sean Martin

Episode Summary

In this episode of Redefining CyberSecurity, host Sean Martin explores the complexities of vulnerability disclosure with Katie Noble from Intel Corporation and cybersecurity attorney Harley Geiger. They dig into the importance of vulnerability management programs, challenges related to different state laws and sanctions, and how AI and regulatory shifts are impacting cybersecurity research and programs.

Episode Notes

Guests: 

Katie Noble, Director, PSIRT and Bug Bounty at Intel Corporation

On LinkedIn | https://www.linkedin.com/in/katie-trimble-noble-b877ba18a/

Harley Geiger, Founder and Coordinator, Security Research Legal Defense Fund

On LinkedIn | https://www.linkedin.com/in/harleylorenzgeiger/

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

View This Show's Sponsors

___________________________


In this episode of the Redefining CyberSecurity podcast, host Sean Martin is joined by Katie Noble, Director of Product Security and Communications at Intel Corporation, and Harley Geiger, a cybersecurity attorney at Venable LLP. The episode provides a deep dive into vulnerability disclosure and the laws that shape its dynamics.

The conversation frames vulnerability disclosure as a toolbox for receiving vulnerability reports from diverse sources and then identifying, mitigating, and disclosing the underlying issues. Both Noble and Geiger highlight the importance of this process in creating a more secure digital ecosystem. They also identify challenges, including limited technical literacy, uneven state laws, a lack of clarity around good-faith security research, and sanctions that restrict conversations about vulnerabilities with certain entities.

Furthering the discussion, they touch on the implications of AI and API-delivered services for vulnerability disclosure. They describe AI as an enabler that calls for creative thinking about new tools for infrastructure security, note potential issues with cloud services and AI, and point to the growing practice of handling non-security harms, such as bias and discrimination, through similar disclosure processes.

While discussing the role of regulations and policies, Noble and Geiger stress that these help set security standards and drive regulatory compliance. They emphasize that understanding regulation as a net good, and engaging proactively with policy formulation, can result in better product security.

The episode concludes with insights on how regulatory improvements could reduce liability and move the space forward. These include improvements in state law, clarification around AI, and easing sanctions to allow dialogue around vulnerabilities.

___________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

___________________________

Resources

Hacking Policy Council - State Charging Policies for Good Faith Security Researchers: https://assets-global.website-files.com/62713397a014368302d4ddf5/64d3d1e780453a690d637186_HPC%20statement%20on%20state%20charging%20policy%20reform%20-%20August%202023.pdf

Hacking Policy Council - AI red teaming: Legal clarity and protections needed: https://assets-global.website-files.com/62713397a014368302d4ddf5/6579fcd1b821fdc1e507a6d0_Hacking-Policy-Council-statement-on-AI-red-teaming-protections-20231212.pdf

___________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit:

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring this show with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Episode Transcription

Security, Laws, and Vulnerabilities: Unpacking the Disclosure Process to Understand the Intersection of CFAA, DMCA, and Coordinated Vulnerability Disclosure | Redefining CyberSecurity and Society with Sean Martin

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] And hello everybody. You are very welcome to a new episode of Redefining Cybersecurity podcast and this is your host, Sean Martin. As you know, I get to talk about all kinds of cool things with cool people who know much more than me about, uh, lots of things cyber and, uh, today is certainly no different. 
 

This is a topic I've had a conversation on, uh, a number of times from a number of different perspectives, and it's, uh, a topic I wanted to chat with Harley about for a while. And we finally connected, and Harley connected us with Katie. And here we are. We're gonna be talking about, uh, the state of vulnerability,
 

I'll say, assessments, and then, uh, ultimately the disclosure process. So how, how are organizations looking at that? How do laws, more importantly, impact how organizations do that? And even more importantly, how are researchers able or not able to help organizations do that? So, um, [00:01:00] we're gonna get into that topic.
 

I'm really excited about it. Before we do though, I'd like to welcome Katie and Harley. Thank you so much for, for joining me today.  
 

Harley Geiger: Thanks for having us.  
 

Katie Noble: Yeah, thanks.  
 

Sean Martin: Yep. It's gonna be fun. And, uh, maybe a few words from each of you before we get started. Maybe a little bit about, uh, your role, what you're up to, why this topic is important, or at least of interest to you?
 

So Katie, we'll start with you.  
 

Katie Noble: Okay, wonderful. I always, I always prefer it when Harley starts, um, but I'm, I'm happy to, I'm happy to jump in. So, I'm Katie Noble. Uh, I am the Director of Product Security and Communications at, um, Intel Corporation. I've been with Intel for about four years. Previous to that, I, uh, was in the US government for about 15 years, where I was responsible for all of the US government's,
 

um, portfolios on vulnerability disclosure. So that's the MITRE CVE program, the NIST NVD program, the Carnegie Mellon CERT/CC program, and the ICS-CERT vulnerability handling program. So during my [00:02:00] tenure, um, I have, uh, coordinated and disclosed about 20,000 cybersecurity vulnerabilities. So I, I
 

absolutely love vulnerability disclosure. It's sort of key to who I am as a person, and, um, security is something that I, I think is a, is a defining, um, defining point in the ecosystem that we live in. So that's
 

Sean Martin: me. I love it. 20,000. My goodness, that's a, that's a few. Harley?
 

Harley Geiger: So I'm Harley Geiger, and I'm a cybersecurity attorney at Venable LLP, which is a law firm.
 

And I work there on, uh, cybersecurity laws and policies. I help folks manage cybersecurity incidents and stay compliant with, uh, cybersecurity and data protection laws. Um, I also coordinate the Hacking Policy Council, which is a group of companies that, uh, are driven to create a better policy and legal environment for things like vulnerability management, vulnerability disclosure, uh, bug bounties, and, uh, vulnerability assessment.
 

[00:03:00] Um, I've been working on this issue for, um, more than, more than 10 years, uh, at this point. Um, back before things like ethical hacking or, uh, vulnerability disclosure were nearly as common as they are now. Um, and I care about it because ultimately I think that this is about protecting consumers. I think that having a decentralized way of, uh, finding and disclosing vulnerabilities, so that they can be patched before criminals get at them, is, uh, is ultimately gonna benefit users.
 

And on the flip side of that, for the folks that are finding the vulnerabilities and disclosing them, I think that protections for them are important, because ultimately this is about whether or not we are able to use computers in ways that are unexpected. Uh, and if we're trying to do the right thing, I think that it's important that we continue to have that ability as, uh, people who are growing up with computers all around us, increasingly.
 

Sean Martin: I love it. So I, I wanna start with, uh, maybe a definition of a few things. [00:04:00] So, uh, you both mentioned coordinated vulnerability disclosure. Um, there are a few laws I want to get into as well. Um, maybe some that have passed, maybe some that are in the works. Who, who wants to kind of set the stage with what coordinated vulnerability disclosure is, and its role within protecting, uh, our society and the businesses that, uh, make our society run?
 

And governments, I guess, as well.  
 

Katie Noble: I can, I can start if you want. Um, yeah. So, so when we look at vulnerability disclosure, it's helpful to kind of take a step back and say, what is a cybersecurity vulnerability? We use this term vulnerability kind of in normal everyday English conversation. Um, and, and I, I sort of think that we use it as a, as a noun. 
 

Um, but in the cybersecurity context, we're actually using it more as an adjective. And so, um, a vulnerability in, in the cybersecurity context, um, is a, a weakness in a device that allows an attacker to do something the developer [00:05:00] didn't originally intend. There are lots of definitions that are very, um, formal.
 

Uh, ISO has a definition of it. CVE has a definition of it. But I think ultimately it comes down to, uh, a device being used in a way that was not intended by the original developer, which can cause harm. Um, so that's, that's your structure of what a vulnerability is. Um, there are physical vulnerabilities and there are cybersecurity vulnerabilities.
 

And we, we live mostly in the cybersecurity realm. Um, so then, when we say coordinated vulnerability disclosure: sometimes you hear coordinated vulnerability disclosure, or vulnerability disclosure programs, VDPs. These terms are often used interchangeably, um, and they are, for the most part, similar. Um, coordinated vulnerability disclosure is your overall umbrella.
 

It's a, um, it's a process that has international standards assigned to it. Most big industry, uh, companies comply with those international standards. What it truly means is that it is a toolbox for receiving vulnerabilities from anywhere. Internal, external, doesn't matter. Um, [00:06:00] sometimes we have friendly hackers who send us stuff.
 

We get a lot of, uh, data that way. Um, sometimes we have internal red teams that find vulnerabilities. So vulnerability disclosure is a way of receiving those vulnerabilities and processing them, so we're identifying, mitigating, and disclosing. Your ultimate output of a vulnerability disclosure program, or coordinated vulnerability disclosure program, is a disclosure.
 

Uh, and that is usually accompanied by a patch. And so when you see on your iPhone that your phone needs to be updated 'cause you've got a new patch, or on, um, any kind of device, that means the process has worked from the beginning to the end. So the whole process is represented there, and you, as a customer, are being presented with the output of that, uh, of that process.
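To make the workflow Katie describes concrete, here is a minimal sketch of a CVD intake pipeline in Python. The stage names, report fields, and embargo check are hypothetical simplifications for illustration, not Intel's or any other vendor's actual process:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Stage(Enum):
    RECEIVED = 1   # report arrives: friendly hacker, internal red team, another vendor
    TRIAGED = 2    # identified: confirmed real, severity assessed
    MITIGATED = 3  # fix developed and validated
    DISCLOSED = 4  # advisory and patch published


@dataclass
class VulnReport:
    report_id: str
    summary: str
    source: str                       # e.g. "external researcher", "internal red team"
    stage: Stage = Stage.RECEIVED
    embargo_ends: date | None = None  # agreed date when everyone may talk about it

    def advance(self, next_stage: Stage) -> None:
        # Enforce the one-way flow: receive, identify, mitigate, disclose.
        if next_stage.value != self.stage.value + 1:
            raise ValueError(f"cannot jump from {self.stage.name} to {next_stage.name}")
        # Publishing before the embargo lifts is the zero-day situation Katie warns about.
        if next_stage is Stage.DISCLOSED and self.embargo_ends and date.today() < self.embargo_ends:
            raise ValueError("embargo still in effect; cannot disclose yet")
        self.stage = next_stage


report = VulnReport("VR-0001", "auth bypass in update service", "external researcher",
                    embargo_ends=date(2030, 1, 1))
report.advance(Stage.TRIAGED)
report.advance(Stage.MITIGATED)
# report.advance(Stage.DISCLOSED)  # raises until the embargo date has passed
```

The point of the sketch is the one-way flow: a report only moves forward, and publication is gated on the embargo both parties agreed to, which is exactly the coordination discussed next.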
 

Sean Martin: And Harley, I'd, I'd like your, your take on this as well, maybe, maybe from an organization perspective. Um, 'cause I heard Katie describe two things. One is internal evaluations. There are public ones that come, [00:07:00] perhaps, through CVEs. Um, and then I know you discussed, uh, or mentioned bug bounties as well, um, which could be a third, third piece of the pie. Are, are there others? And maybe give a, a picture of how well organizations are addressing each of those areas.
 

Are they, are they good at some? Do some only look at CVEs and think they're good? Or, kind of, you know what I'm saying, what, what's that picture look like?
 

Harley Geiger: So I think vulnerability disclosure and coordinated vulnerability disclosure are a subset of vulnerability management, right? So organizations, many of them, are, uh, just required by law to, uh, search for vulnerabilities and test for vulnerabilities in their, in their assets.
 

Those kinds of, sort of, internal testing and, and discovery and patching that stay in-house are not necessarily vulnerability disclosure, um, uh, but qualify as vulnerability management. But as part of that vulnerability management program, if a, a [00:08:00] vulnerability is identified by somebody that is external to that normal process (it can be an employee within the same organization, but it could also be somebody outside of that organization, like a, a security researcher),
 

um, what is the channel for communicating that to the team that can do something about it? That can assess the disclosure, um, figure out how serious it is, uh, communicate back out, and establish a patch? That is vulnerability disclosure. In addition, if the organization determines, this is not, not my vulnerability, this belongs to another company, you know, or another organization, then that organization may have a process for disclosing that to that other organization.
 

So this is also within the bucket of coordinated vulnerability disclosure policies. Um, and so, increasingly, uh, organizations are adopting, uh, the disclosure aspect of their vulnerability management program. I think it's becoming more, more accepted. Uh, in terms of [00:09:00] how well organizations are doing it,
 

uh, you know, it really varies, um, there, and it doesn't necessarily vary by size or resources either. Um, I think it's, it's just kind of whether, whether the organization, uh, takes it seriously and has put forth the effort to, uh, to establish a, a, a good process. Um, importantly, it's, it's not just the organization having that channel.
 

It's not just can, can we receive vulnerabilities? Uh, you know, it, it also has to be that backend assessment and, uh, and triaging and communication. If you don't have that backend process in place, which is usually part of the vulnerability management, uh, uh, program, uh, then your vulnerability disclosure policy is not going to be very impactful. 
 

Um, you know, uh, disclosures may languish. Um, in terms of the sort of legal landscape of it: there, there are an increasing number of laws that require it, and even [00:10:00] more guidelines, best practices, standards, things like that, that encourage, uh, vulnerability disclosure policies. Uh, we can get into some of the specifics about that.
 

Um, but I think that, broadly, adoption is increasing. We're not taking steps backwards.
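One widely used way to publish the reporting channel Harley describes is a security.txt file (RFC 9116), served at /.well-known/security.txt on the organization's site, so a researcher knows where to send a report. A minimal example, with hypothetical addresses and dates:

```
Contact: mailto:security@example.com
Expires: 2026-06-30T00:00:00Z
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en
```

Contact and Expires are the two required fields in the RFC; Policy is where an organization would point researchers at the rules of engagement that come up later in the conversation.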
 

Katie Noble: And I will add one little thing to it as well. Um, the intent of a coordinated vulnerability disclosure program is really to reduce adversarial risk. So the idea being that the vulnerability is identified, wherever it's identified, and it's communicated to the vendor, who has the most,
 

or, the vendor or the end user, I guess. Um, the person who has the most ability to, um... let me rephrase that. The vendor or the owner-operator (the end user is different), um, the person who has the most ability to fix it. And then there's somewhat of an agreement that happens, and that's called embargo.
 

And so what happens there is that the identifier and the person who's doing the fixing both agree that no one will talk about this until everyone can talk about this. And the idea being [00:11:00] that that reduces adversarial risk. If a vulnerability is, is disclosed before that, that's called a zero-day vulnerability disclosure.
 

And it's usually not a good thing, because it means that an adversary has the ability to leverage that vulnerability before, uh, the end user has the ability to patch it. Um, and so that's the idea behind coordinated vulnerability disclosure. Everyone kind of agrees to this process so that everyone can reduce the overall risk for the entire community, and that's the defensive nature of coordinated vulnerability disclosure.
 

Sean Martin: Yeah, I appreciate you adding that. Go ahead, Harley.  
 

Harley Geiger: We're sort of talking about it in dry terms, but I think that it's a, it's a really interesting process. Because we're, we're talking about, you know, people who are finding vulnerabilities in, in systems, sometimes by testing them, and, uh, putting themselves at some legal risk by coming forth and talking to this organization, like, hey, I was messing around on your system and I found that you are vulnerable.
 

And I found the Social Security numbers of a bunch of, let's just use a random example, educators from the Department of [00:12:00] Education. And, uh, this results in the, the governor of the state telling you that you've been hacking, and, uh, and, and that this is a crime, right? So it, it can lead to some really interesting outcomes.
 

Um, and, uh... but, you know, if you're doing it right, there is a whole bunch of dry processes that are, that are involved, that are, are based on standards, um, you know, that make sure everything goes smoothly. Um, but it is actually a pretty wild space, which is one of the reasons I find it, uh, uh, fun to work on.
 

Katie Noble: And it's changed a lot in the last 10 years. It used to be the regular... kind of the norm, for security researchers, hackers, um, as we, we lovingly call them. We tend to call them security researchers when we're being polite. Um, and 'hackers' is kind of the, you know... 'friendly hackers,' 'ethical hackers' is sort of the phrase that they often use to define themselves.
 

Um, but there has been a big change in the last 10 years. Um, security researchers used to be served, you know, legal notices and cease-and-desists; it was a regular kind of thing. And if you're just a, a curious person... I mean, security researchers come in multiple different psychological [00:13:00] flavors. Um, and so their motivations are identified by a lot of different pieces, and
 

a malicious nature is usually not one of them. Um, and so to be confronted with these kinds of cease-and-desists or legal, legal, uh, ramifications is, is very chilling. Um, and so we've seen over the last 10 years a very, you know, big change in mindset, where a lot of big companies have adopted this and said, you know, hey, this is good for everybody.
 

This is good for us, it's good for our competitors, it's good for our end users, it's good for everyone, if we can raise that noise floor. If we can lift the, lift the tide, then all ships rise as well. Um, and it sets a standard that others will... you set the bar high, and others will rise to it.
 

Um, and so that's kind of been the change that we've seen. Uh, we've also seen the big push in, uh, legal, regulatory, um, requirements. Um, when the Department of Defense launched Hack the Pentagon in 2016, that was a huge watershed moment for this kind of vulnerability disclosure, bug bounty, coordinated vulnerability disclosure push.
 

Um, it [00:14:00] said the US government is serious about this and is going to do it to ourselves. You know, not just, do as I say, not as I do. They were saying, we're gonna do this and you should too. Um, and I think that was a huge moment where other companies kind of realized, you know, there's a lot of benefits to this, and this security research is happening whether we want it to or not.
 

So why don't we take advantage of the opportunity, as many sets of eyes on a product can only make things better? Um, so we have seen that big change recently, but there are still some who, who are uncomfortable with the idea, and, um... I understand.
 

Sean Martin: Yeah. I always... for years I've said, every organization's running a bug bounty program, whether they know it or not.
 

Harley Geiger: So I would caveat that just a, a little. The nature of the bug bounty program is such that a reward is given for, you know, for finding the vulnerability. And that can be very valuable, uh, you know, if it's deployed correctly. It can't take the place of a lot of other testing.
 

[00:15:00] Um, but, uh, but it can find vulnerabilities that might otherwise be missed by, you know, your in-house testing. But that is very different from a normal, sort of just bare-bones coordinated disclosure policy, because, uh, because of the reward aspect. If a, if a researcher comes to you and demands a reward, uh, which sometimes happens, um, you know, there's, I think, a growing expectation among
 

researchers that they would be compensated for this effort, even if a bug bounty program is not in place. And then we start to get into territory where it doesn't look as much like good-faith research. It looks... it can be misconstrued as, as extortion. Um, and this is one of the things that we, we end up counseling both organizations and researchers on: you know, the, the approach and the expectations have to be aligned, so that there's not a misunderstanding about whether this is a bug bounty program or whether, you know, this is just a, a coordinated disclosure.
 

Sean Martin: Yeah. I think that's why it's important for every organization to have a CVD, right? So that it's, it's [00:16:00] clearly defined what the rules of engagement are.
 

Katie Noble: So, yeah. And there's a lot of, there's a lot of different motivations for why a security researcher may do the things that they do. You know, we, we kind of break them down into archetypes, 'cause that helps us identify which...
 

You know, from a company perspective, it helps us identify which kind of reward is going to be the most valuable to what kind of security researcher. It's a sliding scale, you know; people are, can be more than one type. But you, you have your professional bug hunters, and those are the folks that, that make a living off of, uh, bug bounty programs.
 

They may hack on lots of different programs, and they're trying to find the best, um, the best bang for their buck there. The easiest thing to find with the highest payout, right? So you, you're gonna, you're gonna build a program for them that targets them a little bit differently than you may, uh, for a, a recognition seeker.
 

So recognition seekers often tend to be academic institutions, and they don't care about money, or oftentimes don't care about money, um, and in most cases can't accept a reward anyway. Um, the thing that they're looking for is to be able to publish. So they want that recognition. They wanna be able to say, a [00:17:00] CVE was issued by X company, and this is the disclosure.
 

They put that in their, um, their portfolio, and that helps them in their career aspirations, helps them in their, their security research for the academic institution that they're part of. So their motivations might be very different. They're very motivated by being able to talk about it, where a professional bug hunter may not care if it's ever been disclosed.
 

They just got their payment and they move on. You also have your, uh, your friendly organizations, your organizational helpers, we call them. Um, that may be a company that is doing their own internal security research, and they find a, a vulnerability that impacts another company, and they just kind of throw it over the,
 

over the fence and say, here you go, other company. But you also have government agencies, though; they tend to be friendly helpers too. They find things, they would notify the company. Um, so you do have several different kinds of security researchers. And it, it's often valuable, whenever you're figuring out the mechanics of your vulnerability disclosure program, to think about how to target which type of researcher, and how to make sure you're not missing any, uh, potential security research that may be valuable for you.
 

Sean Martin: Yeah. And Harley, Katie [00:18:00] mentioned a lot of change in the, I'll, I'll say the mindset and culture of, of this, where I think you even said it's more acceptable as well. Mm-Hmm. So there, there's a, there's more willingness to understand the value, and more, more willingness to put a CVD program in place and to participate in bug bounty programs and things like that.
 

That, that's the organization side. And certainly we continue to see lots of activity on the researcher side. Um, what about the laws that kind of sit in the middle? And mo-, most specifically, like, DMCA, I think, is one of the, one of the big ones that kind of, potentially, puts researchers in jeopardy. Um, I think it still exists, or there hasn't been much change with that.
 

Right. But are, are there other changes that make things better or worse for the researcher community? 
 

Harley Geiger: I think most of the changes are in favor of the researcher community over the past 10 years, uh, in terms of law and regulation, not just, [00:19:00] you know, business culture and, uh, you know, community practice.
 

But the law and regulatory, uh, changes over the past 10 years have almost entirely favored, uh, more protection for good-faith security research. Um, there are still gaps. So, uh, you mentioned Section 1201 of the DMCA, the Digital Millennium Copyright Act. It's a, it's a mouthful to say, right?
 

Section 1201 of the Digital Millennium Copyright Act. But it is in fact one of the United States' most important anti-hacking laws. It is also a law that has, I think, proven how, how, how, uh, perilous it is to have, like, a broad technology law. Uh, because, I think if it were introduced in Congress today, I don't think it would even get a hearing.
 

It has, it has aged, I think, very, very poorly. Um, what it says is that it is illegal to circumvent a technological protection measure on a copyrighted work, which [00:20:00] means, uh, any sort of security safeguard on software, uh, without the permission of the copyright owner of the software. So that includes copyrighted, uh, programs that you yourself own.
 

Um, so if you have an IoT device, for example, and there's software that's on the IoT device, you might think, oh, well, I can tinker with this a little bit for security research, or for, um, you know, uh, disability access, uh, things like that, or space shifting. Um, it's essentially a DRM protection rule.
 

Um, but there are some exceptions, and those exceptions have slowly, like, over the course of nine, 10 years, been, uh, uh, expanded upon and built upon, to where we now have better protection, I think. Uh, not perfect, but quite, quite good, for security research. Um, one of, one of the gaps, though, is that that exception is just for Section 1201; um, it doesn't apply to other areas of the law. But that was where, [00:21:00] weirdly enough, in this obscure copyright
 

proceeding, uh, is where a lot of the cutting-edge conversations were happening on security researcher protection. The definition that they ended up coming up with in those proceedings, for legal protections for security research, has now been replicated in other parts of the, uh, regulatory ecosystem.
 

So the Computer Fraud and Abuse Act, which is far more famous than the DMCA, but used, I think, less for security research, um, uh, actually now has, uh, a, uh, uh, a charging policy associated with it. The Department of Justice, uh, announced this, uh, around the, the time that the Supreme Court ruled on the CFAA in the Van Buren case.
 

They urge prosecutors to decline to prosecute good-faith security researchers, if they're meeting this definition of good-faith security research that's drawn from that copyright proceeding, the DMCA proceedings. Um, so there's a lot [00:22:00] of... I think that the, the environment is better for researcher liability in those areas. Gaps that exist:
 

um, some of them are private lawsuits; uh, others are, um, uh, state laws. Those two areas have not evolved the same way that criminal liability has in the United States. Um, there's also... folks, I think, are still trying to figure out, um, how those laws apply to research into AI. And we, we can talk about that in more depth, I think, later.
 

But I want to, I want to also talk about the company side, so the, the software owner side. Um, there are increasingly, as I mentioned, uh, best practices, guidelines, and regulations, um, that encourage or outright require, uh, the adoption of vulnerability disclosure. And so, uh, federal con-, uh, federal agencies right now, as an example, um, are all supposed to have, uh, vulnerability disclosure policies.
 

Um, if you look up... [00:23:00] sort of, pick your favorite federal agency and look up, you know, vulnerability disclosure policy, you should be able to find one. Um, this is new. And in a lot of ways, you know, Katie mentioned Hack the Pentagon, but I think the federal government has been, uh, very forward-leaning on, on adopting coordinated vulnerability disclosure.
 

Uh, there are also, uh, executive orders and other laws related to federal contractors that require, uh, vulnerability disclosure. Um, and outside of that, in, in Europe, and in a couple of other sectoral areas like medical devices, having coordinated, uh, vulnerability disclosure policies is increasingly becoming required, not just encouraged.
 

Um, so yeah, the landscape is, is definitely, definitely changing. I think it's changing in favor of sort of institutionalizing and accepting, uh, this process of good faith security research and vulnerability disclosure. Um, and we, we just, you know, we're in a place where we're trying to, to plug some of the gaps now. 
 

Sean Martin: And I wanna, I wanna go [00:24:00] to Katie in a second, but I wanna stick with the... you mentioned state, state laws haven't quite caught up, and maybe Katie's shaking her head; maybe she has some thoughts on this as well. But can you give an example of where? Is it where the company is? Is it where the researcher is?
 

Does it matter? And what is the, what's the, the, the hampering part of the state law, if you have one handy to, to reference?
 

Harley Geiger: So in in state laws, so it, it's, remember that state laws are not necessarily affected by changes to federal law. Not always. So if you pass a federal law or change a federal law, unless you say this preempts state laws. 
 

then they are separate. So changes that happen to things like the Digital Millennium Copyright Act or the CFAA, those will stay within the realm of federal law and not necessarily affect state laws. Now, each state has their own computer crime law. And, uh, [00:25:00] so even though we've seen sort of the federal space evolve to get, um, I dunno, more modern when it comes to security research or vulnerability disclosure, the state laws have largely stayed stagnant.
 

Um, and in many cases, those laws are broader than things like the Computer Fraud and Abuse Act. So, for example... I, I was joking earlier in the podcast about, you know, finding educators', uh, Social Security numbers and the governor calling it hacking. That happened, that happened in the state of Missouri.
 

Um, and technically, in the state of Missouri, for some of the laws that, uh, address malicious hacking, there is not an intent requirement the same way that there is under the Computer Fraud and Abuse Act. It's broader. So, sort of regardless of your intent, uh, for some of those laws... uh, in the state of Maryland, uh, home of the NSA, if you are trying to find a password without [00:26:00] authorization,
 

uh, like, the act of seeking that password, trying to identify it, is, is technically against their computer crime law. Um, which implicates, of course, you know, good-faith IoT research, where trying to find a hard-coded password on a device that you yourself own is, is a pretty common, uh, form of, of research. Um, you know, what is the sticking point?
 

I think a lack of, uh, understanding and a lack of priority for the, for the states. Um, but that is, that is one area where I think we, you know, are hopeful that we'll see states take up a charging policy similar to what, uh, we saw with the Department of Justice and the Computer Fraud and Abuse Act, as the culture keeps changing.
 

Katie Noble: I have some things I'd like to add to that, uh, because I, you know, I just love it so much. Um, I think a couple things. Um, even, even speaking from when I was in the U.S. government, at the Department of Homeland Security: we had a case where we had a security researcher come to us and say, hey, I'm really concerned about, uh, a medical device, and I'm [00:27:00] really concerned about, um, information leakage, uh, over Wi-Fi; essentially, uh, over the communications spectrum.
 

And, uh: but I can't test it, because I live in X state. Uh, and I can't test it because I may end up going against, uh, some, uh, recording law. Um, dual... was it dual? Both parties have to consent to be recorded. Um, full consent, yeah. Yeah. And it's amazing to me that, one, a hacker knows about a law that may get them; that's pretty, pretty high-level there.
 

But even as the US government, I couldn't test it, um, because of the place that we were located in. So I had to go to another state that doesn't have the same requirements, and we, we were able to use a, um, an FCC testing range, uh, to, to test this vulnerability, to see if we could replicate it and if it was actually legitimate.
 

And so there are, uh, there are cases where this is, this is common, and it's, it's these little things that you don't... how would you know [00:28:00] if you didn't know? Um, and so I think that's, that's kind of where the, where the laws kind of need to, to, to pick up, uh, to, to really kind of modernize themselves. I do think there's been a lot of progress in this.
 

Um, so last week, uh, the state of Nebraska put forth a bill, um, saying that they were going to, um, they're going to hire security researchers, hackers, to actually test their state infrastructure. And this is a... you know, this has never happened. And of course, you put forward a bill... like, you can imagine how that went down on, you know, the floor there. They're, they're putting forth a bill to allow hackers to test their infrastructure.
 

And everybody in the room was probably, oh, oh my gosh, aghast. What, what are we doing? Um, and, and that's, that's because there is, there's the disconnect. And you said, is it the, is it the researchers? Is it the companies? Is it the states? I think it's the population. I think it's the community.
 

Um, I don't think that, that the technology, uh, literacy has, has gotten to a level, um, where it, it's not shocking, uh, for folks to, to hear, oh my God, we're gonna have hackers test the state infrastructure. [00:29:00] Um, this is, in my mind, an example of positive progress in this space, because it's saying, there are bad actors who are testing all the time.
 

So if we have the ability to leverage people who... even if they're, even if they are bad actors: I, I am not here to decide who's a goodie and who's a baddie. I'm here to decide whether or not the information they provide me is technologically viable. And if it is, then I wanna fix it, right? And so if we remove the who-is-and-who-is-not, um, from our calculus completely, and we say, somebody is willing to give me information that may help me secure, um, a network, that may help
 

an end user be more secure at the end of the day... uh, I'm willing to accept that, um, because I think the net benefit is there. And so I think we're, we're making progress, right? But it's still shocking. It's still something that's, oh my gosh, uh, we're gonna have hackers test our network. And I, I think that that's a very bold move, uh, on, on the part of the state of Nebraska, and I commend them for that.
 

I think that was very... it's, way to be a good [00:30:00] leader.
 

Harley Geiger: I think some of the, the... that reaction, to your point, is the, is the word hacker, and the, like... I don't know, the, the mythos that's built up around it as, as being, um, you know, at least somewhat criminal. And I think that this is also encouraged by the hacker community.
 

Katie Noble: They haven't done themselves any favors in that space.  
 

Harley Geiger: Yeah. Kind of like the, the image of sort of skating the edge of legality. Um, uh, but it does, I think, also, uh, hamper, hamper acceptance of, of the, you know, the, the, the ethical and beneficial aspects of hacking. Hacking is supposed to be a neutral term.
 

You know, we are, we are out to stop malicious hackers, but the work of good faith hackers is, is beneficial to us all.  
 

Sean Martin: I completely agree. Completely agree. We, we, uh, try to make the distinction between criminal and, and hacker wherever we can. Um, I, I want to look at, [00:31:00] um, I don't know, some, some of the trends in how things are delivered in terms of technology.
 

So, clearly, cloud has taken over a lot. Most stuff is run as a service. Um, and we, we see that now, uh, through shared services via APIs. Uh, applications being built with a lot of stuff, including open source. Um, I dunno if we talked about it before, or mentioned it during, but, uh, certainly AI, right? As a way to... another way into, perhaps, an organization and its, and its systems and its data.
 

Um, how, how have those changes impacted both sides of the fence? Uh, opportunities for researchers to find more things, uh, in an increased, um, exposure field, if you will, [00:32:00] and then an organization in terms of how they define the scope within their
 

disclosure program. 
 

Harley Geiger: So I think that, for organizations that are defining the scope of their program, if, if it's an asset that they want,
 

uh, to, you know, to be protected, and one where they have a, that backend vulnerability management process, uh, overseeing the, the, uh, you know, it, its security, then it's worth their while to have a CVD process that encompasses those assets. Um, and I think that, again, it comes down to the individual organization, whether or not they're
 

putting forth the effort to keep it, keep that CVD program aligned with their expanding range of services. Um, that includes cloud; that also includes artificial intelligence, as they are, as they're taking those on. Um, I'd say that, uh, at least, you know, two trends I would point out: [00:33:00] one is that, um, there are an increasing, an increasing number of regulations that require it.
 

So organizations, I think, are getting more, uh, used to applying this to their, to their, uh, services. Um, one example is the Cyber Resilience Act in the EU, which is going to apply to, uh, all software, um, and IoT, and will require a coordinated vulnerability disclosure policy. Uh, the UK just put out something similar for smart connected devices.
 

Um, the, for AI, I think...
 

Sean Martin: I'm sorry, I think that hits this weekend, in fact.  
 

Harley Geiger: That's right. Um, and it's a... and these are, you know, these are, you know, touching broad swaths of, of the consumer market, uh, including associated cloud services, right? So it's not just the, the physical device you're holding in your hand, but also, like, the mobile app that it's attached to, the connected remote processing for the regulated device.
 

The other trend that I would point to on, on AI is that we are, [00:34:00] uh, seeing the rise of, uh, disclosure of not just security vulnerabilities, but, uh, things that I would characterize as algorithmic flaws. Um, so bias, discrimination, uh, toxic and, uh, other harmful output, like, um, synthetic child pornography. Uh, and, and, you know, other, other items that, you know, I think we, we recognize as important, uh, to mitigate for the responsible deployment of AI, but that don't necessarily sit within the sort of security and safety, uh, uh, realm that we have built up over the past nine years.
 

Um, but we're seeing similar processes being put there. So, coordinated disclosure that involves, uh, you know, bias, discrimination. Um, but also, uh, bias bounties. So instead of a bug bounty, it's a, it's a bias bounty. Um, in some ways the law, I think, needs to catch up to, to that practice. Um, and, and I think it will.
 

Sean Martin: Is it... does the DMCA come back into play there? [00:35:00] 'Cause I'm thinking of copyright information again; producing a new Mickey versus an old Mickey that's in the public domain, as an example, right?
 

Harley Geiger: It sure does. Um, but it, it comes into play in, um, a different way than you just described. So, you know, back to, back to DMCA Section 1201.
 

Um, that exception, which, uh, in many ways, you know, helped, really helped to, uh, uh, further the movement to, uh, to get protections, uh, across the ecosystem, is limited to security testing. So if you are testing AI for something like racial or gender discrimination, or synthetic CSAM, is that a security vulnerability?
 

And I think, at best, let's just say it's not clear. And, um, and that lack of clarity means that we need to have a discussion about the scope, uh, of the security testing protections, and whether or not they cover these other non-security harms that we agree have to be [00:36:00] addressed for trustworthy AI. And that is, that conversation is happening with the Copyright Office right now.
 

Um, I literally wrote something on it today. Uh, there's, there's a hearing that's taking place. They're considering a petition for, uh, providing protections to researchers that are engaged in that generative AI, uh, research. So the jury's out on that. We, we kind of have to see. Um, but certainly there is, uh, movement on, uh, the, the testing aspect of it.
 

So, generative red teaming and testing AI for both security and non-security. But we don't yet have, uh, clarity and specific legal protections for both testing on an independent basis and information sharing, the way that we do for, uh, security. Go ahead.
 

Katie Noble: I would just, I would just add in that, like, I, I think that I, I see a lot of, you know, um, a lot of fear around AI.
 

[00:37:00] Um, and, and I think maybe that fear originated in the idea that it provides a, a, an unfair advantage to an adversary. And while I will admit that that's a fair concern, I also think that... I'm not yet convinced that that's accurate. I don't necessarily think that AI provides a, an asymmetric kind of, uh, capability to an adversary.
 

I think that AI is an, or, AI is an enabler. Um, and if we, if we take... I hate to be overly rosy and, and, and positive about it, but if we take AI as a, as a net benefit... I mean, machine learning has been in existence for a very long time. There was a very cute little meme recently about, uh, about, uh, AI and, and little, you know, Microsoft Clippy.
 

Um, so machine learning's been around for a pretty long time. It, it's becoming more and more advanced, but generative AI kind of was a watershed moment, so it's a little bit more shocking, 'cause it's getting more, um, more into the public sphere. Um, but I think the capabilities there are, are startlingly wonderful.
 

Um, and, and it, it really does enable a defender [00:38:00] equally as much as it enables a, a, an adversary. Um, so from the defender perspective, uh, AI allows you to do adversary emulation at a much faster rate. And I think, ultimately, everything that you've said stemmed from, um, the original question, you know, looking at services and the delivering of services, um, and, and what the trends are. Um, I think that everything boils down to: it's happening faster.
 

It's happening faster than we thought it would before. So services are at your fingertips now where you used to have to go and, you know, go through several steps in order to access a, a portal. Now it's just an app on your phone. Um, these things are all being delivered much faster, which means we have to keep up with them faster. 
 

Um, and I think that tools like AI allow us to secure the infrastructure faster. Um, so I, I think it's a, it's about how you think creatively, um, from a security perspective. It's about how you think creatively about using these tools to help protect the infrastructure just as much as it's about thinking about how an adversary would use these tools to, um, to cause [00:39:00] harm to the, to the ecosystem. 
 

Sean Martin: And Katie, I wanna stick with you, 'cause we kind of painted a picture that there's a bit of navigation required on the researcher's side. Um, you said, well, maybe not in these words, but it's impressive that a, that a hacker knows the laws and, and how and where to engage and not. Um, I don't think it's all rosy on the, uh, on the service provider side, though, either.
 

Um, what's feasible, what's not feasible? And it, it brings to mind, to me, maybe countering laws. Or, I'm thinking of things like privacy laws: that you might have a coordinated vulnerability disclosure program that now you can't have, because of some statement in a privacy law that's, that, uh, prevents you from doing something.
 

So are there things like that, that make, [00:40:00] make you reevaluate your current program? Think differently about how you create a new one?  
 

Katie Noble: So I think that you've hit on something that's very important, and it's the evolution of, uh, of legal and regulatory compliance. And I think that, um, ultimately, the, the laws of the nation are set.
 

Um, we can have some, some participation in, um, identifying implementation and helping to influence the way that the, the policy is written. Um, I, I spend a lot of my life being what I refer to as a cybersecurity activist, and I, I work with policymakers... um, about twenty-five percent of my life is working with policymakers to help inform conversations, um, and inform policy.
 

So if we can write a better policy, then that can be given to an agency to implement more safely and better and more efficiently. Um, from an industry perspective, you are always trying to evolve your, uh, your policies or your programs and, and, and, um, uh, your product, to make sure that you're staying within
 

what is the, the accepted, um, kind of legal and regulatory [00:41:00] compliance. And it's, it's about doing those things, um, efficiently and at, at the best value, and delivering the best product to the customer, ultimately. Um, so I, I think oftentimes people see regulatory, um, regulatory kind of compliance, uh, laws and standards as
 

somewhat punitive. Um, and I think it's important to switch our mind around and say, like, there's a net good here. This is why this, this, uh, this law or this regulation was created, and I need to see the net good in it. And it may take a little bit for, for me to adapt my product to be able to comply. Um, but that's always the goal, right?
 

The, the goal is to deliver a product to a customer that is efficient, valuable, and safe. Um, and, and you wanna do that, um, by, by being in the conversation, and by enabling the conversation, and by working cooperatively with the people who are making those, uh, those policies in the beginning. Um, if you can do that, if you can be active, um, active with the people who are writing the standards and the regulations and [00:42:00] the artifacts, then you have a much better chance of
 

enabling the correct words, right? And I, I say this because we started with my definition of the word vulnerability, and the reason that I start with that is because, when I was in the government, it was a regular occurrence that I would have to say, I appreciate that you're thinking of vulnerability in this sense, but when you write it like this, it equals that.
 

Um, and that is something that has to be understood. And if we can enable those conversations and enable that writing to be more, um, more balanced and more receptive to the needs of today and the needs of tomorrow, more evergreen, um, then I think we have a much better case and a much better capability to deliver a secure product. 
 

Sean Martin: And as we, as we wrap here, I want to, um... I think those are great points, Katie. 'Cause we're talking about finding the good in it, right? And, and really embracing that. And Harley referenced a couple of, couple of laws that kind of nudge us in that direction, right? We're gonna, we're gonna help you find the [00:43:00] good in it.
 

Katie Noble: Sometimes that's needed. You know, sometimes there's a balance between a carrot and a stick. And you gotta think past the, how does this impact me right now, and think, how does this impact the good of, you know, humanity later. And I think sometimes there's that, there's that disconnect. We think it's, me, right now, but in reality, the ecosystem is so much bigger than me.
 

Um, and sometimes I think people lose sight of that. And that's, it's very important.
 

Sean Martin: And so where, where I wanna go to wrap is: I know there are organizations and standards and frameworks for a lot of stuff in cyber. Are there any things here that organizations can leverage? Uh, things that come to mind are Disclose.io, or maybe SBOM stuff, or CMMC stuff that the government's doing.
 

Are there any, any things you can reference to help our, our audience kind of get a handle on how to get started and, and do this [00:44:00] properly?
 

Katie Noble: I have a couple off the top of my head. Um, so, okay, wonderful. So, um, the Forum of Incident Response and Security Teams is an international organization. So, FIRST.org. They're an international organization, and they are the organization that often sets standards for, um, for cybersecurity principles.
 

So, for instance, CVSS, the Common Vulnerability Scoring System, is set at FIRST. Um, they have, I think, about twenty-five special interest groups. Um, and they range from everything from, uh, product security incident response, to an ethics
 

They're a membership. Um, so there is a fee sometimes, but a lot of the SIGs also are pretty flexible and you can join a SIG without being a first member. Um, the other thing is from the bug bounty perspective, there is an organization that is. Bug Bounty platform Agnostic, and that's the Bug Bounty Community of Interest. 
 

Um, and they're made up of managers from multiple different companies that all, um, work in the bug bounty space. And so [00:45:00] they're trying to understand and learn from each other about challenges, and how to evolve the, evolve the community in a positive way. Um, those are the ones that come to mind really quickly.
 

So there are, um, other industry groups. I would say there's always, there's always somebody who has, uh, who has started down the path. And so, um, don't try to recreate the wheel. Um, leverage the assistance that's, that's already started for you.
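Since Katie mentions CVSS, a rough sketch of the CVSS v3.1 base-score arithmetic may help ground it. The weights below come from the FIRST specification, but the function is deliberately limited to scope-unchanged vectors; the full standard (first.org/cvss) also covers changed scope and the temporal and environmental metrics:

```python
import math

# Metric weights from the CVSS v3.1 specification (scope unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality / Integrity / Availability


def roundup(x: float) -> float:
    # CVSS "round up": smallest number with one decimal place that is >= x.
    return math.ceil(x * 10) / 10


def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))


# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

Running it on the classic worst-case network vector prints 9.8, the familiar critical score.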
 

Harley Geiger: Oh, well, I mean, those are, those are excellent examples. I think that the... there's, there's also plenty of written guidance out there by both, uh, uh, government sources, uh, like the Department of Justice or Department of Commerce, um, but also, uh, from, uh, MITRE, uh, Carnegie
 

Mellon. They, they have, uh, good guides on, uh, establishing coordinated vulnerability disclosure processes. Um, this is... these practices have been around for, uh, for quite a long time. There are some excellent standards on them, um, like, uh, two ISO/IEC standards, 29147, um, and, uh, 30111.
 

Uh, they are, [00:46:00] uh, frequently referenced in, uh, regulation and best practices. So I think if, if you're looking for, looking for resources, you're not gonna come up short.
 

Sean Martin: Nice one. Well, you, you, you probably think you're done once we end this recording, but you'll have one action item at the end, which is to provide a, provide a few of those for our folks, and I'll include them in the show links, uh, show notes, so everybody can, uh, get a head start on it without doing a search.
 

But, um, fantastic conversation. I know we covered, uh, a lot of things from a few different angles. Is there anything we didn't touch on that, that you think folks should know before we wrap? Sanctions?
 

Katie Noble: Oh God. All right. That's... sanctions is my vendetta, and that brings it up. Do you want me to bring it up, Harley?
 

Harley Geiger: Well, there's, there's two, two aspects. One... so I, I'll cover one, which is: okay, uh, you know, if you're, if you're [00:47:00] going to be paying a researcher, uh, just like if you're gonna be paying a ransom, uh, make sure that, uh, there's no indication that it is a sanctioned entity, uh, to avoid OFAC sanctions.
 

Uh, that is, that is one aspect of your, uh, you know, your due diligence in, uh, working with, uh, with researchers. Um, Katie, I'll leave the, the second aspect to, uh, to you.  
 

Katie Noble: Right. So, um, the U.S. government sanctions regime... I'm not gonna get into the how, or why, or the motivations. Um, they have a job to do, and I understand that and I respect that.
 

Um, the problem becomes that there is no legal mechanism for, um, companies to speak with individuals who may be tangentially associated with, uh, organizations on the SDN list, um, or those that are in embargoed countries. So you have security researchers, especially in an embargoed country; those folks are often taking a great personal risk on themselves to even be willing to, uh, to communicate that vulnerability to a company outside of, uh, that embargoed [00:48:00] country.
 

Um, and, and so the problem becomes that companies do not even have the ability to, um, to receive that information and to, uh, respond to that information. Um, uh, so, separately, individuals who are tangentially associated with those on the SDN list... so, uh, what this means is, some folks are, are named by name on the SDN list.
 

Um, those folks are not often the kind of people who are reporting cybersecurity vulnerabilities; they tend to be very high-level individuals, high in, uh, foreign governments. Um, and they're on a sanctions list for a very, probably a very, very good reason, and an entire investigation has put them there.
 

But oftentimes there are companies that are also on the SDN list, and those companies, um... I won't get into the why they're on the SDN list. But if you have an employee who used to work at a company that is on the SDN list, and I have an email address that has, you know, employee-one at
 

sanctionedcompany.com, from a company perspective, from a vendor perspective, I can't speak to that [00:49:00] person, um, because I know that they're on the SDN list. And you can have, um, you can have a normal conversation. You can say, hey, how's the weather in... whatever. That's not a sanctioned conversation. But as soon as you say, hey, I received your vulnerability, uh, submission that you sent me, the proof of concept, and I'm going through it, and I'm looking at it, and, are you talking about version one or version 1.1?
 

That's now considered a sanctioned conversation, um, because you're engaging in what's called technology exchange. Um, and technology exchange is a forbidden act. And so, short of going to the Department of Treasury and securing a license, um, which takes a lot of time, there is no mechanism for, uh, companies to speak with good-faith security researchers, uh, to clarify and, uh, and receive that vulnerability information.
 

Um, and I think that there's room for clarification in U.S. government regulations, uh, to, to enable a legal path to, uh, be able to receive that good-faith security research.
 

Harley Geiger: And just to be [00:50:00] clear, the situation that Katie's just described does not involve a payment, right? You just wanna receive the vulnerability information; you're not, uh, talking about paying anybody that is in any way associated with sanctions.
 

So I think these are... we covered three big areas that could be improved for, uh, for coordinated disclosure, to reduce liability both for the organizations receiving disclosures, um, but also for the people who are performing them: state laws, uh, figuring out the intersection of AI and, uh, and these laws, and then sanctions. Um, you know, if I had to identify a top three, those are, I think, my top three right now to, uh, to move this space forward.
 

Sean Martin: Nice. And of course, uh, it, it takes all of us, right? A lot of it will be driven by, by policy and regulations. So, uh, share your thoughts with the appropriate folks for that, and understand how your own business is impacted by these. Uh, certainly be prepared for that; that'd be my advice. Um, [00:51:00] thank you, Katie, Harley.
 

Really appreciate it. And, uh, thanks, everybody, for listening and watching. Of course, I appreciate you subscribing and sharing and, uh, staying tuned for many more. Um, absolute pleasure having you both on, and, uh, hope to catch up with you again soon. As the landscape, uh, changes, as new trees grow, we'll come back together.
 

Thanks everybody.