Innovating at Speed: Control & Risk Management Guidance

Overview

Innovating at speed requires ensuring enterprise readiness, from development of a modern technology stack and data program, to identifying and addressing any bottlenecks or barriers that could prevent you from reaching your AI objectives. Control measures and risk management practices need to be developed, communicated, and monitored. Auditing of GenAI results to ensure accuracy and minimize bias also needs to become standard practice. This discussion provides guidance for creating a checklist of challenges to watch out for, so you can lead through the generative AI disruption with confidence.

Transcript

00:00  
[This transcript was auto-generated.]
 
00:08
Hi, my name is Mary Pratt. I am a writer with cio.com. Today I am speaking with Mary Carmichael and Goh Ser Yoon. We are going to be talking about the controls and governance measures needed around AI, and particularly generative AI. Both are members of the governance association ISACA. Mary is a member of both the Emerging Trends Working Group and the Risk Advisory Committee with ISACA. Ser Yoon is a member of the ISACA Emerging Trends Working Group. So, to both of you, thank you very much for taking the time to speak with me today. I really appreciate it; you're calling in from multiple locales here, so we're coordinating across multiple time zones. I'm going to jump right in to maximize our time together. We know that CIOs, CISOs, and tech leaders all have to innovate at speed. And they have to ensure enterprise readiness, from development of a modern technology stack and a robust data program, to identifying and addressing the barriers that can prevent them from reaching the organization's objectives. This is all clearly true as these executives work to harness AI's power, and particularly the transformative potential of generative AI. Now, despite all the excitement and buzz around GenAI, and AI in general, we also know executives are talking about the risks, from hallucinations to data security and privacy concerns. Those are big-ticket items on that concern list. In fact, we've all talked about these issues for a cio.com article we wrote about the issues facing organizations as they delve into their AI strategies. So today I want to focus a little more specifically on how IT leaders and security leaders should manage these issues. What are the control measures and risk management practices that really need to be developed, communicated, and then monitored to ensure AI delivers value, does so as the organization wants, and does not perform in ways that are problematic to the organization? I know there's a lot there, so let's first focus on a few of the biggest risks that organizations need to manage with AI. I'll turn it over to either of you to jump in. What are those top-of-mind risks that organizations need to be mindful of? There's probably a whole bunch, but what are the ones that really bubble up to the surface?
 
02:46
I'll jump in here. I think, to the question you asked, we're focusing on a different dimension of risk management: strategic risk management. From a corporate strategy point of view, a business model perspective, how does AI affect the way we compete? And are we able to use generative AI or similar technologies to achieve a competitive advantage? Traditionally, with risk management, we tend to focus more on technology risk and compliance, and now we're focusing on big-picture risks: what are the big threats that can impact our company moving forward, understanding the capabilities of AI and the issues we need to manage, and doing continuous risk monitoring to ensure that we have risk treatments in place. But at the end of the day, it's all about knowledge. You need to understand what generative AI is in terms of its capabilities, understand its use cases, especially from a communications and coding perspective, and understand where its strengths are. At the same time, when we talk about risks, risk by definition means it has yet to happen; here the risks have occurred, so now there are issues. Hallucinations, misinformation, and biases are well documented. These issues are real. So part of it is understanding these AI systems, and this is where we need to look at responsible AI: understanding the data sources, the data quality, the biases in the data sources, and how we're going to cleanse that information to make sure that we don't have a garbage-in, garbage-out approach to how content is generated through AI. It's a very meaty question, but I like to focus from a top-down perspective, on strategic risk management, because this is an exciting area for me: working with the executive team on our capabilities to be successful as we move ahead with this new frontier of technology.
 
04:37
I'll offer a slightly different perspective in terms of the impact and what generative AI actually brought to the table when it was first created. From my perspective, it's amplifying some of the existing risks the industry has already seen. When each organization approaches the development of its respective AI models, there is the risk that those models are misused, and that could be amplified further by abuse of the model against society. So it's not just about a particular organization's perspective now; it has the potential to be amplified to affect even a country. There have been examples of AI being used to create deepfakes and impersonations. In the traditional perspective, you can see that technologies in the past have been used to target specific individuals, but now that can be massively amplified to target the societal level, the country level, or even a region. So the potential for AI to be misused, as well as, of course, the benefits and advantages it could bring, is very real. That's why the big debate came about as this technology is now being put into place by a lot of organizations.
 
06:24
And I think that's what makes it so complex: depending on what lens you're viewing this problem from, whether it's from a society point of view, cybercrime, law enforcement, or from an organizational point of view, there are different pros and cons. And it kind of goes back to control measures. Using the EU AI Act, there are different governance models you can use to understand what the risks of this technology are and what controls we need to implement. With the AI Act, it's a risk-based approach. Let's say you're looking at using AI for your email filters; that's considered to be a low-risk use of AI, and that influences what your requirements, standards, and oversight are for that technology. However, if you're using AI for medical purposes, like diagnostic purposes, then under that act you have to meet certain requirements, standards, and oversight to support that model. So I think that's where we can learn from the Europeans and other regulatory acts in terms of how to do this risk assessment and understand what controls need to be in place, depending on the level of risk that system has in terms of the role of AI.
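For readers who want to see what that kind of tiering might look like in practice, here is a minimal sketch in Python that maps use cases, such as the email filter and medical diagnostics examples above, to illustrative risk tiers and the oversight each tier would trigger. The tier names, mappings, and required controls are assumptions for illustration, not the EU AI Act's actual classifications or requirements.

```python
# Illustrative sketch only: hypothetical risk tiers inspired by a risk-based
# approach like the EU AI Act's. Tier names, mappings, and controls are
# assumptions, not the Act's actual classifications or requirements.
from dataclasses import dataclass

@dataclass
class RiskTier:
    name: str
    required_controls: list[str]

TIERS = {
    "low": RiskTier("low", ["acceptable-use policy", "basic logging"]),
    "high": RiskTier("high", [
        "documented design and testing",
        "bias and explainability review",
        "human oversight and sign-off",
        "continuous risk monitoring",
    ]),
}

# Hypothetical mapping of use cases to tiers, echoing the examples above:
# an email spam filter as low risk, medical diagnostics as high risk.
USE_CASE_TIER = {
    "email_spam_filter": "low",
    "medical_diagnostics": "high",
    "loan_application_scoring": "high",
}

def controls_for(use_case: str) -> list[str]:
    """Return the illustrative controls an organization might require."""
    tier = TIERS[USE_CASE_TIER.get(use_case, "high")]  # default to the stricter tier
    return tier.required_controls

if __name__ == "__main__":
    for uc in USE_CASE_TIER:
        print(uc, "->", controls_for(uc))
```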
 
07:29
So let me pause here, Mary, and ask about the AI Act. I know there is a lot of discussion around legislation, and there's a lot of talk about needed guidelines. Can you tell our audience just quickly a little more about the AI Act, where it stands, what it does, and how organizations anywhere could potentially use it?
 
07:46
Yes, with the AI Act, this is where the Europeans are ahead of us. There's going to be a two-year cycle for businesses to get ready for the AI Act once it has been approved by the EU, and part of that is the state of readiness of organizations and setting up the regulatory framework to support the AI Act. Within the AI Act, again, it's risk-based: there are detailed steps you need to take to understand what your system is, how AI is involved in it, and the level of impact the AI system has on decision making within that application. There's a continuous risk assessment approach, and from there you determine the level of impact. From a healthcare point of view, or if it's making decisions about applications, say who gets a bank loan or who gets accepted to a university, the act will mandate that you have certain prerequisites in place in terms of how you design your system, how you build it, and how you test, sign off, verify, and validate that the AI model is operating as designed. What I mean by operating as designed is that it's explainable: understanding what role the AI system has in decision making, how it came to the decision, and whether the decision is free from bias. It goes back to transparency, understanding what rules are in place to make that decision, and whether or not the decision is appropriate given
 
09:11
the outcome. So let me put that aside for a minute; we could be here for hours talking just about that act alone and all the pieces to it. I want to go back to something you were talking about, and that is some of the risks at the societal level and some of the risks to the organization. One of the experts I talked to said there's risk in not using AI at all; there are risks across the whole spectrum of AI use. I do want to ask: for most organizations, are the risks they need to consider much more about their internal operations or about external-facing use cases, like with their customers? Do they have to worry about these bigger societal impacts and how those could bring risk into their organization? And are some of the controls, either from the AI Act or from emerging best practices, ready to address that full spectrum of risks an organization has? There are so many questions here I could keep going through, but I'll ask one more. Do you think organizations, and I'll put this first to Ser Yoon, but then Mary, please respond as well, do you think most organizations are getting their hands around the full spectrum of risks being presented by AI? Especially the ones people think may potentially be a risk but that we haven't seen yet. The first time I heard about AI hallucinations, I thought, wow, I didn't even know that was going to be an issue, and here we are. I know there's a lot there, but maybe you can jump in on the range of risks that organizations need to think about and how some of those frameworks, like the AI Act, are going to help with that.
 
10:48
Sure, sure. Okay, so to address that, there's the internal perspective and the external perspective. Externally, the controls and the risks that will affect each organization with an interest in implementing AI will ultimately be influenced by regulations and compliance. In my opinion, regulations are coming, as the EU has demonstrated, and more countries will follow suit. I think that would be a good path; eventually it will help manage the potential societal risk that AI may pose, and it will help guide organizations towards having a proper framework to implement when they want to deploy generative AI. From an internal perspective, I think the foundational baselines and the operational controls are still the same: when organizations decide to embark on AI model development, transparency especially needs to come into play. And I think there are also benefits for organizations to band together. Given that this is a very new emerging technology, we still don't know what the impact is, what the new consequences are, how this technology may be abused by different threat actors, or whether there could be unintended consequences that spark new risks that society and organizations may not be aware of. So companies should come together. There are different forums that have been created by the various leaders and companies that are pioneers in these AI models, and they are discussing the guardrails, the basic ground rules, that most organizations should consider when they are embarking on these journeys. Guardrails such as being transparent about how they have gathered the data for building their AI models, how they are going to secure that data from being leaked or breached, having proper data sanitization, and making whatever adaptations are required to the cybersecurity program to protect these AI models. Mary,
 
13:48
do you have anything to add to that? Any ideas on moving forward: once organizations understand the risks, how do they implement some of those controls and measures?
 
13:58
I guess the question is, what are we working towards? ISACA is promoting digital trust. Are these organizations working towards digital trust? Part of that is making sure our customers, our employees, and our stakeholders have a level of confidence when they're engaging with, and especially using, these generative AI systems. Also, back in April, the G7 nations met and agreed to work towards a responsible AI direction. So the question is, what are we working towards, and are we on the same page? When we throw around terminology like responsible AI or explainable AI, what does that mean, and how do we get there? For example, with OpenAI in the past few months, they've been releasing a number of changes to ChatGPT. Are they using responsible AI to support those releases? What is the level of testing or transparency in that model? There are things right now that organizations can be doing to adhere to those elements of responsible AI, but I'm not seeing that. I also want to highlight the role of the cybersecurity community. DEF CON, a large cybersecurity conference that happened a few weeks ago, had over 2,000 hackers looking at AI chatbots, exploring their vulnerabilities, and sharing the results. We have OWASP, an application security group, which shared a top 10 of vulnerabilities for large language models. So within the cybersecurity community, there's a lot of sharing of information, testing and probing these systems, and sharing the results. I would highly recommend reading the OWASP large language model vulnerability list, because it talks about the impact, how prevalent each issue is, and the potential remediations. I think the fact that we have this community, and especially ISACA, sharing this information and these capabilities, I'm hoping that will give organizations a leg up.
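As a rough illustration of the kind of technical guardrails that the OWASP list discusses, here is a minimal sketch of input and output checks wrapped around a generative AI call. The blocked patterns, the length limit, and the call_model placeholder are hypothetical assumptions for illustration, not OWASP's recommended remediations or a production defense.

```python
# Minimal sketch of input/output guardrails around a GenAI call.
# The blocked patterns, length limit, and call_model() stub are illustrative
# assumptions, not an OWASP-endorsed or complete defense.
import re

BLOCKED_INPUT_PATTERNS = [
    r"ignore (all|previous) instructions",   # naive prompt-injection heuristic
    r"\b\d{3}-\d{2}-\d{4}\b",                # looks like a US Social Security number
]
MAX_OUTPUT_CHARS = 4000

def call_model(prompt: str) -> str:
    """Placeholder for the actual generative AI call."""
    return "model response for: " + prompt

def guarded_generate(prompt: str) -> str:
    # Reject prompts that match any blocked pattern before they reach the model.
    for pattern in BLOCKED_INPUT_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            raise ValueError("Prompt rejected by input guardrail")
    response = call_model(prompt)
    # Truncate overly long output; real systems would also log it for review.
    return response[:MAX_OUTPUT_CHARS]

if __name__ == "__main__":
    print(guarded_generate("Summarize our acceptable-use policy"))
```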
 
15:41
Let me turn the conversation a little bit from some of these extremely big-picture issues to the organization, and how they begin to get their arms around this. One of the things you mentioned about the cybersecurity community brought to mind: who in the organization needs to take charge of this? I do want to begin to talk about, probably at a very high level, some very specific practical measures that an organization can take. So what team needs to be addressing this? And if you had to bullet-point or create a very high-level checklist of the action items an organization needs to take to begin to put controls and measures in place, what are some of those, again very high-level, almost universal items organizations need to do? Then I want to bridge into monitoring. Once those are in place, how does an organization keep pace with regulatory changes, changes in vulnerabilities, or other issues arising, and monitor to make sure those controls are working? Again, a lot of big questions there. But maybe start with that practical piece: who's in charge, and what are some of those bullet-point items that either of you, whoever wants to jump in first, would put on the list of action items that team needs to take, and take soon?
 
17:01
Currently, at least from what I've seen at most companies, such initiatives are led by the CIO or the CTO from the IT side in developing such AI models. Of course, there will also be involvement from product teams. But eventually I think there will also be role changes or evolution; just as previously you didn't have the CISO role, and such a role appeared because of the evolution of cybersecurity, eventually there could be a lead or a role specifically designed to manage AI. I think this was also one of the highlights from the white paper ISACA recently released. Nevertheless, even if there is a particular responsible person in charge or set of stakeholders for the AI program, there still needs to be a model that involves all the stakeholders at the management level, from the C-suite, to make sure that everyone is aligned on the roadmap for AI, and hence on the north star: what does the organization intend to achieve when it embarks on this AI program or builds this AI model, and where does it intend to go? So mainly I feel that it's still a collective approach, where we require collective responsibility across the board
 
18:53
for the risks and the controls of the AI, not just for developing the models. Yes, that's right. What about you, Mary? If you want to jump in: are there some high-level bullet-point items you would put on a checklist to ensure that there are appropriate risk controls and measurements in place to guard against risk?
 
19:12
I think it goes back to who's accountable. I'm from Vancouver, and our mayor announced that he wants to hire a chief AI officer. So the question is, as we launch these AI programs, do you now have an executive lead? As part of that, the executive lead would be responsible for understanding AI capabilities and where they fit into the value chain of products and services, and with that you would do a risk assessment. That executive position will be responsible for continuous risk monitoring: understanding where AI may potentially fit into a specific use case or scenario, such as customer service or knowledge management, and from there doing the risk assessment and the cost-benefit analysis to determine what controls should be in place. Controls can be an acceptable usage policy, staff training, or technical controls such as limiting what types of searches or queries you can run with the generative AI solution. The key thing is you need to strengthen your risk management capabilities and have a risk assessment process; NIST came out with an AI risk management framework that you can use for your organization. Part of that is moving forward with a risk mindset: how can we deploy these technologies but also minimize potential vulnerabilities? As for metrics, that goes back to what you measure. With this technology, what do you hope to gain, whether productivity, time savings, or quality, but also, from a risk point of view, what are the risks we're trying to minimize, especially in terms of false positives or false negatives in the outputs of the model? So part of it is that it's going to be an AI program, similar to the digital transformation programs of the past, with roughly the same steps, but the benefits are probably a bit larger.
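To make those technical controls and metrics concrete, here is a minimal sketch, under assumed category names, of an allow-list check that limits which query types reach a generative AI tool, together with a simple false-positive and false-negative tally for monitoring reviewed outputs. It is illustrative only and does not reflect any specific NIST AI RMF artifact; the classify_query heuristic and category names are placeholders.

```python
# Illustrative sketch: a simple allow-list control for GenAI query categories,
# plus a basic false-positive / false-negative tally for output monitoring.
# Category names and the classify_query() heuristic are assumptions.
ALLOWED_CATEGORIES = {"customer_service", "knowledge_management"}

def classify_query(query: str) -> str:
    """Very rough stand-in for a real query classifier."""
    if "customer" in query.lower():
        return "customer_service"
    if "policy" in query.lower() or "how do i" in query.lower():
        return "knowledge_management"
    return "other"

def is_permitted(query: str) -> bool:
    """Allow only queries whose category is on the approved list."""
    return classify_query(query) in ALLOWED_CATEGORIES

def error_rates(results: list[tuple[bool, bool]]) -> dict[str, float]:
    """results: (model_flagged, actually_true) pairs for human-reviewed outputs."""
    fp = sum(1 for flagged, truth in results if flagged and not truth)
    fn = sum(1 for flagged, truth in results if not flagged and truth)
    n = len(results) or 1
    return {"false_positive_rate": fp / n, "false_negative_rate": fn / n}

if __name__ == "__main__":
    print(is_permitted("How do I reset a customer password?"))
    print(error_rates([(True, True), (True, False), (False, True), (False, False)]))
```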
 
21:04
Well, you referenced a digital transformation program, and that brought to mind one of the questions I wanted to ask. Very similar, again, to digital transformation is the balance between managing risk and allowing innovation, and not being a barrier to that innovation. Do you feel there is a risk of having too many risk controls that impede innovation and the advantages of using this technology? Or do you think the balance right now leans towards having too few controls, so that we allow not just innovation but also a lot of the negatives we're fearful of with AI? Is there a need for balance, and how do you feel people are striking that balance right now? Are they hitting it spot on, or falling to one side or the other?
 
21:55
Yeah, I think you're right. Maybe, as the EU actually pointed out, different risk levels are assigned based on the risk categories of the industry. Certain industries are critical industries; to me they're more sensitive and require more guardrails and more controls, and hence those industries may need to put more controls in place and do more risk assessments up front before they actually deploy, because they have bigger impacts: it might affect safety, or it might risk larger financial losses. Those industries, especially those that involve personal data, will need to take more steps to put controls in place and perform risk management, versus perhaps a more generic use that may cause less harm. That could be the balancing act in rolling out such an AI program. What
 
23:09
do you think, Mary? Are we hitting the balance well, or do we need some work? I think
 
23:13
we need some more work? I think part of that goes back to your risk culture, your risk appetite. So once again, now you have to strengthen your risk management practices. So people are like, what's the risk assessment? What's my risk appetite? What's my risk tolerance? What's my metrics? A lot of organizations don't have this type of sophisticated risk management program. So I think that's gonna be the challenge is you need to develop this capability, but also the conversation to talk about risk, because I found when I do my risk assessments, lots of people don't want to share what's what potentially can go wrong, in case it does happen again, in trouble. So part of that is you need to be honest and transparent. share that information, have a healthy conversation about once again, cost benefit, or how do we prioritize this risk? Are we comfortable with this risk? So what's been your appetite? So these are pretty advanced topics that organizations need to need to move towards given what's going to act and this upcoming regulation?
 
24:08
So let me begin to wrap up. This conversation went very fast, and we're just scratching the surface on AI and AI risk. I want to wrap up with a question to each of you, and it is a tough question, but you're going to have to give me a 10- or 20-second response. I want to ask for marching orders. If you had to tell organizations their overarching job now when it comes to AI and managing risk, what are the one or two lines you would send them away with? Ser Yoon, why don't I start with you? What's that quick "go away and do X"? What would you say?
 
24:43
I would say take a step back, and trust but verify at each step as you roll out your AI program. And here is the tricky part: we are in a new generation of ethical mindset, because AI is a different animal altogether. We are back at step one here; just like when we first embarked on cybersecurity, there need to be new ethical principles set out for organizations going forward. Good points.
 
25:20
Mary, what would you send people away with? What are your marching orders for them?
 
25:24
Welcome to the fourth industrial revolution. Accept it, and start planning how we're going to use it and be successful as organizations. Good points.
 
25:33
So I think that is our time for today. I want to thank you both for being on the line with us, and I'm sure we'll have many more fascinating conversations as AI moves forward. It's a very interesting future in front of us, I think. Thanks.