Governance, risk and compliance in times of generative AI: Forrester

Excitement, combined with a lack of clarity on the rules, is creating many unknowns.

The rapid adoption of generative AI, combined with the absence of standardised global guidelines, is giving rise to a multitude of unknowns, according to the latest Forrester report.

The report, Generative AI: What It Means For Governance, Risk, And Compliance, calls on businesses not to fall “victim to these key mistakes” as generative AI rewrites governance, risk and compliance (GRC) guidance.

“GRC pros are already focused on mitigating the downsides, and many have tools and frameworks at their disposal to address the emerging risks of AI,” the report stated.

“GRC pros can’t achieve generative AI success if they fail to keep abreast of global AI regulations, forget that use cases can run afoul of existing privacy and security regulations, and neglect to update employees’ acceptable use policies to include generative AI.”

The report also highlighted the failure to align contractual language with the AI strategy and the tendency to overlook recalibrating risk-reward trade-offs.

The report also noted potential benefits of generative AI, such as the chance for risk management to reinvent itself.

Cody Scott, senior analyst for security and risk at Forrester, told Digital Nation that GRC professionals can succeed with generative AI by first working out “where to start”.

“Every organisation has different goals when it comes to generative AI. Some want to use it internally for employee productivity, others may want to build their models for commercial or internal R&D and innovation purposes,” Scott said.

Scott said regardless of the use case, “GRC will have a critical role in enabling how the business adopts/uses generative AI because GRC should be helping organisations”.

This is done by articulating a strategy, assessing the risks, implementing safeguards, controls and policies to mitigate them, and developing a continuous risk evaluation process to ensure AI use stays within the organisation's risk tolerance and appetite.

“The risk management process stays the same, but determining the use cases for generative AI first will determine the level of effort, skills, resources, and budget needed to balance risk and compliance effectively,” Scott added. 

He added that one early benefit of implementing generative AI for GRC is efficiency.

“GRC processes and resources tend to be extremely (if not painfully) manual in all but the most advanced organisations. 

“This spans everything from conducting a risk assessment, completing security questionnaires, or simply employees trying to review policies to determine acceptable practices. 

“By design, GRC functions collect lots of data across the organisation, but very little of that data is mined to drive decisions,” said Scott. 

“Some of generative AI’s early benefits begin with allowing users to ask questions about this data. For example, a user can ask ‘What are our agreed-upon timeframes for incident response with vendor X in our service level agreement (SLA)?’.

He said, “A simple question to get a simple answer saves time from having to manually find policies and controls when creating a plan”. 
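The query-over-data pattern Scott describes can be sketched in miniature as follows. This is a hypothetical illustration, not anything from the Forrester report: plain keyword overlap stands in for the generative model, and the document names and SLA clauses are invented examples.

```python
# Minimal sketch of answering a natural-language question over indexed GRC
# documents. In practice a generative model would sit on top of this
# retrieval step; keyword overlap is a stand-in. All clause names and
# contents below are hypothetical.

def score(question: str, clause: str) -> int:
    """Count how many words from the question also appear in the clause."""
    q_words = set(question.lower().split())
    return sum(1 for word in clause.lower().split() if word in q_words)

def answer(question: str, clauses: dict) -> str:
    """Return the clause that best matches the question, with its source name."""
    best = max(clauses, key=lambda name: score(question, clauses[name]))
    return f"{best}: {clauses[best]}"

# Hypothetical SLA clauses standing in for a GRC document index.
clauses = {
    "vendor-x-sla-incident-response": (
        "Incident response for vendor X: acknowledge within 4 hours, "
        "resolve within 24 hours."
    ),
    "vendor-x-sla-uptime": (
        "Uptime commitment for vendor X: 99.9 percent measured monthly."
    ),
}

print(answer("What are our incident response timeframes with vendor X?", clauses))
```

A real deployment would replace the keyword match with embedding-based retrieval and pass the retrieved clause to a language model to phrase the answer, but the shape of the workflow — index policy data, match the question, return the relevant clause — is the same.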

Asked how Australian GRC professionals can keep up with changing regulations, Scott said, “it’s a great time for organisations to dust off their risk management strategies and to assess the current state of their risk control frameworks”.

“For example, in cybersecurity and data privacy, it’s common for organisations to develop in-house, custom control frameworks and testing procedures that relate to a few industry standards like the NIST Cybersecurity Framework or ISO 27001 controls.

“However, by overly customising their control frameworks, they make it hard for themselves to pivot when new regulation comes into play.”

Scott said that GRC professionals “don’t usually have a seat at the table when it comes to emerging tech…this is a missed opportunity for the business.”

“Given generative AI’s meteoric rise and importance, we’re going to see many GRC programs struggle to keep up on a cultural, process, and resource level. 

“Australian organisations will need to be as open to elevating the GRC function as they are to adopt generative AI if they want to implement it safely and efficiently without potentially substantial downside issues in the future. 

“If we aren’t balancing risk and opportunity with the right guardrails, we’re opening ourselves up to significant operational, compliance, and regulatory challenges around this technology.”

Scott said, “There’s never been a more positive time to be a GRC professional, but it’s going to require strong organisational processes to determine the right balance between risk and reward.” 

© Digital Nation