IT leaders grapple with shadow AI

Feature
27 Jul 2023 | 10 mins
Data and Information Security | Generative AI | IT Governance

Generative AI promises dramatic productivity gains, but rogue use must be curtailed until IT gains control.

Max Chan knew he had to do something. Soon after ChatGPT burst on the scene in November 2022, Chan realized generative AI would amount to far more than just the latest technology flash in the pan.

With the ability to instantaneously ingest reams of data using large language models (LLMs), generative AI technologies such as OpenAI’s ChatGPT and Google’s Bard can produce reports, contracts, and application code that far surpass earlier technologies in speed, accuracy, and thoroughness. The result: dramatic productivity gains and potentially game-changing business advantage.

“Employees are going to use this. If we don’t do anything about it, they will have no choice but to use it on their own,” says Chan, CIO of Avnet, a technology parts and services provider.

Michele Goetz, vice president and principal analyst at Forrester Research, agrees. “There is a fear of missing out. Even if you say, ‘Don’t use it,’ your employees or customers are going to use it,” she says. “The Pandora’s box has been opened, so it’s best to partner with your employees so they don’t have to hide what they’re doing.”

Despite its immense promise, generative AI can expose sensitive and proprietary information to public view, which could compromise intellectual property and bring regulatory penalties. Moreover, generative AI output can sometimes be wildly erroneous, a failure mode known as “confabulation” or “hallucination.” And because generative AI models pull from myriad sources, incorporating their output into an organization’s corporate content could lead to copyright infringement.

Some of those dangers were realized in April 2023, when Samsung employees inadvertently leaked sensitive internal data to ChatGPT, leading the company to temporarily ban employees’ use of generative AI. The incident put IT leaders on high alert: shadow AI could soon take hold at their organizations if they don’t get in front of it.

Two-track strategy

With those stakes in play, taking a hands-off approach was unthinkable for Chan. Instead, he’s implementing a dual-track strategy: limiting generative AI use through strict policies while rapidly developing and piloting approved, safe applications.

“If someone wants to try it, they have to submit a request and we have to review it, and we will work with them to build a minimum viable product,” he explains. The MVP could in turn evolve into a proof-of-concept (POC), and from there, usually with the help of a strategic partner, to a production implementation. Those early applications are now nearing fruition. “We will definitely be in production with a couple by the end of the year,” Chan says.

Other CIOs have adopted similar strategies. “Our approach is one of cautious interest,” says Robert Pick, executive vice president and CIO for Tokio Marine North America, a multinational insurance provider with headquarters in Japan. While Pick is encouraging employees at the insurance company to experiment, he insists their activities be monitored.

“In insurance, we live in data all the time — and in third-party data — that’s different from some industries. We have some comfort with the idea of our data going somewhere to be processed and then coming back. If we give professionals the right tools and guidance they will make the right decision,” says Pick.

Despite the best efforts of Chan and Pick, Gartner foresees that unsanctioned usage will be impossible to prevent. The consultancy predicted in March 2023 that by 2026, 5% of employees will engage in unauthorized use of generative AI in their organizations. “If anything, 5% is conservative. I get calls every day from people wanting to know how to stop their employees from using ChatGPT,” says Avivah Litan, distinguished vice-president analyst at Gartner. 

CIOs realize that unless they quickly implement policies that allow and even encourage use of generative AI for some purposes, they will lose control over a transformative technology in their organizations.

According to IDC, CIOs have gotten off the sidelines and are now getting out in front of the parade. In March 2023, 54% said they were not yet doing anything with regard to generative AI, but in June 2023, only 23% made that admission [see chart]. “In some cases, people are blocking; in other cases, they are adopting policies; and in still other cases they are conducting intentional pilots,” says Daniel Saroff, group vice president for consulting and research at IDC.  

Chart: Generative AI use cases and investments in North America (Source: IDC)

Hackathon exposes vulnerabilities

At Parsons Corp., a global solutions provider in the national security and critical infrastructure markets, early instances of shadow AI spurred a conversation between Karen Wright, vice president of IT strategy, products, and commercialization, and her cybersecurity counterpart. The conversation followed a ChatGPT hackathon held to identify security risks. “It was a really good approach to understanding the implications of the technology,” says Wright.

The hackathon showed Wright and her fellow IT leaders at Parsons that ChatGPT was not qualitatively different from some web-based tools that employees were already using, such as Adobe Acrobat online services, in which data is sent outside an organization to be processed. Consequently, Parsons settled on the use of data-loss prevention (DLP) tools to prevent data exfiltration via generative AI.

“Our focus is embracing and accelerating the use of smart artificial intelligence, while managing it with DLP tools to ensure security,” says Wright.
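
Neither Wright nor Parsons has detailed which DLP product is in place or how it is configured. At its simplest, though, the pattern is an outbound filter that screens prompts for sensitive content before they leave the network. Here is a minimal sketch of that idea in Python, assuming a homegrown regex deny list; the patterns and markers are illustrative assumptions, not Parsons’ actual rules.

```python
# Minimal sketch of DLP-style prompt screening (illustrative only;
# the patterns below are assumptions, not Parsons' actual rules).
import re

# Deny list: content that should never leave the network in a prompt.
DENY_PATTERNS = {
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "internal classification marker": re.compile(r"(?i)\binternal use only\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of all sensitive patterns found in the prompt."""
    return [name for name, rx in DENY_PATTERNS.items() if rx.search(prompt)]

outbound = "Summarize this memo (INTERNAL USE ONLY) for a client email."
hits = screen_prompt(outbound)
if hits:
    print("Blocked outbound prompt, matched:", ", ".join(hits))
else:
    print("Prompt allowed")
```

In production, that screening hook would sit in a forward proxy, endpoint agent, or browser plug-in between employees and public AI services rather than in application code.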

Education also will play a critical role in taking control over generative AI at Parsons. “Our focus is educating employees on the best practices and tools to accomplish their goals while protecting the company,” Wright says.

Insurers understand risk

As a global insurer with a presence in many countries, Tokio Marine Group (TMG) has international units that have been experimenting with generative AI. “We did see a tremendous amount of personal experimentation going on. But because we are risk-aware, there was not a rush to put everything on ChatGPT. The reaction was quick and clear: education and monitoring,” says Pick.

TMG has set up working groups within its various companies to examine use cases such as drafting letters and marketing content to give humans a head start on the process, according to Pick. Another prospective generative AI use case is having the various business units draft reports on market conditions and performance.

“Any company with many business units can benefit from generative AI’s ability to summarize information,” notes Pick. “To take an underwriting manual and summarize it in plain language could take seconds or minutes to get to a first draft, rather than days or weeks,” he says. “That will enable us to focus our people resources more efficiently in the future.”

In addition to ingesting and generating written content, generative AI shows great potential in application development, according to Pick. The ability to translate a stored procedure from one language into another in near real time, comments included, even at an accuracy rate of perhaps 60%, will greatly increase developer efficiency, he asserts. “It could take weeks for a programmer to do the same thing. That will pay dividends for years,” Pick says.
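
Pick doesn’t name a specific tool, and the workflow he describes maps onto a simple pattern: ask an LLM for a draft translation, then have a developer verify it. The sketch below assumes the openai Python SDK and an OpenAI-compatible endpoint; the model choice and the sample procedure are illustrative, not TMG’s.

```python
# Sketch of LLM-assisted stored-procedure translation (T-SQL to
# PL/pgSQL). Assumes the openai Python SDK and an OpenAI-compatible
# endpoint; the article does not say which vendor or model TMG uses.
# Per Pick's roughly 60% accuracy estimate, the output is a draft for
# developer review, not code to deploy unchecked.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TSQL_PROC = """
CREATE PROCEDURE dbo.GetOpenClaims @PolicyId INT AS
BEGIN
    SELECT ClaimId, Status FROM Claims
    WHERE PolicyId = @PolicyId AND Status = 'OPEN';
END
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Translate T-SQL stored procedures to PL/pgSQL. "
                    "Preserve behavior and comment any line you are unsure of."},
        {"role": "user", "content": TSQL_PROC},
    ],
)
print(response.choices[0].message.content)  # draft for human review
```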

In addition, the use of private LLMs is immediately attractive for an insurance provider such as TMG. “There is the hope that it might find things humans would not notice. We’re also interested in ‘little LLMs,’ if we can get to that state, because you would not need a cloud data center. Instead, we would use sandboxes that are cordoned off so that we are stewarding the data,” says Pick.
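
The article doesn’t say which models TMG is evaluating, but the “little LLMs” idea corresponds to small open-weight models that run entirely on local hardware, so prompts and data never leave the sandbox. A rough sketch, assuming the Hugging Face transformers library, with the 1.1-billion-parameter TinyLlama chat model as an illustrative stand-in:

```python
# Sketch of the "little LLM" idea: a small open-weight model running
# entirely inside a local sandbox, so no data leaves the premises.
# Assumes the Hugging Face transformers library; TinyLlama is an
# illustrative stand-in, not a model TMG has said it uses.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    device_map="auto",  # a single workstation GPU, or CPU, is enough
)

prompt = "Summarize the key exclusions in this policy wording: ..."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```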

But even with private LLMs, regulation comes into play, says the CIO. “For a global company such as TMG to use a private LLM, the data would need to be loaded into a tenant system that is within the area governed by specific regulations, such as GDPR in Europe,” he explains.

Building on POCs

Chan’s pursuit of both safety and opportunity shows promise in several POCs. “We are training Azure OpenAI with all the product information we have, so a business person can do a quick search to find a particular connector and can get back several examples, including which ones are in stock. It saves time because people no longer need to call the materials team,” Chan says.
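
Avnet hasn’t published implementation details, but product search of this kind is typically built as retrieval-augmented generation: fetch the relevant product records first, then have the model answer only from them, so stock levels come from live data rather than the model’s memory. A minimal sketch, assuming the openai Python SDK, hypothetical Azure endpoint and deployment names, and a toy in-memory product list standing in for Avnet’s real index:

```python
# Minimal RAG-style product lookup sketch. Assumptions (not from the
# article): the openai Python SDK, hypothetical Azure endpoint and
# deployment names, and a toy product list standing in for Avnet's
# real repository and search index.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical
    api_key="YOUR-API-KEY",
    api_version="2024-02-01",
)

# Toy stand-in for a real product index; production systems would
# retrieve matching records from a keyword or vector search service.
PRODUCTS = [
    {"part": "CONN-0042", "desc": "40-pin board-to-board connector", "stock": 1200},
    {"part": "CONN-0107", "desc": "USB-C receptacle, mid-mount", "stock": 0},
]

def find_connector(question: str) -> str:
    # Ground the model in retrieved product data so answers reflect
    # live stock levels rather than whatever the base model memorized.
    context = "\n".join(
        f"{p['part']}: {p['desc']} (in stock: {p['stock']})" for p in PRODUCTS
    )
    response = client.chat.completions.create(
        model="gpt-4",  # Azure deployment name; hypothetical
        messages=[
            {"role": "system",
             "content": "Answer only from this product data:\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(find_connector("Which 40-pin connectors are in stock?"))
```

Azure also offers a managed “on your data” version of this pattern; the sketch above just makes the mechanics visible.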

Azure OpenAI also generates custom contracts quickly. “By loading the last 10 years of contracts into the repository, we can say, ‘I need a contract for a particular project with such and such terms,’ and it comes up with a full contract within seconds,” says Chan. Sales executives can then review and tweak the contract before sending it to the customer. The quick turnaround is expected to result in quicker conversions of prospects to sales as well as happier customers.

The process is similar with requests for proposals (RFPs), in which business analysts specify what they need and generative AI creates the RFP within seconds. “The business analyst just reviews and makes changes. This is a huge productivity gain,” says Chan. Engineers can also call upon generative AI to come up with possible solutions to customer demands, such as reducing the physical footprint of a circuit board by replacing certain components in the bill of materials, while shortening the go-to-market lead time. “It will return options. That is huge in terms of value,” Chan says.  

A challenge worth taking on

In general, CIOs are finding that the productivity upside of generative AI justifies grappling with the challenges of controlling it. “We make sure the company data is safe, yet the AI is not lacking in capabilities for IT and business employees to innovate,” says Chan.

According to Pick, generative AI will not make human workers obsolete, just more productive. “We don’t view it as a people replacement technology. It still needs a human caretaker,” he says. “But it can accelerate work, eliminate drudgery, and enable our employees to do things of a higher order, so we can focus people resources more acutely in the future.”

Most important, Pick says, generative AI has much more potential than earlier much-hyped technologies. “This is not the next blockchain, but something that will really be valuable.”

To extract that value, Goetz of Forrester says setting policies for generative AI is a matter of establishing clear dos and don’ts. Like Chan, she recommends a two-track strategy in which approved generative AI applications and data sets are made available to employees, while AI applications and use cases that might put data in jeopardy are prohibited. Following those guidelines, according to Goetz, will enable safe, self-service use of generative AI in an organization.

In the meantime, when developing or deploying generative AI capabilities, Saroff of IDC recommends assessing both the controls that generative AI tools implement and the unintended risks that might arise from their use.

Chart: AI technology controls checklist and AI unintended risks checklist (Source: IDC)

Stan Gibson is an award-winning technology editor, writer, and speaker, with 36 years of experience covering information technology. Formerly executive editor at eWeek and PC Week, he is currently principal at Stan Gibson Communications, where he continues to write and speak about all aspects of IT.
