Contributing writer

5 ways CIOs can help gen AI achieve its lightbulb moment

Tip
09 Feb 2024 · 6 mins
Artificial Intelligence | CIO | Data Management

Despite the current level of hype and mainstream adoption, gen AI still needs to experience the trough of disillusionment before embarking on a path to peak productivity.

A user in thought rests his chin on his hand.
Credit: Getty Images

The rapid adoption and democratization of generative AI has been compared to that of the lightbulb, which did the same for electricity nearly 150 years ago. Much as the lightbulb's invention in 1879, which came decades after Faraday demonstrated electromagnetic induction in 1831, brought practical uses of electricity to consumers and businesses, generative AI is doing the same for AI.

When technology moves from the lab into everyday life, mainstream adoption typically rides on increasingly powerful and proven initial use cases. With such rapid adoption comes excitement about the art of the possible. This is part of the reason gen AI is now at the peak of inflated expectations in Gartner’s hype cycle.

In fact, ChatGPT gained more than 100 million monthly active users just two months after its launch, and its position on the technology adoption lifecycle is outpacing its place on the hype cycle. We're at mainstream adoption, with nearly half the general population now using gen AI, yet we're still at the peak of inflated expectations. Looking closer, it may be that we're still at the gaslight moment for generative AI, with the lightbulb moment yet to come. And this isn't a bad thing.

In the generative AI world, we’re discovering how the computer can get things wrong in surprising ways. As we experiment with gen AI applied to both public and private data, we’re learning in real-time what works well and what doesn’t.

Here are five recommendations for CIOs to navigate generative AI’s hype cycle and prepare for a swift transition from the trough of disillusionment to the slope of enlightenment.

Be realistic with customers, employees, and stakeholders

While evangelizing the transformational nature of gen AI and related solutions, be sure to point out the downsides as well. Consultancies and tech vendors often play up the transformative power of gen AI but pay less attention to its shortcomings, though, to be fair, many are also working to address these issues with various platforms, solutions, and toolkits.

Being realistic means understanding the pros and cons and sharing them with customers, employees, and peers in the C-suite, who will appreciate your candor. Make an authoritative, warts-and-all list of the downsides so each can be clearly explained and understood. As AI advisors have pointed out, these include the black box problem, AI's vulnerability to misguided human arguments, hallucinations, and more.

Establish a corporate use policy

As I mentioned in an earlier article, a corporate use policy and associated training can educate employees on the risks and pitfalls of the technology, and provide rules and recommendations for getting the most out of it, and therefore the most business value, without putting the organization at risk. In developing your policy, include all relevant stakeholders, consider how gen AI is used within your organization today and how it may be used in the future, and share the result broadly across the organization. Make the policy a living document, updated on a suitable cadence as circumstances change. Having this policy in place helps protect against a number of risks concerning contracts, cybersecurity, data privacy, deceptive trade practices, discrimination, disinformation, ethics, IP, and validation.

Assess the business value for each use case

In the case of purely textual output, we tend to believe answers from gen AI because they're fluently written, with excellent grammar. Psychologically, we assume there's a powerful intelligence behind the scenes, when in fact gen AI has no understanding of what is true or false.

While there are some excellent use cases for gen AI, we need to review each one on a case-by-case basis. For example, gen AI is typically bad at writing technical predictions. The output often tells us something we already know, and it may also be plagiarized. Even using a rewriting or rephrasing tool can make matters worse, and teams can end up spending more time using these tools than if they wrote predictions themselves. It’s best to pick your battles and only use gen AI where there’s a clear benefit to doing so.

Maintain rigorous testing standards

With gen AI likely being used by much of your organization's workforce, it's important to train and educate employees on the pros and cons, using your corporate use policy as a starting point. With adoption this broad, we're all effectively testers, learning as we go.

Inside your organization, whether within the IT department or business units, be sure to emphasize and allow considerable time for testing and experimentation before going live. Setting up internal communities of practice where employees can share experiences and lessons learned can also help raise overall awareness and promote best practices across the organization.   
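To make "considerable time for testing" concrete, a minimal sketch of a pre-go-live regression check is shown below. The `generate` function is a hypothetical stand-in for your real model call, and the cases and required substrings are illustrative assumptions, not a complete test suite; the point is that expected behaviors, including declining to answer, are written down and checked before launch.

```python
def generate(prompt: str) -> str:
    """Stubbed model call; replace with your provider's real API (assumption)."""
    canned = {
        "What year was the lightbulb invented?":
            "The incandescent lightbulb was patented in 1879.",
        "Summarize our Q4 revenue.":
            "I don't have access to internal financial data.",
    }
    return canned.get(prompt, "I'm not sure.")

def run_checks() -> list[str]:
    """Run each (prompt, required substring) case; return the failing prompts."""
    cases = [
        # A factual answer should contain the known-correct detail.
        ("What year was the lightbulb invented?", "1879"),
        # The model should decline rather than hallucinate private data.
        ("Summarize our Q4 revenue.", "don't have access"),
    ]
    return [prompt for prompt, must_contain in cases
            if must_contain not in generate(prompt)]

failures = run_checks()
print(f"{len(failures)} failing case(s)")
```

A check suite like this gives business units a shared, repeatable definition of "working" that communities of practice can extend as new failure modes are discovered.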

Have a plan for when tech goes wrong

We saw with the long-running UK Post Office scandal that even non-AI systems can make serious, life-changing mistakes when we mistakenly assume they're correct. In that case, more than 700 subpostmasters were wrongly accused of fraud over the course of 15 years, leading to damaged reputations, divorces, and even suicides.

So it’s critical to have a plan for when AI gets it wrong. Your corporate use policy sets the guardrails, but when things go wrong, how can IT’s governance processes monitor and react to the situation? Is there a plan in place? How will your governance processes even distinguish a right or wrong answer or decision? What is the business impact when mistakes are made and how easy or difficult will they be to remediate?
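One concrete way governance processes can monitor and react is to record every model interaction for later human review. The sketch below is illustrative only: `call_model` is a hypothetical stand-in for your provider's API, and the audit-entry fields are assumptions a real program would adapt to its own compliance requirements.

```python
import time

# In production this would be durable, access-controlled storage (assumption).
AUDIT_LOG: list[dict] = []

def call_model(prompt: str) -> str:
    """Stubbed model call; replace with the real API (assumption)."""
    return f"(model answer to: {prompt})"

def audited_call(prompt: str, user: str) -> str:
    """Invoke the model and record an audit entry for governance review."""
    response = call_model(prompt)
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "reviewed": False,  # flipped once a human has checked the answer
    })
    return response

answer = audited_call("Draft a refund policy summary.", user="jdoe")
print(f"{len(AUDIT_LOG)} interaction(s) awaiting review")
```

An audit trail like this doesn't tell you whether an answer was right, but it makes remediation tractable: when a mistake surfaces, you can find every affected interaction and user rather than guessing at the blast radius.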

Generative AI will have its lightbulb moment and it’s not too far away, but not until we get through the trough of disillusionment first, ascend the slope of enlightenment, and finally get to the plateau of productivity. The gaslighting, experimentation, and learning along the way are all part of the process.
