
Prioritizing AI? Don’t shortchange IT fundamentals

Feature
14 Feb 2024 | 9 mins
Artificial Intelligence | Budgeting | CIO

AI isn’t the only thing on the CIO’s plate, but continuing to invest in familiar IT essentials will pay off there, too.


Generative AI continues to dominate IT projects for many organizations, with two-thirds of business leaders telling a Harris Poll they’ve already deployed generative AI tools internally, and IDC predicting that spend on gen AI will more than double in 2024.

But the usual laundry list of priorities for IT hasn’t gone away. Fundamentals like security, cost control, identity management, container sprawl, data management, and hardware refreshes remain key strategic areas for CIOs to deal with.

It’s easy to view these as competing priorities vying for CIO attention and budget, unfairly dwarfed by boardroom interest in the new and shiny opportunities promised by gen AI. But implementing those gen AI projects successfully turns out to rely on how well the IT organization has handled basics like connectivity, permissions, and configuration management.

“Getting the basics of IT right today — an agile multi-cloud foundation, strong cybersecurity, effective data privacy and IP control, and perhaps most important for eventually unleashing the promise of AI, building a strong, open data foundation across app silos — are the planks and nails that keep the ship afloat,” says Dion Hinchcliffe, VP and principal analyst at Constellation Research. 

“One may liken these IT fundamentals to ‘eating your greens’ — not always glamorous, but utterly essential for long-term health and strength of IT,” he says. “Just as a balanced diet fortifies the body, a robust and modern IT infrastructure lays the groundwork for AI and other advanced technologies to flourish.”

Andi Mann, independent research analyst and founder of Sageable, agrees, pointing out how many internal operations and basic functions of IT infrastructure turn out to be the ‘picks and shovels’ that power an AI gold rush. “CIOs need to do all the things to make AI workloads run well and in a disciplined and hygienic way,” he says. “When you think about all the blocking and tackling a CIO needs to do for regular applications, that especially applies to AI.”

Data due diligence

Generative AI especially has particular implications for data security, Mann says. “How do you achieve data loss prevention when you’re telling your AI to go suck up all this data and reuse it?”

In fact, for security, compliance, and efficiency reasons, CIOs will want to carefully manage which data generative AI has access to. For example, retrieval-augmented generation (RAG) is emerging as a key technique for making LLMs useful with your own data, but you don’t want to feed in all of it. That’s not just about the cost of preparing a larger data set than you need, which takes expertise that’s still uncommon and commands a high salary, but also about what you’re teaching the model. Feed in your entire Slack or Teams history and you may end up with responses like, “I’ll work on that tomorrow,” which would be perfectly appropriate from a human employee but isn’t what you expect from a gen AI system.
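
As a concrete illustration of that curation step, here’s a minimal sketch in Python of filtering a corpus before it’s indexed for retrieval. The documents, labels, and keyword-overlap scoring are hypothetical stand-ins; a production RAG pipeline would use an embedding model and a vector store, but the principle of excluding chatter and unvetted sources before retrieval is the same.

```python
# Hypothetical corpus: each document carries a label set by governance tooling.
documents = [
    {"text": "Pension policy for employees in France takes effect in Q3.", "label": "internal"},
    {"text": "I'll work on that tomorrow", "label": "chat"},  # chatter: don't index
    {"text": "Product FAQ: supported platforms and pricing tiers.", "label": "public"},
]

# Curate before indexing: only labels approved for gen AI use get in.
ALLOWED_LABELS = {"internal", "public"}
index = [d for d in documents if d["label"] in ALLOWED_LABELS]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank the curated index by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(
        index,
        key=lambda d: len(terms & set(d["text"].lower().split())),
        reverse=True,
    )[:k]

# Only curated documents are eligible to ground the model's answer.
for doc in retrieve("pension policy france"):
    print(doc["label"], "->", doc["text"])
```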

AI tools like Copilot will expose the flaws in an organization’s approach to information management, cautions Christian Buckley, Microsoft MVP and director of partner management at Rencore: how data and metadata are structured, how the information architecture is organized, whether anyone has cleaned up and understood what’s there, and how lax many organizations are with both privilege management and data cleanup.

As the cost of data storage has fallen, many organizations are keeping unnecessary data rather than cleaning up data that’s out of date or no longer useful after a migration or reorganization. “People aren’t going back and decluttering because there’s no cost to that — except in your risk profile and your decreased search performance,” says Buckley. Introduce gen AI capabilities without thinking about data hygiene, he warns, and people will be disillusioned when it doesn’t perform optimally because they haven’t done the prep work.

The same issues were revealed when Microsoft launched Delve, and before that when the FAST integration brought powerful search to SharePoint in 2010. “When we started to see search actually working inside SharePoint, people would complain it wasn’t working properly,” he says. “But it was. What it’s doing is surfacing your lack of governance around your data. I heard people say, ‘It’s breaking all my permissions.’ No, it’s surfacing where you have holes. Do you want to have an even more powerful search capability with AI in your data, and to be unsure about how you’ve organized that data?”

The other problem is gen AI tools and users not seeing information that should be included because the metadata tagging and sensitivity labels haven’t been correctly applied to the data.

Either way, poorly managed data can raise compliance and confidentiality issues, like an external partner having access to a gen AI tool that exposes information that should only be available internally. After all, projects that involve more external users need careful scrutiny of what information is being accessed and whether that external access is still appropriate. Even internal usage can cause confusion if you’re a multinational and employees in France, for example, get information from an HR bot based on pension or parental leave policy in Australia.

“How can you say you’re ready with governance for AI if you don’t know what content you have, where it is, who has access to it internal versus external, what’s being shared and how it’s labelled,” Buckley says. “From a governance standpoint, you need to keep track of what people are doing, where they’re doing it, and how they’re doing it. And that’s going to constantly change.”
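
What keeping track of that looks like in practice can start small. The sketch below is a hypothetical content audit in the spirit of Buckley’s checklist, flagging items with missing labels or questionable external sharing; a real inventory would come from your content platform’s admin and audit APIs rather than a hard-coded list.

```python
# Hypothetical content inventory with governance-relevant attributes.
content = [
    {"path": "/hr/pensions-fr.docx", "label": "confidential", "external": False},
    {"path": "/sales/pricing.xlsx", "label": None, "external": True},
    {"path": "/public/faq.md", "label": "public", "external": True},
]

def audit(items):
    """Yield (path, issue) pairs for governance gaps worth reviewing."""
    for item in items:
        if item["label"] is None:
            yield item["path"], "missing sensitivity label"
        if item["external"] and item["label"] != "public":
            yield item["path"], "shared externally without a public label"

for path, issue in audit(content):
    print(f"{path}: {issue}")
```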

Containers and certificates

Privilege management depends on identity management, another area that demands continued attention. Increased adoption of Kubernetes requires new skills as the container ecosystem develops, and many CIOs are still getting up to speed on managing environments for containerized apps, which differ significantly from virtualization. That transition may be accelerated this year as organizations decide how to respond to the major changes Broadcom has made to VMware licensing.

It also means many more machine identities to manage, says Murali Palanisamy, chief solutions officer at IAM platform provider AppViewX. “Digital transformation as a whole has driven a significant increase in the use of connected devices, cloud services, and native cloud and containerized applications,” he says. “All of these additional machines, workloads, and services require trusted identities, which is amplifying the need for machine identity management.”

IoT, software supply chain security (especially the use of code signing to mitigate supply chain risk), and using your own data for gen AI are all increasing use of the TLS certificates and private keys that secure access relies on. “Whenever applications or machines communicate with each other, the vast majority of them use TLS certificates to establish trust, identify each other to systems, as well as securely authenticate and encrypt communications,” Palanisamy adds.

Protecting these machine identities is critical, and managing them can no longer be an ad hoc, manual process, he argues, especially with Google’s proposal to reduce TLS certificate validity from 398 days to 90, which would necessitate much faster turnover. There are other regulatory changes to be aware of, too: new SEC cybersecurity rules in the US as of December 2023, the expanded Network and Information Systems Directive (NIS2) in the EU, and a general shift to make security standards risk-based rather than prescriptive, as in the Payment Card Industry Data Security Standard’s update to PCI DSS 4.0.
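
Automating even the simplest part of that lifecycle, knowing when certificates expire, illustrates why manual tracking breaks down at 90-day validity. This sketch uses Python’s standard library to check live endpoints; the hostnames and the 30-day rotation window are hypothetical policy choices, and a real deployment would feed results into automated renewal rather than a printout.

```python
import socket
import ssl
from datetime import datetime, timezone

# Hypothetical watchlist; a real system would pull this from inventory.
ENDPOINTS = ["example.com", "www.python.org"]
ROTATION_WINDOW_DAYS = 30  # renew well before expiry under a 90-day policy

def days_until_expiry(host: str, port: int = 443) -> int:
    """Fetch the server's TLS certificate and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

for host in ENDPOINTS:
    remaining = days_until_expiry(host)
    status = "ROTATE NOW" if remaining <= ROTATION_WINDOW_DAYS else "OK"
    print(f"{status}: {host} expires in {remaining} days")
```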

Managing machine identities also needs to be a core security focus, one that relies on automation, auto-enrolment, and deprovisioning to control access to private and sensitive data, Palanisamy adds. “When data needs to be secured in transit, machine identity plays a vital role,” he says. “As AI projects ramp up, managing machine identities is essential to ensure trust, and secure authentication and encryption so only the right access to the right data can be managed and controlled, keeping sensitive and private data secure.” It can be tempting to view cloud projects as a lower priority than shiny new AI initiatives, but he says they’re actually foundational. “Speed and agility are required for AI projects to be successful, so security needs to be built into the underlying cloud infrastructure of AI projects from the start,” he adds.

Cost control

FinOps and cost control for cloud services continue to be a priority, and with so much gen AI usage relying on cloud AI services and APIs, CIOs will want to think about budgeting and automation, especially for AI development and experimentation.

“If you’ve got a hundred people doing experiments with AI and just one forgets to deprovision their instance, you’ve got a bill coming,” Mann says. Production workloads should also be monitored to see if you can scale back to a smaller instance, a cheaper LLM, or a lower level of licensing, using the same policies and tools used to manage the cost of other workloads. “Managing Copilot isn’t just about permissions management,” he adds. “People want to know if the licences they’re paying for are being used.”
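
The policy Mann describes can be enforced with straightforward automation. Here’s a minimal sketch that flags experiment instances idle past a threshold; the inventory, tags, and three-day limit are hypothetical, and in practice the data would come from your cloud provider’s instance-listing APIs or a CMDB export.

```python
from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)
IDLE_LIMIT = timedelta(days=3)  # assumed policy: idle experiments get flagged

# Hypothetical inventory; real data would come from cloud provider APIs.
instances = [
    {"id": "exp-042", "owner": "data-science", "purpose": "ai-experiment",
     "last_activity": NOW - timedelta(days=9)},
    {"id": "prod-007", "owner": "platform", "purpose": "production",
     "last_activity": NOW - timedelta(hours=1)},
]

def idle_experiments(inventory: list[dict]) -> list[dict]:
    """Return experiment instances idle longer than the policy allows."""
    return [
        i for i in inventory
        if i["purpose"] == "ai-experiment" and NOW - i["last_activity"] > IDLE_LIMIT
    ]

for inst in idle_experiments(instances):
    print(f"Deprovision candidate: {inst['id']} (owner: {inst['owner']})")
```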

Just as other cloud workloads need to justify the costs of licences and API calls by the value they provide to the organization, gen AI projects also need to be assessed on whether they actually deliver the productivity improvements and innovation they promise.

“I’m waiting for the first CIO who gets sacked for letting the AI run too fast, too long,” Mann says. “This is a basic blocking and tackling discipline for a CIO: what is my portfolio, what’s valuable, what am I spending, what am I getting back, as well as managing the utilization and quality of those workloads. This ITSM, ITIL style of discipline and portfolio management is going to come back because you definitely need that level of discipline for this new workload.”

But in other areas, IT teams will look to increase budgets and spending.

Hastening hardware refresh

With Windows 10 reaching end of life in 2025, CIOs will plan to migrate to Windows 11 over the next 18 months. Getting the promised security improvements by default means investing in new PCs with newer generations of CPU that have the right instructions to support those security features without compromising performance.

An increasing number of those devices will include a neural processing unit (NPU) or similar dedicated hardware to speed up on-device AI workloads, whether that’s live-editing video calls or running Copilot in Windows 11. But rapid hardware advances may mean CIOs need to budget for much shorter hardware refresh cycles in the future to stay up to date. Asset management that tracks which employees have suitable PC hardware may be key to getting the promised productivity improvements from AI.
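
That kind of asset tracking reduces to a simple query once the inventory records the right attributes. The sketch below assumes a hypothetical device export with CPU-generation and NPU fields, plus an assumed refresh threshold; actual eligibility rules would come from your endpoint management tool and Microsoft’s requirements.

```python
MIN_CPU_GEN = 12  # assumed refresh-policy threshold, not an official cutoff

# Hypothetical export from an endpoint management tool.
devices = [
    {"user": "avery", "model": "UltraBook 2021", "cpu_gen": 11, "npu": False},
    {"user": "kai", "model": "UltraBook 2024", "cpu_gen": 14, "npu": True},
]

def refresh_candidates(inventory: list[dict]) -> list[dict]:
    """Flag devices below the CPU-generation bar or lacking an NPU."""
    return [d for d in inventory if d["cpu_gen"] < MIN_CPU_GEN or not d["npu"]]

for device in refresh_candidates(devices):
    print(f"{device['user']}: {device['model']} is due for refresh")
```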

Getting your data centres ready can mean even more investment, and that’s not just about how much GPUs cost if you can find them. While the majority of LLMs will run in the cloud and be accessed through APIs, making gen AI tools useful for business requires connecting them to your own data sources.

Evolving your network architecture to reduce latency and securely deliver better connectivity — whether you do that with 5G, Wi-Fi 6 and 7, or emerging satellite connectivity — is key to support hybrid and remote work, but AI will further drive secure edge computing and network requirements.

Falling prices are also driving the transition to all-flash object storage systems that offer database-class performance, says Steve Leeper, product marketing VP at data management software company Datadobi. That will help handle the large datasets integral to AI workloads, which need rapid throughput and scalability, and cater to the insatiable data appetites and rapid access demands of AI-driven operations.

In general, he adds, CIOs need to think about the hardware infrastructure for the AI processing pipeline, starting with the amount and class of storage, the interconnecting networks, and the GPU farms for AI processing. Data handling matters too: identifying suitable datasets, relocating data quickly and accurately between points along the processing pipeline (meaning no silent data corruption), and ensuring AI processing results are likewise moved to appropriate locations and classes of storage.
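
Guarding against silent corruption during those moves is one of the more tractable pieces. A common approach, sketched below, is to compare checksums on both sides of a copy; the file paths are hypothetical, and a real pipeline would verify every object moved and log mismatches for re-transfer.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large datasets never load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(source: Path, destination: Path) -> bool:
    """Confirm a relocated dataset matches its source byte for byte."""
    return sha256_of(source) == sha256_of(destination)

# Hypothetical points along the processing pipeline.
if verify_copy(Path("staging/train.parquet"), Path("gpu-farm/train.parquet")):
    print("Checksums match: no silent corruption in transit")
else:
    print("MISMATCH: re-copy and investigate")
```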

Datasets for gen AI won’t always be huge, Leeper says. “There’ll be a mix of large and small datasets,” he says. “Some of these datasets contain critical data that, according to an organization’s governance policies, will need to be processed using on-premises resources.” Managing AI access to those datasets relies on the kind of traditional IT infrastructure management that CIOs are very familiar with, so making investment in those a priority this year will pay off for both.

“These are solved problems only if we apply the discipline we have,” adds Mann. “But too often they’re not solved because no one’s thought about the longer-term implications, so there’s no ownership.” That might be changing, though. At the beginning of 2023, Gartner reported that only 15% of organizations had data storage management solutions that classify and optimize data; by 2027, the analyst firm expects that to rise to at least 40%.