By enabling “ask-an-expert” capabilities, generative AI like Microsoft Copilot will transform manufacturing Thu, 29 Feb 2024 11:48:09 +0000

Manufacturers are increasingly looking to generative AI as a potential solution to a range of operational challenges. Research from Avanade, a technology expert that specialises in the Microsoft ecosystem and partner solutions, suggests that 92% of manufacturers aim to be AI-first within a year. This is an ambitious target given that just 7% currently use AI on an hourly basis to inform their real-time operations.

The growing sophistication and relevance of AI are fuelling interest in its use. Microsoft’s Copilot is a case in point. The tool enables businesses to extract intelligence from a diverse range of data sources using large language models (LLMs – a type of generative AI capable of producing human-like text).

Microsoft Copilot enables a range of game-changing use cases for manufacturers, such as its “ask-an-expert” capability. According to Brendan Mislin, General Manager for Industry X at Avanade: “Microsoft Copilot enables manufacturing professionals to navigate complex tasks, automate repetitive processes, and improve work efficiency using simple, natural-language prompts.”

The ask-an-expert tool enables manufacturers to increase productivity, drive down costs, and improve employees’ work-life balance. Mislin explains: “Quality control is often a time-consuming and inefficient process in manufacturing. With Microsoft Copilot, quality control managers no longer need to trawl through different manufacturing execution, materials management, and workforce scheduling systems to troubleshoot an issue. Instead, they can use one simple interface for an immediate answer. For the employee, that’s the difference between working overtime to solve a complex problem and getting home promptly to see their family.”

Using Microsoft Copilot, workers can also better avoid quality problems that can cause safety issues and put lives at risk. In the automotive industry, for example, there were recalls of 300 different makes and models of car in the US in 2022 alone. In one case involving air bags, 60 to 70 million vehicles were recalled worldwide, across at least 19 manufacturers, costing close to €25bn. Ask-an-expert tools can help by providing real-time insights on safety standards and potential defects, enabling operators to proactively identify and mitigate risks before a product gets to market.

Preparing data for an AI-centric world

The benefits of Microsoft Copilot and other generative AI tools can only be achieved with a strong data foundation. This can be a major challenge. Manufacturing generates immense volumes of structured, semi-structured, and unstructured data, thought to be two to four times larger than in industries such as retail, media and financial services. What’s more, manufacturing data is growing at exponential rates, estimated at 200-500% over the next five years. Much of this data is siloed, which makes it difficult to use in large language models.

Mislin believes that industrial-grade data fabrics can help solve this challenge. “The data fabric is key,” says Mislin. “It delivers a seamless management and integration layer, combining all relevant data from diverse factory automation, enterprise resource planning and supply chain management systems, in addition to external sources. When integrated with digital ecosystems that enable data sharing across supply ecosystems in privacy-preserving ways, Microsoft Copilot and data fabrics will supercharge the end-to-end manufacturing value chain.”

As manufacturers prepare to use generative AI, technology experts like Avanade have an important role to play in helping them prepare their data, understand key use cases, and deploy applications effectively. By doing so, manufacturing firms will be able to increase productivity and efficiency and put themselves in the best possible position to thrive.

Avanade is attending Hannover Messe 2024. Register here to meet with Avanade and discuss the potential of Microsoft Copilot in manufacturing.

You can also learn more about the use of Microsoft Copilot in manufacturing here.

Generative AI, Innovation
https://www.cio.com/article/1309307/by-enabling-ask-and-expert-capabilities-generative-ai-like-microsoft-copilot-will-transform-manufacturing.html
Captive centers are back. Is DIY offshoring right for you? Thu, 29 Feb 2024 10:01:00 +0000

Captive centers are on the rise. You’d be forgiven if you’re wondering whether you’ve stumbled on an article from 2016, but, in fact, the practice of launching an offshore IT center wholly owned and operated by the enterprise it serves is back in vogue with notable twists.

Everest Group, which monitors 8,500 captive centers around the world, counted 452 new setups in 2023. “Despite economic and political disruptions, offshore and nearshore captive market growth accelerated with many enterprises expanding and setting up new centers,” says Everest Group partner Rohitashwa Agarwal.

Captive centers are no longer just a means of value creation, providing cost savings and driving process standardization. They are driving organization-wide innovation, facilitating digital transformations, and contributing to revenue growth.

Unlike earlier generations of what are increasingly being called “global capabilities centers,” which tended to be large operations set up by multinationals, more than half of last year’s new centers were launched by first-time adopters — and on the smaller side, with fewer than 250 full-time employees; in some cases, fewer than 50.

The desire to build internal IT capabilities amid a tight talent market is at the heart of the trend. As companies have grown comfortable with offshore and nearshore delivery, the captive model offers the opportunity to tap larger populations of lower-cost talent without handing the reins to a third party. “Eroding customer satisfaction with outsourcing relationships — per some reports, at an all-time low — has caused some companies to opt to ‘do it themselves,’” says Dave Borowski, senior partner, operations excellence, at West Monroe.

What’s more, setting up a captive center no longer needs to be entirely DIY. “An ecosystem of partners that specialize in turn-key captive standups or ‘virtual captives’ has emerged,” Borowski says. “Companies no longer need local knowledge and presence or to cobble together a complex web of in-country experts.”

But not every captive center is successful — and captive centers aren’t the right choice for everyone. While a widening range of companies are pursuing the benefits, there are challenges and risks to consider and key questions to ask before going captive.

The upside of captives today

Some benefits of captive centers are the same as ever. “When established and managed effectively, a captive can generate savings comparable to outsourcing — if not greater at scale — but with a higher degree of flexibility and control,” says Borowski.

Most IT leaders have managed offshore or nearshore delivery in one form or another, so there’s a level of comfort with the concept and understanding of how to make it work. That these same IT leaders face strategic talent shortfalls is a key motivator for the model’s resurgence. “It has been increasingly necessary for companies to look beyond their traditional hiring boundaries to find qualified candidates,” Borowski says.

Troubled by turnover at offshore service providers, some IT leaders seek to build greater loyalty and cultural cohesion as employers rather than clients. “Employees actually feel like they’re part of the company — because they are, which is not always the case with outsourcing,” says Borowski.

In some cases, companies see their captives as hubs for talent development as well, says Agarwal, who has seen several examples of key global roles now based within captive centers. And having retained responsibility for the captive center’s day-to-day operations, some IT leaders are growing more comfortable with moving higher-level IT work offshore.

“[It] also mitigates the risk of knowledge erosion, which can be catastrophic when more strategic ‘secret sauce’ activities are performed offshore,” Borowski says.

Everest Group researchers find that some mature captives have built high-value capabilities to support key innovation, transformation, and revenue-generating initiatives. Captive centers can offer more control, not only over talent, but intellectual property, security, regulatory compliance, and “their overall IT destiny,” says Forrester principal analyst Bill Martorelli.

An easier way in

Today, many service providers are willing to work with enterprises in a build-operate-transfer (BOT) model to help facilitate launching a captive center. With this approach, the service provider builds the captive center and then runs it initially, adopting the client enterprise’s processes, tools, and methodologies. The ultimate goal is then to hand over operations to the client during the “transfer” phase. As such, the BOT model helps smooth out the challenges of setting up a center in an unfamiliar location, and it gives enterprises an opportunity to test-drive the value of the center, typically over a period of years, before fully investing in it. Not all BOTs transfer in the end, notes Martorelli.

Beyond BOT, there are firms that specialize in helping companies stand up captive operations, Borowski says. And some service providers have created a hybrid service — the virtual captive — whereby the provider manages the technology and talent infrastructure while the client controls how work is done operationally.

Buyer beware: The risks of ownership

Despite the lower barrier to entry, setting up a captive center still comes with significant risks. First, they don’t all last. From 2020 to 2023, while 1,450 captive centers were created, according to Everest Group, 50 companies sold their captives to third-party providers. In some cases, divestitures were due to cost pressures, business restructuring, or new executive mandates, says Agarwal. Others resulted from issues with the captive center itself — performance problems, leadership challenges, and lack of cost competitiveness.

“A key challenge for captives continues to be scale,” says Borowski. “For smaller captive operations, it becomes difficult to operate as cost effectively as an outsourcer.”

Fixed and overhead costs are inherently greater when you’re doing it yourself, driving up the cost per resource. It’s also more challenging to scale services up and down. While it’s not unusual for an IT service provider to tweak its employee base and operating structure, including layoffs when necessary, that’s much harder for an enterprise to do, notes Martorelli.

Captives can also face retention challenges, particularly if they don’t provide opportunities for advancement, says Borowski, as they risk losing employees to other captives or IT service providers who offer more upward mobility.

Productivity can also decline over time, particularly for organizations used to working with third parties. “Whereas outsourcing providers often have contractually committed productivity improvements forcing them to continuously improve processes, productivity, and cost efficiency to achieve profitability targets, there’s no similar burning platform for captives,” says Borowski. “Captives can become stagnant with marginal improvements over time.”

7 questions to consider before going captive

Certain situations aren’t conducive to establishing a captive center: for example, if your enterprise faces financial or operational challenges, lacks funding to sustain operations, or has legal or regulatory constraints that preclude offshoring. Moreover, captives are not advisable for temporary services or ones that experience significant variability in demand.

However, a captive center can make sense if the organization is clear about its intentions and aware of the risks and work involved. “When captives experience challenges, it’s typically some combination of a flawed service delivery strategy, poor planning and design, or poor execution,” says Borowski. “This should be a thoughtful, strategic decision that enables an organization’s long-term business direction and objectives.”

The following questions can help you assess whether a captive center is right for your organization:

What are the key business outcomes we’re seeking? Like any IT decision, the primary focus should be on the problem to be solved, not the potential solution under consideration. “The services model, location model, talent model, governance model, performance model, all need to [be] anchored [in] the core objectives,” says Agarwal.

What’s the business case and is it viable? Lay out realistic costs and benefits for both the build (captive) and buy (outsourced) options to determine which approach is most likely to help achieve your desired objectives. The IT organization should also confirm that the business case is achievable for their specific organization, Borowski says. Here, common mistakes include overly aggressive cost savings targets; lack of investment in captive center leadership; moving work offshore too quickly; insufficient investment in knowledge management, learning, and development of captive staff; and a myopic focus on SLAs as metrics of success. “This is a strategic capability for the company if built right, and companies need to approach it accordingly,” says Agarwal.

Can we operate a center that will deliver on our business case? The benefits of captive center ownership hinge on how well you can manage the center’s talent and costs over time. More nuanced questions Martorelli suggests asking include: What makes the enterprise more capable of attracting and retaining talent than potential outsourcing partners? How will the captive center remain competitive with third-party alternatives? What are realistic expectations for managing costs over time?

Are our leaders committed to the strategy? “No model is failproof,” says Agarwal. “You have to commit to it and make it work.” Here, buy-in is vital, especially when challenges arise. “The very factors that propel [captive centers] forward — desire for cost savings and access to talent — can sow the seeds of their eventual decline if expectations are not met or sustained,” says Martorelli, adding that captive center momentum can quickly sag as a result of executive turnover or loss of interest.

What location makes the most sense? Research locations with an eye toward whether each possesses the labor and infrastructure necessary to support the scope of the center and any future growth.

Who will help us set up the center? “Engaging specialists and local resources to support activities such as site selection, recruiting, permitting, and facility buildout helps to navigate pitfalls and accelerates implementation,” says Borowski. Those considering the BOT approach should educate themselves on its pros and cons. “Customers should be realistic about the service provider’s real interests,” says Martorelli. “Don’t expect the service provider to be anxious to simply transfer all of their best people to you.”

How will this affect the rest of the IT organization? Onshore talent could be displaced as a result of the captive center. At the very least, roles will change, and oversight of the captive facility will be needed. “IT leaders cannot forget to consider the impact — real or perceived — on the resources directly or indirectly affected by the offshoring action,” Borowski says. “Developing a retention strategy is key so that resources displaced by the offshoring have an incentive to stay through transition and stabilization.”

Overall, as the trend continues, with companies expanding existing captive strategies, new adopters entering the market, and more companies moving outsourced work to captive operations, the IT leaders most likely to succeed with the model will be those who have experience in driving results from remote operations and take an intentional and selective approach to determining which business outcomes can be achieved via the captive center model.

IT Strategy, Offshoring, Outsourcing
https://www.cio.com/article/1309580/captive-centers-are-back-is-diy-offshoring-right-for-you.html
What is a chief data officer? A leader who creates business value from data Thu, 29 Feb 2024 10:00:00 +0000

The chief data officer (CDO) is a senior executive responsible for the utilization and governance of data across the organization. While the chief data officer title is often shortened to CDO, the role shouldn’t be confused with chief digital officer, which is also frequently referred to as CDO.

“The chief data officer is the senior person, with a business focus, who understands the strategy and direction of the business, but their focus is on how to underpin that with data,” says Caroline Carruthers, director at consulting firm Carruthers and Jackson, former chief data officer of Network Rail, and co-author of The Chief Data Officer’s Playbook and Data-Driven Business Transformation: How to Disrupt, Innovate and Stay Ahead of the Competition.

Capital One appointed the first CDO in 2002, and only a few organizations followed suit over the following decade. But the appointment of CDOs has accelerated in the past few years.

Chief data officer salary

According to compensation analysis from Payscale, the median CDO salary is $168,679 per year, with total pay, including bonuses and profit share, ranging from $103,000 to $290,000 annually.

Chief data officer job description

CDOs oversee a range of data-related functions that may include data management, ensuring data quality, and creating data strategy. They may also be responsible for data analytics and business intelligence — the process of drawing valuable insights from data. Or some data management functions may fall to IT, and analytics may belong to a chief analytics officer, a title some say is interchangeable with chief data officer.

Although some CIOs and CTOs find CDOs encroach on their turf, Carruthers says the boundaries are distinct. CDOs are responsible for areas such as data quality, data governance, master data management, information strategy, data science, and business analytics, while CIOs and CTOs manage and implement information and computer technologies, and manage technical operations, respectively.

“The difference between the CDO and CIO in my mind is quite clear, and I often use the bucket and water analogy,” Carruthers says. “The chief information officer is responsible for the bucket, making sure it’s the right size, that there are no holes in it, it’s safe, and in the right place. The chief data officer is responsible for the fluid that goes in the bucket and comes out; that it goes to the right place, and that it’s the right quality and right fluid to start with. Neither the bucket nor the water work without each other.”

More than half of the respondents to CDO Agenda 2024: Navigating Data and Generative AI Frontiers, a joint survey by AWS and MIT, said their focus was delivering a “small set of analytics or AI projects” as a value creation approach. Almost half said they instituted data literacy training, and have organized data, analytics, or AI councils. Improving data management, including improving data infrastructure, is also a focus, but CDOs say they deliver these improvements in the context of analytics and AI use cases, not as separate initiatives.

Chief data officer responsibilities

According to Gartner, the CDO is responsible for a firm’s enterprise-wide data and information strategy, governance, control, policy development, and effective exploitation.

Initially, the CDO role focused on compliance and data governance, and 52% of CDOs say ensuring data security remains their most critical responsibility, according to data gathered by the IBM Institute for Business Value and Oxford Economics for IBM’s 2023 study, Turning data into value: How top Chief Data Officers deliver outsize results while spending less. But today’s CDO is also focused on using data to drive business outcomes.

According to IDC, chief data officer responsibilities include:

  • Governance: Advising on, monitoring, and governing enterprise data
  • Operations: Enabling data usability, availability, and efficiency
  • Innovation: Driving enterprise digital transformation, cost reduction, and revenue generation
  • Analytics: Supporting analytics and reporting on products, customers, operations, and markets

61% of CDOs in Deloitte’s Chief Data Officer survey 2023 said their top priority over the coming year was creating, updating, or implementing their data strategy, which coincides with the AWS/MIT study where just over 60% of respondents said they devote their attention to enabling new business initiatives based on data, analytics, and AI. Around 69% of respondents in Deloitte’s survey said they want to spend more time providing leadership for data activities, and less on assessing and designing data technology platforms.

However, many CDOs feel their responsibilities are poorly understood. The AWS/MIT survey found nearly 73% of CDOs feel their role is less understood than other C-level positions in their organizations. The survey also suggests the lack of a standard list of responsibilities may be a contributing factor.

The AWS/MIT study also identifies some of the most common CDO responsibilities, which include:

  • Improving data quality
  • Establishing clear and effective data governance
  • Having clear initiatives regarding analytics, AI ethics, cybersecurity, generative AI, and data monetization
  • Building and maintaining capabilities regarding advanced analytics, business intelligence, AI, data security, and privacy controls
  • Managing and improving the data infrastructure
  • Data monetization initiatives
  • Cybersecurity

Chief data officer jobs

An online sampling of posted chief data officer job descriptions across a range of industries shows key areas of responsibility such as: evangelizing and communicating a data vision as a critical part of growth strategy; creating strategic data access policies; leading the design of analytics infrastructure; developing and executing a central data strategy to drive revenue; overseeing data governance, data investment and partnerships; and strategizing with C-level colleagues. Attributes companies are looking for in a CDO include highly motivated, experienced innovators who have produced tangible results, as well as senior-level leaders who have overseen data and/or analytics departments for seven or more years.

Chief data officer resume

Landing a job as highly specialized as a CDO requires a strong résumé. For tips from experts on how to write the ideal résumé for chief data officer positions, see “CDO resumes: 4 tips for landing a chief data officer role.”

CDO vs. chief analytics officer

Chief data officers manage data while chief analytics officers lead data analytics. Even though chief data officer and chief analytics officer are two distinct roles, they should both reside in the same person, argues Guy Gomis, partner at the recruiting company BrainWorks.

“I’m finding the best in class are combining the two,” he says. “Most leaders in analytics want to own the data strategy and how the company treats data, and they want to own analytics.” It makes sense if you think about it: analytics is how data provides value, so that’s an essential function. At the same time, you need a good data strategy and good data management, or you won’t get quality data to analyze. Thus, Gomis says, “Best practice is having a chief data strategy and analytics officer who owns both data and analytics, and works closely with the CIO.”

To whom should the chief data officer report?

CDOs are still finding their place in many organizations. CIO’s State of the CIO Study 2024 found that 24% of CDOs report to the CEO, 12% to the CFO or top finance executive, and 14% to another line-of-business executive. According to the AWS/MIT survey, almost 20% of CDOs currently report to the CEO or chairman, over 19% to the CIO, CTO, or some other C-level executive that reports to the CEO, and just about 16% report to a non-C-level executive.

While organizations are rapidly adopting the chief data officer role, NewVantage Partners says there’s still a lot of confusion and disagreement on the mandate and importance of the position. Its Data and AI Leadership Executive Survey 2022 found that 52% of participants identified the chief data officer as the executive with primary responsibility for data strategy and results. The remaining 48% of respondents said other C-level executives had primary responsibility or claimed there was no single point of accountability.

PwC’s global strategy consultancy, Strategy&, believes the CDO role should sit at the C-suite level or one level below because, “appointing a senior-level CDO is essential for leadership teams seeking to maximize the potential of data as a strategic asset throughout the organization.”

Carruthers agrees. She says the chief data officer could report to various places in the organization, but she favors the CEO or COO, not the CIO.

“As the role evolves and matures, it’s reporting into other places in the business,” she says. “It’s moving toward more of a seat at the top table, which it should be. For me, the CIO and the CDO should work hand-in-hand as a partnership, and a partnership doesn’t work when one partner works for the other partner.”

What to look for in a chief data officer

According to the NewVantage Partners’ survey, 51% of executives at Fortune 1000 firms feel a successful chief data officer must be an external change agent with fresh perspectives. Meanwhile, 14% held the opposite view: they felt a successful chief data officer must be a company veteran who understands the culture and history of the organization. Respondents were also split on the chief data officer’s background: 10% believed the role should be filled by a line-of-business executive who’s been accountable for financial results, while 19% said the CDO must have a data scientist or technologist background.

Gomis says he’s seen chief data officers come from marketing backgrounds, and that some are MBAs who’ve never worked in data analytics before. “Most of them have failed, but the companies that hired them felt that the influencer skillset was more important than the data analytics skillset,” he says.

Good people skills certainly could be useful for getting out of the bind many new chief data officers find themselves in. “One of the biggest mistakes is not understanding what it’ll take to succeed, in terms of expectations,” Gomis adds. “If you look at a lot of the people who’ve had the title of chief data officer and chief analytics officer over the last three years, there’s a tremendous amount of turnover because company and candidate expectations weren’t aligned.”

The problem is often unrealistic expectations from an employer. “The biggest mistake companies make is to expect that because they’ve hired someone, the problem is solved,” says Justin Cerilli, who heads the data and analytics practice for consultancy Russell Reynolds Associates. “Actually, you’re just starting to solve the problem — the tough decisions are still to come. That’s when you start asking who your people are, what your processes are, and how to change culture. CEOs tell chief data officers to change everything to get the end results they want, but don’t want to change the way they do anything.”

Careers, IT Leadership, Staff Management
https://www.cio.com/article/230880/what-is-a-chief-data-officer.html
The trick to better answers from generative AI Thu, 29 Feb 2024 10:00:00 +0000

Generative AI offers great potential as an interface for enabling users to query your data in unique ways to receive answers honed for their needs. For example, as query assistants, generative AI tools can help customers better navigate an extensive product knowledge base using a simple question-and-answer format.

But before using generative AI to answer questions about your data, it’s important to first evaluate the questions being asked.

That’s the advice Lucky Gunasekara, CEO and co-founder of Miso.ai, has for teams developing generative AI tools today.

Miso.ai is the vendor partner for the Smart Answers project here at CIO.com and four of our sister sites. Smart Answers uses generative AI to answer questions about articles published on CIO.com and Foundry websites Computerworld, CSO, InfoWorld, and Network World. Miso.ai also built a similar Answers project for IDG’s consumer technology websites PCWorld, Macworld, and TechHive.

Interested in how Smart Answers surfaces its insights, I asked Gunasekara to discuss Miso.ai’s approach to understanding and answering users’ questions in more depth.

Large language models (LLMs) “are actually much more naive than we may think,” Gunasekara says. For example, if asked a question with a strong opinion, an LLM will likely go off and look to cherry-pick data that confirms the opinion, even if available data shows the opinion is wrong. So, if asked “Why did Project X fail?”, an LLM might scare up a list of reasons why the project was bad — even if it was a success. And that’s not something you want a public-facing app to do.

Evaluating questions is a step typically missed in so-called RAG (retrieval augmented generation) applications, Gunasekara notes. RAG apps point an LLM to a specific body of data and tell it to answer questions based only on that data. 

Such apps usually follow this (somewhat simplified) pattern for setup:

  1. Split the existing data into chunks, because all the data would be too large to fit into a single LLM query. 
  2. Generate what are known as embeddings for each chunk, to represent the semantic meaning of that chunk as a string of numbers, and store them. Update as needed as data changes.

And then per question:

  1. Generate embeddings for the user’s question.
  2. Find text chunks that are most similar in meaning to the question, using calculations based on the embeddings. 
  3. Feed the user’s question into an LLM and tell it to answer based solely on the most relevant chunks.
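
To make that pattern concrete, here is a minimal sketch of a generic RAG pipeline in Python. It is illustrative only, not Miso.ai’s implementation: the embed() and ask_llm() functions are placeholders for whichever embedding model and LLM you actually use.

```python
# Minimal RAG sketch: chunk, embed, retrieve by cosine similarity, then answer.
# embed() and ask_llm() are placeholders for your embedding model and LLM of choice.
from typing import Callable
import numpy as np

def chunk_text(text: str, size: int = 500) -> list[str]:
    """Split a document into fixed-size chunks (a real system would split on sentences)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_index(docs: list[str], embed: Callable[[str], np.ndarray]):
    """Embed every chunk once, up front, and keep the vectors alongside the text."""
    chunks = [c for d in docs for c in chunk_text(d)]
    vectors = np.stack([embed(c) for c in chunks])
    return chunks, vectors

def retrieve(question: str, chunks, vectors, embed, k: int = 3) -> list[str]:
    """Return the k chunks whose embeddings are most similar in meaning to the question."""
    q = embed(question)
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

def answer(question: str, chunks, vectors, embed, ask_llm: Callable[[str], str]) -> str:
    """Tell the LLM to answer based solely on the retrieved context."""
    context = "\n\n".join(retrieve(question, chunks, vectors, embed))
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```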

Here is where Gunasekara’s team takes a different approach, adding a step to check the question before searching for relevant information. “Instead of asking that question directly, we first ask if that assumption is correct,” explains Andy Hsieh, Miso CTO and co-founder.
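
One hypothetical way to add that kind of check, assuming the retrieval and LLM helpers from the sketch above (or any equivalents you already have), is to ask the model to surface the question’s implicit assumption and verify it against the retrieved context before answering. This illustrates the idea; it is not Miso.ai’s code.

```python
# Hypothetical premise check before answering. retrieve_fn(text) returns relevant
# chunks; ask_llm(prompt) returns the model's reply. Both are assumed helpers.
def answer_with_premise_check(question: str, retrieve_fn, ask_llm) -> str:
    # 1. Ask the model to surface the assumption baked into the question.
    premise = ask_llm(
        "State the assumption implicit in this question in one short sentence, "
        f"or reply 'none' if there isn't one:\n{question}"
    )
    if premise.strip().lower() != "none":
        # 2. Check that assumption against the retrieved context, not the open web.
        evidence = "\n\n".join(retrieve_fn(premise))
        verdict = ask_llm(
            "Based ONLY on the context below, is the statement supported? "
            f"Answer yes, no, or unclear.\n\nContext:\n{evidence}\n\nStatement: {premise}"
        )
        if not verdict.strip().lower().startswith("yes"):
            # 3. Answer a neutral reformulation rather than the loaded question.
            question = ask_llm(f"Rewrite this question so it makes no assumptions: {question}")
    context = "\n\n".join(retrieve_fn(question))
    return ask_llm(
        "Answer the question using ONLY the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```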

In addition to checking assumptions inherent in questions, there are other ways to enhance the basic RAG pipeline to help improve results. Gunasekara advises going beyond the basics especially when moving from the experiment phase toward a production-worthy solution.

“There’s a lot of emphasis on ‘Get a vector database, do a RAG setup, and everything will work out of the box,’” Gunasekara says. “It’s a great way to get a proof of concept. But if you need to make an enterprise-grade service that doesn’t create unintended consequences, it’s always context, context, context.”

That can mean using other signals besides semantic meaning of text, such as recency and popularity. Gunasekara points to another project Miso is working on with a cooking website, deconstructing the question: “What’s the best bake-ahead cake for a party?”

“You need to separate out what you really need signals on” for the query, he says. “Bake-ahead” cake means it doesn’t need to be served right away; “for a party” means it needs to serve more than a few people. Then there’s the issue of how an LLM can determine what recipes are “best.” That might mean using other website data, such as which recipes have the highest traffic, top reader rankings, or were awarded an editor’s pick — all of which is separate from finding and summarizing related text chunks.

“A lot of the sort of spooky magic of getting these things right is more in those context cues,” Gunasekara says.
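
One hypothetical way to blend such context cues with semantic similarity is a weighted scoring function. The signals, fields, and weights below are invented for illustration and would need tuning against real engagement data.

```python
# Illustrative re-ranking that mixes semantic similarity with recency, traffic,
# and editorial signals. The Recipe fields and the weights are assumptions.
from dataclasses import dataclass
from datetime import datetime
import numpy as np

@dataclass
class Recipe:
    title: str
    embedding: np.ndarray   # semantic vector for the recipe text
    monthly_views: int      # popularity signal
    published: datetime     # recency signal
    editors_pick: bool      # editorial signal

def rank(query_vec: np.ndarray, recipes: list[Recipe], now: datetime) -> list[Recipe]:
    def score(r: Recipe) -> float:
        sim = float(query_vec @ r.embedding /
                    (np.linalg.norm(query_vec) * np.linalg.norm(r.embedding) + 1e-9))
        popularity = np.log1p(r.monthly_views) / 15.0          # squash heavy-tailed traffic
        freshness = 1.0 / (1.0 + (now - r.published).days / 365.0)
        return 0.6 * sim + 0.2 * popularity + 0.1 * freshness + 0.1 * r.editors_pick
    return sorted(recipes, key=score, reverse=True)
```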

And while quality of LLM is another important factor, Miso doesn’t believe it’s necessary to use the most highly rated and pricey commercial LLMs. Instead, Miso is fine-tuning Llama 2-based models for some client projects, in part to keep costs down and because some clients don’t want their data going off to a third party. Miso is also doing so due to what Gunasekara calls “a huge ground force happening right now in open-source [LLMs].”

“Open source is really catching up,” Hsieh adds. “Open-source models are very, very close to surpassing GPT-4.”

Generative AI
https://www.cio.com/article/1310514/the-trick-to-better-answers-from-generative-ai.html
How to succeed at digital transformation in India Thu, 29 Feb 2024 05:32:41 +0000

Digital transformation refers to using technology to fundamentally change how your organization operates and delivers value to your customers and stakeholders.

However, it goes beyond simply acquiring new technologies. It also requires a rethink of your business strategy to embrace advances in cloud computing, analytics, AI, IoT and automation.

You may not have started your digital transformation at all and feel unsure where to start. Or, you may have begun migrating to the cloud but now need edge computing and IoT to streamline your operations, or you may want to use AI to supercharge your business analytics.

Either way, it can be tough and expensive – especially if you lack the relevant internal skills and resources. There certainly isn’t a one-size-fits-all solution.

The value of partnerships

This is why organizations opt to work with managed service providers (MSPs) that can manage change while sharing their expertise in these new technologies.

The MSP can enable your organization to embrace the new technology as quickly and efficiently as possible, with little or no business interruption, and empower your employees to adopt new, technology-enabled ways of working.

At NTT, as an MSP with expertise from the edge to the cloud, we’re always advancing our own ongoing digital transformation even as we help our clients innovate.

With the benefit of hindsight, what decisions might we have made differently in this regard 10 years ago? Put differently, if you’re a CEO who is now expanding your organization in India, how should you plan for the digital journey that lies ahead?

1. Design for the next decade

Accept that your organization will look completely different in 10 years’ time. Implement business and technology designs that give you flexibility in an ever-changing environment – and create a scalable, agile core data lake that can accommodate new types and volumes of data in future. 

Today you might deal only with your distributors, with no direct connection to the end consumer, but this can change over time. How would you then capture that data, perhaps to roll out personalization at scale?

For instance, the automotive industry is building ever more software into cars – which, in effect, become edge-computing devices. How do car makers collect data from their vehicles once they’re on the road, and how will they analyze that data to improve their products and services?

2. Always prioritize cybersecurity

Security and compliance must be at the core of your digital transformation – for both IT and operational technology.

Configure the best possible cybersecurity for your organization even before you start digitalizing your operations – and pay attention to legislative developments in data privacy.

India’s new Digital Personal Data Protection Bill is similar to the EU’s General Data Protection Regulation and other legislation around the world. What are the implications for your digital transformation? How will you keep your customers’ data safe?

3. Focus on skills development and retention

For now, labor in India is still relatively affordable, with enough skilled people in the market compared with many other developed economies where automation has become an efficient cost-cutter that reduces the need for certain categories of skilled employees.

Automation can also enhance CX – for example, by reducing the scope for human error – but the risk is that broader skill sets disappear along with the employees whose jobs are made redundant.

Meanwhile, many skilled employees in India are also shifting their focus to futuristic technologies such as generative AI or space exploration, forcing their current employers to digitalize and automate some functions to replace them. This trend will continue.

4. Create one view for all your technology

Many organizations have a degree of customization within their operations, with business units using a range of technologies from different vendors. Half of your applications might be on-premises while the rest are software as a service, involving multiple cloud providers. Your digital transformation may end up limited by the constraints of your technology providers.

Bringing these vendors and technologies together to create a single, easily accessible view of the business for the C-suite can be a challenge, but this is where MSPs excel as they tend to have close relationships with multiple vendors.

5. Be a champion of sustainability

Sustainability is another consideration that will only increase in importance in the years ahead. Consider how setting sustainability goals will benefit your organization and your community, and how it might attract certain skills.

When you make business decisions or buy technology, always factor in sustainability. Ensure your sustainability goals are aligned with your overall business and technology strategies.

Today’s planning for tomorrow’s success

Acknowledge that technology will keep changing everything – fast.

You have to stay on your toes in all aspects of your business, and don’t hesitate to work with third parties to access the expertise and innovation that will give you a competitive edge. It will save you time, money and effort along the way and make your digital transformation a digital success.

Digital Transformation
https://www.cio.com/article/1310638/how-to-succeed-at-digital-transformation-in-india.html
Navigating the future: the rise of SD-WAN in India Thu, 29 Feb 2024 05:27:25 +0000

In the realm of Wide Area Networks (WANs), traditional routers have long been the backbone of network infrastructure, albeit with their limitations. The conventional approach involves configuring and maintaining each router individually, which often lacks the flexibility required for the dynamic needs of modern businesses. However, a transformative technology known as Software-Defined Wide Area Network (SD-WAN) is making significant waves in India and globally. In this article, we explore insights shared in an interview to understand the impact of SD-WAN in India and how it addresses the unique challenges faced by the country’s networks.

Revolutionizing WAN Architecture

The key differentiator with SD-WAN lies in its centralized control and configuration, enabling a more dynamic and adaptable network infrastructure. The traditional method of managing routers at each branch is replaced by a central controller overseeing all routers. This revolutionary shift allows for centralized configuration, categorization of branches, and a comprehensive view of WAN status from a central location. The advantages are manifold, offering operational ease and enhanced security compared to conventional routers.

SD-WAN Adoption in India

The interviewee highlights that SD-WAN adoption in India is rapidly gaining momentum across various business segments. Most large and medium-sized organizations fall into one of three categories: those that have already embraced SD-WAN, those in the process of implementation, and those contemplating the transition. The overwhelming consensus is that SD-WAN presents a compelling solution, addressing both technical and operational challenges. The expert predicts a substantial majority of Indian enterprises adopting SD-WAN within the next three years, showcasing the technology’s robust growth in the market.

Addressing Network Challenges in India

India’s unique network challenges, including congestion, infrastructure projects, fiber cuts, and performance degradation due to repairs, necessitate innovative solutions. SD-WAN emerges as a game-changer in this context, offering intelligent traffic management and enhanced high availability. The technology allows for dynamic rerouting of traffic, prioritization of critical applications, and performance-based routing to optimize network efficiency. The centralized view provided by SD-WAN facilitates on-the-fly configuration changes, ensuring adaptability in the face of varying network conditions.
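
Conceptually, performance-based routing comes down to a controller comparing each path’s measured health against per-application requirements. The sketch below is a vendor-neutral illustration of that logic only; the application classes, thresholds, and path names are made up and do not reflect any specific SD-WAN product’s configuration.

```python
# Vendor-neutral illustration of performance-based path selection, the idea behind
# SD-WAN's dynamic rerouting. Thresholds and application classes are invented.
from dataclasses import dataclass

@dataclass
class PathStats:
    name: str          # e.g. "MPLS", "Broadband-1", "LTE"
    latency_ms: float
    loss_pct: float
    jitter_ms: float

# Per-application requirements a central controller might push to every branch.
APP_POLICY = {
    "voip":        {"max_latency_ms": 150, "max_loss_pct": 1.0, "priority": 1},
    "erp":         {"max_latency_ms": 250, "max_loss_pct": 2.0, "priority": 2},
    "bulk_backup": {"max_latency_ms": 800, "max_loss_pct": 5.0, "priority": 3},
}

def select_path(app: str, paths: list[PathStats]) -> PathStats:
    """Pick the best path that meets the app's requirements; fall back to lowest latency."""
    policy = APP_POLICY[app]
    eligible = [p for p in paths
                if p.latency_ms <= policy["max_latency_ms"]
                and p.loss_pct <= policy["max_loss_pct"]]
    candidates = eligible or paths      # degrade gracefully if nothing qualifies
    return min(candidates, key=lambda p: (p.latency_ms, p.loss_pct))

paths = [PathStats("MPLS", 40, 0.1, 2), PathStats("Broadband-1", 90, 1.5, 8)]
print(select_path("voip", paths).name)  # -> MPLS
```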

Flexibility, Scalability, and Implementation Challenges

The flexibility and scalability of SD-WAN are underscored as crucial benefits for organizations embarking on digital transformation journeys. However, the interviewee emphasizes the need for proper implementation and management skills to harness SD-WAN’s potential fully. While SD-WAN may represent a higher initial investment compared to traditional routers, the long-term return on investment (ROI) is significant. The ability to save costs on connectivity, optimize security measures, and streamline network operations contributes to the overall cost-effectiveness of SD-WAN.

Strategic Considerations for SD-WAN Implementation

For organizations in India contemplating SD-WAN adoption, meticulous planning is imperative. SD-WAN is not a one-size-fits-all solution, and aligning the technology with a comprehensive digital strategy is crucial. As part of this strategic planning, organizations may explore local breakout options, saving costs and enhancing performance at the branch level. Collaboration with a knowledgeable partner, such as NTT DATA, is highlighted as essential for successful SD-WAN implementation and ongoing management.

Beyond Cost Savings: User Experience and Cloud-first Strategy

While cost savings are evident, the benefits of SD-WAN extend beyond the financial realm. The improved user experience resulting from network excellence is a significant intangible advantage. SD-WAN facilitates a cloud-first strategy by providing the necessary infrastructure agility and compliance with cloud requirements. The technology becomes an integral part of the digital transformation narrative, ensuring that the network aligns seamlessly with evolving business needs.

Conclusion

SD-WAN has emerged as a transformative force in India’s networking landscape, offering a centralized, flexible, and scalable solution to address the country’s unique challenges. As organizations increasingly recognize the potential benefits of SD-WAN, its adoption is expected to surge in the coming years. The key lies not only in embracing cutting-edge technology but also in strategic planning, effective implementation, and ongoing management to unlock the full spectrum of advantages that SD-WAN brings to the table. In the dynamic world of digital transformation, SD-WAN stands out as a cornerstone technology shaping the future of networking in India and beyond.

SD-WAN
https://www.cio.com/article/1310635/navigating-the-future-the-rise-of-sd-wan-in-india.html
Why cloud evolution needs a cohesive approach to succeed Thu, 29 Feb 2024 05:21:03 +0000

Many organisations in India are migrating to the cloud, and there is no shortage of cloud providers. But if you want cloud to revolutionise your business, it won’t help to get stuck with a basic cloud configuration that works by default but doesn’t keep pace with your evolving goals.

This is what Mobicule Technologies, an independent software vendor (ISV) in fintech, realised as they expanded their client base in India’s financial services industry.

Loan management simplified

Large banks have tens of thousands of loan customers. Managing these accounts is an operational burden, as it involves constant follow-ups on monthly instalments, account maintenance and timely collections, especially when customers default on payments.

Mobicule has developed a comprehensive, cloud-based platform that automates the end-to-end management of various loan types, including consumer, vehicle, home and business loans. Their clients transfer the full management lifecycle of their loan accounts to this platform, which blends a range of digital functionality with a customer-focused call centre to streamline debt collection and resolution.

While other loan-management software vendors typically charge a fixed monthly fee per loan account, Mobicule only bills their clients once instalments have been recovered. This allows banks to minimise the risks associated with their loan accounts in a flexible, cost-effective way.

Cloud by default doesn’t cut it

Before Mobicule started working with NTT DATA, they had already sourced cloud services from a large hyperscaler and were doing development in the cloud.

However, they lacked a sense of ownership of their cloud environment, and they found themselves having to fit square pegs in round holes while demand was rising for their services. They needed a provider who could tailor a solution to their needs, with an emphasis on cost efficiency because they deliver their software-as-a-service (SaaS) offering in a hypercompetitive market.

In financial services, security and compliance are as important as reliability and responsiveness. Mobicule needed help with their security information and event management (SIEM) approach: combining security information management (collecting, analysing and reporting on log data generated from all their technology infrastructure) and security event management (monitoring, correlating and analysing security events generated by hardware and software, in real time).
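
To illustrate the “event management” half of that definition, the following toy sketch correlates failed-login events from multiple log sources within a sliding time window and raises an alert when a threshold is crossed. The event format, window, and threshold are invented for the example; a production SIEM does far more.

```python
# Toy illustration of SIEM-style event correlation: watch login failures across
# sources and alert when a user exceeds a threshold within a sliding window.
from collections import defaultdict
from datetime import datetime, timedelta

def correlate_failed_logins(events, window=timedelta(minutes=5), threshold=10):
    """events: iterable of dicts like
    {"time": datetime, "source": "vpn", "type": "login_failed", "user": "a"}."""
    recent = defaultdict(list)   # user -> timestamps of recent failures
    alerts = []
    for e in sorted(events, key=lambda e: e["time"]):
        if e["type"] != "login_failed":
            continue
        hits = recent[e["user"]] + [e["time"]]
        # keep only failures that fall inside the sliding window
        recent[e["user"]] = [t for t in hits if e["time"] - t <= window]
        if len(recent[e["user"]]) >= threshold:
            alerts.append({"user": e["user"], "at": e["time"],
                           "count": len(recent[e["user"]])})
    return alerts
```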

This level of reliability and security had to be scalable across multiple clients, each of which needed a guarantee that, despite being part of a cloud-native, multitenant environment, their data and infrastructure would remain private and protected.

“As a pioneer in a competitive market, we need to be nimble in order to maintain our early-mover advantage. To achieve this, we needed a cloud partner who could assume the role of a trusted adviser and deliver a cloud landscape that would enable us to create a robust, secure and cost-efficient cloud landscape for our SaaS offerings,” says Siddharth Agarwal, Founder and MD of Mobicule.

The road to true cloud transformation

NTT DATA went the extra mile to help Mobicule, starting with cloud discovery and analysis sprints to define clear objectives. Working with the Mobicule team, we selected our SimpliCloud public-cloud platform as the optimal execution venue for their application landscape.

With SimpliCloud, Mobicule can deploy infrastructure-as-a-service and platform-as-a-service solutions, containers and microservices, and connect to other hybrid or multicloud platforms as needed.

The combination of SimpliCloud (an on-demand enterprise public cloud) and SimplyVPC (an agile and secure hosted private cloud) allows us to offer a seamless hybrid and multicloud solution for organisations navigating the complexities of SaaS delivery.

Mobicule now consumes cloud capacity in an on-demand, pay-as-you-go model, with access to a range of cloud-native microservices, all managed around the clock by NTT DATA. In this way, they have reduced their cloud costs by 40% compared with their previous public-cloud setup.

A winning approach

Combined with our portfolio of managed services, which span everything from the application layer to people, tools and processes, this cloud solution is a compelling proposition not only for Mobicule and financial service providers but also for organisations in other industries.

“NTT DATA’s cloud platform has been instrumental in enabling us to be efficient and agile. Their approach to cloud transformation allows us to focus on our core ISV offerings rather than worry about our cloud landscape,” says Agarwal.

We want to help our clients grow because their success is our success. This sets us apart in the cloud space, and we hope to help many more innovative organisations like Mobicule advance their digital transformation.

Read more about NTT DATA’s Managed Cloud Solutions in India

Managed Cloud Services
https://www.cio.com/article/1310632/why-cloud-evolution-needs-a-cohesive-approach-to-succeed.html
8 network trends shaping India’s digital landscape Thu, 29 Feb 2024 05:03:24 +0000

India, like the rest of the world, is witnessing profound digital transformation in business – and networks play a key role in enabling this transformation. So, it is in your organization’s best interests to evolve your network to make it smarter and faster.

As we head into 2024, let’s explore eight network trends that are shaping the digital space.

1. Intent-driven networking

You should align your applications closely with your business objectives to ensure they deliver what you need. But the performance of your applications depends hugely on the strength of your network: a better network means faster and more valuable output from your applications.

This is why more adaptive networks are in demand, and wide-area networks (WANs) are becoming software-defined so that they can be controlled from a single location and use automation to self-adjust as needed.

Not all traffic is equal, after all: you want to be able to specify which kinds of traffic, which applications and which users to prioritise. By setting performance parameters and rules, you can guarantee a certain level of network speed and capacity for particular applications.

2. Edge computing

In industries like manufacturing, the drive towards Industry 4.0 – and talk of 5.0, where people work alongside robots and smart machines – is creating demand for smarter factories, shop floors and devices.

Edge computing, which brings computing closer to the user – be it a computer, a human or an IoT device – is a key solution in this context.

But to gain the full benefits of edge computing, you need an adaptive network with low latency and high performance. Private 5G and similar technologies complement this shift.

3. Software-defined networking

Network intelligence takes centre stage with software-defined networking giving administrators centralised control through a single interface. This makes it easier to adapt the network to changing business requirements and allows for more dynamic and flexible configurations, as well as the automation of network tasks and workflows.

It also facilitates network virtualisation, allowing you to create multiple virtual networks on the same physical infrastructure, and offers improved visibility into network performance and traffic patterns.

The expectation is that a significant portion of the world’s networks will be software-defined in the coming years.

4. AI and machine learning

As organisations seek to reduce manual intervention and improve efficiency amid an escalating volume of network data, there is a rising focus on automation – which requires a certain amount of intelligence.

AI and machine learning have become imperative for swift decision-making, and present a new avenue of data analysis for organisations that have reached a certain level of network-infrastructure maturity after years of digitalization. It’s a natural next step along the digital transformation journey, together with finding uses for generative AI.

The degree to which an industry will adopt AI depends on the use cases developed for that industry and the maturity of its underlying infrastructure.

5. Cybersecurity

As networks become more software-defined with centralised control, having robust cybersecurity measures in place becomes critical – especially when you add edge computing, hybrid working and IoT devices into the mix.

Software-defined networks simplify security implementations, making it easier to protect data at all levels with less reliance on human intervention.

6. Network automation

Automation in network management is about increasing speed and reducing errors in response to the need for quick, proactive responses to network failures.

Manually monitoring all aspects of a network has become impractical, making automation a key component in remediation and proactive responses.
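
At its simplest, that kind of automation is a monitor-detect-remediate loop. The sketch below illustrates the idea only; check_interface() and restart_interface() are hypothetical hooks standing in for your telemetry and configuration tooling, not real library calls.

```python
# Toy monitor-detect-remediate loop behind network automation. The interface
# names, polling interval, and helper functions are assumptions for illustration.
import time

WATCHED_INTERFACES = ["core-sw1:eth1", "core-sw1:eth2", "edge-rtr1:wan0"]

def check_interface(name: str) -> bool:
    """Placeholder: poll SNMP or streaming telemetry; return True if healthy."""
    return True  # stubbed for the sketch

def restart_interface(name: str) -> None:
    """Placeholder: trigger a remediation playbook via your automation tooling."""
    print(f"remediating {name}")

def remediation_loop(poll_seconds: int = 30) -> None:
    """Runs until stopped: count consecutive failures, remediate after three."""
    failures: dict[str, int] = {}
    while True:
        for iface in WATCHED_INTERFACES:
            if check_interface(iface):
                failures[iface] = 0
                continue
            failures[iface] = failures.get(iface, 0) + 1
            if failures[iface] >= 3:      # avoid reacting to a single bad poll
                restart_interface(iface)
                failures[iface] = 0
        time.sleep(poll_seconds)
```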

7. Open networking technologies

Open networking technologies have gained prominence because organisations need interoperability among diverse network components for better adoption and expanded use cases.

This greatly lessens the need for a range of application programming interface integrations along with isolated network components.

8. Network visibility, the cloud and security: dealing with complexity

Visibility refers to constantly monitoring parameters such as traffic performance and security across your network. It’s a key requirement, but getting it right can be complicated.   

And when you’re also adopting cloud-native applications, you introduce new traffic patterns that necessitate adjustments to your network configuration. Add to that the convergence of the networking and security domains, with security now an integral part of network design, and the complexity just keeps increasing.

This emphasises the need for organisations to collaborate with managed service providers (MSPs) with expertise that spans networks, cloud, security and more.

Step into 2024 with expert help

As technology adoption accelerates, it’s clear that the greatest challenge lies in managing and deriving real benefits from these innovations despite a scarcity of skilled resources in the market.

It’s no surprise, then, that India’s digital landscape is poised for a significant emphasis on managed services. In 2024, leave it to the experts and focus on making your business a success.

Managed Service Providers, Network Administrator
https://www.cio.com/article/1310615/8-network-trends-shaping-indias-digital-landscape.html
How CIOs are rethinking their cloud strategies Thu, 29 Feb 2024 05:00:00 +0000

After years of marching to the beat of cloud migration, CIOs are becoming increasingly cautious about the cloud-first mantra, recognizing the need to move some workloads off the public cloud and onto platforms where they will run more productively, more efficiently, and at lower cost.

“‘Cloud exit’ became a big theme in 2023, and there’s a good chance it turns into a real trend for 2024. The savings are too large for many companies to ignore,” says David Heinemeier Hansson, the Danish developer of Ruby on Rails and co-owner and CTO of 37signals, which completed a six-month total exit from the cloud last June. “Enough people are realizing that the cloud marketing doesn’t necessarily match the reality.”

And it is precisely this accumulated cloud experience that is pushing many CIOs to rethink their platform-centric approach in favor of a workload-specific mindset. The resulting infrastructure, a combination of on-premises and hybrid-cloud platforms, aims to reduce cost overruns, contain cloud chaos, and ensure adequate funding for generative AI projects.

David Linthicum, former Chief Cloud Strategy Officer at Deloitte, observes that many enterprise CIOs who got caught up in the rush to the cloud are now remedying their “misadventures,” seeking the ideal platform for each application, whether that is a private cloud, an industry cloud, their own data centers, a managed service provider, the edge, or a multicloud architecture.

“The most common motivation for the about-face I have seen is cost,” writes Linthicum, who suggests that “most enterprise workloads aren’t exactly modern” and are therefore not well suited to the cloud.

C R Srinivasan, EVP of cloud and cybersecurity services and Chief Digital Officer at Tata Communications, sees many companies “becoming more nuanced” in their cloud use and strategies as they try to balance performance, cost, and security.

“As companies look to make much greater use of AI, they are and will be re-examining workloads and placing them on the right infrastructure, whether that is the public cloud or the edge, or bringing them back to their own private cloud or in-house servers,” Srinivasan points out. “These decisions are largely driven by the need to maximize performance and business benefit without losing sight of cost.”

The public cloud pivot

This mindset is taking hold as CIOs look to apply the lessons learned from their initial push to the cloud.

“Any organization of this size dealing with diverse technologies does its business a disservice if the end goal is a public-cloud-only strategy,” explains Brian Shields, SVP and CTO of the Boston Red Sox and Fenway Sports Management.

“Like many complex businesses, we are an evolving hybrid model that maintains compute and storage capacity in the public cloud, on-premises, with our colocation partner, and with industry cloud partners,” Shields adds.

This refinement of cloud thinking comes as high AI costs loom on the horizon. For CIOs who need real-time access to data, for example for manufacturing or industrial controls, putting data at the edge is a better solution than the public cloud.

“The edge provides real-time compute processing, for example computer vision and real-time calculation of algorithms for decision-making,” says Gavin Laybourne, CIO of Maersk. “I send data to the cloud where I can afford a 5-10 millisecond delay in processing.”

At the CDO Summit in Boston in December, Mojgan Lefebvre, Chief Technology and Operations Officer at Travelers, observed that the cloud offers infrastructure that is scalable and adaptable to different needs, along with access to more advanced AI tools such as large language models.

But “it’s important that this reliance on cloud technology does not require a complete migration of all workloads to a cloud-based environment,” Lefebvre said.

Payroll giant ADP, for example, uses AWS for most of its net-new applications, as well as Microsoft Azure and Cisco Cloud, but “we still have a lot of workload running in our own data centers,” notes Vipul Nagrath, head of product development at ADP and the company’s former CIO.

Some CIOs choose to host workloads in private clouds, such as HPE’s GreenLake or Dell’s APEX platforms, for greater security and lower costs than the public cloud.

Richard Semple, CIO of Williamson County, Texas, where Samsung’s new chip manufacturing plant is under development, considered all the public clouds for the government’s growing digital infrastructure. In the end, he opted for the security of keeping data on-site, but on a private cloud designed by Dell.

The secret to reassessment: one workload at a time

For CIOs already committed to the cloud, thoroughly examining every aspect of an application before adding another one to their estate is becoming the norm.

“We don’t go into the cloud unless we know there are savings, and we keep measuring them so we can be constantly certain that is still the case,” stresses Jamie Holcombe, CIO of the US Patent & Trademark Office.

“I know from experience that ‘chatty’ applications are often the most expensive in the cloud, so we either refactor them or keep them on-premises.”

Not all government CIOs are moving workloads out of the cloud or feel the need to go back. “I’m 100% in the cloud and couldn’t do it any other way,” says Gerald Caron, CIO of the International Trade Administration.

And while repatriation is a real trend, it is not yet universal.

“This shows that CIOs are really thinking about where they want to place their application portfolios,” says Steve Randich, CIO of the Financial Industry Regulatory Authority (FINRA), a private organization. “The cloud makes sense in some cases, but not in all.”

For FINRA, the cloud remains central.

“In our case, it would cost twice as much to build internally the infrastructure we use every day on AWS,” he adds. “We would also lose the ability to scale the infrastructure up and down quickly and flexibly as transaction volumes expand and contract. Many companies may have highly predictable, stable volumes. Not FINRA.”

Whether a given workload is a good fit for the cloud is a question of context. Wiser and more experienced, today’s CIOs intend to determine that more accurately to ensure, case by case, that applications are hosted in the best possible place.

Cloud Computing, Data Center, Edge Computing]]>
https://www.cio.com/article/1310348/ecco-come-i-cio-stanno-ripensando-alle-strategie-sul-cloud.html 1310348
5 ways CIOs can bring AI into the light Wed, 28 Feb 2024 21:00:00 +0000

The rapid adoption and democratization of generative AI has been compared to the light bulb, which did the same thing for electricity roughly 150 years ago. Just as the light bulb, invented in 1879, decades after electricity itself (1831), brought practical use cases to the public and to businesses, generative AI is about to do the same for AI.

When a technology moves from the lab into everyday life, mainstream adoption usually rides on early use cases that become ever more powerful and proven. That kind of rapid adoption is accompanied by excitement about the art of the possible. This is part of why AI currently sits at the peak of inflated expectations in Gartner’s hype cycle.

In fact, ChatGPT gained more than 100 million monthly active users in just two months last year, and its position in the technology adoption lifecycle is running ahead of its position in the hype cycle. We have reached mainstream adoption (nearly half of the general population now uses generative AI), yet we are still at the peak of inflated expectations. In other words, on closer reflection, we may still be in generative AI’s gaslight moment, with the light bulb moment yet to come. And that is not a bad thing.

In the world of generative AI, we are discovering how computers can get things wrong in astonishing ways. As we experiment with generative AI applied to both public and private data, we are learning in real time what works well and what does not.

Here are five recommendations to help CIOs navigate the generative AI hype cycle and prepare for a rapid transition from the trough of disillusionment to the slope of enlightenment.

Be realistic with customers, employees, and stakeholders

While evangelizing the transformative nature of generative AI and related solutions, be sure to point out the downsides as well. Consulting firms and technology vendors often tout AI’s transformative power while paying far less attention to its shortcomings. To be fair, though, many companies are working to address these problems and offer a variety of platforms, solutions, and toolkits.

Being realistic means understanding the pros and cons and sharing this information with customers, employees, and C-suite colleagues. They will appreciate your candor. Create an authoritative list of harms and shortcomings, explained clearly enough for everyone to understand. As AI advisors have pointed out, the downsides are numerous: the black-box problem, AI’s vulnerability to false human assertions, hallucinations, and more.

Establish a corporate usage policy

As discussed in an earlier article, a corporate usage policy and related training help educate employees about the technology’s risks and pitfalls, and provide rules and recommendations for getting the most out of it. When drafting the policy, be sure to involve all relevant stakeholders, consider how AI is being used in the organization today and how it might be used in the future, and share the policy widely across the organization. The policy should be a living document, updated at an appropriate cadence as needed. Having it in place helps protect against a long list of risks relating to contracts, cybersecurity, data privacy, deceptive trade practices, discrimination, disinformation, ethics, intellectual property, validation, and more.

Assess the business value of each use case

With pure text output, we tend to believe answers from an AI when they are written in good grammar. Psychologically, we are inclined to assume a powerful intelligence sits behind them, when in fact the AI has no understanding of what is true and what is false.

Generative AI has some excellent use cases, but each one needs to be examined case by case. For example, AI is generally poor at writing technology predictions. The output often tells us what we already know, and it may also be plagiarized. Even using rewriting or rephrasing tools can make matters worse, with teams spending more time on those tools than they would have spent writing the predictions themselves. It is best to pick your battles and use generative AI only where there is a clear advantage in doing so.

Maintain rigorous testing standards

Because generative AI is likely to be used by many employees across the organization, it is important to educate them about its pros and cons, using the corporate usage policy as a starting point. With this much AI being adopted, we are all effectively testers, learning as we go.

Within the organization, whether in IT or in the business units, emphasize testing and experimentation before anything goes into production, and set aside significant time for it. Launching an internal community of practice where employees can share experiences and lessons learned also helps raise overall awareness and spread best practices across the organization.

Plan for when the technology gets it wrong

The long-running UK Post Office scandal showed that even systems without AI can make serious, life-altering mistakes. When those systems are wrongly assumed to be correct, hundreds of workers can be wrongly targeted. In the UK Post Office case, more than 700 subpostmasters were falsely accused of fraud over a 15-year period, their reputations destroyed, with consequences that extended to divorce and suicide.

That is why it is so important to plan for what happens when AI gets things wrong. A corporate usage policy sets the guardrails, but when things do go wrong, how will IT’s governance processes monitor and respond to the situation? Is there a plan? How will the governance process distinguish right answers and decisions from wrong ones? What would the business impact of a mistake be, and would it be easy or hard to repair?

Generative AI’s light bulb moment is not far off, but not before we pass through the trough of disillusionment, climb the slope of enlightenment, and finally reach the plateau of productivity. The gaslight, the experimentation, and the learning along the way are all part of the process.

Careers]]>
https://www.cio.com/article/1310265/cio%e3%81%8cai%e3%81%ab%e5%85%89%e3%82%92%e4%b8%8e%e3%81%88%e3%82%8b5%e3%81%a4%e3%81%ae%e6%96%b9%e6%b3%95.html 1310265
Bio digital twins and the future of health innovation Wed, 28 Feb 2024 18:59:28 +0000

Healthcare technology innovation is poised to revolutionize the medical landscape. At the forefront of this transformation lies biological digital twin (bio digital twin) technology. This technology will help to improve personal, social, and economic outcomes, and help to build a healthier, more prosperous and sustainable future for all.

The promise of bio digital twin technology

The groundbreaking work underway in the development of bio digital twin technology holds immense promise in both saving and enriching lives.

Essentially, the technology involves the replication of the human body in software models. Using these models, healthcare providers can test drugs and therapies with unprecedented speed and accuracy, reducing risks for both patients and physicians.

This technology incorporates the analysis of biological, physiological, genomic and health records data, and it represents a whole new era of digital transformation in the healthcare industry.

By harnessing AI and advanced analytics, healthcare providers will be able to better predict and improve a patient’s health performance over their lifetime, paving the way for precision medicine.

Precision cardiology: A focus on the heart

NTT’s Medical and Health Informatics (MEI) Lab is dedicated to pushing the boundaries of medical science through their bio digital twin initiative. Their initial focus is on precision cardiology, with the development of a Cardiovascular Bio Digital Twin (CV BioDT) at the forefront.

This technology aims to model acute and chronic cardiac conditions, enabling healthcare providers to tailor treatment plans with unprecedented accuracy.

The CV BioDT holds the potential to transform the ways in which healthcare providers diagnose and treat heart conditions, from acute myocardial infarction to chronic heart failure.

By representing the complex interactions among organ systems, it addresses unmet medical needs and empowers healthcare providers to make informed decisions in real-time. Ultimately, this technology promises to improve patient outcomes and reduce the burden on our healthcare systems.

Data-driven insights for better healthcare

Central to NTT’s approach is the integration of vast amounts of data. By gathering data on patient responses to various treatments and leveraging AI and machine learning, NTT’s bio digital twin technology allows for the creation of personalized therapies tailored to individual patients.

This data-driven approach not only improves medical care but also enables early detection and prevention of diseases, ultimately enhancing the patient’s overall health and well-being.

Shaping the future of healthcare

Bio digital twin technology holds immense promise in revolutionizing healthcare as we know it.

Better outcomes for patients worldwide are on the horizon as industry partners share knowledge and work together to develop this revolutionary healthcare technology. From predictive analytics to personalized medicine, bio digital twin technology carries the promise of a healthier, more prosperous, and more sustainable future for all, for generations to come. Through innovation and collaboration, the future of medicine looks brighter than ever.

Learn more about bio digital twins.

Innovation]]>
https://www.cio.com/article/1310439/bio-digital-twins-and-the-future-of-health-innovation.html 1310439
The role of data centers in building a sustainable future Wed, 28 Feb 2024 18:51:01 +0000

Data is the fabric of our connected world.

The rise of streaming and enterprise cloud adoption have driven an explosive surge in computing demand, giving rise to data centers around the world.

Now, a new wave of demand driven by data-hungry generative AI applications is arriving, and it’s bringing with it increasing environmental pressures.

From 2020 to 2025, data usage is expected to quadruple. As data centers grow, so does the demand for power. To ensure a sustainable future for all, it’s now vital to increase energy efficiency and find creative ways to cut down on power consumption while still supporting the world’s expanding need for data.  

Data centers are changing to support sustainability

NTT is one of the world’s largest data center companies, with more than 100 data centers in over 20 countries around the world, and we recognize our responsibility to help create a sustainable future. We have developed systems and solutions to support energy efficiency and environmental sustainability. 

  • In Japan, the Mitaka data center’s building structure was designed to curve outwardly so hot air can escape more easily while cold air enters from below the building. Inside the data vaults, a secondary, highly efficient cooling system is deployed in addition to traditional conservation techniques to create the most highly efficient data center possible.  
  • In Santa Clara, California, one of the big challenges is water use. California has been in drought for many years, so NTT is very conscious of how resources are used. A highly efficient cooling system moves cold water down across fan walls adjacent to the data center; air blown over the cold water becomes chilled air that flows into the data vaults. Once the water has warmed up after running across the coils, it is pumped up to the roof, where outside air re-cools it before it is pushed back down into the system. The cycle runs continuously, so no fresh water is needed.   
  • In Germany, at the Berlin data centers, a system has been developed to handle the heat that’s created within each center. Rather than simply exhaust it into the environment, it is captured and reused to heat the offices adjacent to the data center. This delivers free heat to neighbors and enables part of the solution for carbon-free energy use. 

NTT data centers in Germany, Japan, California, and around the world each deploy unique solutions today, but they are all small pieces of a much bigger effort. New solutions and technologies are continuously under development to address environmental concerns and support global sustainability goals.

Other technological developments are underway, including the transition from electronics to photonics for networking, computing, and other devices, with low-power, high-capacity, and ultra-low-latency characteristics, and even placing data centers in space to reduce climate impact. None of this is being done alone: NTT believes that collaborating with partners around the world on innovative solutions like these will make things better today for a more sustainable tomorrow. 

Learn more about the future of data centers.

Green IT]]>
https://www.cio.com/article/1310372/the-role-of-data-centers-in-building-a-sustainable-future.html 1310372
How technology is reshaping the college student experience Wed, 28 Feb 2024 16:05:43 +0000

Question: Is higher education worth it? 

Only 59% of students who say they’re likely to re-enroll at their four-year university seem to think so, according to a study recently conducted by RNL, and that number is only slightly higher for community college students. The rest report being dissatisfied with their overall experience at the institution they attend. Interestingly, the group with the highest satisfaction rate is online learners: over 75% are thriving and believe the tuition they pay is a worthwhile investment, suggesting a clear link between student satisfaction in higher learning and technology.

Retention, graduation rates, alumni giving, and so much more hinge on student satisfaction. In a world where every aspect of life is digitally connected, and as tuition prices and the student debt bubble soar, higher education institutions are expected to be cutting-edge hubs of technological advancement. A McKinsey report found that students and faculty are eager to continue using new learning technologies, but institutions could do more to support the shift. 

Artificial Intelligence 

AI is still in its infancy in the world of higher education, but the future looks bright. More than 70% of higher ed admins have a favorable view of AI despite low adoption to date, and one-third of campus IT leaders are considering experimenting with AI, machine learning (a type of AI that enables machines to automatically learn from data to identify patterns and make predictions with minimal human involvement) and adaptive learning (data-driven instruction that adjusts and tailors learning experiences to meet the individual needs of each student). 

From recruitment to alumni, the possibilities are endless for AI to power seamless and effortlessly integrated experiences. Here are just a few ways AI-powered tools can be used to improve students’ experiences:

  1. Optimize course loads
  2. Provide personalized coaching and tutoring
  3. Design flexible, customizable learning paths
  4. Integrate content across courses and extracurricular activities
  5. Evaluate institutional processes and courses
  6. Assess student learning and much more

Technology in action: Alabama State University is pioneering the use of generative AI in the classroom to improve student outcomes and support its busy faculty. Students are greeted with a personalized generative AI teaching assistant in their Learning Management System. The integrated bot is available 24/7 and each version is customized with faculty course materials and a unique personality – a “digital twin” of the professor. In addition, the technology assists faculty in building lesson plans, curating ever-changing content, and managing assessments. 

See how Avaya helped Clemson University create an experiential, AI-powered learning platform.

Accessible and Inclusive Tools and Processes

From student affairs to online learning to financial aid, the entirety of a student’s experience should be accessible and inclusive. There’s no one way to approach this, nor a playbook for “getting it right.” Nonetheless, colleges and universities must commit to ongoing and continuous evaluation and improvement, which requires close collaboration and alignment across all key stakeholders. 

Top priorities for accessibility and inclusion in higher education include:

  • Expanded mental health support for students as depression, anxiety, and loneliness grow (83% of students say mental health negatively impacts academic performance).
  • Inclusion for those who identify as non-traditional students (i.e., adult learners, hybrid and remote learners).
  • Tools that help educators take a more proactive approach to accessibility and inclusion.
  • Addressing the needs of historically marginalized groups of students.
  • Better meeting the needs of students with disabilities. 

Technology in action: The State University of New York (SUNY), the Cal State Los Angeles Center for Effective Teaching and Learning (Cal State LA CETL), and the California Community Colleges are currently collaborating with more than 60 institutions to create a framework that can be used to infuse diversity, equity, and inclusion (DEI) practices into any higher ed online course. At Cal State Los Angeles, the course is called Annotations for Diversity, Equity, and Inclusion in Online Course Design.

See how Avaya helped Delgado Community College create an inclusive learning environment that increased student and faculty collaboration.

Unified Data Models for Learning Analytics 

Higher ed institutions sit on mountains of data, but these mountains are molehills without the help of unified data models. Unified data models bring together all the disparate data an institution has, allowing the people who handle it to carry out more robust analyses (in other words, make smarter, student-centered decisions with a 30,000 ft. data view). For example, data about students’ course engagement behavior can be combined with data about their extracurricular activities to glean insights about school/life balance. 
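As a small, hypothetical illustration of what a unified view can enable, the Python sketch below joins made-up course-engagement and extracurricular extracts on a student ID using pandas; the field names and the balance ratio are invented for the example, not drawn from any institution’s actual model.

  import pandas as pd

  # Hypothetical extracts from two separate campus systems, keyed by student ID.
  engagement = pd.DataFrame({
      "student_id": [101, 102, 103],
      "weekly_lms_hours": [12.5, 3.0, 8.0],   # time spent in the learning platform
  })
  activities = pd.DataFrame({
      "student_id": [101, 102, 103],
      "weekly_club_hours": [2.0, 10.0, 6.0],  # extracurricular involvement
  })

  # Once the data sits in one unified frame, combined questions become possible,
  # such as a rough school/life balance indicator per student.
  unified = engagement.merge(activities, on="student_id")
  unified["balance_ratio"] = unified["weekly_club_hours"] / unified["weekly_lms_hours"]
  print(unified)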

Here are some ways to wield the power of your institution’s data:

  • Strategic planning 
  • Student advising 
  • Student retention (gleaning insights from attendance, grades, and enrollment data)
  • Providing faculty and staff with recommendations, such as instructional approaches and resources for students
  • Using your data as a recruitment tool, showing prospective students what they’ll get for their investment with your institution (in the U.S., several states have actually passed or proposed laws requiring that certain information be made available to students for this purpose).

Knowing the importance of technology, would students choose to re-enroll at your institution?

Get more insights, examples, and technology practices for optimizing the student experience with this comprehensive report from EDUCAUSE, sponsored by Avaya.

Artificial Intelligence, Digital Transformation, Machine Learning]]>
https://www.cio.com/article/1310399/how-technology-is-reshaping-the-college-student-experience.html 1310399
Atos deal to sell its legacy service business falls through Wed, 28 Feb 2024 14:36:12 +0000

French IT services company Atos has put an end to its attempts to sell its ailing legacy managed infrastructure services business after failing to reach an agreement with a prospective buyer and will now have to glue the two halves of its business back together.

Exclusive talks with EP Equity Investment over the sale of the Tech Foundations business ended after the two parties failed to agree on deal terms and pricing. “We could not reach a mutually satisfactory agreement,” Atos group CEO Paul Saleh said in a conference call with press and analysts Wednesday. Neither party will pay a cancellation fee, and their only obligations will be to keep details of their negotiations secret.

Atos revealed the plan to split itself in two in 2022, after its larger rival, IBM, spun out its own managed infrastructure services business to form Kyndryl in November 2021.

Saleh hasn’t ruled out looking for another buyer, though, saying, “We will continue to consider strategic options for Atos for all of our assets in a way that best serves the interest of our customers, employees, and shareholders.”

Meanwhile, Atos will continue to operate both halves of the company, Tech Foundations and Eviden, as separate entities with a coordinated go-to-market strategy, he said.

Eviden includes the company’s transformation acceleration, smart platforms, cloud, digital security, advanced computing, and net zero transformation activities. The legacy Tech Foundations business manages hybrid cloud and infrastructure, digital workplace, digital business platforms, technology advisory and customized services.

Those responsible at Atos did not want to reveal why the negotiations ultimately failed, although financial news agency Bloomberg reported that the French state had raised concerns about selling a company that handles defense contracts to a foreign investor.

Tech debt

Atos is burdened with debts totaling almost €4.7 billion. The group announced that it wanted to talk to the banks about refinancing and debt restructuring. Most recently, the Atos management also did not want to rule out taking legal protective measures should the financial emergency situation worsen.

The sale of the Big Data & Security (BDS) division to Airbus, which has been under negotiation for several months, could provide relief. A price of between 1.5 and 1.8 billion euros is apparently being discussed. It is not known how far the negotiations have progressed.

However, the pressure on Saleh, who only took office in mid-January 2024 and replaced the unhappy Yves Bernaert, is likely to increase further. Behind the scenes, there has been a lot of trouble recently. On his departure, Bernaert spoke openly about differences of opinion between him and the Executive Board about “the way in which the strategy should be adapted and implemented”. Saleh’s predecessor had only taken up the post at the beginning of October 2023 and lasted just three months.

Saleh, who has been promoted from CFO to CEO, may not have much time left to steer Atos into calmer waters. Along with the admission of the failed sales talks, he announced preliminary financial results for 2023 during the conference call. Group revenue for 2023 totaled €10.7 billion (about $11.6 billion), up 0.4% year on year and in line with the company’s previous forecasts. Tech Foundations accounted for €5.6 billion of that, down 1.7% year on year, a situation he characterized as a “managed decrease.” Revenue from the Eviden half of the business rose 2.9% to €5.1 billion, growing a little faster than in the previous year.

Atos is delaying announcing its full results until March 20, as its auditors need additional time to review an independent business review report and complete their audit of non-cash goodwill impairment charges.

Saleh said he will provide more details of the company’s plans to coordinate the activities of Eviden and Tech Foundations at that time.

IT Consulting Services, Managed Service Providers, Technology Industry]]>
https://www.cio.com/article/1310376/atos-deal-to-sell-its-legacy-service-business-falls-through.html 1310376
For IT leaders, operationalized gen AI is still a moving target Wed, 28 Feb 2024 10:00:00 +0000

The rate of companies that have either already deployed generative AI or are actively exploring it is accelerating to the point where, combined, there are very few holdouts. 

The use of gen AI in the enterprise was nearly nonexistent in November 2022, when the only tools commonly available were AI image generators or early text generators. But by May 2023, according to an IDC survey, 65% of companies were using gen AI, and in September, that number rose to 71%, with another 22% planning to implement it in the next 12 months.

Even in its infancy, gen AI has become an accepted course of action in the enterprise, and the most common use cases include automation of IT processes, security and threat detection, supply chain intelligence, and automating customer service and network processes, according to a report released by IBM in January. Plus, when you add in cloud-based gen AI tools like ChatGPT, the percentage of companies using gen AI in one form or another becomes nearly universal.

And this doesn’t include the gen AI that’s now being embedded into platforms like Office 365, Google Docs, and Salesforce.

However, getting into the more difficult types of implementations — the fine-tuned models, vector databases to provide context and up-to-date information to the AI systems, and APIs to integrate gen AI into workflows — is where problems can crop up. Building enterprise-grade gen AI platforms is like shooting at a moving target: AI is progressing at a much faster rate than enterprises can adapt.

“It makes it challenging for organizations to operationalize generative AI,” says Anand Rao, AI professor at Carnegie Mellon University. “There are different tools, models, and vector databases evolving, and new papers coming out, which makes it very challenging for a company. They need stability. Tell me what to do for the next three months; don’t change everything every two weeks.”

Due to the complexity of this challenge, plus the cost involved and the expertise required, only 10% of organizations were actually able to launch gen AI models into production last year, according to findings released by Intel’s cnvrg.io in December.

But that doesn’t mean enterprises should just wait for things to settle down. To help regain some initiative, there are best practices that can be applied now to start building gen AI platforms — practices that will allow them to adapt quickly as the technology changes, including building robust and modern data and API infrastructures, creating an AI abstraction layer between their enterprise applications and the AI models they use, and setting up security and cost policies, usage guardrails, and ethics frameworks to guide how they deploy gen AI.

Data and API infrastructure

“Data still matters,” says Bradley Shimmin, chief analyst for AI platforms, analytics, and data management at London-based independent analyst and consultancy Omdia. Yet according to the IBM survey, data complexity was the second biggest barrier to adoption after lack of expertise, while the cnvrg.io survey said infrastructure was the single biggest challenge for companies looking to productionize large language models (LLMs).

Another setback is that enterprises can’t keep up with business demands because of inadequate data management capabilities. The overriding issue, though, is that most organizations don’t have a plan, says Nayur Khan, a partner at McKinsey & Company. “They try to do something and see what sticks.” But with gen AI models being delivered as a service, in the form of, say, OpenAI APIs, there are use cases where companies can skip right ahead to deploying AI as a service.

“Now it becomes a service I can call and I don’t have to worry about training,” says Khan. “That’s fine, but language models are great for language. They’re not great for knowledge.” Knowledge sits inside organizations, he says.

A retail company, for example, might have a 360-degree view of customers, which is all fed into analytics engines, machine learning, and other traditional AI to calculate the next best action. Then the gen AI could be used to personalize the messages to those customers. So by using the company’s data, a general-purpose language model becomes a useful business tool. And everyone is trying to build these types of applications.
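As a rough sketch of that pattern, the Python snippet below combines a hypothetical customer-360 record and an analytics-derived next best action into a prompt for a text model; call_llm is a placeholder rather than any specific vendor API, and the customer fields are invented for the example.

  # Sketch only: pairing enterprise customer data with a general-purpose language model.
  # call_llm is a stand-in for whatever hosted or self-hosted model the company uses.

  def call_llm(prompt: str) -> str:
      # Placeholder response; a real implementation would call a model endpoint here.
      return "[model-generated message based on the prompt above]"

  def personalize_message(customer: dict, next_best_action: str) -> str:
      """Build a prompt from a customer-360 record plus an analytics-derived
      next best action, then ask the language model to draft the message."""
      prompt = (
          "You are drafting a short, friendly marketing message.\n"
          f"Customer name: {customer['name']}\n"
          f"Recent purchases: {', '.join(customer['recent_purchases'])}\n"
          f"Preferred channel: {customer['channel']}\n"
          f"Recommended offer (from analytics): {next_best_action}\n"
          "Write one personalized sentence inviting the customer to act on the offer."
      )
      return call_llm(prompt)

  customer = {
      "name": "Ana",
      "recent_purchases": ["running shoes", "water bottle"],
      "channel": "email",
  }
  print(personalize_message(customer, "10% off trail-running gear"))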

“I’m seeing it across all industries,” says Khan, “from high tech and banking all the way to agriculture and insurance.” It’s forcing companies to move faster on the digital front, he adds, and fix all the things they said they were going to do but never got around to doing.

And not only do companies have to get all the basics in place to build for analytics and MLOps, but they also need to build new data structures and pipelines specifically for gen AI.

When a company wants to fine-tune a model or create a new one in a particular subject area, it requires data architecture, critical choices about which model or type of model to pursue, and more. “It quickly adds up in complexity,” says Sheldon Monteiro, EVP at Publicis Sapient, a global digital consultancy.

Even a simpler project, like adding an external data source to an existing gen AI model, requires a vector database, the right choice of model, and an industrial-grade pipeline.
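A stripped-down sketch of that kind of pipeline is shown below, assuming a toy in-memory vector store and a stand-in embed function instead of a real embedding model and managed vector database; it only illustrates where the pieces sit, not how a production pipeline would be built.

  import math

  def embed(text: str, dims: int = 64) -> list[float]:
      # Toy embedding based on character hashing. A real system would call an
      # embedding model; this just keeps the sketch self-contained.
      vec = [0.0] * dims
      for i, ch in enumerate(text.lower()):
          vec[(ord(ch) + i) % dims] += 1.0
      norm = math.sqrt(sum(v * v for v in vec)) or 1.0
      return [v / norm for v in vec]

  def similarity(a, b):
      # Dot product of unit-length vectors, i.e. cosine similarity.
      return sum(x * y for x, y in zip(a, b))

  class TinyVectorStore:
      def __init__(self):
          self.items = []          # list of (vector, text) pairs
      def add(self, text: str):
          self.items.append((embed(text), text))
      def search(self, query: str, k: int = 2):
          qv = embed(query)
          ranked = sorted(self.items, key=lambda it: similarity(qv, it[0]), reverse=True)
          return [text for _, text in ranked[:k]]

  store = TinyVectorStore()
  for doc in ["Warranty claims must be filed within 30 days.",
              "Returns require the original receipt.",
              "Gift cards are non-refundable."]:
      store.add(doc)

  question = "How long do I have to file a warranty claim?"
  context = store.search(question)
  prompt = "Answer using only this context:\n" + "\n".join(context) + f"\nQuestion: {question}"
  # The assembled prompt would then be sent to whichever model the company has chosen.
  print(prompt)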

But it all begins with data, and it’s an area where many companies lag behind. Without a single and holistic strategy, every department will set up its own individual solutions.

“If you do that, you’ll end up making a lot more mistakes and re-learning the same things over and over again,” says Monteiro. “What you have to do as a CIO is take an architectural approach and invest in a common platform.”

Then there’s the hard work of collecting and prepping data. Quality checks and validation are critical to create a solid base, he says, so you don’t introduce bias, which undermines customers and business.

So if a particular data set excludes the highest-value transactions because those are all handled manually, then the resulting model could potentially have a bias toward smaller, less profitable business lines. Garbage in, garbage out applies to the new era of gen AI as much as it did in previous technological periods.

For companies that have already invested in their data infrastructure, those investments will continue to pay off into the future, says Monteiro. “Companies that invested in data foundations have a tremendous head start in what they’re doing with generative AI,” he says.

Still, these traditional data foundations originally designed for advanced analytics and machine learning use cases only go so far.

“If you want to go beyond the basics, you’ll need to understand some of the deeper subtleties of generative AI,” says Omdia’s Shimmin. “What’s the difference between different embedding models, what is chunking, what is overlap? What are all the different methodologies you can use to tokenize data in the most efficient way? Do you want high or low dimensionality to save space in your vector database? The MLOps tools we have weren’t built to do that. It’s all very complicated and you can waste a lot of time and money if you don’t know what you’re doing.”
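To make one of those subtleties concrete, here is a minimal Python sketch of fixed-size chunking with overlap, one of the preprocessing steps Shimmin mentions; the chunk size and overlap values are illustrative, and real pipelines typically count model tokens rather than words.

  def chunk_text(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
      """Split text into word-based chunks of roughly chunk_size words, repeating
      the last `overlap` words at the start of the next chunk so context is not
      lost at chunk boundaries."""
      words = text.split()
      if chunk_size <= overlap:
          raise ValueError("chunk_size must be larger than overlap")
      chunks, start = [], 0
      while start < len(words):
          chunks.append(" ".join(words[start:start + chunk_size]))
          start += chunk_size - overlap   # step forward, keeping an overlapping tail
      return chunks

  sample = " ".join(f"word{i}" for i in range(120))
  for i, c in enumerate(chunk_text(sample)):
      print(i, len(c.split()), "words")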

But MLOps platform vendors are stepping up, he says. “Companies like Dataiku, DataRobot, and Databricks have all retooled to support LLMOps or GenAIOps. All the little pieces are starting to come into place.”

Analyzing the abstraction layer

Last November, OpenAI, the go-to platform for enterprise gen AI, unexpectedly fired its CEO, Sam Altman, setting off a circus-like scramble to find a new CEO, with staff threatening to walk out and Microsoft offering to take everyone in. During those tumultuous days, many companies using OpenAI’s models suddenly realized they had put all their eggs into one unstable basket.  

“We saw a lot of OpenAI integrations,” says Dion Hinchcliffe, VP and principal analyst at Constellation Research. “But the whole management issue that happened with OpenAI has made people question their over-commitment.”

Even if a company doesn’t go out of business, it might quickly become obsolete. Early last summer, ChatGPT was pretty much the only game in town. Then Meta released Llama 2, free for most enterprise customers, followed by Anthropic’s Claude 2, which came out with a context window of 200,000 tokens — enough for users to cut-and-paste the equivalent of a 600-page book right into a prompt — leaving GPT-4’s 32,000 tokens in the dust. Not to be outdone, however, Google announced in February that its new Gemini 1.5 model can handle up to 10 million tokens. With that, and greater speed, efficiency, and accuracy across video, audio, and written copy, there were virtually no limits.

The number of free, open-source models continues to proliferate, as well as industry-specific models, which are pre-trained on, say, finance, medicine or material science.

“You’ve got new announcements every week, it seems,” says Publicis Sapient’s Monteiro.

That’s where a “model garden” comes in, he says. Companies that are disciplined about how they select and manage their models, and architect their systems so models can be easily swapped in and out, will be able to handle the volatility in this space.

But this abstraction layer needs to do more than just allow a company to upgrade models or pick the best one for each particular use case.

It can also be used for observability, metering, and role-based access controls, says Subha Tatavarti, CTO at technology and consulting firm Wipro Technologies.
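A highly simplified picture of such a layer might look like the Python sketch below, which uses an invented in-memory model registry and toy role check rather than any particular gateway product: callers never touch a native model API directly, and every request passes through one place where usage can be logged, roles verified, and models swapped.

  import time

  # Hypothetical model registry: the names and callables are illustrative only.
  MODEL_REGISTRY = {
      "drafting":  lambda p: f"[draft model answer to: {p}]",
      "summaries": lambda p: f"[summary model answer to: {p}]",
  }

  USAGE_LOG = []   # simple observability: who called what, and how long it took

  def gateway(user_role: str, task: str, prompt: str) -> str:
      if task not in MODEL_REGISTRY:
          raise ValueError(f"No model registered for task '{task}'")
      if user_role not in ("analyst", "manager"):      # toy role-based access check
          raise PermissionError("Role not allowed to call generative models")
      start = time.time()
      answer = MODEL_REGISTRY[task](prompt)            # models can be swapped here without touching callers
      USAGE_LOG.append({"role": user_role, "task": task,
                        "latency_s": round(time.time() - start, 4)})
      return answer

  print(gateway("analyst", "summaries", "Summarize Q3 incident reports."))
  print(USAGE_LOG)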

Wipro, with 245,000 employees, has no choice but to adopt gen AI, she says, because its customers are expecting it to.

“We’re foundationally a technology company,” she says. “We have to do this.”

Broadening perspectives

Observability allows a company to see where data is going, what models and prompts are being used, and how long it takes for responses to come back. It can also include a mechanism to edit or obfuscate sensitive data.

Once a company knows what’s happening with its models, it can implement metering controls — limits on how much a particular model can be used, for example — to avoid unexpected spikes in costs.

“Right now, the way the metering works is the token consumption model,” Tatavarti says. “And it could get very expensive.”

In addition, for FAQs, companies can cache responses to save time and money. And for some use cases, an expensive, high-end commercial LLM might not be required since a locally-hosted open source model might suffice.

“All of that is fascinating to us and my team is definitely working on this,” she adds. “This is imperative for us to do.”
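A toy illustration of those two cost controls, caching repeated answers and enforcing a per-team token budget, is sketched below; the word-count “tokens” and the budget figures are invented for the example and stand in for real model-token metering.

  RESPONSE_CACHE = {}                    # FAQ answers served without a model call
  TOKEN_BUDGET = {"support-team": 1000}  # illustrative per-team allowance
  TOKENS_USED = {"support-team": 0}

  def rough_token_count(text: str) -> int:
      return len(text.split())           # crude proxy; real meters count model tokens

  def answer(team: str, question: str) -> str:
      if question in RESPONSE_CACHE:                      # cache hit: free and instant
          return RESPONSE_CACHE[question]
      cost = rough_token_count(question)
      if TOKENS_USED[team] + cost > TOKEN_BUDGET[team]:   # metering control
          raise RuntimeError(f"{team} has exhausted its token budget")
      TOKENS_USED[team] += cost
      response = f"[model answer to: {question}]"         # placeholder for a real call
      RESPONSE_CACHE[question] = response
      return response

  print(answer("support-team", "How do I reset my password?"))
  print(answer("support-team", "How do I reset my password?"))  # served from cache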

And when it comes to access controls, the fundamental principle should be to never expose native APIs to the organization but instead have a middle layer that checks permissions and handles other security and management tasks.

If, for example, an HR platform uses gen AI to answer questions based on a vector database of policies and other information, an employee should be able to ask questions about their own salary, says Rajat Gupta, chief digital officer at Xebia, an IT consultancy. But they shouldn’t be able to ask questions about those of other employees — unless they’re a manager or work in HR themselves.
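The principle Gupta describes can be reduced to a filter applied before any data reaches the model; the sketch below uses made-up employee records and roles purely to show where that permission check sits in the flow.

  SALARY_RECORDS = {                      # illustrative data only
      "emp-001": {"name": "Priya", "salary": 82000},
      "emp-002": {"name": "Marco", "salary": 91000},
  }

  def allowed_records(requester_id: str, requester_role: str) -> dict:
      """Return only the records this requester may see: everyone sees their own,
      HR and managers see all. The model is never given anything outside this set."""
      if requester_role in ("hr", "manager"):
          return SALARY_RECORDS
      return {k: v for k, v in SALARY_RECORDS.items() if k == requester_id}

  def ask_salary_question(requester_id: str, requester_role: str, question: str) -> str:
      context = allowed_records(requester_id, requester_role)
      prompt = f"Context: {context}\nQuestion: {question}"
      return f"[model answer grounded only in permitted records: {prompt}]"

  print(ask_salary_question("emp-001", "employee", "What is my current salary?"))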

Given how fast gen AI is being adopted in enterprises across all different business units and functions, it would be a nightmare to build these controls from scratch for every use case.

“The work would be enormous,” he says. “There’d be chaos.”

Gupta agrees enterprises that need to build this kind of functionality should do so once and then reuse it. “Take everything they need in common — security, monitoring, access controls — and build it as part of an enterprise-level platform,” he says.

He calls it an AI gateway, with the open source MLflow AI Gateway being one example. Released last May, it’s already been deprecated in favor of the MLflow Deployments Server. Another tool his company is using is Arthur AI’s Arthur Shield, a firewall for LLMs. It filters prompt injection attacks, profanity, and other malicious or dangerous prompts.

And then there’s Ragas, which helps check a gen AI response against the actual information in a vector database in order to improve accuracy and reduce hallucinations.
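As a very rough illustration of the idea behind such checks, not of the Ragas library itself, the sketch below flags an answer when too few of its content words appear in the retrieved context; real groundedness metrics are far more sophisticated.

  def grounded_fraction(answer: str, context: str) -> float:
      """Fraction of the answer's content words that also occur in the context.
      A low value suggests the answer may not be supported by the retrieved data."""
      stop = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}
      answer_words = {w.lower().strip(".,") for w in answer.split()} - stop
      context_words = {w.lower().strip(".,") for w in context.split()} - stop
      if not answer_words:
          return 1.0
      return len(answer_words & context_words) / len(answer_words)

  context = "Refunds are issued within 14 days of an approved return."
  good = "Refunds are issued within 14 days."
  bad = "Refunds are issued instantly in cash at any store."
  print(round(grounded_fraction(good, context), 2))   # high: supported by the context
  print(round(grounded_fraction(bad, context), 2))    # lower: likely unsupported detail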

“There are many such projects both in the open source and the commercial space,” he says.

Third-party AI platforms, startups, and consultants are also rushing in to fill the gaps.

“The way the AI ecosystem is evolving is surprising,” says Gupta. “We thought the pace would slow down but it’s not. It’s rapidly increasing.”

So to get to market faster, Xebia is weaving these different projects together, he says, but it doesn’t help that AI companies keep coming up with new stuff like autonomous AI-powered agents, for example.

“If you’re using autonomous agents, how do you actually measure the efficacy of your overall agents project?” he asks. “It’s a challenge to actually monitor and control.”

Today, Xebia hobbles agents, curtailing their autonomy and allowing them to carry out only very limited and precise tasks. “That’s the only way to do it right now,” he adds. “Limit the skills they have access to, and have a central controller so they’re not talking to each other. We control it until we have more evolved understanding and feedback loops. This is a pretty new area, so it’s interesting to see how this evolves.”

Building guardrails

According to the cnvrg.io survey, compliance and privacy were top concerns for companies looking to implement gen AI, ahead of reliability, cost, and lack of technical skills.

Similarly, in the IBM survey, for companies not implementing gen AI, data privacy was cited as the barrier by 57% of respondents, and transparency by 43%. In addition, 85% of all respondents said consumers would be more likely to pick companies with transparent and ethical AI practices, but fewer than half are working toward reducing bias, tracking data provenance, working on making AI explainable, or developing ethical AI policies.

It’s easy for technologists to focus on technical solutions. Ethical AI goes beyond the technology to include legal and compliance perspectives, and issues of corporate values and identity. So this is an area where CIOs or chief AI officers can step up and help guide the larger organization.

And it goes even further than that. Setting up gen AI-friendly data infrastructures, security and management controls, and ethical guide rails can be the first step on the journey to fully operationalize LLMs.

Gen AI will require CIOs to rethink technology, says Matt Barrington, EY Americas emerging technologies leader. Prior to gen AI, software was deterministic, he says.

“You’d design, build, test, and iterate until the software behaved as expected,” he says. “If it didn’t, it was a bug, and you’d go back and fix it. And if it did, you’d deploy it into production.” All the large compute stacks, regardless of software pattern, were deterministic. Now, other than quantum computing, gen AI is the first broadly known non-deterministic software pattern, he says. “The bug is actually the feature. The fact it can generate things on its own is the main selling point.”

That doesn’t mean the old stuff should all be thrown out. MLOps and PyTorch are still important, he says, as is knowing when to use a RAG embedding model or a DAG, or to go multi-modal, as well as getting data ready for gen AI.

“All those things will remain and be important,” he says. “But you’ll have the emergence of a new non-deterministic platform stack that’ll sit alongside the traditional stack with a whole new area of infrastructure engineering and ops that will emerge to support those capabilities.”

This will change how businesses operate at a core level, and moving in this direction to become a truly AI-powered enterprise will be a fast-paced shift, he says. “Watching this emerge will be very cool,” he says.

Artificial Intelligence, Business Operations, CIO, Data Architecture, Data Governance, Data Management, Enterprise Architecture, Generative AI, IT Leadership, IT Operations]]>
https://www.cio.com/article/1309747/for-it-leaders-operationalized-gen-ai-is-still-a-moving-target.html 1309747
The multi-faceted digital transformation of Barcelona City Council Wed, 28 Feb 2024 10:00:00 +0000

A holistic digital transformation of its services, comprising many technological initiatives, earned Barcelona City Council a place as a finalist for Public Entity of the Year at the CIO 100 Awards Spain 2023 in December. And as its CIO, Nacho Santillana Montiel was the central figure in this distinction, thanks to a series of innovative projects that integrate various emerging technologies into daily life at the City Council and for Catalan citizens. 

“The project stands out for its social commitment by addressing emerging problems such as loneliness in the elderly,” Santillana says. “Ethics and social responsibility are fundamental elements in decision-making and the implementation of new solutions.”

Among these solutions, artificial intelligence stands out: it is integrated across all municipal services, with defined work methodologies and protocols for deploying these systems and safeguards to ensure they are used in accordance with legal, ethical, and technical standards. The city has also launched a public registry of algorithms and an external advisory board to prepare algorithmic impact studies.

Laying the groundwork

Last year, the Municipal Data Office (OMD) was created, which is responsible for the management, quality, governance, analysis, and dissemination of data from the Barcelona City Council and its associated entities. The OMD also serves as the main system to organize and plan sociological research, and monitor public opinion, statistical production, and socioeconomic analysis of the city and environments of interest, which allows the publication of high frequency data and the development of evidence-based public policies. It also includes the set of applications derived through the CityOS platform, the main municipal data repository. 

Barcelona has also created its own digital twin to check whether the city meets the requirements of the so-called “15-minute cities,” in which public services, equipment, and facilities, such as metro stops, electric charging points, bus stations, health establishments, and green spaces, are less than a 15-minute walk from any point in the capital. This project is the first phase of a collaboration with the Barcelona Supercomputing Center, the Centro Nacional de Supercomputación (BSC-CNS), which, together with the support of the Municipal Institute of Informatics (IMI) and Barcelona Regional (BR), has allowed simulations and predictions about the impact of certain projects or public policies.
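As a toy version of that kind of check, and not a description of the BSC-CNS simulations themselves, the sketch below tests whether a point lies within a 15-minute walk of at least one service using straight-line distance and an assumed 80-metres-per-minute pace; a real analysis would use the street network and actual city data.

  import math

  WALK_SPEED_M_PER_MIN = 80          # assumed average walking pace
  LIMIT_MIN = 15

  def distance_m(a, b):
      """Approximate straight-line distance in metres between two (lat, lon) points."""
      lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
      dlat, dlon = lat2 - lat1, lon2 - lon1
      h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
      return 2 * 6371000 * math.asin(math.sqrt(h))

  def within_15_minutes(point, services):
      """True if at least one service lies within a 15-minute walk of the point."""
      max_dist = WALK_SPEED_M_PER_MIN * LIMIT_MIN
      return any(distance_m(point, s) <= max_dist for s in services)

  # Illustrative coordinates only (roughly central Barcelona).
  metro_stops = [(41.3870, 2.1700), (41.3902, 2.1540)]
  home = (41.3888, 2.1590)
  print(within_15_minutes(home, metro_stops))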

In addition to the chatbots that combat loneliness among the elderly and disabled, Barcelona City Council’s portfolio of technological initiatives also includes payment of fines and taxes through Bizum, the payment system that’s prevalent throughout Spain; the digitalization of Barcelona cemetery services; real-time monitoring of children’s play areas to improve their usability; and the Zoobot chatbot, which makes visits to the Barcelona Zoo more inclusive for visitors with specific needs.

These and other projects have helped Barcelona to be recognized within Europe as a leading digital city, with priorities such as digital rights and inclusion, the use of emerging technologies to promote urban innovation, and using data as a public service for good. As a result, the city has held the presidency of the Eurocities Digital Forum for the last two years, where its contribution has been key to increase participation and ongoing impact of cities in European legislative initiatives, such as the Digital Services Law, the Governance Law Data, the exchange of short-term rental data, the Data Act, and the Artificial Intelligence Law, among others.

An inclusive approach

For Santillana, part of what makes this digital transformation of the City Council so relevant is its holistic approach. “It addresses both the practical and emotional needs of the community, especially older people and those with disabilities,” he says. “The project is distinguished by its attention to the diversity of the population, and the implementation of digital solutions seeks to ensure universal accessibility, guaranteeing that all citizens can benefit from municipal services.”

He also highlights the ability to adapt to the changing needs of the community and the technological environment. “Flexibility in implementation allows for continuous adjustments to ensure relevance and effectiveness over time,” he adds. “Together, these aspects make the projects unique by addressing not only efficiency in municipal management, but also significant improvement of the quality of life and social well-being of citizens, highlighting an exceptional commitment to ethics, inclusion, and social innovation.” 

For Santillana, the keys to project success are, among others, taking into account the needs and experiences of users from the beginning and throughout the process, prioritizing the security and privacy of information, and that the introduction of new technologies requires an adaptation phase, so training and support for employees and citizens are vital.

“These projects reflect Barcelona City Council’s commitment to innovation, inclusion, and continuous improvement to the quality of life for its citizens through the strategic application of digital technologies,” he adds. “The combination of administrative efficiency, universal accessibility and focus on social well-being positions Barcelona as a reference point in the digital transformation of local administrations.”

Artificial Intelligence, CIO, Digital Transformation, Innovation, IT Leadership]]>
https://www.cio.com/article/1310119/the-multi-faceted-digital-transformation-of-barcelona-city-council.html 1310119
Whether your technology is new or old, lifecycle management is key Tue, 27 Feb 2024 17:11:21 +0000

Does your organization see technology infrastructure as a commodity or as a strategic business enabler? The answer will shape your approach to infrastructure: you can keep legacy infrastructure going as long as you can, or you can pursue the cutting edge of technology.

However, in an increasingly software-centric environment, both new and legacy assets must deliver value throughout their lifecycle.

Business needs first, then the lifecycle management program

Your infrastructure lifecycle management program should be aligned with your organization’s growth roadmap. Think of how people buy phones: if they just need to make calls, an older, entry-level mobile phone fits their needs perfectly – and they will only have to replace it once it breaks. If they need their phone to do business and use the latest apps, even in the field, then new technology is a must. The phone will have to be replaced every few years to keep supporting new apps.

Then, you should link your infrastructure adoption and governance strategy to specific business outcomes. This will involve comprehensive and proactive infrastructure management.

The dangers of sweating assets

If you have an older phone, the manufacturer eventually stops updating the software. You can still use your older phone for calls but suddenly, you can’t load new apps and it becomes easier for data to leak.

The same principle applies to infrastructure. Security patches stop at least six months before an asset reaches its end of life. Your risk of data breach increases as soon as your assets are no longer protected from cyberthreats. This creates risk for the company, shareholders, customers and their brand. You don’t have to upgrade to the newest technology as soon as it becomes available. But your infrastructure, software and software versions must always remain current.

Legacy technology also affects sustainability. Excessive energy use, inefficient disposal methods and the need to extract raw materials to manufacture new equipment all add up to a significant impact on the environment.

Complexity, challenges and gaining control

Lifecycle management requires an approach that can quickly adapt to shifting customer and market needs. This is where software-driven infrastructure plays a key role because it enables the scalability, performance and resilience you need to support secure, high-quality user and customer experiences.

But, to gain control of your software lifecycle, you must first understand what you have, where it’s deployed, what you’ll need next and when demand for licenses will increase.

At the same time, you’ll need to rethink your hardware and software capital expenditure, as organizations are increasingly moving to subscription models (such as enterprise agreements) and as-a-service offerings.

These undertakings are best attempted with the help of an expert partner. At NTT DATA, we offer support throughout the lifecycle management journey – and our range of vendor relationships mean we can easily coordinate our efforts across these organizations to benefit our clients. We create continuity and keep your success criteria in mind at every stage of delivery.

Start right to get it right

The startup phase of infrastructure management is the most important step. First comes a meeting of IT minds: our solution leaders work with your organization’s IT leaders. This enables us to map out your environment and formulate a plan of action that fits both the technology constraints and your budget.

Lifecycle management keeps an equal focus on an asset through all its stages. For a purchase to fulfill its potential, every relevant feature must be configured and adopted into your organization’s environment from the start. This is especially true of security, monitoring and reporting functions. Inevitably, an asset will become dated and no longer align with your business environment. Our lifecycle approach tackles this from the get-go.

Best practices for getting the most out of infrastructure investments

  • Consolidate your software environment for more control over costs and to get the information you need for investment decisions.
  • Establish and maintain accurate information on hardware inventory and the efficient management of your software assets.
  • Identify underused assets to get the most out of your investment by using them to their full potential.
  • Identify unapproved and blocked installations to improve compliance and mitigate risk.

We also offer continual support to ease the adoption of new software and hardware and to ensure that all use cases are addressed.

Measure the success of lifecycle management

With efficient lifecycle management, your organization can deploy solutions swiftly while remaining compliant, reducing costs, mitigating risks, increasing adoption rates and, ultimately, maximizing your return on investment.

These outcomes should be defined and measured across the lifecycle, and your lifecycle methodology should be adjusted accordingly in pursuit of your business objectives.

This is why it’s critical to partner with a technology solutions provider to define your success criteria and to measure performance against these goals.

Read more about NTT DATA’s Technology Solutions to see how our comprehensive suite of lifecycle services can help your organization.

Infrastructure Management]]>
https://www.cio.com/article/1310070/whether-your-technology-is-new-or-old-lifecycle-management-is-key.html 1310070
A digital paradigm shift for emergency communications: Toronto Fire Services makes history with Avaya Tue, 27 Feb 2024 15:45:14 +0000

The first telephone call was made almost 150 years ago in Ontario, Canada, by Alexander Graham Bell. Today, that call would probably look different. He might have texted Mr. Watson, recorded a video, or sent a DM. There are so many ways we now communicate, and we don’t think twice about the details. We expect to know where someone is, what communication device or application they’re using, and more, in real time as we engage. This is the norm in today’s highly connected, digital world. 

We have yet to see this paradigm shift in Canada for emergency communications, but it’s coming. Per the CRTC’s NG9-1-1 timeline, Canadian public safety answering points (PSAPs) must transition to NG9-1-1 by March 2025. That means they must move from the existing E9-1-1 network (designed for analog phones, which Canadian service providers have been instructed to decommission by this deadline) to NG9-1-1 capabilities. The vision for NG9-1-1, which will be realized over time, could see a text message from a mobile device or an electronic request from an IoT device (for example, your Tesla automatically dialing 9-1-1 when the airbag deploys), with the PSAP receiving and processing the call along with situational awareness: additional information about the call, the caller, and the location (e.g., medical alert, hazardous materials, etc.). 

Avaya is at the forefront of this critical movement, most recently enabling Toronto Fire Services (TFS) as the first PSAP in Bell Canada territory to migrate to Next Generation 9-1-1 (NG9-1-1).   

Toronto Fire Services’ move to NG9-1-1 

In July 2023, Avaya, in collaboration with partners like Komutel, a multi-sided, industry-leading PSAP organization for ambulance dispatch communications; Netagen, an Avaya systems integrator providing a fully managed, all-in-one NG9-1-1 solution; and 9-1-1 service providers like Bell, helped the TFS communication center accept, process, and transfer the first-ever public-network-initiated NG9-1-1 test call conducted in Canada. TFS staff used their smartphones to dial 9-1-1, which was answered by the city’s primary PSAP operated by the Toronto Police Services, which then transferred the call to the Toronto Fire Services’ NG9-1-1 system. TFS communications staff processed the mock call successfully within the new NG9-1-1-compliant system and the existing live dispatch system. 

In December 2023, TFS officially moved from its existing E9-1-1 network to NG9-1-1, making it the first-ever PSAP to do so in the Bell Canada territory. Key features include:

IP-based infrastructure: TFS no longer needs to rely on analog technology, instead using internet protocol (IP) networks that enable more efficient and flexible data transfer.

Geospatial information: NG9-1-1 will eventually enable accurate location information through GIS (Geographic Information System), enabling TFS to pinpoint each caller’s exact location.

Interconnectivity: Communication and information sharing is seamless between the multiple PSAPs that might be involved in processing the 9-1-1 call or handling the emergency.

Enhanced call routing: TFS can kick emergency response into high gear with the ability to intelligently route calls based on location, load balancing, and other factors.

Here’s what this means for Toronto Fire Services:

  • Faster response time: It’s estimated that thousands of lives could be saved every year by reducing 9-1-1 response times by just one minute. The interoperability of TFS’ NG9-1-1 system accelerates response times through rapid data sharing, mitigating injuries, preventing economic loss, and saving lives. 
  • Situational awareness: Pinpointing the exact location of a caller is critically important for their safety and the safety of those responding to the scene. Situational awareness means less time navigating smoke-filled rooms and more time getting people out.  
  • Improved call processing times: TFS can more efficiently manage emergency call transfers and call overload, significantly improving operations and emergency outcomes. 
  • Digital transformation: TFS is innovating without disruption by leveraging a highly agile and tightly integrated platform that meets new citizen and operational requirements.

The hopes of decades past become a reality

These kinds of advancements in emergency communications were a pipe dream in Canada as long as emergency calls remained on legacy analog networks. Avaya, along with others, is helping make NG9-1-1 a reality. We imagine Alexander Graham Bell would be proud!

Avaya has been an active contributor to Canada’s Emergency Services Working Group (ESWG), working with telecom service providers, PSAPs, and 9-1-1 industry specialists to address the top issues in the provisioning of emergency services, the operational and technical aspects of NG9-1-1, and the handling of calls from multi-line telephone systems (MLTS). In 2019, we made history by conducting the first-ever NG9-1-1 test call using a commercially available system in Canada. In 2021, we equipped a large provincial healthcare provider with an NG9-1-1-capable Call Handling Platform used by more than 20 central ambulance communication centers, over 1,000 phone lines, and more than 200 operators. This is only the beginning as Canadian PSAPs begin their mandated modernization journey.

Learn how Avaya can add NG9-1-1 capabilities to your communication platform, be it from the PSAP/receiving side or the enterprise/sending side, to transform 9-1-1 services. 

Data Management, Networking]]>
https://www.cio.com/article/1310137/a-digital-paradigm-shift-for-emergency-communications-toronto-fire-services-makes-history-with-avaya.html 1310137
How CIOs in the Middle East address talent shortages Tue, 27 Feb 2024 10:55:03 +0000

IT organizations are having to transform themselves to meet the evolving needs of the future enterprise, and CIOs are increasingly being tasked with leading this transformation as IT becomes the enterprise operating system.

Finding and keeping tech talent has never been easy. However, as the world of work continues to evolve and organisations shift to hybrid work models, new challenges and opportunities present themselves. How can technology leaders leverage these shifts to enhance online and virtual experiences and strengthen competitiveness by developing people, talent, and skills?

According to the World Economic Forum, 85 million jobs will disappear by 2025, 97 million new jobs will arise, and 50% of the workforce will need to reskill. Talent is one of the top challenges IT organisations face every year, and last year’s upswing in GenAI has only sharpened it. Is there anything CIOs can do to overcome this shortage?

“Our strategy is based on the pillar ‘Prepared people for the future,’ and that’s why the Government in Oman has also launched several initiatives to prepare both job seekers and students. AI has changed the world, especially GenAI last year, so how do we prepare ourselves?” asked Dr. Salim Al-Shuaili, Director, Artificial Intelligence & Advanced Technology Projects Unit, Oman’s Ministry of Transportation & Information Technology. “First of all, we have to digitalise our services and be data-driven in decision making; it’s also important that there is regulation around AI and policies for technology.”

Constant updates and innovations in tech are one reason CIOs are facing a talent shortage: according to tech leaders, from a technical perspective it is hard to find specialist teams for certain technologies and to cover the needs of every project.

“There are not enough people in the market. When you explain your need to the recruitment team, they work based on your job description, but there is a shortage, so we need to find someone proactive who is willing to learn,” added Dr. Ebrahim Al Alkeem Al Zaabi, Director of Digital Transformation, Cybersecurity and AI, Abu Dhabi Government.

The market is changing, so the demand for new fields is changing too, and it’s important that organisations and HR know what a CIO needs. “We can’t move with our current strategy; the change has to come from the C-suite level, and we have to be early adopters. In healthcare ten years ago, every organisation was using paper. Breaking into digital was something new and required a lot of work, and for that you need to find the proper workforce to execute the mission. So you have two options: either you find the talent, if possible, or you train your staff. Of course, you can fail at the beginning, but if you follow an AI-process-oriented strategy and understand the impact of AI, then you will have an oriented workforce who will find solutions and close the gaps,” highlighted Dr. Ali Al Sanousi, Executive Director, Clinical Information Systems, Hamad Medical Corporation.

]]>
https://www.cio.com/article/1310033/how-cios-in-the-middle-east-address-talent-shortages.html 1310033
CIOs rethink all-in cloud strategies Tue, 27 Feb 2024 10:01:00 +0000

After years of marching to the cloud migration drumbeat, CIOs are increasingly becoming circumspect about the cloud-first mantra, catching on to the need to turn some workloads away from the public cloud to platforms where they will run more productively, more efficiently, and more cheaply.

“‘Cloud exit’ became a big theme in 2023 and there’s good odds it’ll turn into a real trend for 2024. The savings are just too big to ignore for a ton of companies,” says David Heinemeier Hansson, the Danish developer of Ruby on Rails and co-owner and CTO of 37signals, which completed a six-month total exit from the cloud last June. “Enough people are realizing that the cloud marketing spiel doesn’t necessarily match their reality.”

And it’s that reality of accumulated experience in the cloud that has many CIOs rethinking their platform-centric approach to cloud in favor of a workload-specific one. The resulting infrastructure of choice — a combination of on-premises and hybrid-cloud platforms — will aim to reduce cost overruns, contain cloud chaos, and ensure adequate funding for generative AI projects.

David Linthicum, former chief cloud strategy officer at Deloitte, says many enterprise CIOs who got caught up in the race to the cloud are now fixing their “misadventures” by seeking out the ideal platforms for various applications — whether that is in a private cloud, on an industry cloud, within their own data centers, through a managed service provider, on the edge, or orchestrated in a multicloud architecture.

“The most common motivator for repatriation I’ve been seeing is cost,” writes Linthicum, who conjectures that “most enterprise workloads aren’t exactly modern” and thus are not the best fit for the cloud.

Srinivasan CR, EVP of cloud and cybersecurity services and chief digital officer at Tata Communications, sees many enterprises “getting more nuanced” with their cloud use and strategies in an effort to balance performance, costs, and security.

“As businesses look to leverage artificial intelligence a lot more, they are and will relook at the workloads and place them on the right infrastructure, be it in the public cloud or the edge or bringing them back to their own private cloud or servers in-house,” Srinivasan says. “Such decisions are largely driven by the need to maximize performance and business benefits while not losing track of costs.”

John Musser, senior director of engineering for Ford Pro at Ford Motor Co., agrees.

“It’s a form of rightsizing, trying to balance around cost effectiveness, capability, regulation, and privacy,” says Musser, whose team found it more cost effective to run some workloads on a high-performance computing (HPC) cluster in the company’s data center than on the cloud. “Even though we’ll often do it in the cloud, doesn’t mean we should always do it in the cloud.”
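As a rough illustration of the rightsizing arithmetic Musser describes, the sketch below compares the monthly cost of an always-on, steady workload in the public cloud against the same workload on amortized, owned hardware. All rates, lifetimes, and overheads are invented placeholders, not vendor pricing or Ford figures.

```python
# Hypothetical back-of-the-envelope comparison for a steady, always-on workload.
# Every number below is an illustrative assumption.

def monthly_cloud_cost(instances, hourly_rate, hours_per_month=730):
    """Pay-as-you-go cost for instances that run around the clock."""
    return instances * hourly_rate * hours_per_month

def monthly_onprem_cost(hardware_capex, lifetime_months, monthly_opex):
    """Hardware amortised over its useful life, plus power, space, and staff."""
    return hardware_capex / lifetime_months + monthly_opex

cloud = monthly_cloud_cost(instances=40, hourly_rate=3.50)           # ~$102,200/month
onprem = monthly_onprem_cost(hardware_capex=1_800_000,               # cluster purchase
                             lifetime_months=48,                     # 4-year amortisation
                             monthly_opex=25_000)                    # power, cooling, ops
print(f"cloud ~ ${cloud:,.0f}/mo vs on-prem ~ ${onprem:,.0f}/mo")
```

For a steady, predictable workload the on-prem number often wins; for bursty or unpredictable demand, the cloud’s ability to scale down to zero changes the calculation, which is exactly the case-by-case judgment these CIOs are describing.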

The public cloud pivot

That mindset is catching on as CIOs look to apply lessons learned from their initial push to the cloud.

“Any organization of size, dealing with diverse technology, is doing their company a disservice if a public cloud–only strategy is their ultimate end goal,” says Brian Shields, SVP and CTO of the Boston Red Sox and Fenway Sports Management.

“Like many complex businesses, we are an evolving hybrid model that maintains compute and storage capabilities in the public cloud, on-prem, with our co-location partner, and industry cloud partners,” Shields adds.

This refinement of thinking about the cloud comes as hefty AI costs loom on the horizon. For CIOs who need real-time access to data, such as in manufacturing or industrial controls, processing data at the edge is a better solution than the public cloud.

“Edge provides processing of real-time computation, for instance computer vision and real-time computation of algorithms for decision-making,” says Gavin Laybourne, CIO at Maersk. “I send data back to the cloud where I can afford a 5-10 millisecond delay of processing.”

At the CDO Summit in December in Boston, Mojgan Lefebvre, chief technology and operations officer at Travelers, noted that the cloud has the scalable and adaptable infrastructure for various needs, as well as access to more advanced AI tools such as large language models.

But “importantly, this reliance on cloud technology does not necessitate a complete migration of all assets to a cloud-based environment,” Lefebvre said.

Payroll giant ADP, for instance, uses AWS for most of its net-new applications, as well as Microsoft Azure and Cisco Cloud, but “we still have a lot of load running in our data centers,” says Vipul Nagrath, head of product development at ADP and the company’s former CIO.

Some CIOs are opting to host workloads in private clouds, such as HPE GreenLake or Dell APEX platforms, to achieve greater security and lower costs than they would in the public cloud.

Richard Semple, CIO of Williamson County, Texas, where Samsung’s sprawling new chip-making facility is under development, considered all the public clouds for the government’s growing digital infrastructure. In the end, he opted for the security of retaining data on premises but on a private cloud engineered by Dell.

Reassessing, one workload at a time

For those CIOs already deep into the cloud, taking a hard look at all aspects of an application before adding yet another to their cloud estate is becoming more the norm than simply pushing forward.

“We don’t go into the cloud unless we know there are savings and we keep measuring to ensure,” says Jamie Holcombe, CIO of the US Patent & Trademark Office. “I know from experience that ‘chatty’ applications are often the most expensive in the cloud, so we either re-factor or keep on-premise.”
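Holcombe’s point about “chatty” applications comes down to simple arithmetic: per-request and data-transfer charges scale with the number of calls an application makes across the network boundary. The sketch below uses made-up rates (not actual provider pricing) to show how the same volume of data costs more when it is moved in many small requests rather than fewer, larger ones.

```python
# Hypothetical illustration of why "chatty" applications can be costly in the
# cloud. The per-GB and per-request rates are placeholders, not real pricing.

def monthly_transfer_cost(requests_per_second, kb_per_request,
                          price_per_gb=0.09, price_per_million_requests=0.40):
    seconds_per_month = 30 * 24 * 3600
    requests = requests_per_second * seconds_per_month
    gb_transferred = requests * kb_per_request / 1_000_000   # KB -> GB (decimal)
    return gb_transferred * price_per_gb + (requests / 1_000_000) * price_per_million_requests

chatty = monthly_transfer_cost(requests_per_second=500, kb_per_request=4)    # many tiny calls
batched = monthly_transfer_cost(requests_per_second=5, kb_per_request=400)   # same data, fewer calls
print(f"chatty ~ ${chatty:,.0f}/mo vs batched ~ ${batched:,.0f}/mo")
```

Re-factoring to batch or co-locate those calls is what cuts the bill; otherwise, keeping the workload on-premises sidesteps the metered traffic entirely.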

Not all government CIOs are moving workloads off the cloud or feeling the need for repatriation. “I am 100% in the cloud and would not have it any other way,” says Gerald Caron, CIO of the International Trade Administration.

And while repatriation is a real trend, it’s not yet universal.

“It just shows CIOs are actually thinking about where they want to platform their application portfolios,” says Steve Randich, CIO of the Financial Industry Regulatory Authority (FINRA), a private company. “The cloud makes sense in some but not all cases.”

As for FINRA, the cloud remains central.

“In FINRA’s case it would cost double to build the infrastructure internally that we use every day on AWS,” Randich says. “Plus, we would lose the flexibility to quickly ramp up and ramp down infrastructure based on expansion and contraction of transaction volume. It may very well be that many organizations have highly predictable, stable volume. Not FINRA.”

Whether any given workload is best suited to the cloud is a matter of context. Wiser and more experienced, CIOs today are being more intentional about determining this, ensuring on a case-by-case basis that applications are hosted in the context that matters most.

Cloud Computing, Data Center, Edge Computing, Hybrid Cloud, IT Strategy, Multi Cloud]]>
https://www.cio.com/article/1309572/cios-rethink-all-in-cloud-strategies.html 1309572