by Peter Wayner

10 emerging innovations that could redefine IT

Feature
13 Jun 2023 | 14 min read
Blockchain | Data Management | Databases

CIOs must keep an eye on the horizon. The following forward-thinking strategies and technologies are starting to gain traction and could impact the next decade of IT.


The pace of innovation is relentless. CIOs must watch for the next generation of emerging technologies because new software can go from the dreams of some clever coder to an essential part of every IT shop in the blink of an eye.

Once wild and seemingly impossible notions such as large language models, machine learning, and natural language processing have gone from the labs to the front lines. The next generation promises to deliver the same unstoppable parade of innovation.

Here are 10 big ideas, buzzwords, and evolving technologies that are starting to gather momentum today. Embracing an idea before it’s ready can be invigorating, if you’re right. Waiting until it’s established may be safer but can put you behind your competitors.

IT departments must keep apprised of these newly emerging ideas and technologies as they evolve and ascertain when — and whether — the moment is right to deploy them for serious work. Only you know when that moment might be for your tech stack.

Analog computing

The most common paradigm for computation has long been digital hardware built from transistors with two states: on and off. Now some AI architects are eyeing the long-neglected model of analog computation, where values are expressed as voltages or currents. Instead of just two states, these signals can take on a nearly continuous range of values, limited only by how precisely the system can measure them.

The fascination with the idea comes from the observation that AI models don’t need the same kind of precision as, say, bank ledgers. If some of the billions of parameters in a model drift by 1%, 10%, or even more, the others compensate and the model will often remain just as accurate overall.
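
That tolerance is easy to test in software. Here is a minimal sketch, assuming a toy NumPy linear model standing in for a trained network (not any particular analog chip): perturb every weight by a few percent of random noise and see how many predictions change.

```python
import numpy as np

# Hypothetical toy model: a random linear layer standing in for a trained
# network. This is an illustration of drift tolerance, not analog hardware.
rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 10))
inputs = rng.normal(size=(1000, 256))

def predict(w):
    # Pick the highest-scoring class for each input row.
    return (inputs @ w).argmax(axis=1)

baseline = predict(weights)

# Simulate analog imprecision: each weight drifts by roughly 5% random noise.
noisy = weights * (1 + 0.05 * rng.normal(size=weights.shape))

agreement = (predict(noisy) == baseline).mean()
print(f"Predictions unchanged after ~5% drift: {agreement:.1%}")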

The hope is that these new analog chips will use dramatically less power, making them attractive for mobile and distributed applications on machines that aren’t always plugged in.

Main constituency: Any developer who will happily trade a bit of precision for a big savings in electricity.

Chance of succeeding: The success or failure will probably be governed by the nature of the applications. If there’s less of a need for absolute precision and more of a need to save electricity, the chips will be more likely to win.

Better API scrutiny

Once upon a time, there was little difference between an API and a web page. If the information was public, why make a distinction? But now many organizations are recognizing that their data may be used in very different ways, and that access may need to be more tightly regulated.

The biggest worries are coming from websites that recognize how their data may be used to train AI models. Some are afraid that the models built from scraping their site will misuse or misunderstand the data. Could the models expose their users to privacy leaks when the AI regurgitates some detail at the wrong time? Others just want a piece of the action from the IPO when the model makes its debut.

Some sites are already tightly controlling access to the video and textual data that the AI builders crave. They’re limiting downloads and tightening up the terms of service for when it’s time for a lawsuit. Others are building a new layer of intelligence into their APIs so that smarter, more business-savvy decisions can be made about releasing information.
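
What that extra layer of intelligence looks like will vary, but the shape of it can be sketched in a few lines. The quota tiers, key names, and in-memory counters below are entirely hypothetical; a real gateway would back this with persistent storage and contractual terms.

```python
import time
from collections import defaultdict

# Hypothetical per-tier policy: how many records a key may pull per hour.
QUOTAS = {"free": 1_000, "partner": 100_000, "ai-training": 0}  # 0 = must negotiate

usage = defaultdict(int)
window_start = time.time()

def authorize(api_key: str, tier: str, records_requested: int) -> bool:
    """Return True if this request stays within the tier's hourly quota."""
    global window_start
    if time.time() - window_start > 3600:        # reset the hourly window
        usage.clear()
        window_start = time.time()
    if usage[api_key] + records_requested > QUOTAS.get(tier, 0):
        return False
    usage[api_key] += records_requested
    return True

print(authorize("key-123", "partner", 5_000))    # True: within quota
print(authorize("key-456", "ai-training", 10))   # False: bulk scraping blocked
```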

Main constituents: Companies that control access to large blocks of data that can be useful to automated analysis.

Chance of succeeding: The moment is already here. Companies are re-examining their APIs and setting up stronger controls. The deeper question is how users will react. Could the tight controls end up cutting off some great features enabled by the API?

Remote collaboration

The COVID-19 pandemic may be a memory for many, but the battle over workspaces and proximity will continue. On one side are people who value the synergy that comes from putting people in the same room. On the other are people who talk of the time and expense of commuting.

The IT department has a big role in this debate as it tests and deploys the second and third generation of collaboration tools. Basic video chatting is being replaced by more purpose-built tools for enabling standup meetings, casual discussions, and full-blown multi-day conferences.

The debate is not just technical. Some of the decisions are being swayed by the investment that the company has made in commercial office space. (Some are even suggesting finding a way for workers to literally live at the office.) Other leaders have strong personal opinions about the best way to run a team. The IT teams must find the best way to support the larger mission with better tools and practices.

Main constituents: Any team that must balance proximity with efficiency. Any team with people who must trade off the time of commuting against the value of face-to-face interaction.

Chance of succeeding: The battle between remote and in-office advocates is already here. The real question is whether collaboration software teams can build tools so good that even lovers of face-to-face meetings will start saying, “Why don’t we all log into the tool from this conference room together?”

Physical security of digital systems

When most IT people think of computer security, they think of clever hackers who infiltrate their systems through the internet. They worry about protecting the digital data that’s stored in databases, networks, or servers. The world of locking doors and protecting physical access is left to locksmiths, carpenters, and construction managers.

But physical security is becoming a real worry, and IT managers can’t take it for granted. The best example of how physical and cybersecurity are merging may be the car thieves who’ve figured out that they can pry open a seam by the headlight, connect to the data bus, and inject the right message to open the doors and start the engine. The Death Star wasn’t the only technical marvel brought down by an unguarded physical port.

Similar attacks on the hardware found in desktops and laptops are starting to hit closer to home for IT departments. Building devices that are secure against both digital and physical assaults is very hard.

Main constituents: The most shocked are often companies that fall victim to a poorly guarded physical attack, but everyone needs to consider the dangers.

Chance of succeeding: Basic physical security is as easy as locking your doors. Real physical security may be impossible. IT departments must find the practical balance that works for their data and, at the very least, up their game to defeat the new generation of attackers.

Reliable computing

Trustworthy systems have always been the goal for developers, but lately some high-profile events are convincing IT managers that better architectures and practices are necessary. They know that many software developers fall into the trap of watching their code run perfectly on their desktops and assuming it will always be so in the real world. Well-publicized software failures at companies like Southwest Airlines and EasyJet show how code that runs well most of the time can also fail spectacularly.

The challenge for IT teams is trying to anticipate these problems and build yet another layer of resiliency into their code. Some systems such as databases are designed to offer high reliability. Now developers need to take this to the next level by adding even better protections. Some of the newer architectures such as microservices and serverless designs offer better protections but often come with troubles of their own.

Developers are re-evaluating their microservices and massive monoliths with an eye toward understanding how and when they collapse.
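
One small piece of that extra resiliency layer is simply assuming that dependencies will fail and retrying gracefully. Below is a sketch of a generic retry-with-backoff wrapper around a hypothetical flaky service; production systems would also need timeouts, circuit breakers, and idempotency guarantees.

```python
import random
import time

def call_with_retries(operation, attempts=5, base_delay=0.5):
    """Retry a flaky operation with exponential backoff and jitter.

    One common resiliency pattern, sketched minimally. If every attempt
    fails, the last exception is re-raised so callers can handle it.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise                              # out of retries: surface the failure
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)                      # back off before trying again

# Hypothetical flaky dependency that fails about half the time.
def flaky_booking_service():
    if random.random() < 0.5:
        raise ConnectionError("downstream service unavailable")
    return "booking confirmed"

print(call_with_retries(flaky_booking_service))
```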

Main constituency: Businesses like airlines that can’t live without their technology.

Chance of succeeding: Companies that are able to improve the reliability and avoid even the occasional mishaps and catastrophes will be the ones that live on. Those that don’t will be bled dry by the lost contracts and opportunities.

WebAssembly

Much of the past several decades of software development has been devoted to duplicating the ease and speed of native desktop code inside the security straitjacket of the modern web browser.

The results have generally been good, but they’re about to get better thanks to the emergence of WebAssembly (Wasm). The technology lets developers write more complex code that offers richer, more flexible interfaces to the user. Sophisticated tools like photo editors and more immersive environments become possible.

The technology also opens the door to compute-heavy work, such as running more sophisticated AI models, with better, more responsive code. Tools like CheerpJ, Wasmer, and Cobweb are just three examples bringing languages like Java, Python, and COBOL to what was once the exclusive land of JavaScript.
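
The same runtimes also work outside the browser. Here is a small sketch using the Wasmer Python bindings (assuming the `wasmer` and `wasmer_compiler_cranelift` packages and their 1.x API); the module is a trivial hand-written add function, not taken from any of the projects named above.

```python
# pip install wasmer wasmer_compiler_cranelift
from wasmer import Store, Module, Instance, wat2wasm

# A tiny hand-written WebAssembly module in text format (WAT).
wat = """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
"""

store = Store()
module = Module(store, wat2wasm(wat))   # compile the module
instance = Instance(module)             # instantiate it with no imports
print(instance.exports.add(2, 3))       # -> 5
```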

Main constituency: Teams that must deliver complex, reactive code to distant users. If much of the work is done on the client machine, WebAssembly can often speed up that layer. Managers who want to ensure that all the hardware runs the same code will find the simplicity attractive.

Chance of succeeding: The foundation is here. The deeper problem is building out the compilers and distribution mechanisms to put the running code on people’s machines. The biggest challenge may be that downloading and installing native executable code is already easy enough for many users that the browser route loses some of its appeal.

Decentralized identity

The idea of splitting up our so-called identity is evolving on two levels. On one, privacy advocates are building clever algorithms that reveal just enough information to pass a given identity check while keeping everything else about a person secret. One potential application, for instance, is a digital drinking license that would guarantee that a beer buyer was over 21 years old without revealing their birth month, day, or even year.
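
The real cryptography behind such schemes (zero-knowledge proofs, selective-disclosure credentials) is involved, but the shape of the idea can be sketched with a much simpler stand-in: the issuer attests only to the predicate the verifier needs, never the underlying birthdate. Everything below, including the shared HMAC key, is a hypothetical simplification that omits the actual privacy machinery.

```python
import hashlib
import hmac
import json

# Hypothetical issuer secret; a real system would use asymmetric signatures
# and zero-knowledge proofs rather than a shared HMAC key.
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(over_21: bool) -> dict:
    """Issuer signs only the predicate, not the birthdate."""
    claim = json.dumps({"over_21": over_21}, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": tag}

def verify_credential(credential: dict) -> bool:
    """Verifier checks the issuer's signature and reads only the predicate."""
    expected = hmac.new(ISSUER_KEY, credential["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["signature"])
            and json.loads(credential["claim"])["over_21"])

cred = issue_credential(over_21=True)   # no birth month, day, or year disclosed
print(verify_credential(cred))          # -> True
```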

Another version seems to be evolving in reverse as the advertising industry looks for ways to stitch together all our various pseudonyms and semi-anonymous browsing on the web. If you go to a catalog store to shop for umbrellas and then start seeing ads for umbrellas on news sites, you can see how this is unfolding. Even if you don’t log in, even if you delete your cookies, these clever teams are finding ways to track us everywhere.

Main constituents: Enterprises in fields like medical care or banking that handle personal information and must guard against crime.

Chance of succeeding: The basic algorithms work well; the challenge is social resistance.

GPUs

Graphics processing units were first developed to speed up the rendering of complex visual scenes, but lately developers have been discovering that the chips can also accelerate algorithms that have nothing to do with games or 3D worlds. Some physicists have been using GPUs for complex simulations for some time. Some AI developers have deployed them to churn through training sets.

Now, developers are starting to explore using GPUs to speed up more common tasks, such as database searching. The chips shine when the same operation must be applied to vast amounts of data in parallel. When the problem is the right shape, they can speed up jobs by 10 to 1,000 times. Not only that, but companies like Apple and AMD are doing a good job integrating the GPU with the CPU to produce chips that handle both types of tasks well.
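
A minimal sketch of the pattern using CuPy, which mirrors the NumPy API on NVIDIA GPUs (it assumes a CUDA-capable GPU and the `cupy` package; the range filter is purely illustrative of the kind of scan a database might push down to the GPU):

```python
# pip install cupy-cuda12x   (requires an NVIDIA GPU and a matching CUDA toolkit)
import numpy as np
import cupy as cp

# Illustrative workload: scan 10 million records for values in a narrow range.
cpu_data = np.random.default_rng(0).random(10_000_000, dtype=np.float32)
gpu_data = cp.asarray(cpu_data)              # copy once into device memory

cpu_hits = int(np.count_nonzero((cpu_data > 0.25) & (cpu_data < 0.26)))
gpu_hits = int(cp.count_nonzero((gpu_data > 0.25) & (gpu_data < 0.26)))

assert cpu_hits == gpu_hits                  # same answer, different silicon
print(cpu_hits)
```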

Main constituents: Data-driven enterprises that want to explore computation-heavy challenges such as AI or complex analytics.

Chance of succeeding: Smart programmers have been tapping GPUs for years for special projects. Now they’re unlocking the potential in projects that touch on problems faced by a larger array of businesses.

Green computing

Every day we hear new stories about huge new data centers filled with massive computers that are powering the cloud and unlocking the power of incredibly complicated algorithms and artificial intelligence applications. After the feeling of awe dissipates, two types of people cringe: the CFOs who must pay the electricity bill, and green advocates who worry about what this is doing to the environment. Both groups have one goal in common: reducing the amount of electricity used to create the magic.

It turns out that many algorithms have room for improvement, and this is driving the push for green computing. Does that machine learning algorithm really need to study a terabyte of historical data, or could it get the same results with several hundred gigabytes? Or maybe just ten or five or one? The new goal for algorithm designers is to generate the same awe with much less electricity, saving money and maybe even the planet.
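
One rough way to test that intuition on a given model is to train on shrinking fractions of the data and watch the accuracy. The sketch below uses scikit-learn on a synthetic dataset, so the exact numbers are only illustrative and will vary by problem.

```python
# pip install scikit-learn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large historical dataset.
X, y = make_classification(n_samples=200_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for fraction in (1.0, 0.1, 0.01):
    n = int(len(X_train) * fraction)       # train on a shrinking slice of the data
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"{fraction:>5.0%} of the data -> accuracy {model.score(X_test, y_test):.3f}")
```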

Main constituents: Any entity that cares about the environment — or pays a utility bill.

Chance of succeeding: Programmers have been sheltered from the true cost of running their code by Moore’s Law. There’s plenty of room for better code that will save electricity.

Decentralized finance

Some call it a blockchain. Others prefer the more mundane phrase “distributed ledger.” Either way, the challenge is to create a shared version of the truth — even when everyone doesn’t get along. This “truth” evolves as everyone adds events or transactions to the shared distributed list. Cryptocurrencies, which rely heavily on such mathematically guaranteed lists to track who owns the various virtual coins, have made the idea famous, but there’s no reason to believe decentralized approaches like this need to be limited to just currency.

Decentralized finance is one such possibility, and its potential rests in part on the fact that it would involve several companies or groups that need to cooperate even though they don’t fully trust one another. The chain of transactions held in the distributed ledger might track insurance payments, car purchases, or any number of assets. As long as all parties agree to the ledger as truth, the individual transactions can be guaranteed.
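
The core mechanism, in which each entry commits to everything recorded before it, fits in a few lines. Here is a toy append-only ledger for illustration only; a real distributed ledger would add signatures, a consensus protocol, and networking.

```python
import hashlib
import json

def append_entry(ledger: list, transaction: dict) -> None:
    """Add a transaction that hashes the previous entry, chaining the history."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps({"tx": transaction, "prev": prev_hash}, sort_keys=True)
    ledger.append({"tx": transaction, "prev": prev_hash,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(ledger: list) -> bool:
    """Recompute every hash; tampering with any earlier entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in ledger:
        body = json.dumps({"tx": entry["tx"], "prev": prev_hash}, sort_keys=True)
        if (entry["prev"] != prev_hash
                or entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

ledger = []
append_entry(ledger, {"asset": "policy-123", "event": "insurance payment", "amount": 500})
append_entry(ledger, {"asset": "vin-789", "event": "car purchase", "amount": 23000})
print(verify(ledger))   # -> True; altering an earlier entry would make this False
```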

There also continues to be real interest in non-fungible tokens (NFTs), even though the hype has faded. These can end up having practical value to any business that wants to add a layer of authenticity to a digital experience. Perhaps a baseball team might issue an NFT version of the scorecard to anyone who bought a real ticket to sit in the stands. Perhaps a sneaker company might dole out NFTs with access to the next drop of a certain colorway.

Main constituency: Anybody who needs to both trust and verify their work with another company or entity. Enterprises working with digital elements that need more authenticity and, perhaps, artificial scarcity.

Chance of succeeding: It’s already here, but only in the cryptocurrency world. More conservative companies are slowly following.
