AI systems don’t need to be perfect to be put into production. They just need to be better than humans.

Artificial intelligence (AI) has been a hot topic among IT and business leaders, as it promises to be the biggest driver of change in human history. The way we work, live, learn and play will never be the same once AI is infused into all of our devices, cars, appliances and everything else we interact with. CIOs are well aware of this and are looking to use AI as part of their digital transformation strategy.

One of the challenges is that people often overestimate what an AI can do and expect perfection. If there are any mistakes at all, it’s back to the drawing board to refine the algorithms or spend more time in the learning phase. With self-driving cars, for example, when an accident occurs, people freak out and act like the car is the long-lost cousin of a T-600 Terminator that purposefully caused the accident to wipe out a human. The fact is, self-driving cars don’t need to be accident-free; they just need to be better than human drivers to be helpful to society. That bar is achievable today.

This means, broadly, that an AI system merely needs to be assistive (i.e., helpful to the person using it) to be put into production. Can it make a doctor work smarter? Can it help classify images faster than people? Can it predict outages faster than an engineer? Once that threshold is met, roll it out and reap the benefits.
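To make that deployment threshold concrete, here is a minimal sketch in Python of the “better than the human baseline” test. The human_accuracy figure and the sample labels are hypothetical illustrations, not numbers from the article or from any of the vendors mentioned.

```python
# A minimal, illustrative sketch of the "good enough to assist" threshold.
# The human baseline and the sample labels below are hypothetical.
from sklearn.metrics import accuracy_score

def ready_for_production(y_true, y_pred, human_accuracy=0.85):
    """Deploy once the model beats the human baseline, not once it is perfect."""
    model_accuracy = accuracy_score(y_true, y_pred)
    return model_accuracy > human_accuracy, model_accuracy

# Hypothetical ground truth vs. model predictions (1 = target class, 0 = other)
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]  # one error out of ten

deploy, accuracy = ready_for_production(y_true, y_pred)
print(f"Model accuracy: {accuracy:.0%} -> ready for production: {deploy}")
```

The point of the check is not the specific numbers; it is that the bar is a human baseline rather than perfection.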
Aim for minimum viable intelligence

Last week I attended an AI event in San Francisco hosted by Cambridge Consultants, NVIDIA and NetApp where this very topic was discussed. During his keynote, Tim Ensor, director of AI at Cambridge Consultants, mentioned that when his company works with customers, AI initiatives are launched once they achieve “minimum viable intelligence” (MVI). The threshold for what “minimum viable” means will vary by use case. For example, an AI-based robot that assembles customer orders for a retailer needs to be near perfect, as errors here can cost companies big bucks in returns. For other applications, though, the bar isn’t nearly as high.

One of the use cases Cambridge Consultants presented was an AI that catalogs songs by genre. Ensor explained that the AI did a fairly good job with studio music but had a hard time with live human performances, as it interpreted the errors as jazz. In this case, the incorrect classifications can be fed back into the system to further train the AI, so launching the solution earlier actually helps gather more data faster.

Another example was a medical application called Bacill AI that can examine medical images at a microscopic level to find tuberculosis in people in third world countries. This is normally a painstaking process that can take doctors hours, but the AI can scan the images and find anomalies indicating TB in a fraction of the time. Again, the algorithms do not need to be 100% accurate at the outset; as more analysis is done, the results can be fed back into the system as training data to bring it closer to perfect.

Data variety matters more than volume

A critical point Cambridge Consultants brought up during the presentation was the value of different types of data. Historically, businesses would train AI systems with large volumes of curated data (i.e., data that has been scrubbed and cleaned to remove anomalies, duplicate information, etc.). This was needed because AI learning algorithms were not that sophisticated, and despite the high volumes of data, training could still lead to bad insights. Today, AI systems learn more naturally (i.e., more like humans) and can be fed a much smaller amount of data that can be curated, raw (unfiltered) or even synthetic (generated by people or machines). And the use of generative adversarial networks (GANs) allows the AI to create its own data during the training process. For those not familiar with a GAN, it pits two models against each other: a generator that creates synthetic examples and a discriminator that tries to tell them apart from real data. That competition speeds up the learning process, finishing in days instead of months, says Cambridge Consultants. This means achieving MVI can be done in a fraction of the time it once took.
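For readers who want to see the GAN idea in code, the sketch below is a deliberately tiny, hypothetical example in PyTorch: a generator learns to produce synthetic 1-D samples that a discriminator can no longer distinguish from “real” data drawn from a made-up distribution. It only illustrates how a GAN creates its own training data; it is not Cambridge Consultants’ system or anything resembling a production model.

```python
# A minimal, illustrative GAN sketch; the "real" 1-D Gaussian data is hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # Stand-in for real training data: samples from N(4, 1.25)
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to tell real samples from generated ones
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The trained generator now emits synthetic data; its mean should drift toward ~4.0
print(generator(torch.randn(1000, 8)).mean().item())
```

In a real project the 1-D numbers would be replaced by images, audio features or whatever the domain requires, but the generator-versus-discriminator loop that produces synthetic training data stays the same.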