The extent to which AI can be biased is still being understood. The potential damage is therefore a major concern as companies increasingly use AI tools for decision-making.

AI is no better than the data it's trained on, which means biased selection and human preferences can propagate into the models and skew the results they produce. In the US, authorities are now enforcing the law against discrimination caused by prejudicial AI, and the Consumer Financial Protection Bureau is currently investigating housing discrimination stemming from biases in algorithms for lending and home valuation. "There is no exception in our nation's civil rights laws for new technologies and artificial intelligence that engage in unlawful discrimination," its director, Rohit Chopra, recently told CNBC.

Many CIOs and other senior managers are aware of the problem, according to an international survey commissioned by software supplier Progress. In the survey, 56% of Swedish managers said they believe there's definitely or probably discriminatory data in their operations today, while 62% believe it's certain or likely that such data will become a bigger problem for their business as AI and ML become more widely used.

Elisabeth Stjernstoft, CIO at Swedish energy giant Ellevio, agrees there's a risk of using biased data that isn't representative of the customer group or population being studied. "It can, of course, affect AI's ability to make accurate predictions," she says. "We have to look at the data on which the model is trained, but also at how the algorithms are designed and at the selection of features. The bottom line is that the risk is there, so we need to monitor the models and correct them if necessary."

That said, she's not concerned about the AI solutions Ellevio uses today. "We use AI primarily to write code faster and better, so I'm not worried about that," she says. "After the code is developed, it's also reviewed. Machine learning is mainly used for things of a technical nature, to be able to make predictive analyses."

She has, however, encountered problems obtaining relevant training data to predict energy load when the cold is at its most severe. "It's a challenge because occasions of such cold are so rare that there's simply not much to train on," she says.

Consider the consequences

Göran Kördel, CIO at Swedish metals company Boliden, agrees it's vital to understand the risks that come with biased AI. "This is probably something all CIOs are thinking about now, and I think it's important we do, even if we don't know what AI will look like in a few years or how it will be used then," he says. "We have to think about the consequences of that."

But for Boliden, he sees no major risks in the short term. "We mostly use image analysis, where we've done some pilots with cameras that examine things," he says. "I worry more about biased data when it comes to things related to humans and generative AI; about AI that produces consumer-oriented information and uses consumer data."

Kördel also sees a risk that AI can dampen creativity and inhibit thinking outside the box.

A thorough examination

Skandia CIO Johan Clausén points to the importance of evaluating needs and risks when new solutions are introduced. But for the time being, he, like Kördel and Stjernstoft, doesn't see any risks with the use of AI in his company.
"We use AI to a very limited extent, which is why I don't see biased data as a challenge at the moment," he says. "The external data sources that we have are reliable."

But to guard against future problems, Clausén says it's important to think through where data must be trustworthy and where it might be acceptable to start out with skewed data, and from there to set up controls based on that model.

These CIOs agree, however, that the current shortage of AI competence adds to the risk. Kördel points out that a skills gap always accompanies new technology, which is a general obstacle for all companies. At the same time, he believes companies have more time than many might think.

"With all due respect to the developing technology, the question is how quickly we will apply it," he says. "I think it will take longer than we think to do so on a large scale."