8 big IT failures of 2023

By Josh Fruhlinger, Contributing Writer

Feature | 26 Dec 2023 | 11 mins
Business Continuity | Disaster Recovery | Generative AI

From damaged database files to generative AI misuse, these high-profile IT disasters wreaked real-world havoc this year. Let them serve as cautionary tales.


IT provides the plumbing for just about every company in existence today. Most of the time, that plumbing works fine — but when something goes wrong, it can be more embarrassing (and more expensive) than the most extravagantly overflowing toilet.

We’ve gathered eight instances of big tech failures that struck companies and other organizations in 2023. Every problem is a teachable moment, of course, and we hope these disasters can serve as cautionary tales as you try to navigate your own potential IT troubles in 2024.

Tech trouble in the skies

The airline industry has all the elements necessary to produce truly dire IT horror stories: It’s dominated by huge companies and large government bureaucracies, and it requires near-perfect coordination of thousands of aircraft and millions of passengers, where any hiccup can cause cascading failures leading to delays or worse. And because the incumbent companies have been around for so long, many are running IT systems with elements that are years or decades old. Honestly, it’s a wonder the system works at all. Both United Airlines and Hawaiian Airlines saw service outages in 2023 resulting from wonky software upgrades, and Southwest ended the previous year with a Christmas travel meltdown blamed on outdated systems.

Probably the worst IT airline disaster of 2023 came on the government side, however. The FAA maintains a database called Notice to Air Missions (NOTAM) that provides an automated, centralized source of information about things like closed runways or equipment outages at various airports, or hazards along different routes. On January 11, NOTAM went down, causing a nationwide “ground stop” that halted all takeoffs, though planes in the air were allowed to continue to their destinations.

The outage was traced to a damaged database file: A contractor working to fix a synchronization problem between the live and backup databases ended up corrupting both. The engineer “replaced one file with another” in “an honest mistake that cost the country millions.” The incident holds some obvious lessons about keeping critical data redundantly backed up, especially if you’re going to be mucking around with the backup system.
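The details of the FAA’s repair procedure aren’t public, but the underlying discipline is easy to sketch. Here’s a minimal, purely illustrative Python example of the safeguards the botched sync apparently lacked: verify a backup against a known-good checksum before touching anything, do the work on a staging copy, and never write to the backup itself while repairing the primary. (The file handling and digest bookkeeping here are our assumptions, not the FAA’s actual tooling.)

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Compute a file's SHA-256 digest in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def restore_from_backup(live: Path, backup: Path, known_good_digest: str) -> None:
    """Repair the live file from backup without ever writing to the backup.

    The backup is verified first, the work happens on a staging copy, and
    the final swap is atomic, so a botched restore can't corrupt both sides.
    """
    if sha256(backup) != known_good_digest:
        raise RuntimeError("Backup failed its integrity check; aborting restore")
    staging = live.with_name(live.name + ".staging")
    shutil.copy2(backup, staging)   # operate on a copy, never the backup itself
    if sha256(staging) != known_good_digest:
        raise RuntimeError("Staging copy is corrupt; aborting restore")
    staging.replace(live)           # atomic rename on the same filesystem
```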

The NYSE’s brittle backup process

The FAA wasn’t the only organization to discover that a backup process meant to stave off disaster can spawn a disaster of its own; the New York Stock Exchange faced a similar crisis in January. The NYSE wisely locates its backup servers in Chicago, hundreds of miles from Wall Street, to serve as a data redoubt in case a crisis hits lower Manhattan. Somewhat less wisely, its daily backup relies on a process whereby employees have to physically turn backup systems on and off at the appropriate times.

Starting and stopping digital processes at the exact same time every day is in fact something computers are fairly good at, and something people tend to screw up now and then, so it was perhaps inevitable that a crisis would arise one of these days. And arise it did on January 24, when a Chicago employee failed to turn the backup server off at the appropriate time. As a result, when trading began in New York at 9:30 a.m., the NYSE computers thought they were continuing the previous day’s trading session and ignored the day’s opening auctions, which are supposed to set initial prices for many stocks. The outcome was a series of violent market swings and numerous transactions at incorrect prices that had to be cancelled at great expense. The lesson: Never send a human to do a computer’s job, especially if that computer’s job is pretty simple.
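The fix is as old as cron: let a scheduler hit the switch at the same moment every day. The NYSE’s actual backup tooling isn’t public, so the following is only a bare-bones Python illustration of the idea, with a placeholder service name and an invented cutoff time:

```python
import datetime
import subprocess
import time

STOP_AT = datetime.time(hour=17, minute=0)  # hypothetical daily cutoff time

def seconds_until(target: datetime.time) -> float:
    """Seconds from now until the next occurrence of target (local time)."""
    now = datetime.datetime.now()
    candidate = datetime.datetime.combine(now.date(), target)
    if candidate <= now:
        candidate += datetime.timedelta(days=1)
    return (candidate - now).total_seconds()

# Runs as a simple daemon: sleep until the cutoff, then stop the service.
while True:
    time.sleep(seconds_until(STOP_AT))
    # The systemd unit name here is a placeholder, not a real NYSE system.
    subprocess.run(["systemctl", "stop", "backup-mirror.service"], check=True)
```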

In space, no one can cancel your software license

NASA is a scientific marvel that does all sorts of cool and inspiring space stuff; it’s also a sprawling government bureaucracy with thousands of employees and computer systems under its umbrella. Unfortunately, the agency is having a harder time keeping track of all those computers than it is of various bits of space debris. A report this year from NASA’s Office of Inspector General (OIG) focused on the numerous licenses NASA purchased for Oracle products to support the Space Shuttle program, which wrapped up more than a decade ago. Not only is the agency locked into Oracle tech as a result, but poor documentation processes mean that NASA isn’t really sure how many of those Oracle systems it’s actually using. As a result, the agency spent $15 million over the past three years on software it may not be using, rather than risk a software audit from Oracle that might end in an even more costly fine.

The solution to a problem like this is to implement a software asset management program that helps you understand exactly what software you’re using and which licenses you do and don’t need. The good news is that the US federal government has mandated that agencies like NASA implement such programs; the bad news is that, according to the OIG report, “efforts to implement an enterprise-wide software asset management program have been hindered by both budget and staffing issues and the complexity and volume of the agency’s software licensing agreements.”
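At its core, a software asset management check is just a reconciliation between what you bought and what you actually run. A toy Python version of that reconciliation (the product names and counts below are invented, not NASA’s real numbers) might look like this:

```python
from collections import Counter

# Invented inventory data: licenses purchased vs. instances actually deployed.
purchased = Counter({"oracle-db-enterprise": 500, "oracle-weblogic": 200})
deployed = Counter({"oracle-db-enterprise": 140, "oracle-weblogic": 260})

for product in sorted(purchased.keys() | deployed.keys()):
    surplus = purchased[product] - deployed[product]
    if surplus > 0:
        print(f"{product}: {surplus} unused licenses (shelfware to renegotiate)")
    elif surplus < 0:
        print(f"{product}: {-surplus} licenses short (audit risk)")
```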

Software licensing situation cloudy

If NASA serves as an example of an overly cautious government agency paying for software it may not be using just in case, cloud service provider Nutanix was rocked by a scandal this May when it emerged that the company had taken the opposite approach to software licensing. Specifically, Nutanix used third-party software in a “noncompliant manner,” which is a euphemistic way of saying “without paying for it, even though it was supposed to.”

The company used software from two different vendors for “interoperability testing, validation and customer proofs of concept, training and customer support.” Unfortunately, it did all that using versions of the software marked for evaluation purposes only, an “evaluation” that stretched on for years. The issue was discovered by an internal review, and because the vendors needed to be paid for the noncompliant use, Nutanix was unable to file its quarterly earnings report with the SEC on time while it tried to get a handle on what it owed. The screwup resulted in the CIO leaving the company, with the lesson perhaps being that the only thing worse than paying for software you don’t use is not paying for software you do.
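An internal review eventually caught the problem, but a simple expiry check could have flagged it years earlier. Here’s a hypothetical Python sketch, with invented product names and dates, of the kind of register that keeps an “evaluation” from quietly stretching into production use:

```python
import datetime

# Invented register of evaluation-only installs and when each evaluation ends.
evaluations = {
    "vendor-a-interop-suite": datetime.date(2021, 3, 31),
    "vendor-b-support-tools": datetime.date(2020, 11, 15),
}

today = datetime.date.today()
for product, expires in evaluations.items():
    if today > expires:
        overdue = (today - expires).days
        print(f"{product}: evaluation expired {overdue} days ago; license it or remove it")
```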

Turn off the lights, the party’s over

This next story is, technically, an IT fail that dates back to 2021, but we’ll include it in this year’s roundup because it wasn’t finally resolved until 2023. For nearly 10 years, Minnechaug Regional High School in Massachusetts had been happily running a “green lighting” system installed by 5th Light that automatically adjusted the lights inside and outside the school as needed. But in August 2021, teachers and students noticed that the lights were staying on at full brightness continuously. It turned out the system had been hit by malware and had gone into a fallback mode in which the lights never turned off.

What followed was a series of sobering discoveries that offer lessons to anyone thinking of relying entirely on software for the purposes of controlling things in the real, physical world. The high-tech lighting system had no manual switches that could simply be turned on and off, and the software was integrated into other school systems and could not be easily replaced. The original vendor no longer existed, and the IP had been bought and sold multiple times; it took weeks for the new owner, a company called Reflex Lighting, to track down someone who understood how the school’s system worked. A repair plan was finally developed, but by then post-COVID lockdown supply chain disruptions meant it was months before new equipment could be shipped from China to Massachusetts.

Finally, after nearly 18 months of leaving the lights burning continuously (and occasionally screwing bulbs in and out by hand as needed), the system was updated this year, and yes, it comes with the physical on-off switch that probably should’ve been there in the first place.

When a crash means a real crash

The Minnechaug Regional High School story is a good example of why mechanical, real-world devices don’t always mix well with software. But mechanical and electrical engineering isn’t free of problems either, and sometimes software can help. Take the example of the MRH-90 Taipan, a military helicopter used in Australia: In 2010, a “catastrophic” engine failure occurred when a pilot attempted a so-called “hot start,” powering down and then restarting the engine mid-mission. The mechanical problem was fixed in software, with the Australian Defence Force rolling out a patch designed to prevent the helicopter from being hot started.

Unfortunately, the first rule of software patches is that they work only if you actually roll them out. Despite the fact that this patch had been available for the better part of a decade, it wasn’t installed on all of Australia’s Taipans, and a hot start led to a helicopter crash during a training mission this past April.
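Patch management is the unglamorous answer: an authoritative record of what’s installed where, checked against the minimum version that contains the fix. A deliberately simplified Python sketch makes the point (the airframe IDs and version numbers below are invented for illustration):

```python
# Invented fleet records: airframe ID -> installed control-software version.
fleet = {
    "A40-001": "4.2.1",
    "A40-002": "3.9.0",  # never received the hot-start fix
    "A40-003": "4.2.1",
}

REQUIRED = (4, 0, 0)  # invented version that first contains the patch

def parse(version: str) -> tuple:
    """Turn '4.2.1' into (4, 2, 1) for a sane version comparison."""
    return tuple(int(part) for part in version.split("."))

unpatched = [tail for tail, version in fleet.items() if parse(version) < REQUIRED]
if unpatched:
    print("Ground until patched:", ", ".join(unpatched))
```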

Cascading phone failures down under

Australia was the site of another high-profile IT failure in November, when Optus, the country’s second-largest telecom provider, went down for 12 hours, leaving half of Australians without phone or Internet connectivity. The fault was ultimately traced to routing changes pushed out by Singtel, the Singapore-based company that owns Optus; the updates arrived in such a large wave that they overwhelmed Optus’s routers, which then had to be physically restarted, something that took quite a while given Australia’s size.
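Routers commonly carry preset safety limits on how many route prefixes a peer can push before the session is shut down in self-defense, and reporting on the outage pointed at exactly this kind of threshold being blown through. The toy Python model below shows the fail-safe concept only; real carrier gear works nothing like this, and the numbers and names are invented:

```python
from dataclasses import dataclass, field

MAX_PREFIXES = 100  # invented safety threshold; real limits are far larger

@dataclass
class PeerSession:
    """A toy model of a routing session with a max-prefix safety valve."""
    name: str
    routes: set = field(default_factory=set)
    active: bool = True

    def apply_updates(self, updates: list) -> None:
        if len(self.routes) + len(updates) > MAX_PREFIXES:
            # Fail safe: drop the session and alert operators, rather than
            # exhausting memory and forcing an on-site physical restart.
            self.active = False
            print(f"{self.name}: prefix limit exceeded, session shut down")
            return
        self.routes.update(updates)

session = PeerSession("upstream-peer")  # the peer name is illustrative
session.apply_updates([f"10.0.{i}.0/24" for i in range(200)])  # trips the valve
```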

The problem with being a service provider of national significance is that when you have a high-profile IT failure, your executives get dragged in front of the national parliament to explain what went wrong. It definitely doesn’t help if you tell lawmakers that the problem was so widespread and unexpected that you didn’t have a plan for dealing with it, or that your CEO carries SIM cards from rival carriers to make sure she can still make phone calls if the carrier she’s in charge of collapses. Perhaps unsurprisingly, Optus CEO Kelly Bayer Rosmarin left the company soon thereafter. (The lesson of the massive Optus outage, we suppose, is to have a disaster plan for all different kinds of disasters, and also to configure your routers correctly.)

Artificial intelligence, real failure

Since 2023 was the year generative AI went mainstream, we’ll wrap up this list with a couple of high-profile AI disasters. In one of the more notable cases, lawyers at Levidow, Levidow & Oberman turned to ChatGPT to help them draft legal briefs for a client suing an airline over a personal injury. Unfortunately for them and their client, ChatGPT did what it’s becoming increasingly well known for: producing an extremely plausible document riddled with factual errors, including citations of multiple court cases that did not exist (a “hallucination,” in AI lingo). Lawyer Steven A. Schwartz admitted to the judge that this had been his first use of ChatGPT for professional purposes and that he “was unaware of the possibility that its contents could be false.” In his defense, he had asked ChatGPT whether its citations were fake, and the chatbot insisted that they could “be found in reputable legal databases such as LexisNexis and Westlaw.” (This turned out not to be true.)

AI failures also hit the tech journalism world, with CNET being forced to retract more than 35 stories written with the help of a tool called the Responsible AI Machine Partner, or RAMP. The less-than-responsible results not only left the company with egg on its face but drove a backlash from its workers as well. The lesson: AI is just like any other IT tool, and it shouldn’t be used if you don’t understand how it works, or if, for your particular use case, it’s still half-baked.