Access to sufficient, reliable, and timely data will be a key determinant of success for enterprises over the coming years as AI transforms business workflows.

As enterprises become more data-driven, the old computing adage "garbage in, garbage out" (GIGO) has never been truer. The application of AI to many business processes will only accelerate the need to ensure the veracity and timeliness of the data used, whether it is generated internally or sourced externally.

The costs of bad data

Gartner has estimated that organizations lose an average of $12.9 million a year from using poor-quality data, and IBM calculates that bad data costs the US economy more than $3 trillion a year. Most of these costs relate to the work carried out within enterprises checking and correcting data as it moves through and across departments; IBM believes that half of knowledge workers' time is wasted on these activities.

Beyond these internal costs, there is the greater problem of reputational damage among customers, regulators, and suppliers when organizations act improperly on bad or misleading data. Sports Illustrated and its CEO found this out recently when it was revealed the magazine had published articles written by fake authors with AI-generated images. While the CEO lost his job, the parent company, Arena Group, lost 20% of its market value. There have also been several high-profile cases of law firms getting into hot water by submitting fake, AI-generated cases as precedents in legal disputes.

The AI black box

Although costly, checking and correcting the data used in corporate decision-making and business operations has become an established practice in most enterprises. However, understanding how some large language models (LLMs) were trained, on what data, and whether their outputs can be trusted is another matter entirely, especially given the increasing rate of hallucinations.
In Australia, for instance, an elected regional mayor threatened to sue OpenAI over a false claim made by the company's ChatGPT that he had served prison time for bribery when, in fact, he had been a whistleblower on criminal activity. Training an LLM on trusted data and adopting approaches such as iterative querying, retrieval-augmented generation (RAG), or reasoning can significantly lessen the danger of hallucinations, but cannot guarantee they won't occur.

Training on synthetic data

As companies seek a competitive advantage through deploying AI systems, the rewards may go to those with access to sufficient and relevant proprietary data to train their models. But what about the majority of enterprises without access to such data? Researchers have predicted that, if current trends continue, the high-quality text data used for training LLMs will run out before 2026. One answer to this impending problem will be an increased use of synthetic training data; Gartner estimates that by 2030, synthetic data will overtake real data in AI models.

However, returning to the GIGO warning, an over-reliance on synthetic data risks accelerating the dangers of inaccurate outputs and poor decision-making: such data is only as good as the models that created it. A longer-term danger is "data inbreeding," in which AI models are trained on sub-standard synthetic data whose outputs are then fed back into later models.

Moving with caution

The AI genie is out of the bottle, and while the widespread digital revolution promised by some overly enthusiastic technology vendors and consultants will take more time to arrive, AI will continue to transform businesses in ways we cannot yet imagine. However, access to reliable and trusted data at the scale enterprises need is already a bottleneck, and one that CIOs and other business leaders must find ways to remedy before it is too late.
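To make the retrieval-augmented generation approach mentioned above concrete, here is a minimal sketch of the pattern: retrieve the most relevant passages from a trusted document store, then ground the model's prompt in that retrieved context. Everything here is an illustrative assumption, not a real API; the word-overlap scoring stands in for the embedding search and vector database a production system would use, and the final prompt would be passed to whatever LLM the enterprise has deployed.

```python
# Toy RAG sketch: retrieval over a small trusted corpus, then a grounded
# prompt. Word-overlap scoring is a stand-in for real embedding similarity.

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank trusted documents by naive word-overlap with the query."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved text to curb hallucination."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical trusted corpus for illustration only.
docs = [
    "The mayor was a whistleblower on criminal activity.",
    "Synthetic data may overtake real data in AI models by 2030.",
    "Poor data quality costs organizations millions each year.",
]
prompt = build_prompt("What role did the mayor play?", docs)
```

The design point is that the model is instructed to answer only from vetted context rather than from whatever it absorbed during training, which is why RAG reduces, but cannot eliminate, hallucinated claims like the one in the Australian case.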