
Between responding to supply chain disruptions, navigating the economic slowdown, reacting to inflation, retaining and winning customers, and managing inventories and production more effectively, data quality has never been more crucial for your business.

In the digital age, data is a business’s most valuable resource. Data collection, data analytics and data governance strategies are what separate leaders from the rest of the pack. And data quality is woven throughout the entire data architecture.

What is data quality?

A Forrester survey found that top customer intelligence professionals cite the inability to integrate data and to manage data quality as the top two factors holding back customer intelligence. But data quality is about more than customers. Top-level executives and management use internal data to drive daily operations and meet business goals.

Quality data should be accurate, complete, consistent, reliable, secure, up to date and not siloed. High-quality data is often defined as data that is “fit for use in operations, decision making and planning.” High-quality data also correctly represents the real-world constructs it refers to.
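To make those dimensions concrete, they can be expressed as automated checks. Below is a minimal sketch in Python with pandas; the customer table, column names and freshness threshold are illustrative assumptions, not a standard.

```python
import pandas as pd

# Hypothetical customer records; column names and values are illustrative only.
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@example.com", None, "b@example.com", "not-an-email"],
    "last_updated": pd.to_datetime(["2022-11-01", "2022-03-15", "2022-11-01", "2021-01-10"]),
})

checks = {
    # Completeness: every record has an email address.
    "complete": customers["email"].notna().all(),
    # Accuracy (rough proxy): emails match a minimal pattern.
    "accurate": customers["email"].str.contains("@", na=False).all(),
    # Consistency: no duplicate customer IDs.
    "consistent": not customers["customer_id"].duplicated().any(),
    # Timeliness: every record touched within 180 days (assumed threshold).
    "updated": (pd.Timestamp.now() - customers["last_updated"]).dt.days.max() <= 180,
}

for dimension, passed in checks.items():
    print(f"{dimension}: {'PASS' if passed else 'FAIL'}")
```

Each failed check points at specific records to investigate, which is the practical payoff of defining quality as testable dimensions rather than a slogan.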

The difference between internal and external data, and what makes each “fit for use,” is important. External data is generated by a company’s customer base; it may be high quality for marketing campaigns yet not fit for use in specific business decisions that require internal data. Whether external or internal, data quality should always be verified and should meet or exceed expectations.

Furthermore, as businesses and organizations embrace digital transformation and migrate to cloud and hybrid cloud environments, breaking down data silos becomes imperative to data quality. It’s critical for companies on this digitalization journey to understand the consequences of leaving data quality problems unfixed.

SEE: Research: Digital transformation initiatives focus on collaboration (TechRepublic Premium)

What are the business costs or risks of poor data quality?

Data quality has a direct impact on your bottom line. Poor external data quality can lead to missed opportunities, lost revenue, reduced efficiency and degraded customer experiences.

Poor internal data quality is also responsible for ineffective supply chains, an issue that has made headlines constantly over the past year. Poor data is likewise one of the main drivers of the Great Resignation, as HR departments operating with it struggle to understand their workers well enough to retain talent.

Additionally, there are severe immediate risks that companies can only address by tackling data quality. The cybersecurity threat landscape continues to grow in size and complexity, and it thrives where poor data quality management prevails.

Companies that work with data and fail to meet data, financial and privacy regulations risk reputational damage, lawsuits, fines and other consequences of noncompliance.

Gartner estimates that poor data quality costs organizations an average of $9.7 million annually. IBM, meanwhile, estimates that in the U.S. alone, businesses lose $3.1 trillion a year to poor-quality data.

As the economic slowdown and the threat of recession weigh on every organization, data quality becomes key to navigating new economic conditions, making hard decisions and drawing up short-, mid- and long-term plans.

Common data quality issues

The most common data quality issues are duplicated, ambiguous, inaccurate, hidden and inconsistent data. Newer problems include siloed data, outdated data and insecure data. A few of these can be spotted with simple checks, as shown in the sketch below.
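As a rough illustration of how these issues surface in practice, the pandas snippet below flags duplicated rows, inconsistent spellings and implausible values. The order table and its columns are assumptions made for the example.

```python
import pandas as pd

# Illustrative order records; column names and values are assumptions.
orders = pd.DataFrame({
    "order_id": [100, 101, 101, 102],
    "country":  ["US", "US", "US", "us"],
    "amount":   [25.0, 40.0, 40.0, -5.0],
})

# Duplicated data: the same order recorded twice.
duplicate_rows = orders[orders.duplicated(keep=False)]

# Inconsistent data: one concept ("US") encoded more than one way.
raw_spellings = orders["country"].nunique()           # 2: "US" and "us"
normalized = orders["country"].str.upper().nunique()  # 1 after cleanup

# Inaccurate data: values outside a plausible range.
implausible = orders[orders["amount"] < 0]

print(duplicate_rows)
print(f"country spellings: {raw_spellings} raw, {normalized} normalized")
print(implausible)
```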

But another growing issue is that data is often managed solely by IT departments, when an organization should take an all-levels approach to data quality. McKinsey says companies should think of data as a product, managing their data to create “data products” across the organization.

How to address data quality issues

When data is managed like a product, quality is built in: the data is ready to use, consume and sell. It is verified, reliable, consistent and secure. Like a finished product your company sells, it is double-checked for quality.

Gartner explains that to address data quality issues, businesses must align data policies and quality processes with business goals and missions. Executives must understand the connection between their business priorities and the challenges they face, and adopt a data quality approach that solves real-world problems.

For example, if a company has high churn rates and its main business goal is to increase its customer base, its data quality program should focus on strengthening the data behind retention and acquisition.

Once the business goal and challenges are understood and data teams have selected appropriate performance metrics, Gartner says the organization should profile its current data quality.

Data profiling should be done early and often, with high data quality standards set to benchmark progress toward a target. Data quality is not a “one and done” activity; it is a constant, active management practice that must continually evolve and adjust.
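A first profiling pass does not need heavyweight tooling. The sketch below (Python with pandas; the example DataFrame is hypothetical) summarizes completeness and distinctness per column, producing a report you can re-run on a schedule to benchmark progress against your target.

```python
import pandas as pd

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Return a per-column data quality profile to benchmark over time."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_pct": (df.isna().mean() * 100).round(1),  # completeness
        "distinct": df.nunique(),                       # duplication/ambiguity signal
        "sample": df.apply(lambda col: col.dropna().iloc[0] if col.notna().any() else None),
    })

# Example usage on an illustrative dataset.
df = pd.DataFrame({"id": [1, 2, 2], "name": ["Ann", None, "Bob"]})
print(profile(df))
```

Storing each run's output makes the profile itself a time series, so improvement (or erosion) in data quality becomes visible rather than anecdotal.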

SEE: Hiring Kit: Database Engineer (TechRepublic Premium)

Improving data quality

McKinsey explains that teams using data should not have to waste time searching for it, processing it, cleaning it or making sure it is ready for use. It proposes an integral data architecture to deal with data quality and claims its model can accelerate business use cases by as much as 90%, reduce data-associated costs by 30% and keep companies free from data governance risks.

To improve data quality, organizations need the right model. McKinsey warns that neither the grassroots approach, in which individual teams piece together data, nor the big-bang data strategy, in which a centralized team handles every process, will reap good results.

In an effective data quality model, different teams are responsible for different types of data, which are classified by use. Each team works independently. For example, data that consumers will use in digital apps should be managed by a team responsible for cleaning, storing and preparing the data as a product.

Internal data used for reporting systems or decision-making should likewise be managed by a separate team responsible for closely controlling quality, security and changes to the data. This focused approach makes it possible to use the data for operational decisions and regulatory compliance. The same applies to data shared externally and to information used for advanced analytics, where a team must clean and engineer the data so it can feed machine learning and AI systems.

Companies that excel in creating data products will need to set standards and best practices and track performance and value across internal and external business operations. This attention to a productized version of data is one of the most effective ways to protect against data quality erosion.
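One lightweight way to keep a productized dataset from eroding is to publish it behind an explicit contract that is validated before every release. The sketch below is an assumption about what such a gate might look like, not a prescription from McKinsey; the contract fields, column names and thresholds are hypothetical.

```python
import pandas as pd

# A minimal, hypothetical "data product" contract: the owning team promises
# a schema and quality thresholds, and the gate enforces them before release.
CONTRACT = {
    "required_columns": ["customer_id", "email", "signup_date"],
    "max_null_pct": 1.0,        # assumed completeness threshold, in percent
    "unique_key": "customer_id",
}

def release(df: pd.DataFrame) -> pd.DataFrame:
    """Validate a dataset against its contract; refuse to publish on failure."""
    missing = set(CONTRACT["required_columns"]) - set(df.columns)
    if missing:
        raise ValueError(f"schema violation, missing columns: {missing}")
    null_pct = df[CONTRACT["required_columns"]].isna().mean().max() * 100
    if null_pct > CONTRACT["max_null_pct"]:
        raise ValueError(f"completeness violation: {null_pct:.1f}% nulls")
    if df[CONTRACT["unique_key"]].duplicated().any():
        raise ValueError("uniqueness violation on key column")
    return df  # safe to hand to consuming teams
```

Publishing the contract alongside the dataset's documentation means every consuming team sees the same guarantees the owning team has signed up to, which is the essence of treating data as a product.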
