Data Disasters: Hidden Risks Cost More Than You Think

March 25, 2025
By Steven Schiro, Craig Moster

Editor’s note: This is the first article in a series, “Data Disasters Versus Data Dynasties,” authored by data consultants and engineers from IBM’s Procurement Analytics as a Service team. The series covers common pitfalls procurement organizations encounter with their data, as well as how organizations can develop a winning data strategy.

***

Does poor data management hinder your procurement organization’s operational efficiency, data security and bottom line? If so, you may be experiencing the effects of a data disaster.

To build a winning, agile strategy to keep up with and even outpace competitors, it’s critical to understand these key concepts:

  • Common data mistakes and why organizations should care about data management
  • How to identify data disaster qualities
  • How to improve data governance

An understanding of these concepts can potentially save your business millions in hidden costs.

Why Data Disasters Matter

It’s imperative to have confidence in the accuracy and quality of your data, as well as a secure access point. A company may leverage its most qualified, tech-savvy employee to perform spend analysis for an upcoming quarterly business review, but if data is inaccurate or missing, this becomes a wasted opportunity.

According to Stamford, Connecticut-based advisory and consulting company Gartner, poor data quality could be costing companies nearly US$13 million per year in hidden costs. That is a steep opportunity cost in today’s fast-moving technology landscape.

Lacking a data management strategy can also have damaging effects on efficiency and innovation. By using outdated practices, organizations may find themselves with data silos, causing information to be isolated to specific departments and hindering cross-functional collaboration.

To draw a comparison to the airline sector, imagine that airlines had their own air traffic control towers, with no communication between them. However, they still share the same airports and airspace and serve the same customer base. Such a scenario would lead to a wildly inefficient process with lengthy delays for passengers, wasted fuel for planes waiting for runway availability, and no visibility into weather updates. Thankfully, air traffic systems are centralized to encourage real-time data transfer, effective communication and increased travel efficiency.

This is where a centralized data management system comes into play. When departments share data, the result is faster decision-making, increased efficiency and co-creation. According to a Harvard Business Review article, “Humans have natural empathy, and almost all will work to improve data once they understand how other teams use it. Help them connect with each other.”

That is why IBM encourages a growth mindset within its teams, enabling employees to solve their own data problems and collaborate using IBM’s centralized data management system. This system serves as a trusted source of integrated enterprise data that ends the data chase for users by aggregating and validating data from trusted sources, freeing up more time for analysis and insights.

Visibility into quality data is crucial to strategic decision-making, so it is essential to keep that data housed within a centralized management system for accurate analysis. Confidence in data quality empowers leaders to gain insights into such focus areas as supplier fragmentation, consumer and buyer behavior, and predictive analysis.

Data Disaster Qualities

Determining if your organization is at risk of a data disaster requires careful consideration and assessment of your data practices. Disasters typically do not pop up overnight, but rather are the consequences of poor data handling habits, outdated processes, and fragmented responsibilities over time. Common signs of disaster include:

Multiple versions of the truth. Problems can arise when different teams produce conflicting reports from what should be the same data. For example: One team reports on PO spend by manually merging various ERP tables; another reports on PO spend based on a centralized data lake that has table merges and business-applicable filters already applied on the back end.
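The "multiple versions of the truth" failure mode can be sketched in a few lines. The example below uses entirely hypothetical PO records and a made-up business rule (excluding canceled POs from reportable spend) to show how two teams querying the same underlying data can still report different totals when filters are applied inconsistently:

```python
# Hypothetical PO records; all IDs, amounts and statuses are illustrative.
purchase_orders = [
    {"po_id": "PO-1001", "amount": 25000.0, "status": "closed"},
    {"po_id": "PO-1002", "amount": 40000.0, "status": "open"},
    {"po_id": "PO-1003", "amount": 15000.0, "status": "canceled"},
]

# Team A: manual roll-up over raw ERP extracts, no business filters applied.
team_a_spend = sum(po["amount"] for po in purchase_orders)

# Team B: centralized data lake where a business rule is already applied on
# the back end (canceled POs are excluded from reportable spend).
team_b_spend = sum(
    po["amount"] for po in purchase_orders if po["status"] != "canceled"
)

print(team_a_spend)  # 80000.0
print(team_b_spend)  # 65000.0 -- same data, conflicting "PO spend" figures
```

A centralized system resolves the conflict not by picking a winner after the fact, but by encoding the business rule once, upstream of every report.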

Manual data processes. Organizational reliance on manual data entry often can lead to inefficiencies and human errors. For example: Manually entering transportation route rates based on vendor contracts rather than feeding the contracts through a transportation management system could lead to inaccurate or incomplete data and result in procurement strategies being built on a shaky foundation.

Shadow business intelligence. This occurs when individuals or departments within an organization use their own data reporting solutions without the involvement or oversight of the formal IT or analytics team. While the issue often stems from a sense of innovation, like a desire for faster or more tailored insights, it can lead to a chaotic data environment with inefficiencies and security risks.

Unclear data ownership. Ultimately, many of these signs of data disaster can continue to snowball into true disasters when an organization lacks defined roles for data management.

Know Your Data

To alleviate and avoid such symptoms, conduct an internal data assessment. Begin by engaging key stakeholders across departments, including analysts, IT teams and business leaders. These team members will often have firsthand insights into data bottlenecks, inconsistencies and shadow processes. It is also important to involve data entry teams where applicable, as they can highlight day-to-day inefficiencies and hidden risks.

The end goal is to gain visibility into the entire data life cycle, from collection to reporting, while gathering direct feedback from the people who work with each facet of the life cycle.

By gathering perspectives from both strategic and operational levels, you’ll gain a comprehensive understanding of your organization’s data health.

Governance in Data Disaster Organizations

Data governance is to an organization what oil is to an engine — a necessity to keep operations running smoothly. Without proper governance in place, or with an outdated governance approach, disaster is inevitable.

There are three areas of focus when it comes to solid data governance:

  • Clear data ownership roles, with assigned responsibilities for maintaining data quality and security.
  • Routine processes to monitor and clean data for accuracy and completeness, along with consistent data definitions to prevent confusion and conflicting insights across teams.
  • Data that is readily available to authorized users while being protected through security protocols in line with regulatory requirements.
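The second focus area, routine monitoring and cleaning, can start very simply. The sketch below is a minimal, hypothetical audit routine (the field names and supplier records are illustrative, not drawn from any specific system) that flags two common defects: records with missing required fields and records with duplicate keys:

```python
# A minimal sketch of a routine data quality check; all names are illustrative.
def audit_records(records, required_fields, key_field):
    """Flag records with missing required fields and duplicate key values."""
    issues = []
    seen_keys = set()
    for i, rec in enumerate(records):
        # A required field is "missing" if absent or empty.
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            issues.append((i, f"missing fields: {missing}"))
        key = rec.get(key_field)
        if key in seen_keys:
            issues.append((i, f"duplicate key: {key}"))
        seen_keys.add(key)
    return issues

suppliers = [
    {"supplier_id": "S-1", "name": "Acme", "country": "US"},
    {"supplier_id": "S-2", "name": "", "country": "DE"},      # missing name
    {"supplier_id": "S-1", "name": "Acme", "country": "US"},  # duplicate key
]
problems = audit_records(
    suppliers, ["supplier_id", "name", "country"], "supplier_id"
)
print(problems)
```

Running a check like this on a schedule, and routing its findings to the assigned data owners, turns the first two governance bullets from policy statements into an operational habit.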

Depending on the industry, a data breach costs an organization an average of US$4.35 million to US$10.1 million. To avoid a costly disaster, your organization must establish clear data governance frameworks. Strong governance not only mitigates risk, but also empowers teams to harness the true value of their data to gain clarity rather than face chaos.

Winning organizations understand the value of data and the role that good data plays in moving strategically and with speed. It is impossible to overcome a data disaster simply by working “harder” or burdening tech-savvy employees with more work.

Rather, the key is to take a step back and evaluate what has led you astray. The best time to assess your organization for data disaster risk was yesterday; the second-best time is today.

***

Coming next: Winning strategies for forging a path toward data dynasty.

(Image credit: Getty Images/Thx4Stock)

About the Authors

Steven Schiro is a member of IBM’s Procurement Analytics as a Service team.

Craig Moster is a member of IBM’s Procurement Analytics as a Service team.

The perspectives and opinions represented are those of the authors and do not represent those of IBM; they are reflective of the authors’ experiences at various companies and organizations.