Inside Supply Management Magazine
Smart Supply Chains Depend on Data Quality
By Ali Hasan R.
Supply chains once existed only to move goods between points A and B. With the advent of technologies like artificial intelligence (AI) and machine learning, they now also serve as important sources of key performance indicator (KPI) data, functioning like a dashboard for the entire enterprise.
With smart technology installed throughout the supply chain, managers can track the precise movement of goods through a series of progressive benchmarks. This process generates several petabytes of data (a petabyte is a million gigabytes) that are invaluable for improving supply chain visibility, insight and overall performance. The value is so significant that New York-based management consulting firm McKinsey & Company estimates AI-enhanced supply chains will deliver as much as US$2 trillion in economic advantages annually, while PricewaterhouseCoopers (PwC), the London-based professional services giant, suggests AI will generate more than $15 trillion for the global economy by 2030. Calling the smart supply chain a game changer undersells it.
Of course, supply managers have always relied on operations data: Manufacturing execution systems have been around for decades. Data is not a new asset, and it’s a misconception that manufacturers tend to be “data poor” compared to industries like finance or health care. Almost everyone involved with supply chains has a wealth of data at their disposal before AI starts collecting more.
Quantity matters for turning data into value, but what supply managers must understand, practically and philosophically, is that quality matters even more.
Bad data already costs the U.S. economy $3.1 trillion a year, according to IBM. Like a bad apple that spoils the whole barrel, imperfect data has an outsize impact on everything around it. When facts are inaccurate, incomplete, outdated or irrelevant, they call the entire data set into question and make any insight unreliable. Therefore, it’s no coincidence that data has the potential to create trillions of dollars, or to lose an equivalent amount.
Whether data helps or hurts the operation ultimately depends on supply managers. First, they must recognize the sheer volume of data already available in the form of time logs, payroll slips, inventory ledgers — anything with a transaction date.
Second, they must be honest about the quality of old and new data. This step is crucial because acting on bad data makes it impossible for supply managers to carry out their two key objectives: (1) keeping the cost of operations within revenue targets and (2) preserving quality within internal and external standards.
As complex manufacturing environments integrated with AI become data powerhouses, it’s up to supply managers to assess and ensure data quality. Considering the volume coming out of multiple production lines, no single individual or team can police quality alone. It takes an organization-wide effort to keep quality greater than or equal to quantity. Start with these strategies:
1) Involve operations leaders. It might seem natural to leave a data-cleansing effort to the IT team, but it’s essential to include operations leaders. As experts on the front lines, they know better than anyone which data merits consideration. For the same reason, they are uniquely able to spot where and why quality issues may exist.
Without these keen eyes, it could take the IT team weeks or months to resolve data issues, putting digital transformation on hold. Fundamentally, manufacturers need to understand that operations leaders aren’t just data consumers — they’re data consultants.
2) Determine KPIs. When there’s too much supply chain data to cleanse everything, manufacturers should focus on the metrics that matter most. Identify the KPIs, study where the data originates and make those sources smarter and more streamlined. Data must be perfect for the metrics that inform the most common and consequential decisions; a simple sketch of such a source-level quality check follows below.
Tools like connected sensors and high-powered AI make collection effortless, but without the infrastructure to handle the scale of that data, quality issues are inevitable. Moving to the cloud — while not mandatory or a panacea — makes issues less likely while enabling solutions like real-time root cause resolution.
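To make the KPI-first approach concrete, here is a minimal sketch of what a quality check on a KPI’s source data might look like. It assumes a hypothetical CSV export of production-line records; the file name and column names (line_id, units_produced, recorded_at) are illustrative placeholders, not references to any particular system.

```python
# A minimal, illustrative quality check on a KPI's source data.
# Assumes a hypothetical CSV export of production-line records with
# columns line_id, units_produced and recorded_at (not a real schema).
import pandas as pd

def check_kpi_source(path: str, max_age_days: int = 7) -> pd.DataFrame:
    """Flag rows that would undermine a throughput KPI."""
    df = pd.read_csv(path, parse_dates=["recorded_at"])

    # Completeness: the KPI's inputs must not be missing.
    missing = df["line_id"].isna() | df["units_produced"].isna()

    # Validity: a negative production count is impossible.
    invalid = df["units_produced"] < 0

    # Freshness: stale records usually mean a broken feed upstream.
    cutoff = pd.Timestamp.now() - pd.Timedelta(days=max_age_days)
    stale = df["recorded_at"] < cutoff

    df["quality_issue"] = missing | invalid | stale
    return df

if __name__ == "__main__":
    flagged = check_kpi_source("production_records.csv")
    share = flagged["quality_issue"].mean()
    print(f"{share:.1%} of rows have at least one quality issue")
```

The specific thresholds matter less than the habit they represent: measuring completeness, validity and freshness at the source that feeds each critical metric.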
3) Implement incrementally. Some companies require real-time analytics capabilities, but most don’t, and that’s a good thing. The faster data moves, the greater the risk of it going bad. By nature, real-time insights are less trustworthy and harder to quality-control, so if they’re not absolutely necessary, avoid them for now.
Also, consider using a solution to analyze preexisting data. The stakes are a lot lower, yet the insights are still valuable, and companies gain experience with analytics they can apply when they embrace real-time capabilities later. A simple batch analysis of historical records, sketched below, illustrates the idea.
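As a rough illustration of that incremental, low-stakes approach, the following sketch computes a simple on-time delivery KPI from a hypothetical file of historical shipment records. The file and column names (promised_date, delivered_date) are assumptions made for the example, not references to any real system.

```python
# An illustrative batch analysis of preexisting (historical) data.
# Assumes a hypothetical CSV of past shipments with promised_date and
# delivered_date columns; the names are placeholders, not a real schema.
import pandas as pd

shipments = pd.read_csv(
    "historical_shipments.csv",
    parse_dates=["promised_date", "delivered_date"],
)

# A simple KPI: the share of shipments delivered on or before the
# promised date, tracked month by month.
shipments["on_time"] = shipments["delivered_date"] <= shipments["promised_date"]
monthly_on_time = (
    shipments.groupby(shipments["promised_date"].dt.to_period("M"))["on_time"]
    .mean()
)

print(monthly_on_time.round(3))
```

Because the data is historical, a flawed run costs nothing but a rerun, which is exactly the practice ground this strategy describes before committing to real-time capabilities.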
At the start of the Fourth Industrial Revolution, it’s easy to assume that everything needs to change. But manufacturers have always run on data, and quality has always mattered above all. In that way, seizing opportunities of the future depends on learning lessons of the past.
Ali Hasan R. is the co-founder and CEO of ThroughPut Inc., a Palo Alto, California-based artificial intelligence supply chain software provider that enables companies to detect, prioritize and alleviate dynamic operational bottlenecks.