Thursday 27th Jul 2017 - Logistics & Supply Chain

Dirty data

As businesses forge closer links with one another through the supply chain, the amount of information that they routinely exchange is rising dramatically. Swapping prices, product descriptions, details of promotions and so on is part and parcel of collaborative trading.
But so far most attention has been paid to the mechanisms companies need to exchange information – the networks, software and terminals that move data from one place to another, and the message formats that deliver it. Far less attention has been paid to the accuracy, or otherwise, of the data itself.

Software company Udex recently issued a wake-up call when it announced that it had processed a sample of 3.8 million electronically stored attributes relating to 91,000 products and found that two-thirds of the information was inaccurate.

These were not just any old products. They were all supposed to conform to standards laid down for the Global Data Synchronisation Network (GDSN), which was set up to help companies exchange data. The GDSN was even supposed to have checked the files.

Errors in the data
Errors in the data included spelling mistakes, bad punctuation and wrong abbreviations. There were also multiple variations of the same product and missing or incorrect weights, dimensions and prices.
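
To make that list concrete, here is a minimal, hypothetical sketch of the kind of attribute-level checks such errors invite. The field names, the unit list and the heuristics are illustrative assumptions, not GDSN-defined attributes or rules:

```python
import re

# Hypothetical approved unit abbreviations for free-text descriptions.
ALLOWED_UNITS = {"KG", "G", "CM", "MM", "L", "ML"}

def check_product(record: dict) -> list:
    """Return a list of data-quality problems found in one product record."""
    problems = []

    # Missing or implausible weights, dimensions and prices.
    for field in ("weight_kg", "width_cm", "height_cm", "depth_cm", "price"):
        value = record.get(field)
        if value is None:
            problems.append(f"missing {field}")
        elif not isinstance(value, (int, float)) or value <= 0:
            problems.append(f"implausible {field}: {value!r}")

    # Crude checks on the free-text description: missing text, doubled
    # spaces, and unit abbreviations that are not on the approved list.
    description = (record.get("description") or "").strip()
    if not description:
        problems.append("missing description")
    elif "  " in description:
        problems.append("doubled spacing in description")
    for match in re.finditer(r"\d+\s*([A-Za-z]+)", description):
        unit = match.group(1).upper()
        if unit not in ALLOWED_UNITS:
            problems.append(f"non-standard unit abbreviation: {unit}")

    return problems

# Example: a record with a missing price and an unapproved abbreviation.
print(check_product({"description": "Cola 330 MLL", "weight_kg": 0.35,
                     "width_cm": 6, "height_cm": 12, "depth_cm": 6}))
```

Checks like these only flag suspect entries; as discussed below, deciding what the correct value should be is another matter entirely.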

None of this bodes well for efforts to automate business transactions – automation that is often sold on the basis that it eliminates human error.

If the basic information about products is wrong, then errors begin to multiply. When consultancy AT Kearney decided to look at how accurate invoices were, it found that 60 per cent contained errors, many because of flaws in the data used to draw up the invoices. AT Kearney calculated that it cost up to $300 to straighten out some mistakes.

Other unwelcome consequences of dirty data are loss of sales from being out of stock, problems with fleet operations and increased handling costs, to name but a few.

Ventana Research, which recently published a paper on product data, rightly points out that global data synchronisation projects will fail if they don’t address data quality. And with $200m already invested in RFID technology by the top 100 suppliers, there is a lot riding on synchronisation.

The picture is further complicated by the fact that the rules for global data synchronisation barely scratch the surface of data quality: only 20 of them relate to data accuracy. Further, there is a lack of rigour in the way companies apply these rules, with the result that data can comply with them and still be incorrect.

‘It’s not sexy, it’s not exciting, it’s grunt work,’ says Bill Grize, president and CEO of grocery company Ahold USA. ‘Our business is large so those little numbers represent billions of dollars. Why would we allow billions of dollars of waste?’ Savings could amount to $1m for every $1bn in sales, he estimates.

Grize says that at one time Ahold employed 144 people to clean data because about 25 per cent of orders were inaccurate. Some 40 per cent of invoices contained discrepancies that cost an average of $70 per invoice to put right. ‘This is not elective surgery – it has to be done,’ he says.

But many senior executives are unaware of the problem or think it can be easily solved by adopting a single set of standards for formatting data. The problem is that standards for data synchronisation are mostly concerned with getting messages through rather than ensuring the accuracy of their contents.

Buck passing and departmental turf wars conspire to make sure that the issue remains low down the corporate agenda.

So what’s to be done? The obvious starting point is to establish a strategy to manage data quality before even thinking about global data synchronisation.

High quality data, according to Ventana, has four main characteristics. It must be complete, accurate, consistent and time stamped, so that changes can be tracked and audited.
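
As an illustration only – the mapping from those four properties to concrete checks is an assumption, not Ventana’s – completeness, consistency and time stamping can be approximated in code, while accuracy still means verifying entries against the real product:

```python
from datetime import datetime

# Hypothetical required fields for a product record keyed by GTIN.
REQUIRED_FIELDS = ("gtin", "description", "weight_kg", "price", "last_modified")

def audit(records: list) -> dict:
    """Flag records that fail the complete/consistent/time-stamped tests."""
    findings = {"incomplete": [], "inconsistent": [], "untracked": []}
    seen_descriptions = {}

    for rec in records:
        gtin = rec.get("gtin", "<no gtin>")

        # Complete: every required field is present and non-empty.
        if any(rec.get(f) in (None, "") for f in REQUIRED_FIELDS):
            findings["incomplete"].append(gtin)

        # Consistent: the same GTIN should not carry two different descriptions.
        desc = rec.get("description")
        if gtin in seen_descriptions and seen_descriptions[gtin] != desc:
            findings["inconsistent"].append(gtin)
        seen_descriptions[gtin] = desc

        # Time stamped: changes can only be tracked and audited if a
        # modification time is recorded.
        if not isinstance(rec.get("last_modified"), datetime):
            findings["untracked"].append(gtin)

    # Accuracy is the one property these checks cannot establish automatically.
    return findings
```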

There are no short cuts to getting data in the best condition. For example, while software can help in detecting errors it cannot fix them. That remains a labour-intensive exercise that involves physically checking entries.

There is also the question of maintaining data quality. Again that may be a laborious task that needs to be closely managed to make sure data is kept up-to-date. It is vital to establish a master file of information so that there is just one version of the truth.
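
A sketch of what ‘one version of the truth’ might look like in practice – the master file, the GTIN key and the merge rule (the record with the latest timestamp wins) are purely assumptions for illustration:

```python
from datetime import datetime

# Hypothetical master file keyed by GTIN: an incoming update only replaces
# the stored record when it carries a newer modification time, so every
# system reads the same, most recent version of each product.
master = {}

def apply_update(record: dict) -> None:
    gtin = record["gtin"]
    current = master.get(gtin)
    if current is None or record["last_modified"] > current["last_modified"]:
        master[gtin] = record

# Example: the later of two conflicting updates wins.
apply_update({"gtin": "05012345678900", "description": "Cola 330 ml",
              "last_modified": datetime(2017, 7, 1)})
apply_update({"gtin": "05012345678900", "description": "Cola 330ml can",
              "last_modified": datetime(2017, 7, 20)})
print(master["05012345678900"]["description"])  # -> "Cola 330ml can"
```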

Ventana recommends that companies set up a data governance group to oversee the process and establish a strong link with business managers who are vital to the success of global data synchronisation.
