
    1-10-100: the spiralling costs of poor-quality data

    Organisations tend to overestimate the quality of their data and underestimate the cost of poor quality. Business processes, customer expectations, source systems and compliance rules are constantly changing, and your data quality management systems must reflect this. Vast amounts of time and money are spent on custom coding and traditional methods, usually firefighting an immediate crisis rather than taking the bold steps necessary for long-term data quality improvement through a strategy of preventing poor quality in the first place.

    Back in 1992, George Labovitz and Yu Sang Chang developed the 1-10-100 rule, a quality management concept that quantifies the hidden costs of poor quality.
    Applied to data quality, the supposition is that every unit spent preventing poor-quality data from entering the data estate saves ten units that would otherwise go on fix/remediation activity, and a hundred units that would be lost if nothing were done and failure followed. When relating this concept to data quality, it should be recognised that it is the principle, rather than the exact numbers, that applies.
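    To make the ratios concrete, here is a minimal sketch of the 1-10-100 arithmetic. The figures are purely illustrative assumptions, not drawn from the rule itself: a hypothetical organisation handling 100,000 records a year with a notional unit cost of £1 per record, to which the 1:10:100 multipliers are applied.

        # Illustrative 1-10-100 arithmetic using hypothetical figures.
        RECORDS_PER_YEAR = 100_000   # assumed annual record volume
        UNIT_COST = 1.00             # assumed cost (£) to validate a record at the point of entry

        prevention = RECORDS_PER_YEAR * UNIT_COST          # 1x:   stop bad data at the door
        remediation = RECORDS_PER_YEAR * UNIT_COST * 10    # 10x:  find and fix it later
        failure = RECORDS_PER_YEAR * UNIT_COST * 100       # 100x: do nothing and absorb the consequences

        print(f"Prevention:  £{prevention:,.0f}")    # £100,000
        print(f"Remediation: £{remediation:,.0f}")   # £1,000,000
        print(f"Failure:     £{failure:,.0f}")       # £10,000,000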

    Fixing costs a lot more than preventing

    A traditional data fix methodology has been for an organisation to undertake periodic data quality fix exercises. These projects tend to be resource and cost heavy and achieve very little in the long term. Indeed, research on the costs of poor data quality (Haug, Zachariassen and van Liempd) suggests that data typically degrades at a rate of around 2% per month, or roughly 25% per annum. Any benefit derived from a one-off fix exercise is therefore at best short lived and inevitably needs to be repeated; in practice it often is not, because of the cost and disruption of running it.
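    As a rough sketch of what that decay rate means in practice, the example below compounds an assumed 2% monthly degradation over a year, starting from a freshly cleaned data set. The numbers are illustrative only, not taken from the cited research.

        # Assumed: records degrade at ~2% per month after a one-off clean-up.
        MONTHLY_DECAY = 0.02

        accurate = 1.0  # 100% accurate immediately after the fix exercise
        for month in range(1, 13):
            accurate *= (1 - MONTHLY_DECAY)
            print(f"Month {month:2d}: {accurate:.1%} of records still accurate")
        # After 12 months roughly a fifth of the estate has degraded again,
        # which is why one-off fixes have to be repeated.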

    But the costs of fixing pale into insignificance compared with the costs of doing nothing

    Bad data leads to poor customer experiences, missed opportunities and compliance failures. Decisions made on bad data often have to be reworked, resulting in embarrassing U-turns, degraded productivity and performance, and potentially significant impacts on the safety and welfare of citizens. Compliance breaches not only hurt financially but can leave a stain on an organisation's reputation from which it may never recover.

    Back in 2016, IBM estimated that bad data costs US businesses $3 trillion per year. It costs so much because executives, managers, knowledge workers, data scientists, data analysts and others must all accommodate it in their work. To emphasise the point further, research reported by Forbes suggests that data workers spend 60% of their time cleaning data before they can use it.

    Conclusion

    Far too many data quality improvement initiatives focus on fix/remediation after the fact. These initiatives ultimately fail unless processes are established to switch off the “dirty data tap” and prevent poor-quality data from arising as part of normal business operations.

    The 1-10-100 rule describes the hidden costs of waste associated with poor quality. There is an optimal level of effort and cost at which to support a culture of continuous data quality improvement. However, it is a well-established principle that the cost of prevention is far lower than the cost of fixing or failure. If you’d like to start the journey to prevent poor-quality data entering your data estate, then please get in touch; we’d love to talk.
