A recipe for better data management

We often hear that data is the fuel of modern business, but we think food provides an even better analogy. When we go to fill our car up at the pumps, very few of us prefer a particular brand – we just want a full tank. But when it comes to what we eat, it’s not enough to have a full belly; we need the right sort of food, one that is both nourishing and tasty. By Nick Goode, EVP Product, Sage.


It’s the same with data. Filling up on information doesn’t necessarily make a business better; in fact, the wrong sort of data can have a highly damaging effect on the health of the whole organisation. That’s because – in the era of the connected business – the effects of bad data aren’t confined to the system in which it resides. Instead, they ripple out to a range of other applications and processes that rely on that information.

Businesses may not realise it, but bad data is a serious and costly issue. In 2016, IBM estimated that poor-quality data costs over $3 trillion a year in the US alone. (By comparison, the size of the entire big data industry in the same year, according to IDC, was a ‘paltry’ $136 billion.)

This can only ever be an estimate, though, because it’s difficult to put a price tag on the missed opportunities, reputational damage and lost revenue that come from having the wrong data – not to mention the time and effort knowledge workers waste searching for and correcting errors in it.

Other researchers provide further evidence for the devastating impact of bad data. Gartner found that the average cost to organisations is $15 million a year, while a report from the Royal Mail suggested that it causes a loss of six per cent of annual turnover. Why are businesses failing to address an issue with such a direct impact on their bottom line – especially given today’s fixation on data-powered insight? 

The domino effect of bad data 

You would expect the figures above to provide plenty of food for thought, especially as every line of business, from marketing to finance, customer service to supply chain, now depends on accurate data on which to base its insights. Yet in our pursuit of data quantity we seem to have forgotten one of the oldest tenets of the information age: ‘Garbage In, Garbage Out’.

Too often, businesses lack a coherent data integration strategy, which means that inaccurate or incomplete data causes a domino effect throughout the organisation.

Nothing highlights the interconnected nature of modern business better than the issue of bad data. If a department does a poor job of keeping its data clean, up to date and accurate, it affects every other department that relies on that data. The effects are not limited to those responsible for managing records and updating systems; instead, they spread throughout the organisation, resulting in all manner of problems: badly targeted marketing campaigns, poor customer service outcomes, and errors in payroll, resource allocation and product development.

Another grave consequence of inaccurate data is that it can lead to people mistrusting the insights they gain from it, and even resenting the data creators who allowed erroneous information to creep into their systems.

A recipe for success 

For all the hype around data-driven insights, businesses face a data credibility problem, with performance metrics and insights badly skewed by inaccurate information. So, while no one discounts the importance of having large data sets from which to draw insight, the more urgent challenge facing organisations is to improve the quality and accuracy of the information they hold.

Just as the food we eat has a direct effect on our wellbeing, so the quality of its information has a bearing on the health of a business. That’s why businesses need to treat data as a delicacy, rather than just fuel. By focusing on data quality, they can ensure a positive domino effect throughout the organisation, with departments and workers able to trust the insight they derive from it.

To do this, every organisation must undertake a regular data quality audit that not only verifies the accuracy of the information it holds, but also examines the internal processes and workflows associated with gathering and storing that information.

For example, the organisation needs complete confidence that employees are capturing all relevant information in systems such as ERP, and that all data is entered accurately and kept up to date. This should include cross-referencing with information held in other systems such as CRM, ensuring that the business can have faith in the data on which it bases its most important decisions.
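
As a rough illustration, the sketch below shows what such a cross-check between an ERP export and a CRM export might look like in Python. The record layout, the field names (customer_id, email, last_updated) and the one-year freshness threshold are illustrative assumptions, not a description of any particular product.

# A minimal data-quality audit sketch: check ERP records for completeness and
# freshness, then cross-reference them against the CRM. Field names and record
# structures are assumptions for illustration only.
from datetime import datetime, timedelta

REQUIRED_FIELDS = ["customer_id", "name", "email", "last_updated"]
STALE_AFTER = timedelta(days=365)  # assumed threshold for "out of date"

def audit(erp_records, crm_records):
    """Return lists of record IDs that fail each check."""
    crm_by_id = {r["customer_id"]: r for r in crm_records if "customer_id" in r}
    report = {"incomplete": [], "stale": [], "missing_in_crm": [], "mismatched": []}

    for rec in erp_records:
        rec_id = rec.get("customer_id", "<unknown>")

        # Completeness: every required field should be captured at entry time.
        if any(not rec.get(field) for field in REQUIRED_FIELDS):
            report["incomplete"].append(rec_id)
            continue

        # Freshness: flag records nobody has touched within the threshold.
        if datetime.now() - datetime.fromisoformat(rec["last_updated"]) > STALE_AFTER:
            report["stale"].append(rec_id)

        # Cross-reference: the same customer should look the same in the CRM.
        crm_rec = crm_by_id.get(rec_id)
        if crm_rec is None:
            report["missing_in_crm"].append(rec_id)
        elif crm_rec.get("email") != rec["email"]:
            report["mismatched"].append(rec_id)

    return report

Run against regular exports from both systems, a report like this gives the audit a concrete starting point: which records to chase up, and which entry workflows are letting incomplete or inconsistent data through.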

The recipe for success is simple: be as discriminating with your data as you are with the food you put in your mouth, and prioritise data quality to ensure you get accurate insights.
