Data valuation methodologies keep evolving
The oldest mathematical objects ever found are baboon bones carved with clearly defined notches some 25,000 to 35,000 years ago. While there are many theories as to what these notches represented, some believe they are the earliest records of numerical data. Perhaps these marks signify that humans have understood the importance of historical data for longer than imagined.
Over thousands of years, the importance of data evolved – particularly in retail and trade, where a value was attached to certain assets, whether products or services. At that point it was only simple data – the supply and demand of a product or service – that helped to place a value on it.
Since then, far more abstract assets have been and continue to be valued: from stocks to brands, patents and trademarks. And yet it has always been difficult to put a value on data itself, particularly as there are far more complex types of data, vastly more of it, and because its use has become far more sophisticated.
In fact, it is this complexity which means that those who do not work in data roles may not understand the true value of the data they’re using, beyond being told by leaders that data is important. Putting a value on data changes the way people within an organisation think about it, because it translates data into a language they can understand.
Data valuation is the first step to data monetisation. There are numerous ways to find this value, and this article focuses on financial methods for valuing data. Gartner analyst Doug Laney’s Infonomics framework explains three: cost value, market value, and economic value. Our approach is the fourth: stakeholder value. We built it by learning from all the other approaches, using the latest techniques in economics, complex decision analysis, psychology, value attribution and (of course) data science, along with a carefully trained gradient boosting algorithm.
The cost value methods
The cost value method measures the cost to produce and store the data, the cost to replace it, and the impact on cash flows if it were lost.
There are a number of methods which fit within this scheme. Daniel Moody’s modifications adjust the valuation based on what he calls the ‘7 Laws of Information’. The laws include: redundant and unused data should be considered to have zero value; the number of users and the number of accesses to the data should multiply the value of the information; and the value should be depreciated based on the ‘shelf life’ of the information. Identifying laws or principles like these can help a business ensure it is always considering the value of data in everything it does.
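As a rough illustration, the multipliers and depreciation described above can be sketched as a simple formula. This is a hedged sketch only: the function name, inputs and figures are all illustrative assumptions, not part of Moody’s published method.

```python
# Illustrative sketch of a Moody-style cost valuation (all names and
# figures are hypothetical, not Moody's actual formulation).
def moody_value(production_cost: float,
                users: int,
                accesses_per_user: float,
                age_years: float,
                shelf_life_years: float) -> float:
    """Adjust a cost-based value using Moody-style rules."""
    # Rule: redundant or unused data is considered to have zero value.
    if users == 0 or accesses_per_user == 0:
        return 0.0
    # Rule: value multiplies with the number of users and accesses.
    usage_multiplier = users * accesses_per_user
    # Rule: value depreciates linearly over the information's shelf life.
    remaining = max(0.0, 1.0 - age_years / shelf_life_years)
    return production_cost * usage_multiplier * remaining

# Hypothetical example: data costing £10,000 to produce, used by 5 teams
# twice each, two years into a ten-year shelf life.
print(round(moody_value(10_000, 5, 2, 2, 10)))  # 80000
```

Even in this toy form, the rules make the subjectivity visible: the shelf life and usage counts drive the value, and both are estimates.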
Similar cost value methods include the Glue Reply valuation technique. This is more precisely formulated than Moody’s method, as it maps different data producers and consumers, such as applications and processes. Meanwhile, Deloitte’s relief-from-royalty method identifies how much the company would be willing to pay to acquire the data asset from a third party if it didn’t own it, and Internet of Water’s data hub valuation technique values the benefits and savings gained by users accessing data through a central hub instead of closer to source.
All of these cost value methods are relatively easy to carry out compared to other approaches. They provide useful information on the value of a data asset – in the case of the Glue Reply valuation technique, a real-time, shifting value that takes into account production and usage.
However, all of these data valuation methodologies also suffer from being very subjective. They are useful for data owners to conceptualise the value of a data asset, but may not be an accurate indicator of its real economic value, and they lack a focus on the potential value generated. For instance, it is useful to know that sensor data is providing a benefit to the business because it is used by many employees across departments, and is an important source of data for a number of applications and processes. This shows that the data is more important than other assets. However, the return on investment or economic value of the data is far harder to quantify.
Other cost value approaches, such as Deloitte’s with-and-without method and Internet of Water’s data hub valuation, are questionable in terms of accuracy.
The market value approaches
The market value method tracks the current value of data based on what others pay for it in an active market, or pay for comparable assets. While this is relatively easy to calculate for a large proportion of data, a lot of data isn’t tradeable – either because it is ‘boring’, or because a company would not want to trade it, as it provides a competitive advantage. Some data is also unique, making it hard to find comparable equivalents in the market.
The economic value approaches
Within the economic value approaches, there are two key methods.
The first is income or utility valuation, which tracks the impact of data on the business’s bottom line. It can therefore identify the value data adds to the business, and can be used to identify value add for specific business functions or use cases. However, this is hard to measure, particularly when distinguishing value added by data from value added more broadly. Much like the other approaches, a lot of this is subjective, and it is incredibly hard to predict the future value of data.
The second approach is use case valuation – and there are two separate techniques here. The first is the business model maturity index (Internet of Water), which calculates the value of data by identifying a number of business use cases, estimating the value of each, and calculating how much of that value is contributed by data. The benefit of this approach is that it values the data based on a thorough analysis of multiple use cases within the business, and ties it to real business outcomes. However, it is one of the most subjective, as the contribution of data assigned to each use case is estimated through surveying, based on hypothetical scenarios rather than real use cases. The margin for error is large.
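To make the arithmetic concrete, this style of use case valuation can be sketched as a sum over use cases. The use cases, values and contribution fractions below are invented purely for illustration; in practice they would come from the surveys described above.

```python
# Hypothetical use cases: name -> (estimated annual value, fraction of that
# value attributed to data). All figures are invented for illustration.
use_cases = {
    "churn reduction":    (500_000, 0.40),
    "dynamic pricing":    (300_000, 0.60),
    "demand forecasting": (200_000, 0.50),
}

# Data's value is the sum, over use cases, of each use case's estimated
# value multiplied by the share of that value contributed by data.
data_value = sum(value * share for value, share in use_cases.values())
print(round(data_value))  # 480000
```

The fragility is easy to see here: shift any contribution fraction by a survey respondent’s guess and the headline number moves with it.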
The decision-based valuation method is similar but more sophisticated, as it models the frequency of data collection, its accuracy, and how fit for purpose the data is. However, once again there is a degree of subjective estimation. It is also a complex model to apply to data assets, as it requires the ability to conceive and project use cases. There is also an issue with ‘unknown unknowns’ – using this method, businesses can only model use cases and desired outcomes that can be imagined from inside the business. This relates back to the importance of the question a business is asking: if the question is too specific, and the data set is also very specific, a business will get the answers it wants while discounting many other factors and unknowns.
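One way to picture that extra sophistication is to scale a use case’s value by estimated quality factors for accuracy, fitness for purpose and collection frequency. This is only a sketch under our own assumptions, not the published decision-based method; the factor names and figures are hypothetical.

```python
# Hedged sketch: scale a use case's value by estimated data quality factors
# (a simplification we assume for illustration, not the published method).
def decision_adjusted_value(base_value: float,
                            accuracy: float,    # 0..1, estimated
                            fitness: float,     # 0..1, estimated
                            freshness: float) -> float:  # 0..1, from collection frequency
    # The less accurate, fit for purpose, or fresh the data, the less of the
    # use case's value it can realistically deliver.
    return base_value * accuracy * fitness * freshness

# Hypothetical example: a £100,000 use case with imperfect, stale data.
print(round(decision_adjusted_value(100_000, 0.9, 0.8, 0.5)))  # 36000
```

Note how quickly multiplicative factors erode the value – which is exactly why the subjective estimates behind each factor matter so much.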
The stakeholder value approach
Value is in the eye of the beholder. The stakeholder value approach goes right to the source of value, by measuring the economic value created for each stakeholder – not just shareholders, but customers, employees, suppliers, communities and the environment. This makes it a more modern approach, aligned with the shift from shareholder to stakeholder capitalism, much discussed at the World Economic Forum 2020, and mirrored by the growth of environmental, social and governance (ESG) factors in investing. And yes, it’s our approach, but we won’t be all salesy about it. It’s not perfect, but it does overcome many of the problems of previous data valuation methodologies.
While other data valuation methodologies race towards data monetisation, they ignore the broader context and focus narrowly on whether data is in use or not. The stakeholder method works from an understanding of the total economic value the organisation creates for its stakeholders. Valuation isn’t an end in itself; it’s a means to achieve better management and decisions.
Decisions are never taken out of context, so data valuation shouldn’t be either.
The most difficult part of this methodology is attributing the right portion of the organisation’s total value to specific activities, and from there, to the data that underpins them. It’s only possible with well-trained, intelligent technologies. This is the main challenge for this method. It’s hard. Very hard. And like all the other data valuation methodologies, it does not give a true, definite measure of value – but then monetary value has only ever been a subjective construct anyway.
As historian Yuval Noah Harari explains, the idea of monetary value exists to enable mass cooperation. Instead of having to know people intimately to trust them enough to work with them, we just need to trust that monetary value is something others believe in, because then they will act accordingly. Data valuation achieves the same end. Because we all believe money is a measure of value, expressing the value of data in monetary form communicates its worth far more powerfully than repeatedly saying data is valuable – or than any video, case study or well-written marketing message.
For better cooperation to be achieved, trust and belief in the methodology is critical. In our opinion, it’s the combination of complexity with simple logic that makes the stakeholder method the best. At the end of it, you can clearly explain how the organisation creates value and data’s role in that: a simple story, based on strong evidence, that produces a monetary measure of data anchored to the real value an organisation creates.
So, in summary, here’s how this methodology works:
1. Calculate the total economic value of the organisation.
2. Reveal which of the organisation’s activities create value for its different stakeholders, and what portion of the total economic value is attributed to each activity.
3. Identify how data dependent each of those activities is, and apportion the value accordingly.
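A minimal sketch of those three steps, with every figure invented for illustration – the activities, shares and dependency scores would in practice come from the value attribution analysis described above:

```python
# Step 1 (hypothetical): total economic value created for all stakeholders.
total_economic_value = 10_000_000

# Step 2 (hypothetical): portion of total value attributed to each activity.
activity_share = {"customer service": 0.30, "logistics": 0.25, "product": 0.45}

# Step 3 (hypothetical): how data dependent each activity is (0 to 1).
data_dependency = {"customer service": 0.50, "logistics": 0.70, "product": 0.40}

# Apportion value: activity's share of total value, scaled by data dependency.
data_value = sum(total_economic_value * activity_share[a] * data_dependency[a]
                 for a in activity_share)
print(round(data_value))  # 4950000... no: 5050000
```

The point of the sketch is the anchoring: the data figure can never exceed the real economic value the organisation creates, because it is carved out of it rather than estimated independently.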
There is no ‘right’ or ‘wrong’ approach to data valuation; businesses that are trying to understand their data better are on the right path. However, what we can take from these different methods is that data valuation does not happen in isolation. For it to be meaningful, it should inform a wider business intelligence and decision-making flow.
Even those notches left on baboon bones may hold far more value than the numbers they represent. Indeed, in modern times, there would be data on the people who make the notches, the activities the notches represent, and data on ‘boring’ functions such as the human cost of recording and extracting that data, as well as the cost of losing the bone. By valuing the data, those people could then alter the way the activity itself is carried out, the way employees work, and even the way the notches are notched. This should then keep feeding back in a loop.
Data is ultimately about creating meaningful value for business stakeholders, and data valuation methodologies should serve this end.