The good & bad of BI maturity models

I wanted to discuss two things that get used a lot in Data & Analytics, although I’m sure they exist in many other areas.

  • Maturity models – aka capability models
  • Maturity assessments – aka capability assessments

Maturity models are just static diagrams: an image explaining the spectrum of what can be done within a particular discipline.

Maturity assessments are surveys that require you to answer a range of questions about your organisation. They then generate a document/report that tells you your areas of strength and weakness, with suggestions on which areas you should focus on first.
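Mechanically, most of these assessments boil down to averaging scored answers per capability area, mapping the average onto a maturity band, and ranking the weakest areas first. Here is a minimal sketch of that mechanic – the categories, answers and band labels are invented for illustration and don't come from any real assessment:

```python
# Hypothetical sketch of how a maturity assessment scores survey responses.
# Areas, answers and band names below are made up for illustration.
from statistics import mean

# Answers scored 1 (strongly disagree) to 5 (strongly agree),
# grouped by capability area.
responses = {
    "Data Quality": [2, 3, 1],
    "Reporting":    [4, 4, 5],
    "Governance":   [1, 2, 2],
}

def band(score: float) -> str:
    """Map an average score onto a (made-up) maturity band."""
    if score < 2:
        return "Frustrated"
    if score < 4:
        return "In Control"
    return "Optimised"

def assess(responses: dict) -> list:
    """Return (area, average score, band) tuples, weakest area first."""
    scored = [(area, mean(answers), band(mean(answers)))
              for area, answers in responses.items()]
    return sorted(scored, key=lambda item: item[1])

for area, score, label in assess(responses):
    print(f"{area}: {score:.1f} ({label})")
```

Even this toy version shows the limitation discussed below: the report ranks areas and assigns labels, but says nothing about whether the work behind a high-scoring area is actually any good.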

Some examples

Above, an example Maturity Model.

With this particular example, my opinion is that the exponential curve is inverted. For most organisations, moving from “1.0 Frustrated” to “2.0 In Control” is where you will see the biggest jump in “competitive advantage”.

Above, a screenshot of an example DWH Maturity Assessment report from Google.

The good and bad…

First of all, I’ll start with my conclusion:  Capability Models & Assessments can be really useful tools BUT can do more harm than good. Here is why I think that…

The good

Maturity models can really concisely and effectively communicate common patterns and problems.  A lot of time and effort has clearly gone into making them, and they often show a good understanding of the industry.

They provide a great starting point for teams wanting to discuss strategy.

Capability Assessments can pose questions that make you reflect and review what you currently have.  If you take your time, you can reason through the output and cherry-pick some possible next steps.

Below is an example of one of the better models:

The bad

Maturity models imply that “left is bad, right is good” – the question of quality never comes into it, nor does the philosophy of continuous improvement.

Take a situation where you have a Data Warehouse that takes 23 hours to load, the data is wrong, and it costs a bomb to operate… according to these techniques, none of that matters because that step is done and so it’s ‘on to the next one’.

For example, compare the following two teams:

Team A - A large, expensive team producing ML models with data sitting in a data-mesh architecture. The output is either not used or doesn't actually create better outcomes for the company.
Team B - A small team of data developers producing simple but high-quality reporting and automating data processing tasks using a relational database. 

If you don’t have a really strong grasp of, and experience in, the data industry, then using these maturity models/assessments will lead you to the conclusion that Team A are much “more advanced” and “performing better” than Team B.

Some leaders may not take their organisation’s individual circumstances into account and look to emerging technologies as silver bullets.  Scenarios arise where great data teams who are a key part of the business can all of a sudden be perceived as having not moved with the times and having let the competition gain an advantage.

The assessments can also result in FOMO (fear of missing out) and knee-jerk reactions. For example, hasty and speculative new projects may get spawned with little or no ROI; anecdotally I have heard this about some Data Lake projects.

How they can be misused

A feature of BI Maturity models is that they oversimplify – to fit on one page, they have to.

This is not a problem if the people using them understand the domain really well, as they can then be used more as a discussion prompt.  But people who only have a very high-level understanding can come away believing that the road-map to great data is simple: move from left to right and tick everything off the list.

Maturity assessments/models can also be used by vendors to ‘neg’ an organisation’s in-house team.  ‘Negging’ is a technique where criticism is used to erode someone’s confidence, making them more open to suggestion.  The criticism being cast doesn’t have to be valid or relevant; it is just used to persuade others that there is a problem where one didn’t exist before… and that the consultants have the answer.

Capability Assessments in particular are used by consultancies. When first engaging with them, you are often asked to complete their own bespoke surveys. If you answer honestly, the results may be used to try to knock business leaders’ faith in their existing teams and to claim that the consultancy has much greater capabilities.


To recap my thoughts in 3 points:

  1. BI Maturity Models/ Assessments can be useful tools
  2. Some are good, some are pretty useless
  3. Understanding how they can be misused will help prevent it happening to you and your team

Kimball vs Inmon

“We acknowledge that organizations have successfully constructed DW/BI based on the approaches advocated by others… Rather than encouraging more consternation over our philosophical differences, the industry would be far better off devoting energy to ensure that our DW/BI deliverables are broadly accepted by the business to make better, more informed decisions.”

– Ralph Kimball

We see this a lot; teams spend all their time deadlocked deciding which approach, project methodology and architecture to choose, and then just go and over-engineer something that is of little use to the business anyway.

It is a better approach to keep things simple and deliver early, ensuring the solution is consistent, accurate and maintainable.  There is no point in making sure your data-mart can scale to 1TB in size if no-one is going to use it.

Sturgeon’s Law

“90% of everything is crap”

– Theodore Sturgeon

Sturgeon’s ‘revelation’ came about when he was defending the genre of science fiction, of which he was an author.  Whilst he agreed with critics that 90% of the genre wasn’t very good, he pointed out that this was of no significance, as 90% of everything, including film, literature and consumer goods, followed the same pattern.

Unfortunately the same goes with IT departments 😦