Combatting Complexity Through Simplification

Martijn ten Napel

Data projects are complex. The question is: how do you combat this complexity without sacrificing functionality?

Most professionals working in the data / analytics / BI field have one of two reflexes:

  1. Narrowing the functionality down to a single narrative and putting all requests that do not fit this narrative out of scope.
  2. Creating complex solutions for complex problems, entertaining all possible deviations and what-ifs.

Neither reflex is sustainable. Both result in projects that replace the results of previous projects instead of amending the data landscape.

To battle complexity and create sustainable ecosystems of data-related solutions, you need to simplify things.

Our attitude

Change will happen and is unpredictable. That's a given; prepare for it. The question is how to deal with it.

Our answer: continuous reduction of complexity.

How to achieve continuous reduction of complexity?

  1. ‘Just enough’ principle: take iterative, small steps forward, just enough to allow anyone involved to keep up. This gives you the ability to adapt quickly to changes in the information demand. Do not lose sight of the human dimension: if users are overwhelmed by your megalomaniac ‘prepared for every possible deviation’ solution, they will simply ignore it, because they don’t understand it. ‘Just enough’ has two sides:
    • ‘Ask just enough’: a lot of the complexity in data projects stems from the inability of users to precisely specify what they need. They will ask for ‘everything’ just to be sure they will get something they can use. Users need to be taught to ask for small, simple steps, learn from using them, give feedback and build out from experience.
    • ‘Deliver just enough’: don’t do more than is asked for, in terms of both content and functionality. Developers are not responsible for applying the information delivered, so they cannot assess what is needed. Rather, help users develop their data skills without baffling them with IT concerns.
  2. Use case approach: create solutions based upon use cases that deliver value. Use cases are validated and prioritized by the user community before being put into development. To avoid the pitfall of expanding solutions haphazardly with every use case, you can bundle use cases into principal use cases that represent usage patterns of information. You can architect the cohesion between these patterns beforehand and fill in the data landscape over time through use cases.
  3. Decomposition: complexity is inherent. This makes both data modelling (or rather, translating the logical model into physical implementations) and data processing a tough challenge. And yet we try to cram it all into ‘one size fits all’ solutions. When faced with a complex solution, decompose it into smaller pieces that work together. Barry Devlin’s REAL architecture is an example of such an approach. How to do this?
    • There is an inherent contradiction in requirements: a solution has to be real-time, completely unambiguous, highly available and agile in both use and development, all at the same time. That is simply not possible. Break it up into partial solutions that meet either the high availability / unambiguity demand or the flexibility demand, and demarcate the responsibilities of each solution clearly. The principal use case based architecture has drawn some of these demarcation lines upfront.
    • Let the partial solutions cooperate and deliver information through master data for intelligence (MDI) and master data for authorization (MDA) solutions, creating loosely coupled solutions in the data landscape. This ties the use cases together and prevents you from reinventing the wheel with every new use case you add.
    • Separate the ‘as is’ from the ‘as interpreted’: a layered or stacked approach to your data model allows you to create several layers with different speeds of change. When confronted with small steps sideways or backwards in the ‘just enough’ approach, you can easily modify the model; most of the time the changes are in the ‘as interpreted’ layer, as sketched below.
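
To make the separation between ‘as is’ and ‘as interpreted’ a bit more concrete, here is a minimal Python sketch. All names, fields and the business rule are hypothetical illustrations, not taken from the article: raw source records are stored unchanged in an ‘as is’ layer, business rules live in a separate ‘as interpreted’ layer that can be rebuilt whenever the interpretation changes, and a shared master data key is what ties the partial solutions together.

```python
from dataclasses import dataclass
from datetime import date

# 'As is' layer: data stored exactly as received from the source, never rewritten.
@dataclass(frozen=True)
class RawSale:
    source_system: str   # which partial solution delivered the record
    customer_key: str    # shared master data key, ties partial solutions together
    amount: float
    sale_date: date

# 'As interpreted' layer: business rules applied on top of the raw layer.
# When the interpretation changes, only this layer is rebuilt; the raw records stay untouched.
@dataclass(frozen=True)
class InterpretedSale:
    customer_key: str
    amount: float
    sale_date: date
    segment: str         # derived attribute; a hypothetical business rule

def interpret(raw: RawSale, large_sale_threshold: float = 1000.0) -> InterpretedSale:
    """Apply the current interpretation rules to a raw 'as is' record."""
    segment = "large" if raw.amount >= large_sale_threshold else "regular"
    return InterpretedSale(
        customer_key=raw.customer_key,
        amount=raw.amount,
        sale_date=raw.sale_date,
        segment=segment,
    )

if __name__ == "__main__":
    raw = RawSale("webshop", "CUST-042", 1250.0, date(2024, 3, 1))
    # Rebuilding the 'as interpreted' layer after a rule change is just re-running interpret().
    print(interpret(raw))
```

A small step sideways, such as changing the segmentation rule, only touches the ‘as interpreted’ layer; the ‘as is’ layer keeps its slower speed of change.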

Isn’t this all very complex to achieve?

Of course, you need to organise for this to work. But in our experience, clearly demarcating the responsibilities and boundaries of autonomous teams working on partial solutions, and aligning them through MDI and MDA interface agreements and an architecture decision tree, helps organisations build and expand large data landscapes without drowning in the complexity of coordinating it all.


Martijn started in the field of Business Intelligence in 1998. As an architect, he has delivered many projects to a diverse set of organisations. The recurring theme in Martijn’s work is reorganizing and kick-starting BI, analytics and data organisations that got stuck.

Crunching data is not just a technological challenge, it is an organisational issue. Our field of work has expanded with Big Data and Artificial Intelligence, but this hasn’t changed the essence of the everyday challenges.

People, as a collective, try to make sense of the information handed to them. This isn’t a walk in the park for most organisations. They struggle to organise the coherence between people, process, information and technology, not only for present challenges, but with future changeability in mind.

To Martijn, the answer to the question ‘why do so many BI projects fail?’ is simple: the complexity of the data landscape has grown out of control because of this struggle.

Martijn has been educated in modelling the economic behaviour of organisations and has been drilled in reducing complexity to get better results. He took this skill to the world of BI.

Martijn ten Napel is an architect from The Netherlands. He works for Free Frogs, a company dedicated to improving the data, analytics and BI efforts of organisations.
