Check out Hilary's video blog!
When my colleague Paula Herring joined us nearly three years ago, one of the things she introduced me to was the phrase “All models are wrong but some are useful”. And I was quite determined to find out where that came from. And I have.
It comes from a guy called George Box, a world-renowned statistician. And what he said, about 30 or 40 years ago, was: “Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For such a model there is no need to ask the question ‘is the model true?’ If truth is to be the whole truth the answer must be no. The only question of interest is ‘is the model illuminating and useful?’”
Well, I think he’s actually got it right. And there’s one model that I use in so many different situations: the Cynefin [Kun ev in] model. Or, as my Business Manager would say, the Cynefin model, and she says it that way because she’s Welsh and can pronounce it properly!
And the reason it’s so useful is that you can use it for strategy, for decision making, for anything where you’re trying to differentiate between kinds of problem. The first kind is the absolutely obvious: “What’s the system we’ve got in place that, if we just repeat it, we know what’s going to happen, and it nearly always does?”
Then you get things that are complicated. The fact is, if we analyse them, we can work out what to do; we know roughly what will happen if we apply a bit of expertise. But the exciting bit is when you look over and think “yeah, but what about the complex stuff?” Because there you don’t know what’s going to happen. And so much of what we are dealing with in the business world is actually complex. What we try to do is say “oh no, it’s obvious, we just apply a system we’ve used before”. Well, you often can’t. Often we don’t know, and we end up with all these unintended consequences. So what do you do?
Well, you’re in the complex area. It’s trial and error; it’s iteration; it’s learning as you go. And there’s nothing wrong with that at all.
Of course, the other area is just chaos: things simply not working at all. You’ve just got to act quickly, and you’ve probably no idea what’s going to happen. And of course, sometimes you’re not even terribly sure which of the four domains your issue actually sits in.
Somebody was talking to me recently about Pike River. It’s a classic. For some people it’s simply “send somebody in and get the bodies out”; that’s the obvious solution. Others are saying “yeah, but it’s quite complicated; we need a lot of expert advice.” But they’re not all agreeing, are they? I would argue it’s a classic example of a complex issue: we don’t quite know what’s going to happen, and quite understandably a whole lot of people are being perhaps a bit cautious about it.
So all models are wrong, but some are useful. And I’d give the thumbs up to Cynefin being pretty darn useful.