Are you looking to your data for answers? You might be starting off on the wrong foot.
The imperfection of data was one of the salient points at the 12th Conference of the Performance Measurement Association, at which I presented some of my research.
Organized by Cranfield and the University of Groningen, it was, without a doubt, the most engaged I have ever been at an online event.
And that’s because what makes the study of performance measurement so interesting is its immediate applicability to practice.
The first and overarching point was that performance measures aren’t imperfect for lack of technology or lack of brains: they are imperfect because imperfection is inherent to measurement itself. No amount of technological advancement will make them perfect.
This was a point I wished to make in my talk, and had made a few years ago at BAM. But what struck me was the degree of consensus on this point among the other presenters and discussants. Performance management, then, is largely about how we respond to this incompleteness, and how that response relates to an increasingly dynamic organizational environment.
Three related points that came up:
First, the work of performance measurement is ongoing and fluid, rather than the distinct cycles of development, implementation, and use it is often presented as. If any of these activities ever “stops” in your organization, consider yourself in trouble.
Second, contrary to the mainstream trend that sees big data as providing answers, Bob Scapens discussed the concept of maieutic machines: rather than supplying answers, good performance measurement generates knowledge by asking questions, guiding our engagement with the world in which we work. You can read the full paper here.
Good data create conversations, open us up for discussion, allow us to see connections we had not yet considered. Bad data end conversations, close the discussion, and limit our vision to one way of viewing the world.
Finally, our assumptions play an absolutely critical role in whether or not we are able to use data effectively to lead organizations. Once we have defined performance measures and begun to use them, the tendency is to treat them as if they were more representative of reality than they really are. We should avoid getting comfortable at all costs.
Building in ways to challenge our assumptions regularly, whether through creative games, mindfulness practice, storytelling, or otherwise, is one way of keeping our minds and our organizations nimble, able to respond to the dynamic environments in which we operate.
A side note: I presented my own take on all of this, currently under development, informed by Bhaskar’s Dialectical Critical Realism. I am very grateful for the encouraging remarks and feedback I received at the conference!