Last week I found myself in two meetings in which metrics were a focal point. In both meetings, the metrics in question were mechanisms to measure the outcome of specific initiatives, and in both cases the metrics had to be used and relied upon with caution. Why is that? How can something as seemingly objective as a measurement actually be a bad thing? In this post, I describe two recent scenarios in which metrics were a potential tripwire, and explain why treating metrics with some caution might be necessary in your own endeavours.
I’ve written before about the relationship between objectives, strategies, and tactics, but in that post I didn’t mention metrics. I’ve also written before about some of the perils of metrics, but in that post I didn’t go too deep into the relationships to strategy. Well, it’s time to link the ideas together.
In this era of “big data” (a term I despise), there is a temptation or even expectation to measure everything. Unfortunately, real life is complex and often gets in the way of theory. Sometimes, it just isn’t possible to truly measure what you want (or what your boss expects).
Example #1: Looking back on a project
Early in the week we had a session in which a Product Marketing Manager presented some of his in-flight launches and gave us all an update on how the metrics were tracking to targets.
In most cases, the objective of a launch is to build momentum and demand that will ultimately turn into revenue. In some businesses, it might be fairly straightforward to directly measure revenue resulting from a launch, or some reasonable proxy or related intermediary like generated leads. Our environment is a little bit different: our typical sales cycle is between 12 and 24 months and not all of our launches are for things that are direct drivers of revenue (for instance, platform sales are driven by products…customers don’t just go out and buy a platform by itself). But anyway, the details are unimportant; what’s important is finding a way to measure the success of the launch.
Faced with such long sales cycles, we often turn to proxies that are easy to measure. It isn’t uncommon for us to count RFP responses, beta customers, active trials, analyst coverage and so on. It’s reasonable to suppose that getting more of each of those things will eventually (yep, maybe only two years down the road) lead to revenue.
But when dealing with things that are measurable, there is a tendency to over-focus on those things, and more specifically on the activities that drive them, at the expense of other activities that might actually matter more.
“In the information age, things that are precisely measured are rewarded disproportionally relative to impact.” – Theo Epstein, President of Baseball Operations, Chicago Cubs
If we’re only measuring four things, then we might pursue only those four things and ignore other, more important items. For a while there we sat as a group discussing the metrics that were being presented, and debating whether the numbers we saw were good or bad, and whether the associated launch should be considered a success. The metrics were framing the discussion, and all ideas of success were being built on some interpretation of the metrics. Then someone asked, “Do you feel like the launch is a success?”. That’s when the meaningful discussion started.
Free of a metric focus, the next few minutes shone a light on the launch’s progress and effectiveness that no measure could have. People shared their opinions, formed in the context of years of experience, as to what was good or bad, and where improvements could be made. Actions were taken, and everyone saw a more complete picture of the launch than metrics alone could ever paint.
Sometimes, you have to trust that what feels right really is right.
Up above, I said that what’s important is finding a way to measure the success of the launch. This statement is wrong. Instead, what’s actually important is to do the things that you believe will contribute to achieving your objective. If you do these things and then look back on your proxy metrics and they haven’t moved, then maybe they’re the wrong metrics and should be completely discarded.
Example #2: Looking ahead to a project
Later in the week, I was in a smaller meeting where a few of us were discussing a communications campaign. We quickly agreed upon an objective, but unfortunately it’s one for which we cannot directly measure success or failure. I personally believe this is OK, but I know many people who would say this is a poor approach. In fact, being unable to directly measure whether or not you have achieved your objective is common, and I suppose you have two options:
1. Trust that you have the right objective, whether or not you can measure its success or failure, and move forward
2. Try to find a different objective motivated by ease of measurement
I think #2 is the wrong answer, because then instead of doing what’s right you’re doing what’s convenient.
So let’s assume we’re going with #1, and we’re now moving forward as was the case in my meeting. We have our objective set, and the golden rule of leadership says that we are now ready to move onto strategy. It’s at this point that you might run into two problems: first, people might start tossing tactics forward, thinking that they’re actually strategies (e.g., “Google AdWords!”, “email blast!”, “webinar!”); second, people might say “Since we can’t measure the objective, let’s think of some other metrics that can stand in.”
The first mistake is obvious (but all too common): if you go to tactics immediately after selecting your objective then you’ve broken the order of operations. But the second mistake isn’t so clear…what’s wrong with thinking about proxy metrics? Well, let’s think it through: you bandy about some metrics that you think approximate or correlate with your objective; now the rest of your planning is framed by those metrics and as a result you start thinking in terms of tactics to maximize them. Oh crap, you’ve just gone and made that same order-of-operations mistake, but you’ve deviously fooled yourself into thinking you haven’t.
Something else I’ve noticed is that people might propose using a brief metrics discussion as a sanity check on your chosen objective. Don’t let them! It’s not that metrics aren’t a good sanity check (they certainly can be), it’s that this is not the stage in the planning process to be talking metrics.
If you’re truly certain (or at least agreed) that your objective is the right one, then trust that you should now talk about strategies to achieve that objective. So let’s say that you’ve done so and you’ve got some sweet strategies. Now it’s OK to talk metrics, right? Wrong!
Instead, select and employ tactics that support your strategies.
Only once that’s all done (i.e., you started with your terrific objective, you’ve subsequently chosen one or more strategies, and only then picked some tactics) is it time to talk metrics.
Thankfully, tactics frequently lend themselves quite well to a metric-focused discussion. And this is a good place for that sanity-check discussion (although if the sanity check fails then something has gone horribly wrong in your objective->strategy->tactic planning).
It’s true that these tactical metrics aren’t direct measures of whether or not your strategies are working and you’re on track to achieve your objective, but if you’ve followed the correct order of operations then you can put a reasonable amount of trust in them.
But remember at some point to ask yourself the simple question, “Do I feel like this is working?”; instinct will often flag success or failure long before a metric does.
Leadership is about doing the right things, but you can’t always measure the progress along the way.