[Image: A VC10 cockpit, from Wikipedia user L-Bit.]

I think a lot about numbers:

  • I have a fairly quantitative background, both professionally and in graduate school, including a stint in finance and a few years at a refreshingly quantitative nonprofit

  • I cook, which requires a lot of attention to measurement, time, temperature, conversions, and adjustments

  • I exercise with specific quantitative goals in mind (intervals, speeds, distances, reps)

It’s easy and sometimes very satisfying to track lots and lots of these numbers, enter them into a spreadsheet, and perform various calculations on them; the Quantified Self movement is a good example of this in practice, outside the green-eyeshade world. Tracking data and thinking about it is a prerequisite for success in a lot of different areas, in both personal and professional life.

The operational level of data-gathering is running experiments, which today I mostly hear about in the context of the A/B tests that companies like Facebook and Google run to improve their websites. “If we change the text on this button, how many more people will click it?” That sort of thing.
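To make the arithmetic behind that kind of test concrete, here’s a minimal Python sketch of how a button-text experiment might be evaluated with a standard two-proportion z-test; the visitor and click counts are made up for illustration.

```python
import math

def ab_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: did variant B's button text change
    the click-through rate relative to variant A?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click rate under the null hypothesis of no difference
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: old text (A) vs. new text (B) on the same button
p_a, p_b, z, p_val = ab_test(clicks_a=120, views_a=2400,
                             clicks_b=156, views_b=2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_val:.3f}")
```

Of course, a significant difference in click rate only answers the narrow question you posed; whether clicks were the right thing to measure at all is the harder problem I get into below.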

More dramatically, I think, the Green Revolution is a major reason why the planet is able to support as many people as it does, and the massive agricultural productivity improvements it entailed came via data-gathering and experiments. In general, the whole idea of scientific progress rests on doing tests that yield measurable evidence, i.e. gathering and comparing data from a lot of trials.

One area that I think is really difficult to deal with, though, is deciding what data actually matters: what numbers, what measurable statistics, actually correspond to the results we want? There are a few problems here:

1) Data doesn’t tell us anything about what goals to pick in the first place.

  • If you’re a business, do you test with the aim of maximizing revenue or profit?

  • If you participate in Quantified Self, are the things you’re measuring and maximizing actually contributing to your quality of life?

2) Achievement of your goals might end up being difficult or impossible to measure, so you have to judge what measurable evidence acts as the best substitute.

  • “Family and friends” are important to me; do I measure that in minutes spent, or the quality of those minutes, or some other measure, like the amount I (subjectively) contribute to their lives? The self-reported strength of the emotional connection? Does the very act of measuring these things interfere with achieving them, or change them?

  • Even in a business context, I find that things like filling out a timecard significantly detract from my enjoyment of the job, though I really like measuring my impact.

3) We may be constrained in the types or amount of data we can actually gather.

  • I work out a fair amount, and I probably should measure things like the levels of various nutrients and hormones in my bloodstream to assess the success of a workout. Instead, I go by minutes run or pounds lifted, how sore I feel the next day, and how I think I look in the mirror, a perception that is in turn heavily influenced by the workout itself (since I’m usually in a very good mood after going to the gym).

  • You want more people to sign up for your newsletter because you know the newsletter is an important tool for generating sales leads. But if your customers are other companies, you may not know whether those efforts succeeded until a year or more later, and you can’t wait a year to make changes to your strategy.

So I think there’s much more of a need for what I might call analytics strategy. Analytics strategy is not finding stuff to measure, or implementing measurement tools, or making sure the tools work correctly. It’s answering the kinds of questions I asked in the lists above: what to measure, how it connects with your goals, and maybe even what those goals should be.

It also probably involves the decision process for linking measurable data with goals (since there are often lots of different things you can use as a proxy for the results you’re trying to achieve, each with its own tradeoffs). Maybe “analytics strategy” could even include a process for evaluating how well the analytics themselves are working (meta-analytics). Lastly, I think part of this is your experiment strategy: in most cases you are constrained, sometimes heavily, in how many experiments you can run. How do you determine which ones to prioritize? A rough sketch of one approach follows.
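For illustration, here’s a minimal Python sketch of one prioritization heuristic: rank candidate experiments by their guessed lift, discounted by your confidence in that guess and divided by the cost of running them. The experiments, field names, and numbers here are all hypothetical, not a claim about how any particular company does this.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    expected_lift: float  # guessed relative improvement if the change works
    confidence: float     # 0 to 1: how much you trust that guess
    cost: float           # traffic, engineering time, calendar time, etc.

    def score(self) -> float:
        # Expected payoff per unit of cost: a crude but explicit
        # way to rank experiments when you can't run them all.
        return self.expected_lift * self.confidence / self.cost

backlog = [
    Experiment("new button text",   expected_lift=0.02, confidence=0.6, cost=1),
    Experiment("homepage redesign", expected_lift=0.10, confidence=0.2, cost=8),
    Experiment("shorter signup",    expected_lift=0.05, confidence=0.5, cost=3),
]

# Run the highest-scoring experiments first
for e in sorted(backlog, key=Experiment.score, reverse=True):
    print(f"{e.name}: score {e.score():.4f}")
```

The particular formula matters less than the fact that it’s written down: once the ranking is explicit, the inputs and tradeoffs become things you can argue about and revise.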

Another way to think about this might be: you’re flying a plane with a million gauges. Which ones should you read, what do they mean, and how do you use them to get where you’re going faster?