Book Report: The HEAD Game

“The key message of this book is about how to manage tough decisions and piles of data by applying a few consistent guiding principles. The short version is simple: there is a better way to sort through life’s questions than simply sitting down with the loads of available information and treating every decision as a unique process without guidelines that help bring order to chaos. Amassing more and more data often isn’t the answer. Figuring out how to make sense of hard questions efficiently is: we want High Efficiency Analytic Decision-making, or HEAD, not a data dump.” (The HEAD Game)

Title: The HEAD Game – High Efficiency Analytic Decision-Making and the Art of Solving Complex Problems Quickly

Author: Philip Mudd

Publisher: Liveright

Publication Date: 2015

Origin: Quite a bit of my past professional work has involved analyzing reasonably complex problems with plenty of variables and unknowns, and I expect my future roles will be characterized by the same phenomena. Lacking any formal training outside of engineering problems (which are usually quite well-defined), I was forced to ‘wing it’. My intention is to use The HEAD Game to develop a framework with which I can methodically tackle such problems.

Summary: In The HEAD Game, Philip Mudd explains and illustrates a straightforward process/framework by which analysts can provide ‘decision advantage’ to decision makers by lessening the uncertainty surrounding complex issues.

Mudd breaks the book down into chapters that show the logical flow of high-efficiency analysis:

  • The Art of Thinking Backward: “The best analysts think backward, providing decision advantage by molding briefings, papers, emails, and other communications so that they answer a decision maker’s question with maximum efficiency.” (see the video embedded below for some examples that further explain what Mudd means by ‘thinking backward’)
  • What’s the Question? “The questions are the hardest part of many analytic problems, but they are the analytic step most frequently ignored by analysts.”
  • The Drivers: “To break down hard questions analytically, we need an approach that allows us to escape the limitations of our minds and look at many characteristics of problems simultaneously.”
  • Measuring Performance: “Without metrics to measure how an analytic problem is changing over time, expert or intuitive judgments about complex problems risk using yesterday’s events to explain tomorrow’s.”
  • What About the Data? “Rather than looking at each bit of information as a discrete data point, we want to look at our drivers and sort the data according to which driver it supports – in other words, sort the data into each of the half-dozen or so driver categories, so analysts have a few piles to deal with rather than a thousand discrete data points.”
  • What Are We Missing? “How can we assume that an analytic process is flawed and then find ways to check for gaps and errors?”
  • The Finish Line: “At the conclusion, we want to put all these pieces of an analytic puzzle back together into one multistep process that we can use to attack a wide variety of analytic problems.”

Mudd also includes two useful resources as appendices: Appendix A explains a number of biases that are always at risk of creeping into any analysis that involves humans making decisions; Appendix B is a checklist of sorts to which practitioners can refer when applying Mudd’s HEAD framework.

My Take: I found The HEAD Game to be a straightforward, reasonably succinct, and – importantly – useful and practical guide to analytic decision-making.

Mudd logically walks the reader through the process, illustrating with well-chosen examples from everyday life (e.g., buying a car, choosing a house, etc.) and from his time at the CIA (e.g., the United States’ second invasion of Iraq, India’s nuclear test in 1998, various insurgencies around the world). These examples show where analytic decision-making succeeded and failed, with useful critiques that show how even seasoned subject matter experts can fall victim to human biases, fundamental errors, and other pitfalls.

Having read The HEAD Game and Superforecasting back-to-back, I’m struck (but not surprised) by the thematic overlap; for instance, both books address the importance of asking the right questions, the challenges of understanding and evaluating complex issues, reducing uncertainty, expertise vs. analysis, ways to (try to) engineer-out biases, the criticality of feedback to allow calibration and improvement, the need to examine multiple drivers, considering all angles, adversarial cooperation, and more.

Frankly, my biggest (only?) complaint about The HEAD Game is that there isn’t an index. What the hell, right? This annoyance is only slightly mitigated by the book’s use of footnotes, which I vastly prefer over endnotes.

Read This Book If: …You want to improve your own analytic decision-making, or that of an organization of which you are a part.

Notes and Quotes:

Preface

  • From pXV of the preface: “The key message of this book is about how to manage tough decisions and piles of data by applying a few consistent guiding principles. The short version is simple: there is a better way to sort through life’s questions than simply sitting down with the loads of available information and treating every decision as a unique process without guidelines that help bring order to chaos. Amassing more and more data often isn’t the answer. Figuring out how to make sense of hard questions efficiently is: we want High Efficiency Analytic Decision-making, or HEAD, not a data dump.”

The Art of Thinking Backward

“You can accumulate huge amounts of knowledge to become an expert, but that doesn’t make you an analyst, somebody who practices critical thinking.”

  • p1 makes an important distinction between an expert and an analyst (this distinction will be a recurring theme): “You can accumulate huge amounts of knowledge to become an expert, but that doesn’t make you an analyst, somebody who practices critical thinking.”
  • p2 introduces the mission of HEAD and the goal of creating decision advantage: “Our goal here is to make complex problems more manageable so you, or someone you’re working for, can use that analysis to make better decisions. We want to start this book with an approach to help you reshape your thinking process, beginning with this concept of decision advantage: the analyst helps a decision maker narrow uncertainty by applying knowledge and experience after the analyst understands the decision maker’s question.”
  • p3 further differentiates between expertise and analysis: “As you think about this contrast between expertise (summarizing data expertly) and analysis (massaging the data into a format that helps a decision maker), start to cement in your mind the following basic distinction: when we’re working on how to analyze a problem, we shouldn’t start with the data or information that we know and then build a case from there. Instead, we should begin with the question of what we need to know, or our customer or boss needs to know, to solve a problem.”

“When we’re working on how to analyze a problem, we shouldn’t start with the data or information that we know and then build a case from there. Instead, we should begin with the question of what we need to know, or our customer or boss needs to know, to solve a problem.”

  • I’d extend this statement, from p12, to apply to all smartypants: “The ugly secret for proud analysts is that this 90% of what they know (the data) might be useful at some other time, but it isn’t today. A good analyst has to have the humility to accept that.”
  • p16: “This divide between analysis and decision-making is not a distinction you can sustain during every analytic process you oversee, but it’s a divide you should be conscious of if you choose to breach it. And it represents a fundamental truth in analysis about the risk of growing attached to the decisions that flow from any analytic process.”
  • p17: “One of the reasons we need to slow down and focus on asking good questions first, right to left, is that people prefer to answer quickly when they’re faced with a question, particularly in their area of expertise.”
  • p25: “Even if you spend another hour or two at the front end of your search, ironing out what your real question is, you will save a lot of time and trouble at the back end of your decision-making process. You’ve started thinking backward, not even touching the data until you know the question, the destination.”
  • p29: “So before you immerse yourself in the data, and combine it with your knowledge and experience, step back and test your brain: Do you understand what question you’re answering in the first place?”

“So before you immerse yourself in the data, and combine it with your knowledge and experience, step back and test your brain: Do you understand what question you’re answering in the first place?”

What’s the Question?

  • p31: “We often assume that questions create themselves. Good questions, though, are hard to come up with, and we typically over-invest our time in analyzing problems by jumping right to the data and the conclusions, while underinvesting in thinking about exactly what we want to know.”

“We typically over-invest our time in analyzing problems by jumping right to the data and the conclusions, while underinvesting in thinking about exactly what we want to know.”

  • p44 uses home-buying and working with a Realtor as an example of focusing on the customer’s needs (the question): “A Realtor doesn’t just look at a new customer and offer a summary of everything she knows about the local housing market.” This example reminds me of two experiences we had when shopping for new windows: one salesperson provided us with hundreds of pages of options; the other, through sound questioning, whittled it down to two or three options.
  • p45 talks about “avoiding the certainty trap” and “the deceptiveness of yes/no questions”
  • This quote from p46 reminded me of a story from (if I’m recalling correctly) Daryl Morey, as told in The Undoing Project: “This tendency toward certainty is understandable, but it is treacherous in the business of complex analysis. Not only does certainty sometimes eliminate complexity, but it also drives us to pretend that we know what is inherently unknowable.”
  • p46: note to self
  • p48, the Call Mom exercise: “If you’re wondering whether you’ve posed a good question, or you’re practicing the answer to that question to see if it’s both concise and comprehensive, then try using only one sentence to summarize your question. This one-sentence limit, artificial as it seems, will force you to be clearer and more concise than you’re comfortable with.”
  • p48 continues in this same vein: “Speak aloud the one-sentence question you’ve settled on. Don’t read it, or mumble it quietly; speak it out loud. If you can look yourself in the mirror, speak the sentence, and confidently affirm that your mother would quickly comprehend the sentence, you’re in good shape. You’re thinking clearly, and our audience should be able to understand you.”
  • p49, the litmus test of inclusivity: “Think through all the kinds of material you will need to provide a decision maker who’s receiving the answer to your question… When you pose your one-sentence question to your mother, you have to look at all the data you’ll need to include in your analysis and ensure that your one sentence gives you the latitude to cover the subject.”

The Drivers

  • p57, on ‘drivers’: “We are now going to transition from how to think about questions and decisions to how we can pick apart a question and break it down into constituent components. Note the significance of this: without this process, you’ll often (and wrongly) start with data, quickly picking bits of information from a mass of data.”
  • p57 continues the thought, clearly stating something that might be counterintuitive: “We’ll get to the data later, only nearing the end of the book. It’s the least significant piece of the analytic process.”
  • p64: “No matter how interesting they may be, if drivers aren’t essential to your decision-making, they need to come off your list. (Of course, some of these may turn out to be useful as subordinate drivers later on.)”
  • p65 introduces several problems that might arise if we were to conduct analysis without a rigorous, driver-based process:
    • “The analyst’s answer might result from fast, intuitive thinking… Their experiences are critical, but only when they’re packaged in a process that’s analytically credible.”
    • “The analyst may pay limited attention to gaps. If we don’t line up the elements we think are critical to understanding a problem, including those elements about which we know little or nothing, we risk focusing on what we know without facing the reality that there are huge blank spaces, or gaps, in our knowledge base.”
    • Lacking clear drivers means that non-experts won’t be able to sufficiently interrogate the analysis
    • “Analysts won’t be able to update their analyses effectively and efficiently… If you’re managing this process, you can force accountability by requiring analysts to address information changes in every driver basket each time they discuss an analytic problem.”

“Agnostic drivers are the backbone of how we want to analyze any problem we look at, and there’s no way we can build a valid analytic conversation if we manipulate the drivers from the outset to favor one side or the other.”

  • p67, with a very useful piece of advice: “If you’re involved in evaluating a problem that includes analysts who disagree, this rule will help you narrow their disagreements: to keep your analytic process pure – to eliminate bias – analysts who disagree should select one single set of drivers they agree on.”; p68 continues, “Agnostic drivers are the backbone of how we want to analyze any problem we look at, and there’s no way we can build a valid analytic conversation if we manipulate the drivers from the outset to favor one side or the other.” (this idea of agnostic drivers pops up again in a discussion of having a ‘red team’ perform a second analysis, using the same data and the same drivers)

“If you properly employ the right-to-left method in the heat of the moment, you will be perceived as downgrading the significance of a tragedy. This is exactly the point, though.”

  • p71, on the recency and availability biases, and on sounding like a cold-hearted monster: “We need to be especially wary of LIFO when we are employing the analytic process immediately following a traumatic or highly emotional event. If you properly employ the right-to-left method in the heat of the moment – after, for example, a horrific terror attack – you will be perceived as downgrading the significance of a tragedy. This is exactly the point, though: in the emotional period following a major event, whether it’s a terror tragedy or a significant error in your business processes, you have to balance the need to address the tragedy or the business error with an understanding of how it fits in context.”
  • p78, as if we needed another reminder that nothing good comes from watching talking heads on TV (this theme pops up a few times, as does the idea of hedgehogs and foxes), this time talking about the availability bias and how it’s used to simplify a situation: “TV is designed for fast entertainment, not slow analysis. Use this image to keep in mind the human tendency to answer complex questions by building analytic stories based on one dimension of a problem drawn from the most recent bit of data… That’s a compelling story but disastrous analysis.”
  • p78 continues, using the example of analyzing the success prospects of an insurgency: “When you feel the urge to go down this path of least resistance, transitioning quickly from question to data and answer – or when you watch a colleague do so – take a moment out of your day, sit in front of a blank piece of paper, and ask yourself just two questions: Whether I think the insurgents are winning or losing, what are the six or eight or ten elements I think I need to understand to figure out how they’re doing? If I think those six or eight or ten are the key drivers to help me break down and assess the insurgents’ gains, why did I just base an answer on only one of them?”

Measuring Performance

“Experts are the best at explaining what is happening today, because they can trace history and current events and put them in context. But they are terrible at forecasting what tomorrow will bring, because they struggle to explain why their views of today will or won’t hold true tomorrow…The past is an anchor for most experts, leaving them trapped in knowledge and judgments about today while they struggle to see how life might change overnight.”

  • p82, reminiscent of The Black Swan and Superforecasting, on the expertise trap: “Because we have superior knowledge, then, what we say should pass for truth, and sometimes even wisdom. This logical misstep is where we get into trouble. It’s the expert’s Achilles’ heel, or what I’d call the expertise trap: experts are the best at explaining what is happening today, because they can trace history and current events and put them in context. But they are terrible at forecasting what tomorrow will bring, because they struggle to explain why their views of today will or won’t hold true tomorrow.”
  • p83, reminiscent of some of the philosophical messages within Deep Thinking: “The past is an anchor for most experts, leaving them trapped in knowledge and judgments about today while they struggle to see how life might change overnight.”
  • p83: “Without a way to measure our judgments, we’re all prone to overconfidence.”
  • p85 provides us with a few initial steps to take if we want to or need to “take a hard, unvarnished look in the mirror”:
    • “We have to take the big step of not only judging but believing that we may not be the best at what we do… This process never ends.”
    • “We need to understand the expertise trap, accepting the fact that expertise about what happened yesterday isn’t nearly the same as analysis about tomorrow.”
    • “We have to be willing to go through the frustrating step of figuring out how to add measurements to analytic problems and judgments that lack clarity, measurements that will allow us to test ourselves as we attempt to limit bias.”
  • p95, and I’ve been guilty of this myself: “…the approach in this book should lead you to be leery of experts who resist metrics because they insist that the measures are too subjective, or because the subject matter is too soft to be measured.”
  • p95: note to self
  • p99, with another practical, sensible tip: “At the outset of the metrics-setting process, we can help create the right environment by letting the experts set their own metrics.”
  • p101: “This raises the inevitable question we will turn to throughout this book: the importance of expertise, and the value of leavening that expertise with fresh perspectives. To add new perspectives, this process of setting guardrail metrics – of checking ourselves – argues for the inclusion of newer analysts, or even nonexperts, to ask different questions of analytic teams.”

“All of us are vulnerable to this boiling-frog syndrome because we want to believe our choices and decisions are right, and we hold onto our beliefs as long as we can, despite contrary indications that should signal us that we’re slowly slipping into hot water. So we judge, often without a great deal of thinking, that we should either discount new information – ‘that’s not really important’ – or force what appears to be anomalous new data into our analytic line of argumentation: in other words, explaining why everything stays the same until it doesn’t.”

  • p105, on the risk of the boiling frog preventing us from seeing and incorporating change: “All of us are vulnerable to this boiling-frog syndrome because we want to believe our choices and decisions are right, and we hold onto our beliefs as long as we can, despite contrary indications that should signal us that we’re slowly slipping into hot water. So we judge, often without a great deal of thinking, that we should either discount new information – ‘that’s not really important’ – or force what appears to be anomalous new data into our analytic line of argumentation: in other words, explaining why everything stays the same until it doesn’t.”
  • p108 and 109 talk about the charity proposition, which was an interesting enough exercise that I wrote a dedicated post about it
  • p110: “To avoid slowly boiling yourself, revalidate your initial conclusions constantly through guidepost exercises. And when you hit a guidepost, be careful about explaining away how that could happen. Sometimes you’ve set the guideposts wrong. Sometimes, though, you’ve gotten so wedded to the analytic avenue you’re headed down that you just can’t see clearly that you’re off course. Stay on course. Try to measure your performance, even when experts around you tell you it’s impossible.”

What About the Data?

“Don’t try too hard to squeeze a random piece of data into a basket that doesn’t seem right; the fact that it doesn’t fit might be a clue, later on, that helps you realize that you haven’t quite defined your drivers clearly.”

  • p112, on weird outliers: “As you do this, you may find some pieces of data that don’t fit clearly into any of your six to ten driver baskets. Place that data into an ‘other stuff’ category. Don’t try too hard to squeeze a random piece of data into a basket that doesn’t seem right; the fact that it doesn’t fit might be a clue, later on, that helps you realize that you haven’t quite defined your drivers clearly.”
  • p115: “Analysts want to know everything, but an excess of data will cloud judgment if we don’t control the hoarding instinct that comes with expertise. So when we’re pitching our bits of data into our various driver baskets, we should focus on returning to the core principles that are worth returning to again and again: differentiating between what we know, what we don’t know, and what we think. Analysts typically overestimate the first, ignore the second, and confuse the third, taking what we ‘think’ and switching it over to the ‘what we know’ basket.” (a toy sketch of this basket-sorting follows the pull quote below)

“When we’re pitching our bits of data into our various driver baskets, we should focus on returning to the core principles that are worth returning to again and again: differentiating between what we know, what we don’t know, and what we think. Analysts typically overestimate the first, ignore the second, and confuse the third, taking what we ‘think’ and switching it over to the ‘what we know’ basket.”
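Mudd describes this basket-sorting in prose only; as a thought experiment, here is a minimal sketch of the mechanics in Python. Everything in it is an assumption for illustration: the driver names, the classify helper, and the sample data are invented, not drawn from the book.

    from collections import defaultdict

    # Hypothetical drivers for an illustrative question
    # (e.g., "How is the insurgency faring?"); the names are invented.
    DRIVERS = ["leadership", "funding", "recruitment", "popular support"]

    def sort_into_baskets(data_points, classify):
        """Sort raw data points into driver baskets.

        classify maps a data point to a driver name, or None when the
        point doesn't clearly support any driver. Per Mudd's advice,
        ill-fitting items land in an 'other stuff' basket rather than
        being forced into the nearest driver.
        """
        baskets = defaultdict(list)
        for point in data_points:
            driver = classify(point)
            baskets[driver if driver in DRIVERS else "other stuff"].append(point)
        return baskets

    # Toy usage: a simple keyword match stands in for analyst judgment.
    def classify(point):
        for driver in DRIVERS:
            if driver in point.lower():
                return driver
        return None

    data = [
        "New funding channels reported through the border region",
        "Leadership dispute among senior figures",
        "Unusual weather patterns in the north",  # fits no driver
    ]
    for basket, items in sort_into_baskets(data, classify).items():
        print(basket, "->", len(items), "item(s)")

The payoff is exactly what Mudd promises: a half-dozen labeled piles (plus a visible ‘other stuff’ pile) to reason about, instead of a thousand discrete data points.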

  • p119: “Set up a schedule that forces you to reexamine all your confidence levels periodically. If your information sources have dried up, and you’ve lost a bit of confidence in your judgment, have the humility and analytic integrity to admit it. Shift a green light to yellow. Or a yellow light to red.” (a toy sketch of such a revalidation schedule follows this list)
  • p119 begins a great section on the pitfalls and risks of assessing intent, offering several cautions to keep in mind when grading our drivers:
    • “make sure you separate capability from intent as soon as you look at your driver baskets”
    • “question your data aggressively”
    • “be careful of analysts, especially longtime experts, who tell you that they understand the intent of those you’re trying to assess”, whether friend or foe
  • p136 closes the chapter by warning against two major traps for analysts:
    • “First, there’s the trap of believing we know more than we know, and blowing through the bright line that separates what we know and what we think.”
    • “Second, analysts typically emphasize their own areas of expertise and knowledge, downgrading areas in which they suffer from knowledge gaps.”
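Here is the toy sketch of scheduled confidence revalidation promised after the p119 quote above. It is only an illustration under stated assumptions: the traffic-light grades echo Mudd’s green/yellow/red language, but the 90-day review interval, the class design, and the driver names are my inventions.

    from datetime import date, timedelta

    # Traffic-light confidence grades, echoing Mudd's language.
    GRADES = ("green", "yellow", "red")

    class DriverConfidence:
        """Track a confidence grade per driver, plus when it was last revalidated."""

        def __init__(self, review_every_days=90):  # interval is an assumption
            self.grades = {}
            self.last_reviewed = {}
            self.review_every = timedelta(days=review_every_days)

        def set_grade(self, driver, grade, on=None):
            if grade not in GRADES:
                raise ValueError(f"grade must be one of {GRADES}")
            self.grades[driver] = grade
            self.last_reviewed[driver] = on or date.today()

        def due_for_review(self, today=None):
            """Return drivers whose grades haven't been revalidated recently enough."""
            today = today or date.today()
            return [driver for driver, reviewed in self.last_reviewed.items()
                    if today - reviewed >= self.review_every]

    # Toy usage: 'funding' was last graded months ago, so it gets flagged.
    conf = DriverConfidence()
    conf.set_grade("funding", "green", on=date(2017, 1, 10))
    conf.set_grade("leadership", "yellow", on=date(2017, 5, 1))
    print(conf.due_for_review(today=date(2017, 6, 1)))  # -> ['funding']

The code isn’t the point; the forcing function is. When a driver shows up on the due list, the analyst has to either reaffirm the grade or have the humility to shift a green light to yellow.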

What Are We Missing?

“Any analytic conversation about a complex problem has to include a segment about what the analyst doesn’t know.”

  • p140: “Any analytic conversation about a complex problem has to include a segment about what the analyst doesn’t know.”
  • p141, on anomalies: “If there is critical information that doesn’t fit anywhere, does that tell us that there’s another basket we need to weigh? Or are we learning that we’re gathering data that we initially thought was important but now can discard, because it doesn’t relate to any key basket? Perhaps more important, we should look at anomalies to test judgments and assumptions. Anomalies shouldn’t be discarded because they don’t fit an analytic construct; they’re potential canaries in the coal mine, maybe warnings that something is changing.”
  • As with many complex issues of judgment, we need to engineer-out human nature; p145, “It’s the old problem of past experience and accumulated expertise shading how analysts view new information, and it’s human nature.” Damn you, confirmation bias!
  • p145 starts a really neat section on “Red Team Analysis and Alternative Thinking”
  • This section on p147 reminded me of a colleague who treated every collaboration like a confrontation…all the linguistic cues here are the same as what I saw: “You can watch analysts’ use of simple terms to determine how wedded they are to a position and whether they will embrace, or even listen to, a red team approach. Analysts who use words such as ‘my argument’ or ‘my case’ or ‘my position’ are suggesting that they are dug in, and that they are more likely to argue their views than to engage in a conversation that might lead them to adjust what they think based on new information or new perspectives. Similarly, analysts who talk about ‘defending’ an analytic position are already on the defensive, more likely to prepare arguments about how they’re right and you’re wrong than to walk into an open-minded conversation.”
  • p150: “It’s your job, if you’re managing these analysts, to change this expert-only dynamic”; and here are four ways to do so…
    • “First, differentiate between current events and future forecasting. The two types of analysts are different.”
    • “Second, supplement your expert analytic teams with relative newcomers, and give them a chance to speak. Ask them in particular about anomalies other analysts might explain away, and ask them about the big assumptions that other analysts might be making.”
    • “Third, guarantee these relative newcomers the chance to ask questions and make statements that might be dismissed out of hand by others at the table.”
    • “Fourth, (have) at least two of this type of questioner at the table.”
  • p163, after a section on the failures of the intelligence apparatus to correctly forecast India’s 1998 nuclear test: “These failures underscore a key point: no matter how good you are at the game of complex analysis, you need a set of checks and balances to hunt for flaws, biases, and levels of confidence that aren’t merited by the underlying analysis or data.”

The Finish Line

  • As an avid reader who takes notes to better remember books, I appreciate Mudd catering to the practical concerns of his audience, p165: “…using this chapter as a one-stop guide, and a refresher when you’d like to apply these principles without reading again through the entire book.”
  • p167: “That final decision, though, shouldn’t be confused with the analytic process we undertake to try to reduce uncertainty before we make the choice. The analytic process helps squeeze a little uncertainty out of the choice; the choice itself, though, still leaves us betting on a future outcome that we can’t really predict.”
  • p168 mentions John Wooden, and quotes the legendary coach on his philosophy of emphasizing preparedness, rather than winning: “We live in a society obsessed with winning and being number 1. Don’t follow the pack. Rather, focus on the process instead of the prize.”
  • p172, another good general point (and you’ll see this in practice all the time on talking-head ‘news’ programs and in politics): “By allowing a yes/no question, you give advantage to a respondent who wants to play the emotion card.”
  • p172: “Just adding the word ‘how’ to many questions results in more open-ended, less yes/no, analysis. Here’s the switch: How much should we worry about terrorism? How should we rank it among our list of concerns?”

Appendix A: Where are the traps? Thoughts on bias.

  • This whole appendix on biases is great

Appendix B: A Practitioner’s Checklist

  • This whole appendix is a very useful guide to the entire process…a handy-dandy cheat sheet that I’m sure I’ll make use of in the future!
  • p214: “Experts will expound on what they know all day long, sometimes with bold bumper-sticker judgments that aren’t based on a painstaking analytic process. Analysts are less bold and less colorful, but more systematic and more open to understanding both the risks of expertise and the challenges of understanding change. They’re more apt to understand, too, the gap between what they know and what they think.”

“Experts will expound on what they know all day long, sometimes with bold bumper-sticker judgments that aren’t based on a painstaking analytic process. Analysts are less bold and less colorful, but more systematic and more open to understanding both the risks of expertise and the challenges of understanding change. They’re more apt to understand, too, the gap between what they know and what they think.”

Lee Brooks is the founder of Cromulent Marketing, a boutique marketing agency specializing in crafting messaging, creating content, and managing public relations for B2B technology companies.
