Book Report: Superforecasting

“In so many other high-stakes endeavors, forecasters are groping in the dark. They have no idea how good their forecasts could become. At best, they have vague hunches. More often than not, forecasts are made and then . . . nothing. Accuracy is seldom determined after the fact and is almost never done with sufficient regularity and rigor that conclusions can be drawn. The reason? Mostly it’s a demand-side problem: The consumers of forecasting – governments, business, and the public – don’t demand evidence of accuracy. So there is no measurement.” (Superforecasting)

Title: Superforecasting – The Art and Science of Prediction

Authors: Philip E. Tetlock and Dan Gardner

Publisher: Signal

Publication Date: 2015

Origin: Tetlock’s research has been referenced in other books I’ve read over the years, and Superforecasting caught my eye when it was released. I’ve noted elsewhere in this blog that I recognize the general unpredictability of the future, but that doesn’t mean it isn’t prudent to quantify the probability of uncertain outcomes. By reading Superforecasting, I hoped to understand the conditions and methodologies that enable practically accurate forecasting, whether I engage in that forecasting myself or need to make use of it within an organization.

Summary: Let’s start this summary with a quote from p4, “It turns out that forecasting is not a ‘you have it or you don’t’ talent. It is a skill that can be cultivated. This book will show you how.”

True to the authors’ word, that’s what Superforecasting does – and I must say, it does so very well.

This excerpt from p45 tells us what’s at stake: “Although bad forecasting rarely leads as obviously to harm as does bad medicine, it steers us subtly toward bad decisions and all that flows from them – including monetary losses, missed opportunities, unnecessary suffering, even war and death.”

“Although bad forecasting rarely leads as obviously to harm as does bad medicine, it steers us subtly toward bad decisions and all that flows from them – including monetary losses, missed opportunities, unnecessary suffering, even war and death.”

Superforecasting flows in a very clear and logical arc, starting by reviewing some of Tetlock’s more famous research, explaining the foundation of the Good Judgment Project, and then exploring the conclusions that are being drawn from that research:

  • An Optimistic Skeptic: shows that, despite all the evidence that much prediction and forecasting is garbage, real forecasting – and superforecasting – can happen
  • Illusions of Knowledge: what we think we know versus the limitations of our knowledge
  • Keeping Score: explains how we can actually measure forecast accuracy – a necessary part of closing a feedback loop to improve performance
  • Superforecasters: illustrates superforecaster performance and contrasts against that of the intelligence community
  • Supersmart? examines the possibility that superforecasting is simply the result of superior intelligence (spoiler: it isn’t)
  • Superquants? examines the possibility that superforecasting is simply the result of superior mathematical skills (spoiler: it isn’t)
  • Supernewsjunkies? examines the possibility that superforecasting is simply the result of an addiction to the news (spoiler: it isn’t)
  • Perpetual Beta: demonstrates a key characteristic of superforecasters – that they’re always questioning their own forecasts and seeking improvement
  • Superteams: explores what can happen, good and bad, when forecasters are organized into teams
  • The Leader’s Dilemma: reconciles the skills that are associated with strong leadership with those seemingly contradictory skills that lead to superforecasting
  • Are They Really So Super? answers a question posed by Daniel Kahneman, when he asked of superforecasters, “Do you see them as different kinds of people, or as people who do different kinds of things?”
  • What’s Next? explores the potential impact of this research and these findings

Additionally, there are rich endnotes with extra info and explanation for the avid reader, and a handy-dandy Appendix of Ten Commandments for Aspiring Superforecasters.

Check out the video below if you want to see Tetlock run through a (fairly lengthy) summary of this topic.

My Take: I ripped through Superforecasting, motivated by both interest and enjoyment. The topic is interesting in and of itself, but has the added bonus that it touches on so many other areas that I enjoy; plus, Tetlock and Gardner’s writing is clear, humorous, illustrative, and honest.

The book flowed logically, with each chapter naturally building upon the previous in an overall arc, which made it that much easier to process. I appreciated the many, many endnotes – although of course I would’ve preferred if they were footnotes, to save me flipping to the back every couple of pages.

Unsurprisingly, Superforecasting spawned several posts, and had significant topical overlap with The HEAD Game; both books deal with processing information and making decisions in an uncertain world, and thankfully they made many of the same points, rather than contradicting each other.

Read This Book If: …You’re interested in understanding how to make and assess forecasts, and in recognizing BS forecasts and forecasters when you see them – and you will see them almost everywhere you look.

Notes and Quotes

An Optimistic Skeptic

“It is one thing to recognize the limits on predictability, and quite another to dismiss all prediction as an exercise in futility.”

  • p3: “Every day, the news media deliver forecasts without reporting, or even asking, how good the forecasters who made the forecasts really are. Every day, corporations and governments pay for forecasts that may be prescient or worthless or something in between. And every day, all of us – leaders of nations, corporate executives, investors, and voters – make critical decisions on the basis of forecasts whose quality is unknown. Baseball managers wouldn’t dream of getting out the checkbook to hire a player without consulting performance statistics. Even fans expect to see player stats on scoreboards and TV screens. And yet when it comes to the forecasters who help us make decisions that matter far more than any baseball game, we’re content to be ignorant.”
  • p10: “It is one thing to recognize the limits on predictability, and quite another to dismiss all prediction as an exercise in futility.”
  • p14 talks about the importance of a “forecast-measure-revise” feedback loop, which will be familiar to anyone with control systems training, or who’s read things like Black Box Thinking and The Lean Startup: “In so many other high-stakes endeavors, forecasters are groping in the dark. They have no idea how good their forecasts could become. At best, they have vague hunches. That’s because the forecast-measure-revise procedure operates only within the rarefied confines of high-tech forecasting, such as the work of macroeconomists at central banks or marketing and financial professionals in big companies or opinion poll analysts like Nate Silver. More often than not, forecasts are made and then . . . nothing. Accuracy is seldom determined after the fact and is almost never done with sufficient regularity and rigor that conclusions can be drawn. The reason? Mostly it’s a demand-side problem: The consumers of forecasting – governments, business, and the public – don’t demand evidence of accuracy. So there is no measurement.”

“Accuracy is seldom determined after the fact and is almost never done with sufficient regularity and rigor that conclusions can be drawn. The reason? Mostly it’s a demand-side problem: The consumers of forecasting – governments, business, and the public – don’t demand evidence of accuracy. So there is no measurement.”

  • p15: on that note, see this opinion piece by Bill Gates, My Plan to Fix the World’s Biggest Problems
  • p16 introduces The Good Judgment Project (GJP), which provides the data foundation for much of the content and conclusions of Superforecasting. Reading about it rang a bell in my mind, triggering a memory of a friend mentioning years ago that he was part of some project that does predictions…I pinged him and, sure enough, it was this project.
  • p20: “Broadly speaking, superforecasting demands thinking that is open-minded, careful, curious, and – above all – self-critical. It also demands focus. The kind of thinking that produces superior judgment does not come effortlessly. Only the determined can deliver it reasonably consistently, which is why our analyses have consistently found commitment to self-improvement to be the strongest predictor of performance.”
  • p21 addresses a question: what about situations in which heuristics and algorithms can apply? “The point is now indisputable: when you have a well-validated statistical algorithm, use it.” But please note the “well-validated” part! See Paul Meehl’s research, and the investigations it spawned, for more on this topic.

Illusions of Knowledge

“Physicians who furiously debated the merits of various treatments and theories were ‘like blind men arguing over the colors of the rainbow.'”

  • p25, on the pitfalls of human nature: “We have all been too quick to make up our minds and too slow to change them. And if we don’t examine how we make these mistakes, we will keep making them. This stagnation can go on for years. Or a lifetime. It can even last centuries, as the long and wretched history of medicine illustrates.”
  • p28 provides a great quote from Ira Rutkow, which I intend to put to use: “Ignorance and confidence remained defining features of medicine. As the surgeon and historian Ira Rutkow observed, physicians who furiously debated the merits of various treatments and theories were ‘like blind men arguing over the colors of the rainbow.'”
  • p32, while describing Archie Cochrane’s attempts to improve medicine, and the enormous pushback he faced: “What people didn’t grasp is that the only alternative to a controlled experiment that delivers real insight is an uncontrolled experiment that produces merely the illusion of insight.”

“What people didn’t grasp is that the only alternative to a controlled experiment that delivers real insight is an uncontrolled experiment that produces merely the illusion of insight.”

  • p32 goes on to touch on political policies that go untested (e.g., most of them), and reminded me of Matthew Syed’s description in Black Box Thinking of the failures of the ever-popular Scared Straight programs
  • p32: note to self
  • p35: wow, it took 35 whole pages before Daniel Kahneman was mentioned! (and 174 before Carol Dweck came up!)
  • This part from p38 reminded me of another part of Black Box Thinking, and inspired me to write The Scourge of Confirmation Bias: “In science, the best evidence that a hypothesis is true is often an experiment designed to prove the hypothesis is false, but which fails to do so. Scientists must be able to answer the question ‘What would convince me I am wrong?’ If they can’t, it’s a sign they have grown too attached to their beliefs.” Also, can you imagine asking ideological politicians (and other zealots in general) that question? “Hey, what would convince you that you’re wrong about X?”

“Blink-think is another false dichotomy. The choice isn’t either/or, it’s how to blend them in evolving situations. That conclusion is not as inspiring as a simple exhortation to take one path or the other, but it has the advantage of being true.”

  • p41: “Popular books often draw a dichotomy between intuition and analysis – ‘blink’ versus ‘think’ – and pick one or the other as the way to go. I am more of a thinker than a blinker, but blink-think is another false dichotomy. The choice isn’t either/or, it’s how to blend them in evolving situations. That conclusion is not as inspiring as a simple exhortation to take one path or the other, but it has the advantage of being true.”
  • p45 reminds us what’s at stake: “Although bad forecasting rarely leads as obviously to harm as does bad medicine, it steers us subtly toward bad decisions and all that flows from them – including monetary losses, missed opportunities, unnecessary suffering, even war and death.”

Keeping Score

  • p49 mentioned “the unclassifiable Herbert Simon”; I’m not certain if I’d heard of him before, so I looked him up…and wowsa, neat stuff
  • p50 discusses a panel convened in 1984 with the mission of “preventing nuclear war”: “The panel did its due diligence. It invited a range of experts – intelligence analysts, military officers, government officials, arms control experts, and Sovietologists – to discuss the issues. They too were an impressive bunch. Deeply informed, intelligent, articulate. And pretty confident that they knew what was happening and where we were heading”. It reminded me of the distinction between experts and analysts, as articulated in The HEAD Game.

“…But then the train of history hit a curve, and as Karl Marx once quipped, when that happens, the intellectuals fall off.”

  • Later, on p50, when illustrating that the panel got their predictions very wrong: “…But then the train of history hit a curve, and as Karl Marx once quipped, when that happens, the intellectuals fall off.”
  • p56 talks about Sherman Kent’s attempts to designate numerical meanings to forecast language; I wrote about it in The Ambiguity of Language.
  • p64 introduces Brier scores, which can be summed up (by Wikipedia) as “a proper score function that measures the accuracy of probabilistic predictions”; Brier scores are how the forecasters in the Good Judgment Project were assessed for accuracy.
  • I’ve read on many occasions about foxes and hedgehogs – I recall specific examples in The HEAD Game and Good to Great – and now I know the origin, thanks to p69: “Decades ago, the philosopher Isaiah Berlin wrote a much-acclaimed but rarely read essay that compared the styles of thinking of great authors through the ages. To organize his observations, he drew on a scrap of 2,500-year-old Greek poetry attributed to the warrior-poet Archilochus: ‘The fox knows many things but the hedgehog knows but one thing.'”
  • For the last few pages, I’d been thinking about the pile of horseshit that is the Laffer curve, and then *BAM*, Art Laffer gets mentioned on p70
  • Here’s a fun fact about Tetlock’s Expert Political Judgment (EPJ) project, “When hedgehogs in the EPJ research made forecasts on the subjects they knew the most about – their own specialities – their accuracy declined.” In addition to being funny, especially if – like me – you can’t stand talking heads on TV, there’s actually a valid point of caution here: we’re at risk of being blinded by our own supposed expertise on matters with which we’re very familiar…so let’s not take things for granted.
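To make the Brier score mentioned above (p64) concrete, here’s a minimal sketch of my own (an illustration, not code from the book) of the two-category formulation Tetlock uses, where 0.0 is perfect, 0.5 is unwavering 50/50 guessing, and 2.0 is perfectly confident and perfectly wrong:

```python
def brier_score(forecasts, outcomes):
    """Mean Brier score over a set of binary-event forecasts.

    Uses the original two-category formulation (scores range 0.0 to 2.0),
    which is the scale Tetlock describes for the Good Judgment Project.

    forecasts: probabilities assigned to "the event happens"
    outcomes:  1 if the event happened, 0 if it didn't
    """
    total = 0.0
    for p, o in zip(forecasts, outcomes):
        # Squared error on both categories: "happens" and "doesn't happen".
        total += (p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
    return total / len(forecasts)

# A confident, correct forecaster beats a perpetual hedger:
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.04
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.5
```

The squared-error scoring is what makes the metric “proper”: you minimize your expected score by reporting your honest probability, so there’s no advantage to gaming it.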

“The more famous an expert was, the less accurate he was. That’s not because editors, producers, and the public go looking for bad forecasters. They go looking for hedgehogs, who just happen to be bad forecasters. Animated by a Big Idea, hedgehogs tell tight, simple, clear stories that grab and hold audiences.”

  • p72 has another fun fact from the EPJ: “…the EPJ data, which revealed an inverse correlation between fame and accuracy: the more famous an expert was, the less accurate he was.” But why? “That’s not because editors, producers, and the public go looking for bad forecasters. They go looking for hedgehogs, who just happen to be bad forecasters. Animated by a Big Idea, hedgehogs tell tight, simple, clear stories that grab and hold audiences.”
  • p73 talks about the wisdom of crowds, and makes an important point that I think is often missed – I’m not sure I appreciated it until now: sourcing wisdom from crowds works when the little bits of signal align but the noise doesn’t. That is, lots of people contribute little things that are correct, but also things that aren’t. The bits that are correct generally point towards the right answer, while the bits that aren’t point all over and cancel each other out.
  • p80, reminiscent both of a growth mindset and of mental discipline: “Our thinking minds are not immutable. Sometimes they evolve without our awareness of the change. But we can also, with effort, choose to shift gears from one mode to another.”
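That signal-aligns, noise-cancels point from p73 is easy to demonstrate with a toy simulation (my own sketch, not from the book): give each member of a crowd the true value plus independent random error, and the average of their guesses lands far closer to the truth than a typical individual does.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0  # the quantity the crowd is estimating
# Each guess is the shared signal (the true value) plus independent noise.
guesses = [TRUE_VALUE + random.gauss(0, 20) for _ in range(1000)]

crowd_average = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

# The aligned signal survives the averaging; the independent noise cancels.
print(f"crowd average error:      {abs(crowd_average - TRUE_VALUE):.2f}")
print(f"typical individual error: {typical_individual_error:.2f}")
```

Note the load-bearing assumption in the sketch: the errors are independent. If everyone’s noise points the same way (shared biases, groupthink), averaging does nothing to cancel it.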

“‘All models are wrong,’ the statistician George Box observed, ‘but some are useful.'”

  • p80: “No model captures the richness of human nature. Models are supposed to simplify things, which is why even the best are flawed. But they’re necessary. Our minds are full of models. We couldn’t function without them. And we often function pretty well because some of our models are decent approximations of reality. ‘All models are wrong,’ the statistician George Box observed, ‘but some are useful.’ The fox/hedgehog model is a starting point, not the end.”
  • p80, summarizing the most important outcome of the EPJ project: “Forget the dart-throwing chimp punch line. What matters is that EPJ found modest but real foresight, and the critical ingredient was the style of thinking.”

“Forget the dart-throwing chimp punch line. What matters is that EPJ found modest but real foresight, and the critical ingredient was the style of thinking.”

Superforecasters

  • p85 reminded me of The HEAD Game, when, in a section about the intelligence failures that led to the conclusion that Iraq had WMDs, it said, “Post-mortems even revealed that the IC [intelligence community] had never seriously explored the idea that it could be wrong.”
  • p87 makes a crucial distinction: “Absent accuracy metrics, there is no meaningful way to hold intelligence analysts accountable for accuracy. Note the word meaningful in that last sentence. When the director of National Intelligence is dragged into Congress for a blown call, that is accountability for accuracy. It may be ill informed or capricious, and serve no purpose beyond political grandstanding, but it is accountability. By contrast, meaningful accountability requires more than getting upset when something goes awry. It requires systematic tracking of accuracy – for all the reasons laid out earlier.”
  • An endnote on p99 mentions the book, The Halo Effect: . . . and the Eight Other Business Delusions That Deceive Managers; sounds like one I’d enjoy
  • p100 and 101 talk about regression to the mean; beside this section I’ve scrawled, “I really wish everyone understood this concept”. If I ran the public school education system, this concept would be drilled into kids. And yes, it matters in things much more important than sports!
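A toy simulation (mine, not from the book) shows why regression to the mean is unavoidable whenever outcomes mix skill and luck: pick the top performers in round one, and as a group they fall back toward the pack in round two, even though nobody’s skill changed.

```python
import random

random.seed(1)

# Each performer has a fixed skill; each round's score is skill plus luck.
skills = [random.gauss(50, 10) for _ in range(10_000)]
round1 = [s + random.gauss(0, 10) for s in skills]
round2 = [s + random.gauss(0, 10) for s in skills]

# Select the ~top 1% of round-one scores...
cutoff = sorted(round1, reverse=True)[100]
top = [i for i, r in enumerate(round1) if r > cutoff]

avg_r1 = sum(round1[i] for i in top) / len(top)
avg_r2 = sum(round2[i] for i in top) / len(top)

# ...and watch their round-two average slide back toward the overall mean,
# because their round-one luck doesn't repeat.
print(f"top performers, round 1: {avg_r1:.1f}")
print(f"same people,    round 2: {avg_r2:.1f}")
```

The group stays above average (skill is real), but the gap shrinks – which is exactly why a stellar streak, in forecasting or anything else, should be expected to cool off without any story needed to explain it.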

Supersmart?

“Coming up with an outside view, an inside view, and a synthesis of the two isn’t the end. It’s a good beginning. Superforecasters constantly look for other views they can synthesize into their own.”

  • p110 quotes Robert McNamara, speaking about the Vietnam war: “‘The foundations of our decision making were gravely flawed,’ McNamara wrote in his autobiography. ‘We failed to analyze our assumptions critically, then or later.'” Should’ve gone for the charity proposition.
  • p118 talks about the importance of choosing/finding/researching a baseline rate, and then adjusting, when making estimations. Doing so is called gathering the “outside view”, which you then adjust with your “inside view”.
  • p120 talks about how to create your inside view: “It is targeted and purposeful: it is an investigation, not an amble.”
  • p123: “Coming up with an outside view, an inside view, and a synthesis of the two isn’t the end. It’s a good beginning. Superforecasters constantly look for other views they can synthesize into their own.”
  • p123, on the importance of detaching and self-scrutinizing: “Researchers have found that merely asking people to assume their initial judgment is wrong, to seriously consider why that might be, and then make another judgment, produces a second estimate which, when combined with the first, improves accuracy almost as much as getting a second estimate from another person.”
  • p124: “The more sophisticated forecaster knows about confirmation bias and will seek out evidence that cuts both ways.”

“The more sophisticated forecaster knows about confirmation bias and will seek out evidence that cuts both ways.”

  • p126: “A brilliant puzzle solver may have the raw material for forecasting, but if he doesn’t also have an appetite for questioning basic, emotionally charged beliefs he will often be at a disadvantage relative to a less intelligent person who has a greater capacity for self-critical thinking. It’s not the raw crunching power you have that matters most. It’s what you do with it.”
  • p127, just to beat this point to a bloody pulp: “For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded. It would be facile to reduce superforecasting to a bumper-sticker slogan, but if I had to, that would be it.”

Superquants?

“A smart executive will not expect universal agreement, and will treat its appearance as a warning flag that groupthink has taken hold. An array of judgments is welcome proof that the people around the table are actually thinking for themselves and offering their unique perspectives.”

  • p131: “A smart executive will not expect universal agreement, and will treat its appearance as a warning flag that groupthink has taken hold. An array of judgments is welcome proof that the people around the table are actually thinking for themselves and offering their unique perspectives.”
  • This single line from p135 incensed me enough to write The Ambiguity of…Numbers?: “But as researchers have shown, people who use ‘50%’ or ‘fifty-fifty’ often do not mean it literally.” … below which I have scrawled, “They should use different words, then!”
  • Haha, just like Smooth Jimmy Apollo, p139: “If a forecaster says there is a 74% chance the Republicans will win control of the Senate in an upcoming election…do not conclude the forecast was wrong if the party doesn’t take the Senate because ‘a 74 percent chance it will’ also means ‘a 26 percent chance it won’t’.”
  • p140 talks about how people have a tendency to conclude that anything with a high probability is a certainty, and only appreciate that something might not happen when the probabilities get into the 60/40 range.

Chance and fate do not co-exist, and I believe in chance.

  • On p149, I’ve scrawled, “Chance and fate do not co-exist, and I believe in chance.”
  • p150 is reminiscent of The Improbability Principle, and further illustrates how poor a grasp of probability most people have, and how quick we are to assign unlikely events that come to fruition as an act of fate
  • p152 says, “So finding meaning in events is positively correlated with well-being but negatively correlated with foresight. That sets up a depressing possibility: Is misery the price of accuracy?” I’ve gotta say, I don’t feel this way. I feel empowered knowing that there is no grand plan, and I can hold sway over my own life. To quote the great philosopher Mortimer: “Nobody exists on purpose. Nobody belongs anywhere. Everybody’s gonna die. Come watch TV.”

Supernewsjunkies?

  • p160 talks about “belief persistence”, also referred to as “belief perseverance” (reminiscent of the judicial stories in Black Box Thinking): “People can be astonishingly intransigent – and capable of rationalizing like crazy to avoid acknowledging new information that upsets their settled beliefs.”
  • p163, ego is the enemy! “This suggests that superforecasters may have a surprising advantage: they’re not experts or professionals, so they have little ego invested in each forecast.”
  • p179…but damnit, I want to learn by reading books! “The knowledge required to ride a bicycle can’t be fully captured in words and conveyed to others. We need ‘tacit knowledge,’ the sort we only get from bruising experience.”
  • p180 continues this antagonizing point: “It should be equally obvious that learning to forecast requires trying to forecast. Reading books on forecasting is no substitute for the experience of the real thing.”

Perpetual Beta

“Research shows that judgment calibrated in one context transfers poorly, if at all, to another.”

  • p182, as all marketers, fortune tellers, and prognosticators know: “Vague language is elastic language.”
  • p185 makes an interesting point about the lack of transferability of judgment from one context to another: “Research shows that judgment calibrated in one context transfers poorly, if at all, to another. So if you were thinking of becoming a better political or business forecaster by playing bridge, forget it. To get better at a certain type of forecasting, that is the type of forecasting you must do – over and over again, with good feedback telling you how your training is going, and a cheerful willingness to say, ‘Wow, I got that one wrong. I’d better think about why.'”
  • p186 talks about the usefulness of making notes that reveal your thinking around decisions and forecasts, something I’ve considered – but admittedly never done – in a business context. I shall endeavour to do so! A business journal, if you will.

Superteams

  • p196, in a really funny section about the very serious Bay of Pigs disaster and its surrounding circumstances: “Groups that get along too well don’t question assumptions or confront uncomfortable facts. So everyone agrees, which is pleasant, and the fact that everyone agrees is tacitly taken to be proof the group is on the right track. We can’t all be wrong, can we? So if a secret American plan to invade Cuba without apparent American involvement happens to be published on the front page of the New York Times, the plan can still go ahead – just make sure there are no American soldiers on the beach and deny American involvement. The world will believe it. And if that sounds implausible . . . well, not to worry, no one in the group has objected, which means everyone thinks it’s perfectly reasonable, so it must be.”
  • p197: “How the Kennedy White House changed its decision-making culture [between the Bay of Pigs and the Cuban Missile Crisis] for the better is a must-read for students of management and public policy because it captures the dual-edged nature of working in groups. Teams can cause terrible mistakes. They can also sharpen judgment and accomplish together what cannot be done alone. Managers tend to focus on the negative or the positive but they need to see both.”
  • p198 has a nice juxtaposition: “Maybe one person is a loudmouth who dominates the discussion, or a bully, or a superficially impressive talker, or someone with credentials that cow others into line. In so many ways, a group can get people to abandon independent judgment and buy into errors… But loss of independence isn’t inevitable in a group, as JFK’s team showed during the Cuban missile crisis. If forecasters can keep questioning themselves and their teammates, and welcome vigorous debate, the group can become more than the sum of its parts.”
  • p207 briefly talks about the importance of a shared purpose to bring out the best in teams

The Leader’s Dilemma

“But look at the style of thinking that produces superforecasting and consider how it squares with what leaders must deliver.”

  • p212 outlines the challenge by showing characteristics of leaders and contrasting with everything that’s been said so far about the characteristics that lead to superforecasts: “Leaders must be reasonably confident, and instill confidence in those they lead, because nothing can be accomplished without the belief that it can be. Decisiveness is another essential attribute. Leaders can’t ruminate endlessly. They need to size up the situation, make a decision, and move on. And leaders must deliver a vision – the goal that everyone strives together to achieve. But look at the style of thinking that produces superforecasting and consider how it squares with what leaders must deliver. How can leaders be confident, and inspire confidence, if they see nothing as certain? How can they be decisive and avoid ‘analysis paralysis’ if their thinking is so slow, complex, and self-critical? How can they act with relentless determination if they readily adjust their thinking in light of new information or even conclude they were wrong? And underlying superforecasting is a spirit of humility – a sense that the complexity of reality is staggering, our ability to comprehend limited, and mistakes inevitable.”
  • p213, quoting Helmuth von Moltke: “No plan of operations extends with certainty beyond the first encounter with the enemy’s main strength.” In contemporary literature, one Mike Tyson paraphrased this piece of wisdom as, “Everyone has a plan until they get punched in the mouth”.

“A leader must possess an unwavering determination to overcome obstacles and accomplish his goals – while remaining open to the possibility that he may have to throw out the plan and try something else.”

  • p216: “So a leader must possess an unwavering determination to overcome obstacles and accomplish his goals – while remaining open to the possibility that he may have to throw out the plan and try something else.”
  • p217, while discussing the German military manual: “Auftragstaktik blended strategic coherence and decentralized decision making with a simple principle: commanders were to tell subordinates what their goal is but not how to achieve it.”

“The humility required for good judgment is not self-doubt – the sense that you are untalented, unintelligent, or unworthy. It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes.”

  • p228, awfully reminiscent of Dunning-Kruger: “The humility required for good judgment is not self-doubt – the sense that you are untalented, unintelligent, or unworthy. It is intellectual humility. It is a recognition that reality is profoundly complex, that seeing things clearly is a constant struggle, when it can be done at all, and that human judgment must therefore be riddled with mistakes. This is true for fools and geniuses alike. So it’s quite possible to think highly of yourself and be intellectually humble. In fact, this combination can be wonderfully fruitful. Intellectual humility compels the careful reflection necessary for good judgment; confidence in one’s abilities inspires determined action.”
  • p229, I’m sure we’ve all seen this in action, whether in sports, business, politics, life’s little conflicts, etc.: “Forecasters who can’t cope with the dissonance risk making the most serious possible forecasting error in a conflict: underestimating your opponent.”

Are They Really So Super?

  • p233 presents Michael Flynn in an unflattering light (leapt to conclusions, didn’t check assumptions, etc.). And look what’s happened since!
  • p234 talks about adversarial collaboration, which reminds me that I really need to read Team of Rivals
  • p237-242 talk about black swan events, both strict (i.e., genuinely unpredictable/unforecastable) and loose (i.e., the improbable)
  • p243 led to me writing The Problem of Prognostication; I can’t express how much I loved this example
  • p244: “Wells hinted at a better way in his closing comment. If you have to plan for a future beyond the forecasting horizon, plan for surprise. That means, as Danzig advises, planning for adaptability and resilience.” This, of course, is reminiscent of security advisors like Bruce Schneier who advocate for investing in emergency response capabilities (in general), rather than spending gobs on sensationalized movie-plot threats. I found myself in an increasingly heated discussion a few months ago with an Americanized friend, and we needed a third companion to tactfully interrupt the conversation and move it on to something less incendiary.

  • p244: “Taleb has taken this argument further and called for critical systems – like international banking and nuclear weapons – to be made ‘antifragile,’ meaning they are not only resilient to shocks but strengthened by them. In principle, I agree. But a point often overlooked is that preparing for surprises – whether we are shooting for resilience or antifragility – is costly. We have to set priorities, which puts us back in the forecasting business.”
  • p245: “Probability judgments should be explicit so we can consider whether they are as accurate as they can be. And if they are nothing but a guess, because that’s the best we can do, we should say so. Knowing what we don’t know is better than thinking we know what we don’t.”

What’s Next?

  • p254 introduced me to Vladimir Lenin’s “kto, kogo?”, which literally means “who, whom” and was his shorthand for “Who does what to whom?”
  • p259: oh hey, it turns out Tetlock is Canadian!
  • p260, on a minor note: throughout the book, Tetlock has investigated the true origins of quotes, and I really appreciate that attention to detail and accuracy. He tracked down the real attribution of a famous saying often assigned to John Maynard Keynes (“When the facts change, I change my mind. What do you do, sir?”), and on this page he digs into one usually attributed to Einstein (“Not everything that counts can be counted, and not everything that can be counted counts.”)
  • p260: “Numbers must be constantly scrutinized and improved, which can be an unnerving process because it is unending. Progressive improvement is attainable. Perfection is not.” An endnote associated with this paragraph adds, “The solution is not to abandon metrics. It is to resist overinterpreting them.”
  • Interestingly, p265 uses the term “drivers”, just as The HEAD Game does
  • p267: I just like how he phrased this philosophy, and it serves as sound advice: “I am agnostic on issues outside my field.”
  • p269, referring back to adversarial collaboration: “The catch is that the Kahneman-Klein collaboration presumed good faith. Each side wanted to be right but they wanted the truth more.”

Appendix: Ten Commandments for Aspiring Superforecasters

  • p278, from the section “Triage”: “Bear in mind the two basic errors it is possible to make here. We could fail to try to predict the potentially predictable or we could waste our time trying to predict the unpredictable.”
  • p284: “Master the fine arts of team management, especially perspective taking (understanding the arguments of the other side so well that you can reproduce them to the other side’s satisfaction), precision questioning (helping others to clarify their arguments so they are not misunderstood), and constructive confrontation (learning to disagree without being disagreeable). Wise leaders know how fine the line can be between a helpful suggestion and micromanagerial meddling or between a rigid group and a decisive one or between a scatterbrained group and an open-minded one.”
  • p285, which really strikes a chord with me (see Adaptability: A Critical Quality of Great Leaders): “‘It is impossible to lay down binding rules,’ Helmuth von Moltke warned, ‘because two cases will never be exactly the same.'”

Lee Brooks is a technology marketer based in the high-tech hub of Waterloo, Ontario, Canada.

