Readings (Summary Notes) NEW!!

The following are my personal summary notes from texts I have read. I am beginning with Dan Gardner’s Future Babble: Why Expert Predictions Fail–and Why We Believe Them Anyway. McClelland & Stewart Ltd., 2010. (ISBN 978-0-7710-3513-5). Next on my list is Joseph Tainter’s The Collapse of Complex Societies. Cambridge University Press, 1988. (ISBN 978-0-521-38673-9). Not sure how often this page will be updated but I will keep a running list of the books here at the top.

Gardner, D.
Future Babble: Why Expert Predictions Fail–and Why We Believe Them Anyway. McClelland & Stewart Ltd., 2010. (ISBN 978-0-7710-3513-5)

-experts are awful at predicting the future but psychology compels us to take them seriously
-the book aims “to better understand the human desire to know what will happen, why that desire will never be satisfied, and how we can better prepare ourselves for the unknowable future” (p.X)
-only time will elucidate whether a prediction will be accurate or not

Chapter 1: Introduction
-Gardner opens with quotes and arguments by various ‘experts’ at the start of the 20th century extolling the peace and prosperity of the time and their view that it was sure to last indefinitely
-optimism reigned right up until WW1 began; sentiment then shifted quickly, and faith in the future and progress was shattered
-the War to End All Wars was followed by The Great Depression and then another world war
-experts began predicting global chaos and a breakdown of all societies
-after WW2, things shifted in the opposite direction again with a lot of optimism
-there appears to be a universal desire to know what the future holds and thus our current deference to experts when trying to gain insight
-history, however, is littered with the failed predictions of experts
-some are prophecies of disaster (e.g., Paul Ehrlich’s The Population Bomb with its 1968 prediction of famines within a decade; President Carter’s call to transition from oil as production would soon fail to meet demand and prices would soar)
-prognostications about amazing technologies to come have also been common and wrong (e.g., stock market euphoria forever–Irving Fisher’s “permanently high plateau” or Ben Bernanke’s “subprime mortgage impact is contained”)
-pessimists and optimists have been equally wrong “so the inaccuracy of expert predictions isn’t limited to pessimists or optimists, liberals or conservatives. It’s also not about a few deluded individuals. Over and over in the history of predictions, it’s not one expert who tries and fails to predict the future. It’s whole legions of experts.” (p.11)
-these examples are not of lone individuals but usually a consensus of experts
-we are constantly provided predictions by experts about the economy, climate, politics, energy, technology, etc.
-economic predictions, in particular, have an almost unblemished record of inaccuracy, especially when it comes to forecasting recessions
-while experts might know their particular fields of knowledge quite well, their ability to predict the future of it is abysmal
-yet despite the constant failure of expert predictions, most people continue to listen to and believe them
-Gardner’s book looks at why expert predictions fail and why we tend to believe them anyways
-expert predictions fail because the world is too complex to predict and our brains are prone to cognitive mistakes making efforts at predicting the unpredictable impossible
-humans tend to believe expert predictions so as to lessen uncertainty; we want to be able to know what the future holds so we see patterns where perhaps none exist or we treat random events as meaningful
-we simplify complexity to try and understand our world and what might happen next
-we often look to experts in particular fields to help us; their appearance in media that projects certainty and conclusiveness convince us
-we believe because we want to believe
-it is particularly during times of uncertainty that we seek expert certainty, but Gardner reminds us we must “be skeptical when experts claim to know what the future holds” (p.16) and realise “there are no crystal balls, and no style of thinking, no technique, no model will ever eliminate uncertainty. The future will forever be shrouded in darkness. Only if we accept and embrace the fundamental fact can we hope to be prepared for the inevitable surprises that lie ahead.” (p.17)
-few experts will admit their predictions were wrong
-some might admit that the details were off but usually reply ‘I was almost right’; or claim ‘some unforeseeable event blindsided the prediction’; or that the prediction was a self-negating prophecy, awareness of which caused others to act in ways that prevented it; or suggest we should wait and see, especially if the time frame provided is vague; or insist they never made assertions of certainty, only possibility (implicit in the statement that something could happen is that it could also ‘not’ happen)
-depending upon one’s perspective/interpretation, differences of opinion regarding predictive accuracy are common
-Gardner’s thesis about the fallibility of expert predictions could be argued to show that only some fail, not all
-in fact, some predictions do turn out as prophesied
-so what is the rate of success/failure?
-spreading a net to try and capture all such predictions is near impossible so perhaps a narrower view of such a rate for individuals might suffice
-media attempts to do this have concluded that predictions are rarely accurate, but these attempts lacked scientific/experimental rigour
The Experiment
-a rigorous study would include: experts from a wide range of fields with varied affiliations, political biases, backgrounds; answers to specific questions (i.e., true or false) with their likelihood (i.e., precise percentage); a large number of predictions from each expert so as to allow statistical analysis and a confidence/discrimination measure; questions involving various time frames (e.g., 1 to 20 years)
-Philip Tetlock carried out such an experiment
-as a young appointee to a National Research Council committee searching for a way to reduce tensions during the height of the Cold War (1984), Tetlock listened and noted a variety of expert opinions
-when events unfolded unlike predictions, he noted none of the experts shifted their understanding/narrative but simply incorporated them into their worldview/interpretations–“The experts had their stories and they were sticking to them.” (p.25)
-he subsequently designed an experiment as described above with 27,450 predictions of the future
-Tetlock argues the results showed expert opinions were: ‘less accurate than a dart-throwing chimpanzee’ with respect to calibration/accuracy; they were slightly better in their discrimination/confidence measurement, but still terrible
-basically, the expert guesses were about as good as random ones
-Tetlock concluded that serious skepticism of expert prediction is warranted and that there is a huge range among experts, with some being ‘borderline delusional’ and others ‘surprisingly nuanced and well-calibrated’
-political leanings, optimism/pessimism, access to data, and education level had little positive impact
-it seemed how individuals thought was the difference maker
-those uncomfortable with uncertainty and complexity tended to rely upon a core theoretical theme to guide their predictions, and felt more confident that they were accurate
-alternatively, the better-performing experts had no such template; they used multiple sources for data/ideas, questioned themselves, acknowledged mistakes and used them to adjust their thinking, and, most importantly, viewed the world as an uncertain, complex entity to the point that they doubted the ability of anyone to predict the future, tending to be much less confident than others in their predictions
-“Tetlock’s data couldn’t be more clear. On both calibration and discrimination, complex and cautious thinking trounced simple and confident…[those] who are ideologically extreme are even worse forecasters than others of their kind…[with] predictions involving their particular specialty, their accuracy declined. And it got worse still when their predictions were for the long term.” (p.27)
-for confident experts who rely on a theme (the ones the media usually uses for expert opinion), their long-term predictions are almost certainly wrong
-in fact, one interesting conclusion from Tetlock’s experiment was that the “bigger the media profile of the expert, the less accurate his predictions are.” (p.28)
-while Tetlock believed predictions could be made better via self-critical and thoughtful reflection, Gardner disagrees arguing “No matter how clever we are, no matter how sophisticated our thinking, the brain we use to make predictions is flawed and the world is fundamentally unpredictable.” (p.28)

Chapter 2: The Unpredictable World
-in April 1977, US President Jimmy Carter addressed the nation with the grim warning that the US and world were on the precipice of an energy disaster, with oil and gas demand exceeding supply
-Carter’s warning was based on his advisors’ beliefs, which drew upon expert analyses
-Ulf Lantzke, executive director of the International Energy Agency, warned of the likelihood of another depression, echoing the consensus view of experts–virtually all of them expected a storm, soon
-this is a great example of expert prediction fallibility
-the same warnings were issued for coal that experts believed could not be replaced
-predictions of oil’s demise began shortly after its use became widespread in the early 1900s, and continued for most of the next century
-but overly optimistic views were also present at times, with President Nixon’s Task Force on Oil Imports declaring the US would be self-sufficient in oil for at least a decade and prices would remain low–only for both predictions to turn 180 degrees within just a couple of years
-oil analysts who attempt to predict oil prices and demand have had little, if any, success
-most people believed the ‘shortages’ were contrived by the oil industry to ‘market’ price increases
-Gardner argues that prices are driven primarily by supply and demand so ‘experts’ should be able to do a much better job with their predictions
[Thoughts: several factors are not considered here such as the role of debt/money-printing, geopolitics, diminishing returns, etc.]
About That Funny Old World
-Newton, in his Principia, introduced laws of motion that allowed planetary movement to be calculated via mathematical equations, resulting in accurate predictions of their future location
-this shifted scientific thinking towards the belief that all matters could be analysed and predicted according to laws and mathematical equations
-the universe and its workings were viewed as a clock whose mechanisms could be investigated, their movements determined, and future destinations predicted
-the world, however, is much more complex than ever imagined and our ability to predict the future has not improved significantly
-as physicist David Gross stated “the most important product of knowledge is ignorance” (p.36); in other words, the more we discover, the more we find out what we don’t know
-one dilemma we encounter in our efforts is that even minute errors in our understanding can lead to enormous mistakes in our conclusions/predictions
-MIT researcher Edward Lorenz demonstrated this predicament with a modelling program he stopped midway and then restarted with rounded numbers (three decimal places instead of six), which produced wildly different results
-this discovery was termed chaos and suggested systems subject to change were impossible to predict (Butterfly Effect)
-chaos theory doesn’t negate predictions entirely, it just means the further in time from the present the less precise a model is and the more room for error
-thus a weather forecast may be far more accurate for 12-24 hours and far less so for 5-7 days away
-simple, linear systems (such as planetary motions) can be predicted using mathematical equations
-non-linear systems, however, are far more complex and cannot be reduced to equations
-besides chaos, feedback impacts non-linear systems either increasing/enhancing or decreasing/dampening some variable
-multiple feedback loops can make a system even more unpredictable; and if feedback loops are acting in opposition, a system may appear stable until it suddenly destabilises and experiences explosive change
-while some things are predictable, many are not and never will be
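Lorenz's rounding accident described above is easy to reproduce with any non-linear iterated system. The sketch below uses the logistic map as a stand-in for his weather model (an illustrative assumption, not Lorenz's actual program): two runs whose starting values differ only from the fourth decimal place onward soon bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions, in the spirit of
# Lorenz's rounding accident. The logistic map x -> r*x*(1-x) is
# a standard toy chaotic system (not Lorenz's actual model).

def logistic_run(x0, r=3.9, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

full = logistic_run(0.123456)               # "six decimal places"
rounded = logistic_run(round(0.123456, 3))  # "three decimal places"

# The starting gap is under 0.0005...
print(abs(full[0] - rounded[0]))
# ...but the two trajectories soon differ by a large fraction of
# their whole range: the rounded re-run is useless as a
# long-range prediction of the original.
print(max(abs(a - b) for a, b in zip(full, rounded)))
```

This is also why the weather-forecast example holds: each iteration amplifies the initial error, so a prediction a step or two ahead remains close while one many steps ahead does not.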
Billiard Balls With Eyes
-physical systems can be very complex, but systems involving humans are even more so
-“With natural science increasingly aware of the limits of prediction, and with prediction even more difficult when people are involved, it would seem obvious that social science–the study of people–would follow the lead of natural science and accept that much of what we would like to predict will forever be unpredictable. But that hasn’t happened.” (p.42)
-in fact, as the natural sciences were abandoning ‘predictability’, the social sciences more firmly embraced it and continued issuing long-term predictions regardless of repeated failures
-for example, research by Gaddis (1992) demonstrated that forecasting about geopolitical events after WW2 failed to see the sudden end of the Cold War with the fall of the Soviet Union
-not only are macro level human affairs impossible to predict but so is the human brain that conceives such futures
-with tens of billions of neurons whose firing is unpredictable, the combinations and permutations of neural connections are almost unlimited
-Karl Popper (1930s) argued our scientific knowledge limits our ability to predict and since it is not known how that knowledge might change, our predictions are limited even further
Monkeys and Chaos
-when a man intervened as a monkey attacked his dog, the repercussions were far-reaching (another monkey bit him, the bite became infected and the man died; his death set off a chain of events leading to a war in which some 250,000 people died)
-while a historian may be able to look back in time and trace the events that shaped what followed, such accidents and seemingly trivial events make using the present to predict the future impossible
The Demography of Uncertainty
-while most can admit the future may surprise us in technology or politics, we still tend to believe human affairs are predictable
-demographics, for example, are thought to give solid insight into the future and used to make expert predictions all the time
-Gardner counters, however, that while demographic changes tend to be slower than other aspects of human affairs, nothing is certain
-wars can erupt, fertility rates can suddenly shift, and predictions based on supposedly sound demographic trends can and do often miss widely
-fertility rates in the West, for example, started declining in the mid- to late-19th century, prompting predictions of the imminent demise of society, only to jump sharply after WW2, accompanied by predictions in the late 1950s of disaster due to a population explosion, before once again reversing course in the mid-1960s
-trends change constantly, requiring projections to be revised again and again
-even ‘facts’ about the past shift, with world population estimates revised periodically to reflect new information (e.g., the 1951 world population total was revised 17 times between 1952 and 1976)
-demographic predictions seem to be relatively stable for a generation (20-25 years) but not beyond that
-variables that can influence trends include: technology, fertility rates, disease, sociopolitics, migration
-Malthus’s 1798 forecast of population outpacing available resources, based upon careful observation and sound logic, never occurred in the time frame he proposed (the nineteenth century)
-advances in science and technology allowed the growing population to avoid disaster
Whither Oil?
-predicting oil prices, however, should be relatively easy since price is the product of just two variables: supply and demand
-supply should be quite easy to determine using a linear equation
-no such formula exists, however, as supply is not so straightforward
-oil production in the Niger Delta region demonstrates why
-oil pipelines are routinely sabotaged, oil workers kidnapped, and drilling platforms vandalised
-such behaviour impacts supply and thus prices given the thin margin between supply and demand
-and of course no one can predict unrest in major oil-producing regions (e.g., Iran’s 1979 revolution) and politics is only a single variable influencing oil supply
-technology is another variable, often dependent upon scientific knowledge
-then there is the even more complex demand side with influences of technology and sociocultural aspects (e.g., increase in suburban housing, high gas/oil taxes, etc.)
But What About the Predictable Peak?
-one important prediction that does seem to be playing out is Marion King Hubbert’s regarding peak production in the U.S.
-other experts scoffed at his 1957 prediction that U.S. oil production would peak in 10-15 years, until it did, around the time Hubbert suggested
-while few dispute the fact of world oil peak, it is the timing that remains elusive
-knowing this timing seems critical to economies that depend greatly on the energy provided by oil
-many experts fear we have reached or are very near this peak
-several assumptions, however, underpin these predictions such as ever-expanding demand, little technological innovation, and accurate reserve estimates
-Hubbert’s prediction about global production (a 1995 peak) has not materialised
[thoughts: this analysis overlooks the importance of Energy-Return-On-Energy-Invested completely; the problem of diminishing returns is ignored; the role of debt to sustain production is not considered]
Asking the Right Question
-perhaps asking why we can’t predict oil prices is the wrong question, and asking why we think we can is a better one
-since oil began being traded as a commodity, oil price forecasting has been attempted, never successfully, yet it continues despite repeated failure
-rather than question the entire idea of forecasting especially long-term, inaccurate predictions are simply abandoned and new ones made
-prices have dropped then jumped then dropped again, almost always in opposition to expert forecasts (January 2004: $24/barrel, more than doubling by June to $54; it continued climbing to $147 in July 2008, then dropped from September until it reached $33 by December)
-at the time of writing (early 2010) oil was just over $80/barrel with some predicting it would rise to $200 by 2012
-Gardner insists no one truly knows where the price is headed and only time will tell

Chapter 3: In the Minds of Experts
-in his A Study of History, Arnold Toynbee argued he had discovered a civilisational pattern of: genesis, growth, breakdown, and disintegration
-while some called him a genius for this, others feared he was deceiving himself
-in the summer of 1920 he read Oswald Spengler’s The Decline of the West that argued all civilisations rise and then fall, and the West was no different
-Toynbee was critical of Spengler’s thesis as it lacked evidence and merely made assertions
-Toynbee aimed to use empirical methods to assess Spengler’s argument
-Toynbee went on to suggest “all civilizations follow a pattern: Birth was followed by differentiation, expansion, breakdown, empire, universal religion, and finally interregnum” (p.60)
-he studied numerous histories and found the pattern in each
-other historians, however, were critical suggesting Toynbee omitted counterfactual evidence, misinterpreted evidence, and even created data where none existed
-few of his peers accepted his theory arguing his work was not empirical but consisted of filtering facts to fit his thesis
-others contended there were no universal patterns and that unique events define history, not generalisations
-despite fellow historians panning the work, media popularised it and Toynbee and his theory became all the rage
-Toynbee used this popularity to voice his views on the future and what humanity should do to avoid its inevitable decline
-in an age of nuclear weapons, Toynbee insisted there were only two choices: nuclear war or a universal state
-he argued for: international agencies with unchecked power; a world government; an invincible military in few hands (probably American); a system that, while democratic in form, would give the public no real influence on a government that regulated all aspects of life; a turn from technology to spirituality/religion
-late in his life he continued to insist on the need for a universal state, even one led by an authoritarian tyrant; national sovereignty, overpopulation, and nuclear weapons required it–it was either political unification or mass suicide
-Gardner contends that his visions failed to materialise and we need to explore why
Enter the Kluge
-Gardner argues the problem lies in the human brain and the manner in which it has evolved
-since proto-humans diverged from other primates 5-7 million years ago, the human brain has experienced some significant changes due to evolutionary pressures (beneficial mutations that provided survival/reproductive success proliferated)
-mutations that spread may be suboptimal but can still become the ‘standard’, and other faulty but still successful traits are built on top of them (e.g., spines that allow bipedalism but are prone to breakdown; wisdom teeth that have little purpose but can lead to problems; depth-perceiving vision with a blind spot)
-evolutionary changes also tend to be beneficial to a particular environment but disadvantageous in another (e.g., pale skin in higher latitudes)
-our brains have evolved for very different challenges and environments than they encounter today
-Gardner outlines the two different decision-making systems humans appear to have: a conscious one that works relatively slowly using ‘reason’ to reach a decision, and an unconscious system that delivers decisions rather instantaneously and tends to dominate decision-making and influence the conscious mind
Seeing Things
-our brains do not like randomness and we like to believe we have control of certain things, or the ability to predict outcomes
-“This illusion is a key reason that experts routinely make the mistake of seeing random hits as proof of predictive ability” (p.77)
-an understanding of randomness bestowed no evolutionary advantage, but an ability to spot patterns and causal connections certainly did, and so was selected for
-sometimes false connections are made and unrelated events are believed to be connected; these types of mistakes are not usually fatal, but failing to see a pattern that does exist can be, which has resulted in us overlooking randomness and seeing patterns everywhere
-research with ‘split-brain’ patients (the connection between the right and left hemispheres severed) suggests the left hemisphere is home to an ‘interpreter’ that attempts to make sense of and explain perceptions, emotions, etc.; it is always attempting to find order and reason in what the body senses/perceives, even when we encounter things that aren’t sensible or orderly
-as a result, we create stories/narratives to help make our world appear sensible and orderly
-this usually helps us but can also be misleading
-having more data may convince us that our story is closer to reality, but usually it has the opposite effect, causing us to see things that aren’t there
-“‘Data mining’ is a big problem for precisely this reason: Statisticians know that with plenty of numbers and a powerful computer, statistical correlations can always be found. These correlations will often be meaningless but if the human capacity for inventing explanatory stories is not restrained by constant critical scrutiny, they won’t appear meaningless. They will look like hard evidence of a compelling hypothesis” (p.81)
-this trap is especially problematic for ‘experts’ who have access to lots of facts/data, which can help them find order that might not exist and create convincing stories that are false
-this is what tripped up Toynbee who believed the West was following a pattern that had also impacted Ancient Greece and Rome; data that suggested otherwise was explained away to maintain order according to the perceived pattern
Always Confident, Always Right
-research looking at calibration (alignment of self-confidence rating with response accuracy) finds we are under-confident with easier questions and overconfident with more difficult ones
-having more information/data actually exacerbates the problem
-this overconfidence seems related to optimism bias, a belief that, despite evidence to the contrary, one’s own risk of bad outcomes is lower than others’ (e.g., smokers believing their risk of lung cancer is lower than most people’s)
-the evolutionary advantage of this appears to be that it encourages action and resiliency in the face of setbacks
-and then there’s confirmation bias that influences our thinking once we’ve formed a belief/opinion, regardless of its accuracy or reflection of reality
-we dismiss data/evidence that goes against our belief and seek out that which supports it
-we become hypercritical of information that challenges us
-experts are no less prone to this bias than others, as studies have demonstrated (e.g., peer reviewers showed biased support for papers matching their personal views, judging primarily on conclusions rather than on the soundness of methods, as they should have)
-we search for and collect evidence to support our views, dismissing or ignoring that which doesn’t
-and if the information cannot be forgotten or swept away, we transform it to fit into our schema/worldview
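The calibration measure Gardner describes (does your stated confidence match your actual hit rate?) can be sketched in a few lines of Python. The prediction data below are invented for illustration; they are not Tetlock's.

```python
# Calibration check: group predictions by stated confidence and
# compare each group's confidence with its actual hit rate.
# A well-calibrated forecaster who says "90%" is right roughly
# 90% of the time; an overconfident one is right far less often.
from collections import defaultdict

# (stated confidence, whether the prediction came true) -- invented data
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True),
    (0.5, True), (0.5, False),
]

def calibration_table(preds):
    """Return {stated confidence: actual hit rate}."""
    buckets = defaultdict(list)
    for conf, hit in preds:
        buckets[conf].append(hit)
    # sum(hits) counts True values, so this is hits / total per bucket
    return {conf: sum(hits) / len(hits) for conf, hits in buckets.items()}

for conf, rate in sorted(calibration_table(predictions).items()):
    print(f"said {conf:.0%}, right {rate:.0%}")
```

On this toy data the forecaster is roughly calibrated at the 50-60% levels but right only half the time when claiming 90% confidence, the overconfident hedgehog pattern Tetlock reports.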
Better a Fox Than a Hedgehog
-while these biases and tendencies could cause us to dismiss all expert opinions, the story is not so simple
-some ‘experts’ perform better than others: foxes tend to be far better than hedgehogs
-foxes are those who tend to acknowledge complexity and uncertainty, drawing cautious conclusions and admitting they could be wrong
-hedgehogs, on the other hand, seek simple, certain answers usually via One Big Idea; they see patterns in their massive knowledge base, rationalising away contradictory evidence or massaging it to fit their belief system; they are confident and declare certainty
-the cyclical theories of history fall into this latter category as espoused by Hegel, Spengler, Marx, and Toynbee
-history is complex and events unique without a simplistic pattern rolling over and over through time

Chapter 4: The Experts Agree: Expect More of the Same
-in the 1980s and early 1990s, expert after expert extolled the virtues of the Japanese economy versus the US and how it would soon supplant the US as the world’s top economic engine
-experts all agreed: the future was bleak for the US but great for Japan
-but then came 1993 with a collapse of Japanese real estate, banks, and financial markets
-the US, on the other hand, experienced an economic boom (of course, after this the US witnessed both the dot com collapse of 2001 and Great Financial Crisis of 2008)
-experts appear to have fallen victim to the status quo bias where they view tomorrow much like today, projecting current trends into the future
-typically this is true but changes do occur and the further we project into the future the greater the likelihood that a shift in the trend takes place
-long-term predictions of virtually every expert follow this path
-predictions draw on the information and thoughts of the time they are made and tend to reveal more about their own temporal context than the future they project
-in fact, most critical factors/variables that impact the future (e.g., internet, atomic bombs/power) are left out of expert predictions; they are overlooked or not even imagined
-extrapolating the past into the future is not a sound strategy for forecasting; it is not accurate but it is safe
Pick a Number
-research by Kahneman and Tversky demonstrated that when people are asked a question that requires a numerical response, they latch onto an anchor number–a number readily available to them–and then adjust their choice up or down according to whatever they believe reasonable (high anchors skew responses higher and low ones lower)
-experts do the same thing when making forecasts that include numbers and use the current relevant number as their anchor–a built-in, status quo bias
Think of an Example
-other studies by Kahneman and Tversky demonstrated the use of an availability heuristic when trying to estimate how common something is or how likely something is to occur in the future; we try to think of an example and tend to think of those recently in our thoughts or are dramatic
-these heuristic devices are not conscious processes but happen automatically
-they are the result of evolutionary adaptations that bestowed advantages in a very different time and environment
-when something significant happens (e.g., 2001 terrorist attack) we expect more of the same to happen as we have a relevant and fresh experience (high risk); if such an event has not occurred for a long time, we expect it won’t occur (low risk)
Everyone Knows That!
-people make judgements based not only on personal knowledge and unconscious processes, but as social animals based upon the views and opinions of those around us
-most people don’t stray far from the consensus of those close by
-research in the 1950s (e.g., Solomon Asch) found that we abandon personal judgements when surrounding consensus is different (even if blatantly wrong)
-conformity of this type increases when the stakes are higher and the task more difficult; and participants increasingly believe the group’s judgement is correct
-this tendency has been termed ‘groupthink’ and has been suggested to be the root cause of many disasters
-information cascades develop where persuaded experts persuade even more experts expanding the consensus, reinforcing each other and confirming one another’s beliefs; following the herd is easier than pointing out contrary evidence or challenging data
-not everyone bows to social pressure and adheres to consensual views, and being on the outside can sometimes make contrarians more extreme in order to be heard
It’s 2023 and an Asteroid Wipes Out Australia
-a review of predictions presented in the academic journal Futures shows futurists consistently fail to see coming change, instead falling victim to the trap of status quo thinking
-scenario planning was devised to get around this problem
-a number of different futures are imagined and the various scenarios considered, to help devise decisions that could be useful across a variety of contexts
-scenario planning, however, is also prone to the availability heuristic: easily recalled issues, especially dramatic ones, seem more likely to happen and thus get planned for, leading to bad judgements
-Kahneman and Tversky also discovered that the representativeness heuristic impacts our thinking during scenario planning
-we tend to believe something will happen if it is typical of its category as a whole (a type of stereotyping); that is, if a situation resembles an earlier one, we judge it likely the same outcome will occur when the context is similar
-scenario planning and similar imaginative guesses about the future can help us avoid status quo thinking, but they can also lead us to overestimate the likelihood of change
-research by Tetlock found that scenario planning did not open the closed minds of hedgehogs, but it did befuddle foxes, causing them to get carried away and overestimate the likelihood of change
Forget What We Said About that Other Asian Country and Listen to This
-predictions tend to start from a status quo bias
-confirmation bias then impacts us: we embrace without question the evidence that supports our beliefs and ignore/dismiss that which contradicts them
-our beliefs strengthen until an event occurs that we cannot avoid, reminding us that tomorrow can be quite different from today; we then begin imagining all the scenarios that could happen, until enough time passes for us to fall back into status quo thinking
-regardless of what happens, we convince ourselves we are right in our thinking
-our status quo thinking is fine as long as current trends continue but least helpful when change is occurring
-thus, predictions are most likely to be correct when least needed and least likely when most needed

Chapter 5: Unsettled by Uncertainty
-Gardner summarises social and political shifts that began around 1968
-assassinations, morality changes, increased crime and disorder, bombings, illicit drug use, financial woes (inflation, unemployment), geopolitical posturing and conflict
-some predicted years of the same, suggesting that people prepare for collapse (e.g., move out of cities, buy gold, store food)
-it was in this milieu that The Limits to Growth study was published
-it was also against this background that fears of population outstripping food supply reemerged and ever more grim forecasts were made, some suggesting widespread famine and collapse by 2000
-many experts agreed: current trends were suggesting a very dire future in the relatively near term
-mainstream media echoed their warnings
The Agony of Not Knowing
-humans want and need control, especially of their environment/surroundings and not having it can lead to stress, disease, and early death
-knowing what the future holds is a type of control even if we know what happens is out of our personal control
-fear is another common consequence of uncertainty and we have developed psychological mechanisms to defend against it (e.g., illusion of control, superstition/magical thinking)
-psychologists have found an increased resort to magical thinking when control is lost or uncertainty increases
-people also tend to see patterns where none exist more often
-this universal human tendency also manifests at the national and societal level when things seem uncertain, with interest in astrology or more dogmatic religious institutions increasing
-reassurance about the future motivates people to seek it somewhere
-solace in conspiracy theories also flourishes to explain what could very well be random events
-and expert predictions could provide certainty as well, especially for those looking for something ‘rational’
-it doesn’t matter if the forecast is doom and gloom since uncertainty is more unsettling
-gloomy forecasts tend to attract disproportionate attention due to our negativity bias, the tendency to give more attention to and remember negative things
-this tendency may be the result of the evolutionary fitness bestowed by attending to dangers in our environment
-the issue of uncertainty also affects this predilection since we peer more attentively into the future when things are going badly and then project today’s problems into tomorrow (due to the availability heuristic)
-optimistic predictions go against our judgements and confirmation bias and social reinforcement strengthens our beliefs
-the pessimistic forecasts support our feelings and provide certainty, even if they are dire
-experts who are foxes qualify their predictions with talk of probabilities and possibilities, but hedgehogs speak with certainty and tend to draw a larger audience as a result
-the confidence hedgehogs exude and the feedback they get as a result actually makes them worse at prediction accuracy, not better
-few if any of the dire forecasts of the 1970s came to pass
-while those gloomy predictions could have occurred, they did not, as decisions and accidents took us in a different direction
-they were possibilities but not inevitable futures
-it seems the only certainty is uncertainty

Chapter 6: Everyone Loves a Hedgehog
-Gardner overviews the diametrically-opposed views of some economists in the lead-up to the Great Financial Crisis of 2008
-both sides of the debate were confident in their position, sometimes mocking the opposing view
-this is true for most prognosticators in the public sphere; they are personable and “they commonly see things through a single analytical lens, which helps them come up with simple, clear, conclusive, and compelling explanations for what is happening and what will happen” (p.149)
-these hedgehogs don’t qualify their opinions with doubt and never acknowledge mistakes
-they tend to dominate in the media and are the least accurate prognosticators but the most famous (and the more famous, the worse their predictive prowess)
-and rather than get weeded out for inaccuracy, these people become popular and in demand
Introducing the Renowned Professor, Dr. Myron L. Fox
-studies demonstrate that people’s opinions and deference to authority are impacted by those who display confidence, status (e.g., affiliations, post-secondary degrees, important employment history), and appearance
-people tend to comply with those they believe are in a place of authority (e.g., Stanley Milgram’s shock experiment), or find their statements persuasive
The Confidence Game
-further research has found that it is not expert status, intelligence, or manifest ability that persuades people but enthusiasm and confidence
-this confidence also engenders greater ratings of trust
-these studies suggest a confidence heuristic exists with us tending to believe confident people and find less certain people less reliable
-this impacts our judgements of forecasters, with those who qualify predictions with probabilities seeming less competent and uninformed
-like all heuristic devices, this occurs automatically and with no conscious awareness
-it’s not unreasonable to use confidence as a proxy for accuracy, but there are problems with this
-overconfidence can lead us astray
-and people often sound and look more confident than they actually are, downplaying doubts and putting on an appearance of surety
-when people get rewarded for this self-assurance it becomes self-reinforcing and increases confidence even more
Tell Me a Story
-every human culture carries with it stories about itself and the world
-from an evolutionary advantage perspective, stories allow knowledge to be passed between generations; they also strengthen social bonds, and allow possible outcomes to be practised
-stories also serve to share explanations for phenomena and help us make sense of the world, but we are unsettled if a story is left unresolved
-good story telling must be about people (not abstractions), elicit emotion, be novel, contain a threat (negativity bias), and fit our schemas/worldviews (confirmation bias)
-if stories don’t fit our prior beliefs (schema) we tend not to attend to them; the facts/statistics/data don’t matter, it is the story/narrative that makes a difference in persuading people
-getting people to connect on a personal level and trust the storyteller is most important
-persuading others that your story is compelling is not particularly a rational process as it depends more on trust than statistics or data
Now Put It All Together and Go on The Tonight Show
-Gardner outlines Paul Ehrlich’s (author of The Population Bomb) first appearance on The Tonight Show when he debated journalist Ben Wattenberg on humanity’s future
-Ehrlich is confident, appears relaxed/calm, and carries the weight of a Stanford University professor
-on the other hand, Wattenberg seems hesitant and unfocused, accepting what Ehrlich is arguing but insisting it’s overstated and not balanced
-Ehrlich continues with clarity, enthusiasm, and charm; the story he tells is simple, clear, and addresses many people’s immediate concerns; he uses humour and simplifies complex issues
-this performance helped Ehrlich garner a huge following but the topic was not new; in fact, it had been a hot topic for many years
-the issue became far more popular with Ehrlich as he was a great performer and was invited back to The Tonight Show more than 20 times
-“For experts who want the public’s attention, Paul Ehrlich is the gold standard. Be articulate, enthusiastic, and authoritative. Be likeable. See things through a single analytical lens and craft an explanatory story that is simple, clear, conclusive, and compelling. Do not doubt yourself. Do not acknowledge mistakes. And never, ever say “I don’t know.”” (p.165)
-people want to hear a good story from confident experts who know what the future holds, regardless of how accurate (or not) their tale may be
The Ehrlich Lesson
-communicators of all types follow the lessons demonstrated by Paul Ehrlich: be confident, clear, and simple, and avoid ambiguity and uncertainty
-for politicians, it’s about convincing people today regardless of whether they are right tomorrow or not
-policy makers ‘over-argue’ creating narratives that combine assumptions, goals, and action plans but are crafted to appear certain (knowing uncertainty would undermine authority during any debates/questions)
-scientists have also learned that if they speak true to science (complex, uncertain, ambiguous) they will be ignored, so to attract support/funding they deliver confident and bold predictions
-in fact, many fields that need to communicate to others follow this lead, especially the media, talk-show hosts and various bloggers
-grand theories about what’s going to happen are floated often and quickly forgotten when they fail to materialise
-simple, clear, and confident predictions litter the news media every day and we seldom note how ridiculous these are
-science news is no different; while articles may bury any uncertainty somewhere in the text, headlines and summaries rarely, if ever, mention it
-while journalists claim to base writings on ‘fact’, they despise contingent data and prefer to turn the unknown into the known
-uncertainty is swept aside and extreme scenarios are highlighted
-the media is not interested in experts with non-thrilling stories
-Gardner outlines the example of Cambridge University computer science professor Ross Anderson who analysed Cambridge’s systems during the concern built around Y2K
-he and his team found the danger was nowhere near the level the media and politicians had made it out to be, but no one wanted to listen to him; his story that Y2K was an issue, just not a very serious one, was not compelling
-the incentives to predict with a compelling story full of ‘breathless hype’ are significant: media interviews, attention, book contracts, public-speaking engagements, etc.
-thinking and talking like a hedgehog is paramount
A Swing and a Miss! And Nobody Cares…
-the attention these compelling and confident forecasters get should mean a countervailing loss of followers and reputation when predictions fail, but this seldom occurs
-Gardner describes the predictions of James Howard Kunstler in light of the Y2K threat
-he gave dire warnings of agricultural collapse, supply chain disruptions, a deflationary depression, and geopolitical unrest
-despite these problems never occurring, Kunstler followed up with another collapse scenario in the best-selling book The Long Emergency
-Gardner outlines all the predictions made with the U.S. invasion of Iraq, both positive and negative for the U.S.; none of them materialised but this had no impact on all the forecasting experts, in fact many of them were given opportunities in the mass media
-and the failure of the dire forecasts of Ehrlich for the late 20th century did little to detract from his luster as he won award after award
-“It seems we take predictions very seriously, until they don’t pan out. Then they’re trivia.” (p.175)
It’s a Hit! And the Crowd Goes Wild!
-successful predictions make their authors instant celebrities, even if they didn’t actually make the specific forecast (e.g., Nostradamus being identified as one who predicted 9/11)
-people don’t want to live in an unpredictable and uncontrollable world so they search for data to support the opposite notion: the world is both predictable and controllable
-with vast numbers of people making a range of predictions someone is almost always liable to be the lucky one to have their forecast align with events (e.g., Peter Schiff predicting the subprime mortgage fiasco that contributed to The Great Financial Crisis)
-we often attribute these ‘successes’ to skill despite our knowledge of the ‘illusion of prediction’ and poor sense of randomness
-we ignore missed predictions and celebrate successes even though the misses vastly outnumber the hits; this has been termed the ‘Jeane Dixon Effect’ (a psychic who was often wrong but occasionally right)
Cui Bono?
-there is obvious self-interest in highlighting hits and letting misses fade into obscurity
-journalists do this well by staying on topic while events confirm predictions and avoiding topics when they don’t
-Gardner provides the example of New York Times columnist Anthony Lewis who highlighted evidence to support his focus on Ehrlich’s thesis and that of The Limits to Growth but when world events turned less dire and actually ran counter to the forecast, Lewis moved on to different topics
-the same is true with all the predictions around other topics (e.g., Y2K, Arab Spring protests) as the media shifted focus when events unfolded differently than expert forecasts
-the media additionally bolsters the credibility of the experts it quotes by highlighting successful predictions and ignoring misses
-on rare occasions expert failures are highlighted, sometimes as scapegoats but usually as support for the counterargument and against any similar predictions
-prognosticators who do face such scrutiny do not usually take it well, as the example of Arianna Huffington pointing out to Larry Kudlow his 1999 prediction of the Dow at 50,000 by 2020 and his less than congenial response
Yesterday’s News
-we tend to notice successful predictions far more than failed ones
-part of the reason is that successful ones get far greater publicity and circulation but failed ones disappear from our discourse
-the ‘news’ is what is occurring today and if a crisis hits and someone had predicted it a decade ago, that forecast becomes news and is projected far and wide; but if a prediction fails to occur, it is forgotten so there is little risk for those making repeated predictions that fail
Capricorns Are Honest, Intelligent, Hard-Working, Gullible…
-when vague statements about the future are presented and then compared to events and/or circumstances, we tend to notice the hits more than the misses
-and our perception also impacts our evaluation of whether something is a hit or miss (ambiguous wording can tilt our interpretation)
Peter Schiff was Right!
-Peter Schiff’s calls for a crash of the U.S. economy were correct, sort of
-some of what he predicted occurred, but some did not; in fact, some of his dire forecast went the opposite of his argument
-despite the misses, Schiff insists he got everything right
-this ignores that he maintained the same diagnosis and prediction for many years, and that while he was getting coverage for his apparent correct forecasts, his clients actually lost money (i.e., foreign markets he was advocating as investment opportunities did worse; oil crashed despite his call for gains to extend)
-like most hedgehogs, Schiff is authoritative, passionate, unswerving, and sure of himself
Postscript: Hang the Innocent
-Gardner explores the case of a pundit who was incorrectly mocked for a prediction he never made
-Norman Angell wrote a book on the economic folly of waging war (arguing it was always an economic loss regardless of who won) but critics/reviewers misconstrued his argument repeatedly by saying he predicted war was impossible just a few years before WW1
-his ‘prediction’ was used as an example of why making such forecasts is absurd, yet we continue to do it and seek them
Chapter 7: When Prophets Fail
-Gardner describes the precise prediction of American psychic Marian Keech, who said tectonic plate shifting would lead to catastrophic flooding but that she and her followers would be taken to safety by aliens
-when the event failed to occur, rather than recognise the failure of the prediction, Keech and her followers shifted to believing their ardent faith had led God to forgo the event and save the entire world
Making Everything Fit
-psychologist Leon Festinger had been developing a theory during the time of Keech and had followed her and her group closely
-his theory on cognitive dissonance goes as follows: “The human mind wants the world to make sense, Festinger noted. For that to happen, our cognitions–our thoughts, perceptions, and memories–must fit together. They must be consonant. If they aren’t, if they clash, and we are aware of the contradiction, they are dissonant. And we can’t be comfortable in the presence of cognitive dissonance. It has to be resolved. Distraction–”Think about something else!”–is the simplest solution. But sometimes it’s impossible to ignore the thoughts crashing into each other and we have to deal with it. That may take the creation of new cognitions or the alteration of existing ones, or they may have to be forgotten altogether. However, it’s done, it must be done, because dissonance is a highly aversive emotional state. Like a bad headache, we must make it go away.” (p.200)
-we do this through rationalisation, an action we take every time we make a difficult decision where pros and cons exist
-dissonance is created but we reduce it by playing up the factors that support our decision and play down those that don’t
-even when the facts facing them show they were wrong, most people stick with their beliefs
-how committed one is to a belief impacts what our response is when faced with evidence we are wrong; the more committed, the greater the dissonance and the stronger the attempted rationalisation
-this is especially seen when it comes to politics and for those who believe it matters deeply
-those more interested, involved, and informed are far more committed and face far greater dissonance when evidence doesn’t fit their beliefs leading to more significant rationalisation to make facts fit beliefs
-confirmation bias is one way dissonance is reduced: “Having settled on a belief, we naturally subject evidence that contradicts the belief to harsh critical scrutiny or ignore such evidence altogether. At the same time, we lower our standard when we examine supportive evidence so that even weak evidence is accepted as powerful proof.” (p.204)
-in fact, if we’re really desperate to confirm our belief system in the face of compelling contrary evidence, we will drop our standards entirely
Experts on the Defensive
-some of the experts involved in Philip Tetlock’s research were forthright about their failures in predicting; these foxes had never believed such forecasting was possible anyway
-but those who did believe it could be done, the hedgehogs, fought back with a variety of mental defences (e.g., exogenous shock that could not be foreseen)
-we all interpret our experiences through our core beliefs and ‘experts’ see their professional identity threatened when wrong, creating cognitive dissonance
-and having knowledge of their field gives them more data/information to draw upon to aid their rationalisations, making failure seem meaningless or even a vindication
-one common rationalisation is to alter the time frame for the predicted event
-another one is based in how our memories are formed for they are not unchanging reflections of the past but evolved to suit the present (as time passes, our memories shift to align with our current circumstances)
-the ability of memories to shift without conscious awareness makes them a great tool for reducing cognitive dissonance
-Tetlock found this ‘misremembering’ common among the experts in his research
-this hindsight bias was far more pronounced among the hedgehogs than the foxes, but they all showed it
-we all experience this type of bias: once we know an outcome we tend to believe that outcome was more likely than we would have if we had not known the outcome
-this has a significant impact on our memories
-we all suffer from hindsight bias but it gets strengthened if we have to provide an explanation for why an event occurred
-and constructing after-the-fact explanations is what experts do all the time, using their vast store of knowledge
-experts who are in the public limelight for their insight and knowledge have even more on the line: their reputations, connections, and revenue
-Festinger argued that when someone believes with all their heart and takes irreversible actions in light of their belief but then is presented with irrefutable evidence that their belief is wrong, the person usually becomes more convinced than ever in their beliefs
James Howard Kunstler
-Gardner outlines Kunstler’s belief in the eventual collapse of American society (e.g., “…ecologically catastrophic, economically insane, socially toxic, spiritually degrading, and fundamentally unsustainable” (p.213))
-Kunstler claimed in 1999 that Y2K was a tipping point for the U.S. and its demise
-he gave both specific and grim details (i.e., a deflationary depression, dramatic and critical events that would unfold over the years following Y2K)
-apart from a few minor technical glitches, little bad occurred with Y2K
-rather than admit his failed forecast, Kunstler expanded his thesis, arguing that Y2K was simply one signal of our over-investment in hypercomplexity
-he attributed the lack of Y2K crises to a concerted effort to avoid them via dedicated investments (although he had minimised such efforts prior to the event, in 1998 and 1999), even though the countries/corporations that ignored the dire warnings and did little were just fine
-despite his forecasting failure around Y2K, Kunstler expanded on his beliefs in The Long Emergency, arguing peak oil is the root of our demise
Robert Heilbroner
-economist Heilbroner was a social critic who wrote in 1972/73 (An Inquiry Into the Human Prospect) that there was little hope for humanity and that the future was sure to be one of “darkness, cruelty, and disorder”
-the population explosion was a problem that the Green Revolution could not address with the underdeveloped world sure to experience famine and government moving towards military-socialism, possibly moving towards war with wealthy nations (and using nuclear weapons)
-in addition, dwindling resources (especially oil) and increasing environmental degradation coupled with a warming climate, would eventually lead to a collapse of industry, or at least a perpetual decline
-he argued that this was not a death sentence but would radically transform society
-he foresaw authoritarian governments as the most likely system capable of attaining a stable society
-re-publications years later added retrospectives by Heilbroner who, despite his predictions not occurring, admitted he was just a bit off-base but insisted the problems persisted and most people failed to acknowledge them due to massive delusions about our prospects
-he acknowledged some issues were viewed slightly differently (e.g., global warming from increasing heat due to industrial processes had shifted to CO2 emissions creating a greenhouse effect)
-he insisted that his forecast had only shifted marginally over the decades (1972-1991), but Gardner argues this is primarily because he failed to raise the contrary evidence
Lord William Rees-Mogg
-Gardner outlines the published predictions of Rees-Mogg and his co-author, James Dale Davidson, whose three publications laid out economic and geopolitical events of the near future
-Rees-Mogg insists their forecasts were mostly correct (just the timing was off)
-Gardner, however, suggests the few hits they got in their multitude of predictions were far outweighed by the misses
-with each subsequent writing, they highlighted their correct calls but conveniently left out their many misses
Paul Ehrlich
-Ehrlich’s predictions in the late 1960s and early 1970s boiled down to the birth rate being far higher than the death rate and as a result the population growing beyond the natural carrying capacity of the environment; in species in which this has been observed, this eventually results in collapse of the species
-as a result of this view, he called for widespread famines within the next decade
-looking at the evidence, however, shows trends in the opposite direction of what he suggested
-Ehrlich has failed to acknowledge his failed predictions and rationalises events by expanding time frames, arguing he underestimated the ability of food production to expand (due to misguided expert analyses), and misremembering his own forecasts
-other obvious misses were his call for an end to affluence with significant resource scarcity and price increases, and political disintegration
-of note is the ongoing feud Ehrlich had with economist Julian Simon who argued contrary positions in almost all respects
-the debate culminated in a bet between Simon and Ehrlich about the price of five key metals after ten years; Simon won the bet as all five commodities fell in price
-Ehrlich rationalised the loss based upon a major recession that slowed growth
-Simon, Gardner argues, was very much like Ehrlich but in the opposite way as he made grandiose predictions with infinite timelines (e.g., all commodities would continue to see price decreases over any time period)
The Fans
-a large part of a person’s understanding of reality may be the story/narrative told to them by ‘experts’
-when we believe such views and allow them to impact major decisions, we depend upon all the psychological aspects discussed so far to reduce our cognitive dissonance
-obvious contrary evidence is minimised or rationalised away; e.g., awareness led to an avoidance of the worst scenarios

Chapter 8: The End
-there are many who continue to argue accurate predictions are quite possible despite all the failures of experts to this point
-Gardner outlines some current examples of attempts to make bold predictions and their subsequent failure
-he suggests these experts are beyond help but the lesson for everyone else should be to be skeptical of such predictions
-it would be better if we stopped seeking them but when a confident expert shares a story that resonates with our own beliefs and values about the world, we don’t consider their record in making predictions
-if their forecast is inaccurate, it is forgotten; but if it’s accurate, it’s celebrated
-thanks to hindsight bias and misremembering, we are not likely to be overly skeptical concerning predictions; in fact, these can convince us that we do often predict things accurately
-hindsight bias can also contribute to the illusion that while the future is uncertain, the past was not (when we know the outcome, we can find signals that made events obvious and known and we believe we saw them then also)
-we don’t remember our past selves worrying about an uncertain future and expecting lots of dire events to unfold that never did; we feel dread about an uncertain future but misremember the past and our worries then
-we talk about uncertain times ignorant of the fact the same has been discussed for decades/centuries
-we seek predictions/forecasts due to our feelings of uncertainty, believing we are experiencing change at an unprecedented pace (especially during times of crisis)
-forecasts of the future, regardless of their nature, satisfy our desire for certainty
-we believe because we want to believe
I Predict You Will Object
-while it is true that virtually all of our actions are founded upon a belief in the predictability of the future (e.g., we cross the street predicting the car stopped down the street will not speed up and hit us)
-many of these predictions are reliable and based on sound statistical analyses (e.g., how many middle-aged white men who are slightly overweight will likely die next year)
-other predictions are not as reliable as there is little data to analyse, although specific conditions/history can aid the forecast
-the likelihood of certain events happening can be extremely low, but they are never zero and even reliable predictions can be impacted by surprises
-“…the most we can ever hope to do is distinguish between degrees of probability with reasonable accuracy…A good model to keep in mind is weather, which can be forecast with considerable accuracy a day out. Two days out, accuracy declines but it’s still reasonably good. Three days, the forecasts are shakier. Two weeks out, weather forecasts are essentially useless…Like most of what’s interesting in life, weather is subject to chaos and all sorts of non-linear weirdness that limits how far we can peer into the future. Those limits will never be eliminated.” (p.245-246)
-we make decisions based on what we think is going to happen but we need to consider what our actions will mean if our predictions are wrong
-the best decisions are those that have positive outcomes for a variety of futures
-Gardner uses anthropogenic climate change as a relevant example
-while he accepts that it is a real threat, he is skeptical of the models that claim the ability to make forecasts decades or centuries out
-climate scientists themselves state there is a lot they don’t understand; combine this with a complex, non-linear system and any prediction is hugely uncertain
-this suggests the models may overestimate the change and damage ahead, but it could also mean they underestimate it
-some proposals for addressing the issue do not seem positive depending on the eventual future (e.g., very expensive carbon sequestration if the models overestimate), while others look promising regardless (e.g., methane capture from landfills that can be used as fuel; carbon taxes that reduce fossil fuel use and encourage research into alternatives)
-good decisions can be made without accurate predictions
-having a rough sense of future possibilities will do
Doing It Better
-in an attempt to assess intelligence reports (geo/political predictions) Alan Barnes of the Canadian government’s Privy Council Office analysed 580 predictions within 51 intelligence memoranda over an 18-month span using a numerical scale (1-little prospect to 9-highly likely)
-results showed little overconfidence and a high degree of calibration (a score of 0.014, meaning predictive likelihood aligned closely with reality), with the easiest calls being the most accurate and the hardest the least
-when asked about the excellent results by Gardner, Barnes stated he was skeptical of them (not proof of the wonderful judgements of the intelligence reports)
-this response is the hallmark of foxes: modest about forecasting ability, comfortable with complexity and uncertainty, self-critical, and avoiding intellectual templates by using numerous sources for information/ideas
-the cognitive style of foxes has three components that provide improved forecasting
-first, aggregation of various information sources produces better results
-second, thinking about one’s thinking (aka metacognition) by reflecting on conclusions, questioning them and the assumptions built into them
-being not just aware of the biases that influence our judgements but being conscious of bias bias–perceiving our own thoughts to be objective and free of bias
-trying to catch and correct the biases in our own thinking contributes to avoiding them
-finally, having humility by way of admitting you cannot provide certainty and perhaps at best only high or low probability; and even if something is very likely to occur, it’s important to stress it may not
-this also means avoiding long-range forecasts
-as one fox told Tetlock, he begins worrying when he starts feeling certain
-foxes are aware they could be wrong and thus more likely to recognise errors and attempt to correct them
-as British parliamentarian Vince Cable, who correctly warned about an impending financial crisis prior to 2008, has stated: “those who claim to foresee the future are lying, even if by chance they are later proved right.” (p. 259)
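-as an aside, the calibration analysis Barnes performed can be illustrated with a short sketch; the book does not give his actual formula, so the probability values and the weighted squared-error score below are my own assumptions, not his method:

```python
from collections import defaultdict

def calibration_error(forecasts):
    """Mean squared gap between stated probability and observed
    frequency, weighted by how many forecasts used each probability.
    0.0 means stated likelihoods perfectly matched reality."""
    bins = defaultdict(list)
    for prob, outcome in forecasts:  # outcome: 1 if it happened, else 0
        bins[prob].append(outcome)
    total = sum(len(v) for v in bins.values())
    err = 0.0
    for prob, outcomes in bins.items():
        observed = sum(outcomes) / len(outcomes)
        err += (len(outcomes) / total) * (prob - observed) ** 2
    return err

# Hypothetical forecasts: (stated probability, did it happen?)
sample = [(0.9, 1), (0.9, 1), (0.9, 0), (0.1, 0), (0.1, 0)]
score = calibration_error(sample)
```

-a score near zero means stated probabilities matched observed frequencies, which is the spirit of the 0.014 figure Gardner reports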
A Final Objection
-Gardner admits that his skepticism about the human ability to make predictions is itself based on an assumptive prediction: chaos and the nonlinearity of complex systems, combined with limited understanding of decision-making and human consciousness limit our predictive abilities
-it is entirely possible that advances in our understandings and computing ability may improve our capabilities enough to counter these impediments to forecasting
-our knowledge and understanding is always subject to revision
-there are, of course, hedgehogs who argue otherwise and insist prediction is possible, particularly given their personal approach (e.g., Gardner discusses political prognosticator Bruce Bueno de Mesquita, who claims the underlying foundation of his forecasts is that all people do what is in their self-interest, and who incorporates this premise into game theory to produce what he claims is a 90% accuracy rate–although a review by Tetlock was critical of this claim)
-claims of this nature should not be accepted at face value but subjected to critical analysis of any supporting evidence; such an approach, however, has not really been in demand
A Spoonful of Skepticism
-skepticism of predictions should be the norm, but especially during tumultuous times
-an honest view of history shows that what are held to be the problems of tomorrow are rarely the problems when tomorrow arrives
-and the same is true of the overly optimistic scenarios that get painted
-there are a range of possible futures and while this answer lacks the type of certainty humans desire, it seems the honest answer
-the best approach to looking into the future is how foxes do it: “It is informed by the past, it is revealing about the present, and it surveys a wide array of futures. It is infused with metacognition…It offers hopeful visions of what could be; it warns against dangers that also could be. It explores our values by asking us what we want to happen and what we don’t. And it goes no further. It raises issues, questions, and choices, and it suggests possibilities and probabilities. But it does not peddle certainties, and it does not predict.” (p. 266-267)
-numerous authors (e.g., Alvin Toffler, Arthur C. Clarke) have made it clear that predicting the future is futile and impossible, yet they follow such declarations with their own forecasts
-while we may recognise that the world is unpredictable, we still want certainty
-no one knows what the future holds
-“The best we can do is study, think, and choose as best we can in the spirit of building toward the future…Then hope for a little luck.” (p. 267)





