Tag Archives: Decision Making

The Power of Habit: Why We Do What We Do in Life and Business, by Charles Duhigg

Our lives are a mass of unthinking actions, controlled by habit.  A conscious decision made long ago becomes an automatic behaviour, even though it might no longer lead to the outcome we would choose today.  Understanding the components of habit formation allows us to change them.  It might not be easy or quick, but it can be done.

Using entertaining, interesting anecdotes, Duhigg shows how habits are formed, and how they can be changed.  For example, even someone with an injury to the part of his brain that creates memories can learn new habits.  Clearly there is a part of the brain separate from memory that stores routine actions.  If you try to change a habit, that routine is still stored somewhere in your brain, ready to be activated without conscious thought.  To be successful, the old habit has to be replaced with a new one – you can train yourself to react with a new routine action in response to the same cue, as long as you still get some reward for the new action.  These concepts are illustrated with stories about diverse topics, including research on monkeys, sports teams, and advertising.

Habits don’t just affect individuals; they are also found in organizations and even whole societies.  Alcoa was transformed by a new focus on one habit – safety.  The new focus had side benefits, leading to process improvement, equipment upgrades, and ultimately, better products made more efficiently.  The habits of hospital staff can change the outcomes of surgeries; the habits of communication within an organization can cause disaster or prevent it.  Stores and advertisers try to create habits that cause us to buy their products.  Societies can change habits of racism and exclusion to ones of acceptance.

Duhigg is careful not to downplay the difficulty of changing ingrained habits.  Every person and every habit is different, so there’s no secret formula that will work for everyone.  “Change might not be fast and it isn’t always easy.  But with time and effort, almost any habit can be reshaped.”

Algorithms to Live By: The Computer Science of Human Decisions, by Brian Christian and Tom Griffiths

This is a book about optimization.  A number of problems that come up in everyday life have been solved mathematically, and step-by-step instructions exist to solve those problems.  Other solutions to everyday problems are similar to ones developed by computer scientists.  It’s interesting to learn about these algorithms and their application to common problems, and Christian and Griffiths explain things in a very engaging and humorous way.

The first problem described is useful to anyone looking for a house or an apartment, and is known as the “secretary problem.”  At first, you don’t know what’s available, so there’s an information gathering phase – how nice an apartment or how well-qualified a secretary can you find?  Any candidates you look at during this phase are assumed to be gone forever if you decide not to make an offer.  After you’ve learned what’s available, you should take the next apartment or secretary that’s better than any you’ve seen so far.  But how many of the candidates should you look at in the information gathering phase?  Too few, and you won’t know enough; too many, and you may miss the best opportunity.  The answer is: 37%.  That’s the optimal fraction of the pool to examine, giving you the best chance of finding the best candidate.  In a nice bit of symmetry, the probability of finding the best secretary or apartment using this method is also 37%.  You might not get the best one, but at least you know you’ve done the best you can do.
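The 37% rule is easy to check empirically.  Here’s a quick simulation sketch (my own illustration, not from the book; the function and parameter names are mine):

```python
import random

def simulate(n=100, look_fraction=0.37, trials=20000):
    """Estimate how often the look-then-leap rule picks the single best
    of n candidates: observe the first look_fraction without committing,
    then take the first candidate better than anything seen so far."""
    cutoff = int(n * look_fraction)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))           # rank 0 is the best candidate
        random.shuffle(ranks)
        best_seen = min(ranks[:cutoff])  # best during the look phase
        # Leap at the first later candidate that beats it (else settle for the last).
        chosen = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        if chosen == 0:
            wins += 1
    return wins / trials
```

With 100 candidates, the success rate comes out near 37% (roughly 1/e), matching the symmetry mentioned above.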

You may have noticed that this problem also applies to dating and finding a mate, but applying the algorithm in real life leads to something that could have come from The Rosie Project.  One fellow used the algorithm and determined it was time to take the plunge.  As he put it, “I didn’t know if she was Perfect (the assumptions of the model don’t allow me to determine that), but there was no doubt that she met the qualifications for this step of the algorithm.  So I proposed…. And she turned me down.”

Happily, the algorithm can be modified to work for cases where proposals can be rejected.  The fraction of the pool you should look at to optimize your search will change, as will your chance of success, but the basic strategy is the same.

You can optimize the process of selling a house, or finding a parking spot, or running a drug trial.  You can decide whether it’s better to try something new, or stick with what you know.  What’s surprising about the answer to the last question is that when you have very little information, it’s better to try the new thing, to explore.  Or what about optimal sorting, or caching?  Computer caches need to optimize the trade-off between small, expensive, but fast memory and larger, cheap, slow storage.  The optimal solution is to keep the most recently used pieces of data in memory, and let the others be displaced.  That’s just like your disorganized office mate with the messy pile of papers on his desk.  When he looks for a paper, he starts at the top.  When he finds what he wants he pulls it out, and places it on top of the pile when he’s finished with it, so the most recently used paper is on the top.  It’s actually an efficient caching system, not just a messy pile.
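The office mate’s pile is what computer scientists call an LRU (least recently used) cache.  A minimal sketch in Python (the class and method names are mine, not from the book):

```python
from collections import OrderedDict

class LRUCache:
    """A small cache that evicts the least recently used entry when full,
    like the pile of papers: whatever was touched last sits on top."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)   # touched: move to the "top of the pile"
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
```

When the cache fills up, the item that has gone untouched the longest is the one displaced, just as the papers at the bottom of the pile are the ones not looked at in ages.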

There’s advice to avoid over-thinking, and about how to tackle hard problems that may not have tractable solutions by relaxing the constraints.  It may not give you the best solution, but it can give you a good one in a reasonable amount of time.

Think Like a Freak: The Authors of Freakonomics Offer to Retrain Your Brain, by Steven D. Levitt and Stephen J. Dubner

Solving problems is hard, and it takes effort, research and analysis to answer a question.  But never fear, the Freakonomics guys are here to tell you how it’s done.  They look at bias in decision-making, and show how to make more objective decisions based on the data.  They take the “economics approach”, which means looking at things like how incentives drive behaviour and how resources are distributed.  The conclusions are sometimes surprising, the stories are entertaining, and the topics are unusual and varied.

The first step in the program is to admit it when you don’t know something.  That’s hard, too, because of our ingrained need to appear helpful and knowledgeable.  Once you admit you don’t know, you can run experiments and gather data, and then you can have fun debunking the experts.  You may have heard about studies in which wine tasters couldn’t distinguish cheap from expensive wines.  But the experts also gave very different rankings to the same wine, showing that wine ratings are pretty much bogus.

Next, you have to ask the right question, or frame the problem appropriately.  If you ask, “What’s wrong with our schools?” you might come up with different answers than if you ask, “Why do our kids do more poorly on tests than kids from other countries?”  The second question allows for solutions outside the schools – maybe the problem is related to home life, not schools.  And you have to ignore artificial barriers, getting past the assumption that “It can’t be done,” as illustrated by the story of a skinny hot dog eating champion who doubled the world record at his first competition.

Then you have to look at root causes, which can take you back hundreds of years.  The economic success of German towns today depends on whether they were Protestant or Catholic after the Reformation.  The search for root causes takes courage if the results run counter to conventional wisdom, like the discovery that bacteria, not stress or spicy food, causes ulcers.

Counter-intuitive reactions to incentives are a big topic.  People are very creative in finding ways to get around the rules.  Say you’re trying to reduce traffic and pollution, and limit access to a city based on a car’s licence plate number.  Will traffic and pollution improve?  No!  People just buy second cars, and they might be cheaper, more polluting cars to boot.  It’s hard to design an incentive that encourages the behaviour you want.  The right incentive may seem odd at first, but once found it can create a self-sustaining solution.

Under the guise of showing us how it’s done, Levitt and Dubner tell some entertaining stories about solving problems with surprising answers.

Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock and Dan Gardner

Some people are so much better than others at making predictions that they are called “superforecasters”.  They are unlikely to be the pundits you see on the news: the experts who can tell a clear, compelling story with boldness and conviction.  In fact, Tetlock’s research showed that when predicting the outcomes of most political and economic questions, the average expert does little better than random guessing.  But forecasting is a skill that can be improved, and Tetlock aims to make the world better by improving the predictions and critical decisions made by governments, corporations, investors and voters.

Just as it took doctors a long time to start using statistical evidence from randomized controlled trials to decide what worked and what was snake oil, it has taken a long time for statistical, evidence-based methods to make their way into policy decisions.  Before about one hundred years ago, medical treatments were based on what was plausible, not what was measurable.  Now it’s time to get serious about measuring and improving forecasts in other areas of life, too.

US intelligence agencies invest a massive amount of effort to make predictions about everything from the likelihood of a Greek exit from the Eurozone to the possibility that North and South Korea will go to war.  Yet no one knows how good these forecasts are; nobody measures them, and they aren’t specific enough to be measurable anyway.  As part of an effort to fix that, Tetlock and fellow researchers started the Good Judgement Project.  Volunteers signed up to make predictions about future events.  The questions were selected to be hard, but not too hard; they had to have clear right or wrong answers; and they had to result in an answer within about a year.

The top forecasters did better not because they had higher IQs, or higher education levels, or access to better information.  The difference was in how they thought.  They didn’t adhere to one Big Idea; instead, they gathered lots of information from different sources, considered various possibilities and alternative viewpoints, and changed their minds when they obtained new evidence – they were intuitively Bayesian.  They considered carefully whether the likelihood of an event should be moved a few percentage points higher or lower than their previous estimate.  They broke problems down into their components, answering the sub-questions as best they could, and making their assumptions explicit.  When they made mistakes, they found out why, and improved.  Thus ordinary people like retired computer programmers or civil servants or housewives made better forecasts than professional intelligence analysts with access to classified information.

The way to get better at forecasting is to make forecasts, analyse how you did, adjust for next time, and repeat.  There’s a guide at the back of the book explaining how to become a superforecaster, and if you want practice as a citizen scientist, you can join the research effort at the Good Judgement Project.
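“Analyse how you did” means scoring each forecast against what actually happened; the Good Judgement Project graded its volunteers with Brier scores.  A sketch of one common form (the function itself is my illustration, not from the book):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and outcomes
    (1 if the event happened, 0 if not).  Lower is better; 0 is perfect."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
```

A confident miss is punished far more than a hedged one: forecasting 90% for an event that doesn’t happen scores 0.81, while forecasting 60% scores only 0.36.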


The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t, by Nate Silver

There are some books you know are going to be great fun as soon as you crack them open and get started, and Nate Silver’s book is one of them.  It’s all about how to make better predictions, how to learn from past mistakes and ultimately make better plans for the future.  The danger comes from the growth of information, which has been so rapid it threatens to overwhelm our ability to absorb and understand it.  The useful signal is in danger of being swamped by the irrelevant noise.

Silver’s claim to fame is his forecast of the 2008 US presidential election, which correctly predicted the results in 49 of 50 states, and all 35 Senate elections.  He has also been successful predicting the statistics of major league baseball players, and at betting in poker.

Silver examines both prediction failures and successes.  One of the former is the failure of regulators, financial institutions, ratings agencies, investors and their advisors to foresee the economic crisis of 2008.  He describes the ratings assigned by the ratings agencies as “just about as complete a failure as it is possible to make in a prediction.”  One of the successes is weather forecasting – meteorologists have steadily improved the accuracy of weather forecasts over the past forty years or so, getting better at forecasting temperatures, severe weather, and hurricane landfall location.  There are also sections on earthquake prediction and why it is so difficult, on economic forecasting, the spread of disease, chess, poker, stock markets, and climate change, with explanations of pitfalls that can lead to failed predictions.

These topics are covered in an interesting and engaging way.  I wouldn’t expect to like a long chapter about baseball statistics, for example, but Silver makes it entertaining.  Who else would describe weather forecasting as an exercise in metaphysics, and explain why?

So how to improve predictions and forecasting?  The answer is to use Bayesian reasoning.  Bayes’s Theorem is a bit daunting, because it uses strange terms like prior and conditional probability, and it requires the forecaster to make several estimates of the probability of an event.  Put simply, it gives the probability that something is true or will happen if some other condition is true or some other event has happened.  It’s a powerful technique that’s easier to use than it sounds.  Part of its power is the ability it provides to revise the initial probability estimate in light of new information.  Silver shows very nicely how it can be used to interpret imperfect medical test results, with false positives and false negatives, to give an answer quite different from a naïve interpretation of the statistics.
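A medical-test calculation of this kind can be reproduced in a few lines.  The numbers below are my own illustrative assumptions, not Silver’s:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes's Theorem."""
    p_positive = (sensitivity * prior
                  + false_positive_rate * (1 - prior))
    return sensitivity * prior / p_positive

# A condition affecting 1% of people; a test that catches 90% of real
# cases but false-alarms on 8% of healthy people.
print(round(posterior(0.01, 0.90, 0.08), 3))  # prints 0.102
```

A positive result here means only about a 10% chance of actually having the condition, quite different from the naïve reading of a “90% accurate” test.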

So get out there and make lots of predictions, and revise them as information comes in – it’s the only way to get better.

Nudge: Improving Decisions About Health, Wealth, and Happiness, by Richard H. Thaler and Cass R. Sunstein

I heard Terry O’Reilly mention this book on his CBC radio show, Under the Influence.  It sounded interesting, so I looked it up, but unfortunately, I was disappointed.  The book starts with a summary of biases in perception and decision-making, which I found provided a good reminder of work covered well elsewhere (e.g. Kahneman, “Thinking, Fast and Slow”; Eagleman, “Incognito: The Secret Lives of the Brain“).  It’s good to be reminded of sources of bias like framing, priming, anchoring, availability, representativeness, overconfidence, inertia and loss aversion.  I like to think that I am immune to these biases, but of course I’m not, as the little tests in the book demonstrate.

There’s a nice example of the framing bias, showing how the way information is presented can cause different responses: electricity consumers given data about power consumption in their neighbourhood decreased their consumption if they were above average, but increased their consumption if they were below average.  But if the information was accompanied by a smiley face for those who consumed less than average and a sad face for those who consumed more, the above-average consumers reduced their power use even more, and the below-average consumers did not increase their consumption.

I was hoping for more examples like this, where a clever, subtle tweak caused a desirable change in behaviour or a better outcome for society.  Instead, we see proposals for improvements in American social policy.  Some are related to changing the default options for things like company-matched savings plans and organ donations (if by default you are deemed to have opted out of matched savings or organ donation, then inertia ensures that most people opt out; if the default is the reverse, it’s no surprise that participation rates rise dramatically).  Others would require more clarity and transparency in things like mortgage agreements and health insurance.  Sometimes regulation is required when the free market doesn’t create the best outcomes for society.

Brainwashed: The Seductive Appeal of Mindless Neuroscience, by Sally Satel and Scott O. Lilienfeld

“Brainwashed” provides a worthwhile counterpoint to the recent spate of books about neuroscience.  In effect, the authors are saying, hang on a minute here, let’s not get ahead of ourselves.  Let’s think about the limitations and uncertainties of brain imaging, let’s focus on what we really know rather than the speculative extrapolations, and let’s not forget there are other sources of knowledge about how our brains work, from psychiatry and psychology, for example.

Satel and Lilienfeld are a psychiatrist and psychologist, respectively, so I was a bit worried their book would amount to saying, “Hey, what about us?” but it is much more than that.  They describe how brain imaging works: how functional MRI data are collected and processed, and just what those coloured brain images are showing.  A great deal of processing and statistics is involved, so an image showing the human brain’s response to something like angry words, or what it looks like when telling a lie, is an average from several subjects, not from an individual.  Interpretation of the images can be nuanced and complex; for example, detecting a lie using neuro-imaging may depend on the kind of lie being told (concealing knowledge, reciting a memorized lie, or making one up spontaneously).  So worries about “neurocops” and “thought police” are somewhat premature.  Marketers are making headway into understanding how consumers make decisions, but “neuromarketing” has a long way to go before it will replace market tests and focus groups.  Likewise, treatment of brain disease still relies more on traditional forms of treatment than on neuro-imaging.

The authors also respond to people like David Eagleman (see “Incognito: The Secret Lives of the Brain“), who suggest that determining responsibility is a slippery slope – we don’t blame someone who commits a crime because of a brain tumour, so we shouldn’t blame someone who has a more subtle defect that we can’t detect now, but might in the future.  Satel and Lilienfeld reject this amoral view, insisting on the reality of human willpower, deliberation and decision-making.

Willpower, by Roy F. Baumeister and John Tierney

This is another in the flood of books popularizing discoveries in brain function, psychology, and neuroscience.  As a collaboration between a scientist and a science journalist, it works well, and is a pretty fun read.

By presenting lots of laboratory studies in choice and decision-making in an entertaining way, it shows how humans have a single reservoir of brain-power which becomes depleted by decision-making and by exerting self-control.  It is replenished by food and rest and can be strengthened like a muscle by careful exercise.  Desirable behaviour can be turned into a habit so it no longer depletes your precious reservoir of will power.  But dieting?  It’s a lost cause.  Just when you need your self-control the most to resist temptation, your blood sugar is likely to be low, weakening your will power.

Wait: The Art and Science of Delay, by Frank Partnoy

I was attracted to this book by the intriguing premise that delay, even procrastination, can improve our decision-making.  An argument in favour of slowing down our fast-paced lives sounded refreshing.  I have read and enjoyed books like Daniel Kahneman’s “Thinking, Fast and Slow”, Nassim Nicholas Taleb’s “The Black Swan”, and “The Drunkard’s Walk” by Leonard Mlodinow, and was hoping for similar insights from Frank Partnoy.  Unfortunately, I was disappointed.

Partnoy starts off with a study showing how heart rate variation correlates to mental health.  There’s a digression to the well-known marshmallow experiment about delayed gratification: children who resisted eating a marshmallow right away in exchange for two marshmallows later did better as adults.  But this is not related to heart rates, and I’m left confused about cause and effect – does a healthy brain do a better job of regulating the heart, or does a healthy heart lead to better brain function?  Knowing this connection, is there anything I can do to improve my life?  Partnoy doesn’t say.

The section on athletes in super-fast sports like tennis or baseball leaves me similarly empty-handed.  I’m not an elite athlete, so knowing they can move extremely quickly, giving their brains more time to assimilate and process information before reacting, does not help me.  I did learn something from the story about It’s Just Lunch, an international dating network.  Partnoy’s message is that the participants should wait until the end of their lunch date to decide whether to go on a second date.  However, he also provides this useful tidbit: ‘The question “Do you like the taste of beer?” is the best predictor of whether someone will have sex with you on the first date; correct answer: “Yes” ‘.

I suppose it’s also helpful to know how long to delay an apology after behaving badly (long enough for all the information to come out and for the victim to express her feelings), but I felt the section on procrastination was really about setting priorities.

We may need to be very patient waiting for innovations to catch on (e.g. post-it notes).  We should delay decisions until the last possible moment so we maximize the time to gather and process information.  These are not exactly profound pieces of advice, and the latter is sometimes wrong: surely there are times when no more information is forthcoming, the decision is clear, and the best thing to do is to decide and move on.