The most difficult decisions are when one needs to choose between options that are equally good or equally bad. This is nearly tautological: if you’re choosing between a good option and a bad option, just choose the good option. Similarly, if there is a clear “lesser of two evils”, then that is the obvious choice.

However, if the choices have approximately the same magnitude of impact at face value, then choosing between them can be torturously difficult.

I’ve been thinking about this in the context of two books I’ve read recently: Thinking in Bets (by Annie Duke), and Wanting (by Luke Burgis).

Pros/Cons List: One ubiquitous tool for decision making is the “pros and cons” list. I’d be shocked if you hadn’t made one before: split a sheet of paper down the middle, and list the positive aspects of the decision on one side, and the negative aspects on the other. Once you’ve done this, see where the balance of the pros/cons is, and side with the choice that is better, on balance.

In my experience, this is a nice process for getting started with a decision, but it rarely produces unambiguous results. The best you can hope for is that the correct decision turns out to be “secretly” obvious – sometimes the mere act of writing out the pros and cons is clarifying enough to make you realize that a prima facie “close choice” is actually an easy one.

However, pros/cons lists don’t account for the magnitude of each item. A single “con” could outweigh ten “pros”, but the technique can’t represent that. Additionally, I find that I typically hedge the items I record with weasel words like “somewhat” or “maybe” (e.g. “I’m somewhat more likely to enjoy working on Project A than Project B”).
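One workaround for the magnitude problem is to attach an explicit signed weight to each item and sum them. A minimal sketch, with entirely hypothetical items and weights:

```python
# A weighted pros/cons list: each item gets a signed weight instead of
# counting equally. Items and weights are hypothetical.
items = {
    "More interesting work on Project A": +3,
    "Shorter commute": +1,
    "Better snacks in the office": +1,
    "Would have to give up current team": -7,  # one con can outweigh many pros
}

score = sum(items.values())
print(score)  # negative means the cons dominate on balance
```

Choosing the weights is the hard part, of course – but even rough magnitudes expose when a single con dominates the list.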

Adding probabilistic reasoning can help with the latter issue.

Using Predictions: In some sense, all decisions are predictions – you predict that the decision you make maximizes your utility, for some definition of utility:

It’s not about approaching our future predictions from a point of perfection. It’s about acknowledging that we’re already making a prediction about the future every time we make a decision, so we’re better off if we make that explicit. (Thinking in Bets, p189)

Acknowledging that decisions are predictions can help decompose the problem into quantitative “bets” on the future. Instead of listing pros/cons, you can list predictions on various outcomes of the decision. Examples:

  • Supposing I buy House A, what is the chance that I will still be living in it in 5 years? What is the chance that it appreciates in value over that time? What is the expected value of house repairs over that time?
  • Supposing I sign up for auto insurance Policy B, what is the chance that I will have an accident expensive enough to exceed the plan’s deductible?
  • Supposing I join Company C, what is the chance that it will still be in business in X years? What is the chance that the work would be interesting enough to keep learning new things for X years? What is the chance that I’ll be promoted in X years?

Predicting the future is Very Hard, but it’s a skill that can be developed. Making personal predictions – predictions about your personal circumstances – is useful in many domains, not just Big Life Decisions. As long as you develop a reasonably good calibration for your predictions, in aggregate you’ll be well on your way to making better decisions.
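One way to develop that calibration is to keep a log of predictions and periodically check how often your “90% confident” predictions actually come true. A minimal sketch, with an invented prediction log:

```python
from collections import defaultdict

# Each entry is (stated probability, whether it actually happened).
# This log is fabricated for illustration.
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, False),
]

# Group outcomes by stated confidence, then compare stated vs observed.
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)

for p, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> observed {observed:.0%} over {len(outcomes)} predictions")
```

If your 90% bucket comes true about 90% of the time, you’re well calibrated; systematic gaps tell you which direction to adjust.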

Expected Value & Risk Appetite: For explicitly quantifiable predictions, it can be helpful to create a simple probabilistic model, so you can get a sense of the expected value of your predictions. One tool that I enjoy using for this is Guesstimate. Guesstimate is a probabilistic spreadsheet: it allows you to run simulations using uncertain inputs, chaining together probability distributions like you’d chain together cells in Excel. The output even tells you how sensitive outcomes are to each input. Very neat tool.

The simplest output of a probabilistic model is the expected value of a decision – the mean across the outcomes of a series of simulations. A tool that can render the full distribution of outcomes given your model is even more useful: in many cases, you should actually anchor to the median outcome, not the mean.

Even putting an Excel spreadsheet together can be helpful. Estimate a probability distribution of outcomes, and the value of each outcome. Make a graph. Notice anything?
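If you don’t have Guesstimate or Excel handy, the same idea fits in a few lines of Python: chain uncertain inputs through a model, simulate it many times, and compare the mean to the median. The model and all of its inputs below are made up, loosely following the house example:

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is repeatable

def simulate_house_outcome():
    # All distributions and amounts here are hypothetical.
    appreciation = random.gauss(0.03, 0.05)   # uncertain annual appreciation rate
    years = 5
    value = 400_000 * (1 + appreciation) ** years
    repairs = random.uniform(5_000, 30_000)   # total repair costs over 5 years
    return value - repairs - 400_000          # net gain vs. purchase price

outcomes = [simulate_house_outcome() for _ in range(100_000)]
print(f"mean:   {statistics.mean(outcomes):,.0f}")
print(f"median: {statistics.median(outcomes):,.0f}")
```

In this model the mean sits above the median, because appreciation compounds: a minority of very good outcomes drags the average up. That gap is exactly why looking at the whole distribution matters.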

Facing probabilities makes you consider your risk appetite. Which would you regret more, given your temperament, circumstances, and how much weight you put on the outcomes: missing out on a positive outcome, or experiencing a negative outcome? When choosing between two positive expected value bets, would you choose a distribution with lower median returns with large, low probability upside, or a distribution with higher median returns, with no large potential upside?
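That trade-off is easy to see in simulation. Here are two hypothetical positive-expected-value bets: a steady one with no big upside, and a lottery-like one whose mean is dragged up by a rare, large payoff:

```python
import random
import statistics

random.seed(1)

def steady_bet():
    # Reliable returns, no large potential upside. Numbers are hypothetical.
    return random.gauss(100, 20)

def lottery_bet():
    # Small chance of a huge payoff; modest returns otherwise.
    return 5_000 if random.random() < 0.02 else 60

steady = [steady_bet() for _ in range(100_000)]
lottery = [lottery_bet() for _ in range(100_000)]

for name, draws in [("steady", steady), ("lottery", lottery)]:
    print(f"{name:7s} mean={statistics.mean(draws):7.1f} "
          f"median={statistics.median(draws):7.1f}")
```

The lottery-like bet has the higher mean but the lower median. Which one you’d prefer is a question about your risk appetite, not about the math.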

Thinking through these questions, we hit a limitation of models: they’re descriptive rather than prescriptive. Quantitative tools don’t address the weighting issue: given a multi-dimensional decision, how do you weight the aspects of each option?

Values Hierarchies: Personal values are one framework for addressing the “weighted importance” issue. Luke Burgis advocates constructing a personal “values hierarchy”, ideally before facing a difficult decision, and then inspecting each aspect of the decision in light of how it impacts that hierarchy.

A hierarchy of values is especially critical when choices have to be made between good things. If values are all equally important, or if there isn’t a clear understanding of how they relate to one another, mimesis [desire] becomes the primary driver of decision-making. (Wanting, p95)

This is more qualitative than the previous tools, but necessarily so. Predictions are, ideally, descriptive statements about the future. Decisions are prescriptive assertions about the present, made within your personal context. Prescriptive decisions need a subjective, qualitative component: even if you’re maximizing for something quantitative (e.g. “I want to maximize my earnings”), the choice of that utility function is itself subjective.

Values, though squishy, are a good framework for ultimately coming to a decision. Determine which values are important, rank them in order of importance, and weigh each aspect of a decision against them. You may decide that Learning, Novelty, and Stability are your ranked values; in that case, you might choose to get a master’s degree (Learning) or go on a backpacking trip (Novelty), even though either may temporarily disrupt Stability.
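One way to make this concrete is to turn the ranked values into weights and score each option against them. Everything below – the values, their ranks, the options, and the scores – is hypothetical:

```python
# Rank values from most to least important, then derive simple weights.
values_ranked = ["Learning", "Novelty", "Stability"]
weights = {v: len(values_ranked) - i for i, v in enumerate(values_ranked)}
# e.g. Learning=3, Novelty=2, Stability=1

# Score each option's impact on each value (negative = harms that value).
options = {
    "Master's degree":  {"Learning": +2, "Novelty": +1, "Stability": -1},
    "Backpacking trip": {"Learning": +1, "Novelty": +2, "Stability": -2},
    "Stay the course":  {"Learning": 0,  "Novelty": -1, "Stability": +2},
}

totals = {name: sum(weights[v] * s for v, s in scores.items())
          for name, scores in options.items()}

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total:+d}")
```

The exercise forces the squishy part into the open: the ranking and the scores are judgment calls, but once you’ve made them, comparing options becomes mechanical.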

“Values” don’t have to be lofty, descended-from-the-clouds, lifelong qualities, either. Time-limited values can be useful too (this is one of the main aspects of the Yearly Theme System). Values should ultimately have an anchoring effect: you identify the qualities that are important to you so that you have some scale by which to measure the myriad aspects of a tough decision.

“Time Travel”: If none of the above techniques cleanly delivers a decision, a useful fallback is intuition. One useful exercise is “time traveling” to the day after you’ve made the decision and imagining living with its implications:

Let’s say you have two competing job offers: Company A and Company B. If you have two days to make the final decision, spend one day with each company in your imagination. On the first day, imagine with as much detail as possible that you’re working at Company A and fulfilling the desires that come along with that position—maybe it’s living in a new city, interacting with smart people, and being closer to your family. Pay close attention to your emotions and what’s going on inside your gut. The next day, spend the entire day doing the same thing, except at Company B. Compare. (Wanting, p145)

I used this technique in a recent decision, and it was profoundly useful. For impactful decisions, it really does give you the sense that you’re standing at a fork in the road.

By constructing and inhabiting a mental space for each option, you essentially wrap up all the implications of a decision, toss them into your brain’s System 1, and see what falls out.

I also find that the timing prescribed for this approach matters: you need to inhabit each decision, without waffling, for a nontrivial period of time – ideally a full day. It’s the act of inhabiting the post-decision mindset that makes the exercise useful.

It’s also worth “time traveling” to think about the impact of the decision on a longer timescale. Thinking in Bets describes the 10-10-10 Process for doing this:

“Every 10-10-10 process starts with a question…. [W]hat are the consequences of each of my options in ten minutes? In ten months? In ten years?” (Thinking in Bets, p170)

Surprisingly few decisions have irreversible 10-year effects (though that makes those decisions all the more important 😬). And, as always, This Too Shall Pass.


I’d argue that an extension of “predictions are hard, especially about the future” is that “decisions are hard, especially about the future”. A final quote:

Whenever we make a choice, we are betting on a potential future. We are betting that the future version of us that results from the decisions we make will be better off. (Thinking in Bets, p49)