Algorithms to Live By

The Computer Science Of Human Decisions

Published: May 6, 2021 · Reading time: 10 minutes · Rating: 7


Even if you're a computer scientist (like myself), this is a great book: it teaches important computer science principles through everyday situations and helps you apply them to your daily life.

Much as we use checklists, rules of thumb, and mental models to guide our thinking and decision-making, algorithms—and the theory behind them—can be a great addition to our mental toolbox.

Notes and Highlights

They don't need a therapist; they need an algorithm. The therapist tells them to find the right, comfortable balance between impulsivity and overthinking. The algorithm tells them the balance is thirty-seven percent.

What balance between new experiences and favored ones makes for the most fulfilling life?

Optimal Stopping

Full information means that we don't need to look before we leap. We can instead use the Threshold Rule, where we immediately accept an applicant if she is above a certain percentile.

Heuristics

  • Useful because in the real world there’s usually no full information
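The look-then-leap strategy behind the thirty-seven percent figure is easy to simulate. The sketch below is my own illustration (the function names and the uniform-random candidate pool are assumptions, not the book's code): reject the first ~37% of candidates outright, then commit to the first one who beats everyone seen so far.

```python
import random

def secretary_search(candidates, look_fraction=0.37):
    """Optimal stopping: look at the first ~37% without committing,
    then leap at the first candidate better than everyone seen so far."""
    n = len(candidates)
    cutoff = int(n * look_fraction)
    best_seen = max(candidates[:cutoff], default=float("-inf"))
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value
    return candidates[-1]  # ran out of options: forced to take the last one

def success_rate(n=100, trials=20_000):
    """Estimate how often the rule lands the single best candidate."""
    wins = 0
    for _ in range(trials):
        pool = random.sample(range(n * 10), n)
        if secretary_search(pool) == max(pool):
            wins += 1
    return wins / trials
```

Running `success_rate()` should hover around 0.37: with no information beyond relative rankings, the rule picks the very best candidate roughly 37% of the time, which is the best any strategy can do.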

Shoup argues that many of the headaches of parking are consequences of cities adopting policies that result in extremely high occupancy rates. If the cost of parking in a particular location is too low (or, horrors, nothing at all), then there is a high incentive to park there, rather than to park a little farther away and walk. So everybody tries to park there, but most of them find the spaces are already full, and people end up wasting time and burning fossil fuel as they cruise for a spot.

Unintended Consequences

For people there's always a time cost. It doesn't come from the design of the experiment. It comes from people's lives.


The value of exploration, of finding a new favorite, can only go down over time, as the remaining opportunities to savor it dwindle.

The value of exploitation can only go up over time.

The Gittins Index, then, provides a formal, rigorous justification for preferring the unknown, provided we have some opportunity to exploit the results of what we learn from exploring. **The old adage tells us that "the grass is always greener on the other side of the fence," but the math tells us why: the unknown has a chance of being better, even if we actually expect it to be no different.**

Exploration in itself has value, since trying new things increases our chances of finding the best.

"To try and fail is at least to learn; to fail to try is to suffer the inestimable loss of what might have been." - Chester Barnard, management theorist

Regret is the result of comparing what we actually did with what would have been best in hindsight. In a multi-armed bandit, Barnard's "inestimable loss" can in fact be measured precisely, and regret assigned a number: it's the difference between the total payoff obtained by following a particular strategy and the total payoff that theoretically could have been obtained by just pulling the best arm every single time.

Regret Minimization

With a good strategy regret's rate of growth will go down over time, as you learn more about the problem and are able to make better choices.

The success of Upper Confidence Bound algorithms offers a formal justification for the benefit of the doubt. Following the advice of these algorithms, you should be excited to meet new people and try new things: to assume the best about them, in the absence of evidence to the contrary. In the long run, optimism is the best prevention for regret.
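The classic UCB1 algorithm makes this "optimism in the face of uncertainty" concrete, and lets us measure regret exactly as defined above. This is a minimal sketch of my own (the Bernoulli arms and parameter names are assumptions for illustration, not the book's): each arm's score is its average payoff plus a bonus that shrinks the more it's been tried, so under-explored arms get the benefit of the doubt.

```python
import math
import random

def ucb1(true_means, horizon=5000, seed=0):
    """UCB1 bandit: pull the arm with the highest average reward plus an
    optimism bonus that shrinks as the arm accumulates plays."""
    rng = random.Random(seed)
    n = len(true_means)
    pulls = [0] * n
    totals = [0.0] * n
    reward_sum = 0.0
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # start by trying every arm once
        else:
            # average payoff + upper-confidence bonus for uncertainty
            arm = max(range(n), key=lambda a: totals[a] / pulls[a]
                      + math.sqrt(2 * math.log(t) / pulls[a]))
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        pulls[arm] += 1
        totals[arm] += reward
        reward_sum += reward
    # regret: what always pulling the best arm would have paid, in
    # expectation, minus what this strategy actually collected
    regret = max(true_means) * horizon - reward_sum
    return regret, pulls
```

Because the bonus shrinks with experience, almost all pulls eventually go to the best arm, and regret's rate of growth falls over time, exactly as the notes above describe.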

The deepest insight that comes from thinking about later life as a chance to exploit knowledge acquired over decades is this: life should get better over time. What an explorer trades off for knowledge is pleasure.

Shifting the bulk of one's attention to one's favorite things should increase quality of life. And it seems like it does: Carstensen has found that older people are generally more satisfied with their social networks, and often report levels of emotional well-being that are higher than those of younger adults.


Your closet presents much the same challenge that a computer faces when managing its memory: space is limited, and the goal is to save both money and time. For as long as there have been computers, computer scientists have grappled with the dual problems of what to keep and how to arrange it.

Thinking algorithmically about the world, learning about the fundamental structures of the problems we face and about the properties of their solutions, can help us see how good we actually are, and better understand the errors that we make

There comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones. - Sherlock Holmes

The LRU (least recently used) principle is effective because of something computer scientists call "temporal locality": if a program has called for a particular piece of information once, it's likely to do so again in the near future.
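The LRU principle takes only a few lines to implement. This is a common textbook sketch using an ordered dictionary, not code from the book: every access moves an item to the "most recent" end, and when the cache overflows, the item at the other end (the least recently used) is evicted.

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used item when the cache is full.
    Effective because of temporal locality: something used recently
    is likely to be used again soon."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # mark as most recently used
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # drop least recently used
```

For example, in a two-slot cache holding `a` and `b`, touching `a` and then adding `c` evicts `b`, because `b` is now the one used least recently.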

"Much of your time using a modern browser (computer) is spent in the digital equivalent of shuffling papers." This "shuffling" is also mirrored exactly in the Windows and Mac OS task switching interfaces: when you press Alt + Tab or Command + Tab, you see your applications listed in order from the most recently to the least recently used.

Caching is just as useful when it's proximity, rather than performance, that's the scarce resource.

The definitive paper on self-organizing lists, published by Daniel Sleator and Robert Tarjan in 1985, examined (in classic computer science fashion) the worst-case performance of various ways to organize the list given all possible sequences of requests.

The mathematics of self-organizing lists suggests something radical: the big pile of papers on your desk, far from being a guilt-inducing fester of chaos, is actually one of the most well-designed and efficient structures available.
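The pile of papers works because it implements the move-to-front rule Sleator and Tarjan analyzed: search the pile from the top, and whatever you use goes back on top. A minimal sketch (my own illustration; the function name and cost model are assumptions) shows why recently used papers become cheap to find:

```python
def access(pile, item):
    """Self-organizing list with the move-to-front rule: search from the
    top of the pile, then put the item just used back on top. Sleator and
    Tarjan showed this stays within a constant factor of any strategy,
    even one that knows the whole future sequence of requests."""
    cost = pile.index(item) + 1  # papers touched while searching top-down
    pile.remove(item)
    pile.insert(0, item)         # most recently used goes on top
    return cost
```

Accessing the same paper twice in a row costs 1 the second time, no matter how deep it started, which is exactly the temporal-locality bet that LRU caching makes.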

The mind has essentially infinite capacity for memories, but we have only a finite amount of time in which to search for them. Anderson made the analogy to a library with a single, arbitrarily long shelf: the Noguchi Filing System at Library of Congress scale. You can fit as many items as you want on that shelf, but the closer something is to the front the faster it will be to find.

The key to a good human memory then becomes the same as the key to a good computer cache: predicting which items are most likely to be wanted in the future.

A word is most likely to appear again right after it has just been used, and the likelihood of seeing it again falls off as time goes on. In other words, reality itself has a statistical structure that mimics the Ebbinghaus curve.

If the fundamental challenge of memory really is one of organization rather than storage, perhaps it should change how we think about the impact of aging on our mental abilities. Recent work by a team of psychologists and linguists led by Michael Ramscar at the University of Tübingen has suggested that what we call "cognitive decline" (lags and retrieval errors) may not be about the search process slowing or deteriorating, but (at least partly) an unavoidable consequence of the amount of information we have to navigate getting bigger and bigger.


"If you're flammable and have legs, you are never blocking a fire exit."

Brian, for his part, thinks of writing as a kind of blacksmithing, where it takes a while just to heat up the metal before it's malleable. He finds it somewhat useless to block out anything less than ninety minutes for writing, as nothing much happens in the first half hour except loading a giant block of "Now, where was I?" into his head.

You should try to stay on a single task as long as possible without decreasing your responsiveness below the minimum acceptable limit. Decide how responsive you need to be, and then, if you want to get things done, be no more responsive than that. Scheduling GTD (Page 126)

On January 1, 2014, he [Donald Knuth] embarked on "The TeX Tuneup of 2014," in which he fixed all of the bugs that had been reported in his TeX typesetting software over the previous six years.

Bayes' Rule

He is careful of what he reads, for that is what he will write. He is careful of what he learns, for that is what he will know. - Annie Dillard


If you have high uncertainty and limited data, then do stop early by all means. If you don't have a clear read on how your work will be evaluated, and by when, then it's not worth the extra time to make it perfect with respect to your own standards.


The perfect is the enemy of the good. - Voltaire

An optimization problem has two parts: the rules and the scorekeeping. In Lagrangian Relaxation, we take some of the problem's constraints and bake them into the scoring system instead.
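A tiny sketch makes the rules-versus-scorekeeping trade concrete (this is my own illustration of the idea, with assumed names and a toy packing problem, not the book's formulation): instead of forbidding plans that exceed a capacity, we charge a penalty per unit of overrun, turning a hard rule into points on the scoreboard.

```python
def score(plan, value, weight, capacity, penalty=0.0):
    """Lagrangian relaxation sketch: rather than rejecting plans over
    capacity (a rule), charge `penalty` points per unit of overrun
    (scorekeeping). penalty=0 ignores the constraint entirely; a large
    penalty approximates enforcing it as a hard rule."""
    total_value = sum(value[i] for i in plan)
    total_weight = sum(weight[i] for i in plan)
    overrun = max(0, total_weight - capacity)
    return total_value - penalty * overrun
```

With `penalty=0`, an over-capacity plan can still look best; as the penalty grows, the scoring alone pushes solutions back toward obeying the original rule, which is what makes relaxed problems so much easier to search.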


Communication is one of those delightful things that work only in practice; in theory it's impossible.

In most scenarios the consequences of communication lapses are rarely so dire, and the need for certainty rarely so absolute. In TCP, a failure leads to retransmission rather than death, so it's considered enough for a session to begin with what's called a "triple handshake." The visitor says hello, the server acknowledges the hello and says hello back, the visitor acknowledges that, and if the server receives this third message then no further confirmation is needed and they're off to the races.
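The three messages of the handshake can be sketched as a toy simulation (my own illustration; the class, method names, and `lossy` network model are assumptions, not real socket code): SYN, SYN-ACK, ACK, with a dropped packet leading to a failed attempt rather than disaster.

```python
class Server:
    """Toy server side of TCP's triple handshake."""

    def __init__(self):
        self.sessions = set()

    def receive(self, client, message):
        if message == "SYN":           # visitor says hello
            return "SYN-ACK"           # acknowledge and say hello back
        if message == "ACK":           # third message arrives:
            self.sessions.add(client)  # no further confirmation needed
        return None

def connect(client, server, lossy=lambda m: m):
    """Client side of the handshake. `lossy` models the network: it
    passes messages through unchanged here; return None to drop one."""
    if lossy(server.receive(client, "SYN")) != "SYN-ACK":
        return False                   # lapse: real TCP would retransmit
    server.receive(client, "ACK")      # visitor acknowledges the hello back
    return True                        # off to the races
```

If the network drops the SYN-ACK, the client simply reports failure and no half-open session is recorded, mirroring how a lapse costs only a retransmission.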

In real-time communications, the humans provide the robustness themselves. As Cerf explains, "In the case of voice, if you lose a packet, you just say, 'Say that again, I missed something.'"

The world's most difficult word to translate has been identified as "ilunga," from the Tshiluba language spoken in south-eastern DR Congo.… Ilunga means "a person who is ready to forgive any abuse for the first time, to tolerate it a second time, but never a third time." - BBC News

Now is better than never. Although never is often better than right now. - The Zen of Python

Game Theory

It may be disheartening to learn that today's selfish, uncoordinated drivers are already pretty close to optimal. It's true that self-driving cars should reduce the number of road accidents and may be able to drive more closely together, both of which would speed up traffic. But from a congestion standpoint, the fact that anarchy is only 4/3 as congested as perfect coordination means that perfectly coordinated commutes will only be 3/4 as congested as they are now. It's a bit like the famous line by James Branch Cabell: "The optimist proclaims that we live in the best of all possible worlds: and the pessimist fears this is true." Congestion will always be a problem solvable more by planners and by overall demand than by the decisions of individual drivers, human or computer, selfish or cooperative.

Unlimited vacation? Anecdotal reports thus far are mixed, but from a game-theoretic perspective, this approach is a nightmare. All employees want, in theory, to take as much vacation as possible. But they also all want to take just slightly less vacation than each other, to be perceived as more loyal, more committed, and more dedicated (hence more promotion-worthy). (Page 239)

Love is like organized crime. It changes the structure of the marriage game so that the equilibrium becomes the outcome that works best for everybody.

Whenever you find yourself on the side of the majority, it is time to pause and reflect. - Mark Twain