
Overcoming Cognitive Bias with Algorithms

This is a revised text of a lecture given at King’s College in March 2023.

The judgments of human beings can be biased; they can also be noisy. Across a wide range of settings, use of algorithms is likely to improve accuracy.

I offer two related claims here. The first is that in important domains, algorithms can overcome the harmful effects of cognitive biases, which can have a strong hold on people whose job it is to avoid them, and whose training and experience might be expected to allow them to do so. 

My second claim is not in conflict with the first, but it is in a very different spirit. It is that no less than human beings, algorithms have great difficulty in solving (some) prediction problems. One clue is provided by the data in the very domains in which algorithms outperform human beings: Even when algorithms are superior, they are usually not spectacularly superior. They do better than human beings do across large populations, but they do not know what will happen in individual cases.

Jail and Bail

Some of the oldest and most influential work in behavioral science shows that statistical prediction often outperforms clinical prediction; one reason involves cognitive biases on the part of clinicians, and another reason is noise. Algorithms can be seen as a modern form of statistical prediction, and if they avoid biases and noise, no one should be amazed. What I hope to add here is a concrete demonstration of this point in some important contexts, with some general remarks about both bias and noise.

Consider some research from Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan, who explore judges’ decisions whether to release criminal defendants pending trial. Their goal is to compare the performance of an algorithm with that of actual human judges, with particular emphasis on the solution to prediction problems. It should be obvious that the question of whether to release a defendant has large consequences. If defendants are incarcerated, the long-term consequences can be very severe. Their lives can be ruined. But if defendants are released, they might flee the jurisdiction or commit crimes. People might be assaulted, raped, or killed. And while the decision whether to release criminal defendants pending trial is highly unusual in many ways, my goal here is to draw some general lessons, applicable to ordinary life, about the choice between decisions by human beings and decisions by algorithms.

In some jurisdictions in the United States, the decision whether to allow pretrial release turns on a single question: flight risk. It follows that judges have to solve a prediction problem: what is the likelihood that a defendant will flee the jurisdiction? In other jurisdictions, the likelihood of crime also matters, and it too presents a prediction problem: what is the likelihood that a defendant will commit a crime? (As it turns out, flight risk and crime are closely correlated, so that if one accurately predicts the first, one will accurately predict the second as well.) Kleinberg and his colleagues built an algorithm that uses, as inputs, the same data available to judges at the time of the bail hearing, such as prior criminal history and current offense. Their central finding is that along every dimension that matters, the algorithm does much better than real-world judges. Among other things:

  • Use of the algorithm could maintain the same detention rate now produced by human judges and reduce crime by up to 24.7 percent. Alternatively, use of the algorithm could maintain the current level of crime reduction and reduce jail rates by as much as 41.9 percent. That means that if the algorithm were used instead of judges, thousands of crimes could be prevented without jailing even one additional person. Alternatively, thousands of people could be released, pending trial, without adding to the crime rate. It should be clear that use of the algorithm would allow any number of political choices about how to balance decreases in the crime rate against decreases in the detention rate (a stylized sketch of this trade-off appears after this list).
  • A major mistake made by human judges is that they release many people identified by the algorithm as especially high-risk (meaning likely to flee or to commit crimes). More specifically, judges release 48.5 percent of the defendants judged by the algorithm to fall in the riskiest 1 percent. Those defendants fail to reappear in court 56.3 percent of the time. They are rearrested at a rate of 62.7 percent. Judges show leniency to a population that is likely to commit crimes.
  • Some judges are especially strict, in the sense that they are especially reluctant to allow bail—but their strictness is not limited to the riskiest defendants. If it were, the strictest judges could jail as many people as they now do, but with a 75.8 percent larger reduction in crime. Alternatively, they could keep the current crime reduction, and jail only 48.2 percent as many people as they now do.
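
To make the underlying prediction problem concrete, here is a minimal, purely illustrative sketch in Python. The features, data, and model are invented for the purpose of illustration; this is not the actual algorithm built by Kleinberg and his colleagues. The point is only to show the structure of the task: estimate each defendant’s risk from information available at the hearing, and then let a policymaker choose where to set the release threshold.

```python
# A minimal, invented sketch of a pretrial risk-prediction exercise.
# Features, data, and model are hypothetical; this is NOT the actual
# algorithm built by Kleinberg and colleagues.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented features of the kind available at a bail hearing.
prior_arrests = rng.poisson(2.0, n)
prior_failures = rng.binomial(3, 0.2, n)       # hypothetical past failures to appear
serious_charge = rng.binomial(1, 0.3, n)       # 1 = serious current charge
X = np.column_stack([prior_arrests, prior_failures, serious_charge])

# Invented "ground truth": risk driven mostly by history and only mildly
# by the current charge -- the weighting judges tend to get wrong.
logit = -2.0 + 0.4 * prior_arrests + 0.8 * prior_failures + 0.3 * serious_charge
failed_to_appear = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Fit a simple risk model and score every defendant.
model = LogisticRegression(max_iter=1000).fit(X, failed_to_appear)
risk = model.predict_proba(X)[:, 1]

# The political choice: any target detention rate becomes a risk cutoff.
detention_rate = 0.25
cutoff = np.quantile(risk, 1.0 - detention_rate)
released = risk < cutoff
print(f"Detained {100 * (1 - released.mean()):.1f}% of defendants")
print(f"Failure-to-appear rate among those released: "
      f"{100 * failed_to_appear[released].mean():.1f}%")
```

Raising or lowering the cutoff trades detentions against failures to appear, which is the sense in which the algorithm itself leaves the political choice to policymakers.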

Two Biases

Why does the algorithm outperform judges? The most general answer is that it is less biased, and it is not at all noisy. A more specific answer is suggested by the third point above: judges do poorly with the highest-risk cases. (This point holds for the whole population of judges, not merely for those who are most strict.) The reason is an identifiable bias; call it Current Offense Bias. Kleinberg and his colleagues devote only two brief sentences to the explanation, but those sentences have immense importance. As it turns out, judges make two fundamental mistakes. First, they treat high-risk defendants as if they are low-risk when their current charge is relatively minor (for example, it may be a misdemeanor). Second, they treat low-risk people as if they are high-risk when their current charge is especially serious. The algorithm makes neither mistake. It gives the current charge something closer to its appropriate weight, considering it in the context of other relevant features of the defendant’s background and neither overweighting nor underweighting it. The fact that judges release a number of high-risk defendants is attributable, in large part, to overweighting the current charge (when it is not especially serious).

Intriguing and ingenious work by Ludwig and Mullainathan has suggested another reason that algorithms do better than human judges. Even after controlling for race, skin color, and demographics, judges give more weight than do algorithms to the defendant’s mugshot! As Ludwig and Mullainathan put it, “the mugshot predicts judge behavior: how the defendant looks correlates strongly with whether the judge chooses to jail them or not.” Perhaps unsurprisingly, judges are responsive to whether the mugshot shows the defendant as “well-groomed”: judges are more likely to release defendants whose faces are clean and tidy as opposed to unkempt, disheveled, and messy. Perhaps surprisingly, judges are more likely to release defendants whose mugshots show them as “heavy-faced” (with a wider or puffier face). Call it Mugshot Bias. We would not know that judges show Current Offense Bias, or Mugshot Bias, without the help of the algorithm.

Love and Romance

Let me now turn to my second claim.

Can algorithms predict whether you will fall in love with a stranger? Can they actually help people to find romantic partners? Thus far, the results on such counts are not promising. Samantha Joel and colleagues find that algorithms struggle to predict “the compatibility elements of human mating … before two people meet,” even if one has a very large number of “self-report measures about traits and preferences that past researchers have identified as being relevant to mate selection.” Joel and her colleagues suggest that romantic attraction may well be less like a chemical reaction with predictable elements than “like an earthquake, such that the dynamic and chaos-like processes that cause its occurrence require considerable additional scientific inquiry before prediction is realistic.” 

What are “dynamic and chaos-like processes”? It is worth pondering exactly what this means. Most modestly, it might mean that algorithms need far more data in order to make accurate predictions—far more, at least, than is provided by self-report measures about traits and preferences. Such measures might tell us far too little about whether one person will be attracted to another. Perhaps we need more data about the relevant people, and perhaps we should focus on something other than such measures. It is possible that algorithms cannot make good predictions if they learn (for example) that Jane is an extrovert and that she likes football and Chinese food. It is possible that algorithms would do better if they learn that Jane fell for John, who had certain characteristics that drew her to him, and also for Tom and Frank, who had the same characteristics. If so, perhaps she is most unlikely to fall for Fred, who has none of those characteristics, but quite likely to fall for Eric, who shares those characteristics with John, Tom, and Frank.

On this view, the right way to predict romantic attraction is to say, “if you like X and Y and Z, you will also like A and B, but not C and D.” Or perhaps we should ask whether people who are like Jane, in the relevant respects, are also drawn to Eric. Of course it would be necessary to identify the relevant respects in which people are like Jane, and that might be exceedingly challenging.
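
As a purely hypothetical illustration of that idea, here is a minimal sketch of the “people like Jane” approach in Python, in the spirit of collaborative filtering. The names, the data, and the similarity measure are all invented; nothing here reflects the methods or data of Joel and her colleagues.

```python
# A minimal, hypothetical sketch of the "people like Jane" idea: predict
# whether Jane will be drawn to Eric from the reactions of people whose
# past attractions resemble hers. All names and data are invented.

import numpy as np

people = ["Jane", "Alice", "Maria", "Priya"]
partners = ["John", "Tom", "Frank", "Fred", "Eric"]

# Rows: people; columns: past potential partners (1 = attracted, 0 = not,
# NaN = unknown). Jane's reaction to Eric is the thing we want to predict.
history = np.array([
    [1, 1, 1, 0, np.nan],   # Jane
    [1, 1, 0, 0, 1],        # Alice
    [1, 0, 1, 0, 1],        # Maria
    [0, 0, 0, 1, 0],        # Priya
])

def similarity(a, b):
    """Cosine similarity over the partners both people have reacted to."""
    mask = ~np.isnan(a) & ~np.isnan(b)
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(person_row, partner_col):
    """Similarity-weighted average of other people's reactions to one partner."""
    num = den = 0.0
    for i in range(history.shape[0]):
        if i == person_row or np.isnan(history[i, partner_col]):
            continue
        w = similarity(history[person_row], history[i])
        num += w * history[i, partner_col]
        den += abs(w)
    return num / den if den else 0.5

score = predict(people.index("Jane"), partners.index("Eric"))
print(f"Predicted chance that Jane is drawn to Eric: {score:.2f}")
```

The sketch also shows why the approach is data-hungry: if Jane’s history is thin, or if no one in the data resembles her, the prediction collapses toward a coin flip.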

More radically, we might read the findings by Joel and her colleagues to suggest that romantic attraction is not predictable by algorithms for a different reason: It depends on so many diverse factors, and so many features of the particular context and the particular moment, that algorithms will not be able to do very well in specifying the probability that Jane will fall for Eric. The reference to “dynamic and chaos-like processes” might be a shorthand way of capturing mood, weather, location, time of day, and an assortment of other factors that help produce a sense of romantic connection or its absence. Jane might smile at a certain moment at lunch, and Eric’s heart might flutter; or Jane might not smile at that moment, because she is distracted by something that happened in the morning. Eric might say something witty as the sandwiches arrive, because of something he read in the paper that morning, and that might initiate a chain of events that culminates in marriage and children. For romance, so much may depend on factors that cannot be identified in advance.

Revolutions

In work that predated the rise of algorithms, the economist Timur Kuran urged that revolutions were unpredictable by their very nature. Kuran argued that an underlying problem lies in “preference falsification”: People do not disclose their preferences, which means that we cannot know whether they will, in fact, be receptive to a revolutionary movement. If we do not know what people’s preferences are, we will not know whether they might be willing to participate in a rebellion once the circumstances become propitious. Kuran added that we cannot observe people’s thresholds for joining such a movement. How many people would be willing to join when a movement is at its early stages? Who will require something like strong minority support before joining it? Kuran also noted that social interactions are critical, and they too cannot be anticipated in advance. For a revolution to occur, people must see other people saying and doing certain things at certain times. How can we know, before the fact, who will see whom, and when, and doing what? The answer might well be that we cannot possibly do that.
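
Kuran’s point about thresholds can be made vivid with a toy simulation. What follows is a minimal sketch of a Granovetter-style threshold cascade in Python; it is an illustration in the spirit of the argument, not Kuran’s own model, and every number in it is invented. The lesson is that the outcome can hinge on a single, unobservable threshold.

```python
# A minimal sketch of a Granovetter-style threshold cascade, offered as an
# illustration in the spirit of Kuran's argument rather than his own model.
# Each person has a private threshold: the share of others who must already
# have joined before that person will join.

import numpy as np

def cascade(thresholds):
    """Run the cascade to completion; return the final share who joined."""
    joined = np.zeros(len(thresholds), dtype=bool)
    while True:
        share = joined.mean()
        newly = (~joined) & (thresholds <= share)
        if not newly.any():
            return share
        joined |= newly

n = 100
# Thresholds 0.00, 0.01, 0.02, ...: each new joiner emboldens the next.
uniform = np.arange(n) / n
print("Uniform thresholds:", cascade(uniform))       # full cascade: 1.0

# Nudge a single private threshold from 0.01 to 0.02 and the chain breaks.
perturbed = uniform.copy()
perturbed[1] = 0.02
print("One threshold nudged:", cascade(perturbed))   # almost no one joins: 0.01
```

If that second person’s threshold cannot be observed in advance, no algorithm trained on the visible features of this population could tell the two scenarios apart.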

Kuran was not writing about algorithms, but they are unlikely to be able to do that, either. Algorithms will find it challenging or impossible to learn what people’s preferences are, and they might not be able to learn about thresholds. Even if they could do both, they would not (to say the least) have an easy time obtaining the data that would enable them to predict social interactions; they might not even be able to estimate the probability of such interactions. In some ways, the challenge of predicting a revolution is not so different from the challenge of predicting a romantic spark.

Kuran did not deny that we might be able to learn something about (1) when a revolution is improbable in the extreme, and also (2) when a revolution is at least possible. For one thing, we might be able to make at least some progress in identifying private preferences – for example, by helping people feel safe to say that they dislike the status quo, perhaps by showing sympathy with that view, or perhaps by guaranteeing anonymity. Algorithms might be able to help on that count. Kuran wrote before the emergence of social media platforms, which give us unprecedented opportunities to observe hitherto unobservable preferences (for example, via Google searches, which might reveal widespread dissatisfaction with the current government). Perhaps algorithms can say something about probabilities, based on data of this kind. But if Kuran is right, they will not be able to say a lot, because their knowledge of preferences and thresholds will be limited, and because they will not be able to foresee social interactions. The general analysis should not be limited to revolutions. Preference falsification, diverse thresholds, and social interactions—one or more of these are in play in many domains. 

Hits

Consider the question whether books, movies, or musical albums are likely to succeed. Of course we might know that a new album by Taylor Swift is likely to do well, and that a new album by a singer who is both terrible and unknown is likely to fail. But across a wide range, a great deal depends on serendipity, and on who says or does what exactly when.

This point clearly emerges from research from a number of years ago, when Matthew Salganik, Duncan Watts, and Peter Dodds investigated the sources of cultural success and failure. Their starting point was that those who sell books, movies, television shows, and songs often have a great deal of trouble predicting what will succeed. Even experts make serious mistakes. Some products are far more successful than anticipated, whereas some are far less so. A simple explanation would be that the products that succeed are just far better than those that do not. But if they are so much better, why are predictions so difficult?

To explore the sources of cultural success and failure, Salganik and his coauthors created an artificial music market on a preexisting website. The site offered people an opportunity to hear forty-eight real but unknown songs by real but unknown bands. One song, for example, by a band called Calefaction, was “Trapped in an Orange Peel.” Another, by Hydraulic Sandwich, was “Separation Anxiety.” The experimenters randomly sorted half of about 14,000 site visitors into an “independent judgment” group, in which they were invited to listen to brief excerpts, to rate songs, and to decide whether to download them. From those 7,000 visitors, Salganik and his coauthors could obtain a clear sense of what people liked best. The other 7,000 visitors were sorted into a “social influence” group, which was exactly the same except in just one respect: the social influence group could see how many times each song had been downloaded by other participants.  

Those in the social influence group were also randomly assigned to one of eight subgroups, in which they could see only the number of downloads in their own subgroup. In those different subgroups, it was inevitable that different songs would attract different initial numbers of downloads as a result of serendipitous or random factors. For example, “Trapped in an Orange Peel” might attract strong support from the first listeners in one subgroup, whereas it might attract no such support in another. “Separation Anxiety” might be unpopular in its first hours in one subgroup but attract a great deal of favorable attention in another.  

The research questions were simple: would the initial numbers affect where songs would end up in terms of total number of downloads? Would the initial numbers affect the ultimate rankings of the forty-eight songs? Would the eight subgroups differ in those rankings? You might hypothesize that after a period, quality would always prevail—that in this relatively simple setting, where various extraneous factors (such as reviews) were highly unlikely to be at work, the popularity of the songs, as measured by their download rankings, would be roughly the same in the independent group and in all eight of the social influence groups. (Recall that for purposes of the experiment, quality is being measured solely by reference to what happened within the independent judgment group.)

It is a tempting hypothesis, but that is not at all what happened. “Trapped in an Orange Peel” could be a major hit or a miserable flop, depending on whether a lot of other people initially downloaded it and were seen to have done so. To a significant degree, everything turned on initial popularity. Almost any song could end up popular or not, depending on whether or not the first visitors liked it. Importantly, there is one qualification to which I will return: the songs that did the very best in the independent judgment group rarely did very badly, and the songs that did the very worst in the independent judgment group rarely did spectacularly well. But otherwise, almost anything could happen. The apparent lesson is that success and failure are exceedingly hard to predict, whether we are speaking of algorithms or human beings. There are many reasons. Here is one: it is difficult to know, in advance, whether a cultural product will benefit from the equivalent of early downloads.
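
A toy simulation can convey the dynamic. The following sketch, with invented numbers and a deliberately crude rich-get-richer rule, is not the Salganik-Watts-Dodds experiment or their model; it only illustrates how the same forty-eight songs, with the same underlying appeal, can end up ranked differently in different “worlds” once visible download counts feed back into listeners’ choices.

```python
# An invented toy simulation in the spirit of the music-market experiment
# (not the Salganik-Watts-Dodds data or model). In each "world," listeners
# choose partly on a song's underlying appeal and partly on its visible
# download count, so early random downloads compound.

import numpy as np

rng = np.random.default_rng(7)
n_songs, n_listeners, n_worlds = 48, 1000, 8

appeal = rng.uniform(0.5, 1.5, n_songs)   # invented "independent appeal"

final_rankings = []
for _ in range(n_worlds):
    downloads = np.zeros(n_songs)
    for _ in range(n_listeners):
        # Rich-get-richer rule: popularity multiplies underlying appeal.
        score = appeal * (downloads + 1)
        choice = rng.choice(n_songs, p=score / score.sum())
        downloads[choice] += 1
    final_rankings.append(np.argsort(-downloads))   # song indices, best first

song = 0
ranks = [int(np.where(order == song)[0][0]) + 1 for order in final_rankings]
appeal_rank = int(np.where(np.argsort(-appeal) == song)[0][0]) + 1
print(f"Song 0's rank by independent appeal: {appeal_rank}")
print(f"Song 0's final rank in the eight worlds: {ranks}")
```

In runs of this kind, songs with extreme underlying appeal tend to stay near the top or the bottom, while the large middle of the distribution is shuffled by the luck of the early draws, which echoes the qualification noted above.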

Early popularity might be crucial, and early popularity can turn on luck. Because of the sheer number of variables that can produce success or failure, algorithms might well struggle to make successful predictions at early stages (though they can do better if they are given data on an ongoing basis). And in the case of financial markets, there is a special problem: Once it is made, a prediction by a terrific algorithm will automatically be priced into the market, which will immediately make that prediction less reliable, and possibly not reliable at all. 

Back to the Future

I have made two claims here. The first is that in many domains, algorithms outperform human beings, because they reduce or eliminate bias. As Current Offense Bias and Mugshot Bias make clear, experienced judges (in the literal sense) can do significantly worse than algorithms. 

At the same time, there are some prediction problems on which algorithms will not do well; the reason lies in an absence of adequate data, and in a sense in what we might see as the intrinsic unpredictability of human affairs. (1) Algorithms might not be able to foresee the effects of social interactions, which can lead in all sorts of unanticipated directions. (2) Algorithms might not be able to foresee the effects of context, timing, serendipity, or mood (as in the case of romantic attraction or friendship). (3) Algorithms might not have local knowledge about relevant particulars, or knowledge about what is currently happening or likely to happen on the ground.  (4) Algorithms might not be able to identify people’s preferences, which might be concealed or falsified, but which might be revealed at an unexpected time (perhaps because of a kind of social permission slip, which is itself hard to anticipate). (5) Algorithms might not be able to anticipate breakthroughs or shocks (a technological discovery, a successful terrorist attack, a pandemic). 

These are disparate problems. In some cases (category (3) is the obvious example) some human beings might be able to do better than algorithms can do. In other cases (category (4) is the most obvious example) algorithms should be able to make progress over time. But in important cases, defined above all by category (1), the real problem is that the relevant data are simply not available in advance, which is why accurate predictions are not possible – not now, and not in the future, either.

Cass Sunstein
Robert Walmsley University Professor at Harvard Law School

Cass R. Sunstein is currently the Robert Walmsley University Professor at Harvard. He is the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School. In 2018, he received the Holberg Prize from the government of Norway, sometimes described as the equivalent of the Nobel Prize for law and the humanities. In 2020, the World Health Organization appointed him as Chair of its technical advisory group on Behavioural Insights and Sciences for Health. From 2009 to 2012, he was Administrator of the White House Office of Information and Regulatory Affairs, and after that, he served on the President’s Review Board on Intelligence and Communications Technologies and on the Pentagon’s Defense Innovation Board. Mr. Sunstein has testified before congressional committees on many subjects, and he has advised officials at the United Nations, the European Commission, the World Bank, and many nations on issues of law and public policy. He serves as an adviser to the Behavioural Insights Team in the United Kingdom.

Mr. Sunstein is author of hundreds of articles and dozens of books, including Nudge: Improving Decisions about Health, Wealth, and Happiness (with Richard H. Thaler, 2008), Simpler: The Future of Government (2013), The Ethics of Influence (2015), #Republic (2017), Impeachment: A Citizen’s Guide (2017), The Cost-Benefit Revolution (2018), On Freedom (2019), Conformity (2019), How Change Happens (2019), and Too Much Information (2020). He is now working on a variety of projects involving the regulatory state, “sludge” (defined to include paperwork and similar burdens), fake news, and freedom of speech.
