Framing Effect Bias

Published author Jack Houghton is our resident expert on Betting Psychology and is on hand to discuss the framing effect bias and how it affects your punting.

Please enjoy this three-part article. It’s full of engaging references and examples.

Humans, no matter their intelligence, are cognitively lazy. We have to be.

We’re faced with innumerable decisions every day and, if we gave them all our full cerebral attention, our lives would be, at best, inefficient. At worst, over very quickly.


When a car is hurtling towards us as we cross a road, we don’t pause to calculate the speed of its approach and compare this to our speed of crossing.  We don’t assess the various probabilities of the car speeding up or slowing down. We don’t assess the road conditions for the effect they may have on braking. We just run.

In crossing the road, as in most other things we do, we rely on mental shortcuts. These are often long-ingrained, learned behaviours. We cross the road the way we do because, on thousands of occasions as children, we were told that roads were dangerous and that, at the sight of a fast-approaching car, we should get out of the way.

We are also riddled with cognitive biases, which act as mental shortcuts of their own. We have explored how a number of these operate – and affect our betting – elsewhere on the site. In short, though, these are ways that humans are neurologically hardwired to interpret life in certain ways, even when those interpretations are clearly irrational.

For most of human existence, thinking intuitively and automatically has served us well, and it perhaps explains why we have survived evolutionarily where other species have not. For example, we harbour numerous biases that incline us to prefer the familiar to the unusual. Before civilisation, this would have seen us violently eject any intruder from our territory, increasing the likelihood that our genes would survive to future generations.

In modern civilisation, though, which relies on complex and chaotic human interactions in an interconnected society, reacting based on these mental shortcuts can be disastrous. Whilst we may want to thump someone who is talking to our sexual partner, doing so is rarely the best decision.

Our cognitive biases don’t end with a preference for the recognisable, though.


DILEMMA-BASED EXPERIMENT

In 2013, Valerie Reyna, a psychology professor at Cornell University, conducted a dilemma-based experiment. She asked participants to consider the following scenarios and decide which course of action they would choose:

“The U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people.

Do you save 200 people for sure, or choose the option with 1/3 probability that 600 will be saved and a 2/3 probability no one will be saved?”

And…

“The U.S. is preparing for the outbreak of an unusual disease, which is expected to kill 600 people.

Do you pick the option where 400 will surely die, or instead a 2/3 probability that all 600 will die and a 1/3 probability no one dies?”

When given the first scenario, most participants chose the first option. When given the second scenario, most chose the second option. The options in the two scenarios are statistically identical, though, demonstrating that it is how the options are framed verbally that affects the choices participants make.
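For anyone who wants to check the sums, the equivalence is easy to verify with expected values. A quick sketch in Python, purely illustrative:

```python
# Scenario 1: a certain 200 saved vs. a 1/3 chance of saving all 600
certain_saved  = 200
expected_saved = (1/3) * 600 + (2/3) * 0        # = 200

# Scenario 2: a certain 400 dead vs. a 2/3 chance that all 600 die
certain_dead  = 400
expected_dead = (2/3) * 600 + (1/3) * 0         # = 400

print(certain_saved, expected_saved)            # 200 200.0
print(600 - certain_dead, 600 - expected_dead)  # 200 200.0 -> the same outcome, reframed
```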

This demonstrates a well-established cognitive shortcoming known as the Framing Effect Bias: we make different decisions depending on the language used to express the choices available to us, especially when the framing revolves around what we might lose rather than what we might gain.


EVERYDAY WE’RE GETTING FRAMED

Retailers and advertisers manipulate this bias regularly, which is why meat is described as 75% lean, rather than 25% fat. Adverts often attempt to create what linguist Norman Fairclough described as synthetic personalisation, a false impression that the product is made specifically for an individual. Because you’re worth it.

The emergence of spin-doctors in the last few decades has seen this verbal framing used in the political world. Whether it’s the likes of Lynton Crosby in Australia, or Karl Rove and Frank Luntz in the US, politicians are winning elections the world over based on the advice of political consultants who exploit the Framing Effect Bias in the language they have their clients use.

That such framing biases exist was not a revolutionary finding, of course: in fact, Reyna used virtually identical dilemmas to those employed in a 1981 experiment by Daniel Kahneman and Amos Tversky. What differed in this new experiment was that half of the participants were intelligence agents and half were college students. And, surprisingly, it was the intelligence agents who were significantly more likely to be hoodwinked by the framing of the scenarios.

Reyna concluded that the irrational decision-making of the agents was likely a result of their age and experience. Rather than helping the agents, that experience impeded them: they were more likely to use mental shortcuts learned over the years than to carefully analyse the information in front of them. The college students, with less experience, were perhaps driven to concentrate more precisely on what they were being told, as they didn’t have the memory of seemingly similar past decisions to rely on cognitively.


SO, WHAT’S ALL THIS GOT TO DO WITH BETTING?

Well, on one level, it should serve as a warning for all punters that we need to be careful of, and relentlessly analytical with, any information we process. If we hear that a team has lost their star fullback, it is likely our brains will automatically view this as a catastrophic loss. The reality – that the star will be replaced by another of almost equal skill – is easy to overlook, especially when the media, in search of pre-match news, focus on the perceived loss, endlessly discussing its likely ramifications.

And we should be especially aware of framing bias as we age. The older we are, the more likely we are to rely on what Daniel Kahneman refers to as “causal schemas”, the mental shortcuts of experience that likely explain the results of Reyna’s study. If, over many years of punting, we see lots of frontrunners winning at Caulfield, we may tend to favour frontrunners when punting there in future, without ever confirming the existence or extent of any advantage, or assessing its probabilistic influence on the outcome of any race.
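If you wanted to test such a hunch rather than trust it, even a crude check would help. Here is a minimal sketch in Python – the race records and field layout are invented for illustration:

```python
# Each record: (led_early, won, starting_price) -- invented data for illustration
races = [
    (True,  True,  3.50), (True,  False, 6.00), (False, False, 4.20),
    (True,  False, 9.00), (False, True,  2.80), (True,  True,  5.50),
    # ...in practice, several seasons of Caulfield results
]

front = [(won, sp) for led, won, sp in races if led]
strike_rate = sum(won for won, _ in front) / len(front)
implied = sum(1 / sp for _, sp in front) / len(front)   # the market's average estimate

print(f"Frontrunner strike rate: {strike_rate:.1%}")
print(f"Market-implied rate:     {implied:.1%}")
# Only a strike rate persistently above the market-implied rate suggests
# an edge the market hasn't already priced in.
```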

Framing is not just about language, though. How we frame information in other ways can also change how we perceive it.


TAKE THIS SCENARIO

Let’s imagine you feared nothing more than earthquakes, to the extent that, when relocating, the chance of a major earthquake affecting your new home would be the determining factor in your decision. Would you move somewhere where, each day, there was a one-in-ten-thousand chance of a major earthquake? What if it were one in seventy-three thousand?

The issue with such calculations is that, in such a narrow timeframe, the difference between the scenarios is hard for the human brain to quantify.

Framing it differently changes this. According to the United States Geological Survey, the source of the figures above, Anchorage will experience a major tremor (greater than 6.75 on the moment magnitude scale) once every thirty years, whereas Salt Lake City will be largely spared, seeing a major tremor only once every two centuries.
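The reframing is just arithmetic: a small daily probability implies an average waiting time between events, which lines up roughly with the thirty-year and two-century figures above. A quick sketch:

```python
# Expected years between events, given a daily probability of one occurring
def years_between(daily_prob):
    return (1 / daily_prob) / 365.25

print(f"1 in 10,000 per day -> one roughly every {years_between(1 / 10_000):.0f} years")  # ~27
print(f"1 in 73,000 per day -> one roughly every {years_between(1 / 73_000):.0f} years")  # ~200
```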

Depending on how much you feared the consequences of a major earthquake, you may decide to give both options a miss (after all, it’s also a choice between being surrounded by grizzly bears or Mormons), but what’s clear is that thinking about the data in a different timeframe allows us a very different perspective.


ONTO BETTING

The effects of different timeframes on how we see information can have a dramatic – and often unprofitable – effect on how we bet.

Take another scenario. You are watching a tennis match. I offer you odds of $1.77 on a player winning the next point. Do you take it? If I said that I would offer you $1.16 on the same player to win the entire match, would you take that?

It’s hard to answer those questions without more information. The players in question are Roger Federer and David Ferrer.

Across their careers, they have played 17 times. In these matches, Federer has won 56.38% of contested points, which translates to the odds of $1.77 above. The $1.16 is the average price, according to my records, that has been available on Federer at the start of his matches against Ferrer (with a range from $1.03 (Miami 2006) to $1.35 (Madrid 2010)). Federer has won all 17 of their encounters.

This example is, of course, illustrative. It’s never possible to have this kind of perfect hindsight when betting. It shows, though, how difficult it is to assess probability when it is presented in such different timeframes. A player with only a little over a 50:50 chance of winning a point may not seem like a strong bet to win a match, but even a small advantage on every point, when extrapolated over a match, can turn that player into a near certainty.
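You can watch the extrapolation happen with a simple Monte Carlo sketch. The Python below assumes, unrealistically, that every point is independent and won at the same 56.38% rate regardless of who is serving, and plays out full matches under standard best-of-three scoring:

```python
import random

P_POINT = 0.5638   # Federer's career share of points won against Ferrer

def win_race(p, target):
    """First to `target` points, win by two (covers both games and tiebreaks)."""
    a = b = 0
    while True:
        if random.random() < p:
            a += 1
        else:
            b += 1
        if a >= target and a - b >= 2:
            return True
        if b >= target and b - a >= 2:
            return False

def win_set(p):
    a = b = 0
    while True:
        if win_race(p, 4):          # a standard game
            a += 1
        else:
            b += 1
        if a == 6 and b == 6:
            return win_race(p, 7)   # tiebreak
        if a >= 6 and a - b >= 2:
            return True
        if b >= 6 and b - a >= 2:
            return False

def win_match(p, sets_needed=2):    # best-of-three
    a = b = 0
    while a < sets_needed and b < sets_needed:
        if win_set(p):
            a += 1
        else:
            b += 1
    return a == sets_needed

trials = 100_000
wins = sum(win_match(P_POINT) for _ in range(trials))
print(f"Fair odds on a point: ${1 / P_POINT:.2f}")          # ~$1.77
print(f"Simulated match-win probability: {wins / trials:.3f}")
print(f"Fair odds on the match: ${trials / wins:.2f}")
```

Run it and the per-point edge compounds into a match price far shorter than $1.77, which is the whole point: the same underlying advantage looks utterly different at the two timeframes.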

In the example above, our chances of making the best decisions depend on our ability to put together, or aggregate, lots of smaller data points. Aggregation of data, though, has its own risks.


THE TRUMP CARD

Take the 2016 Presidential Election. Data from FiveThirtyEight shows that in 48 out of the 50 most-educated counties, Clinton increased the vote share won by Obama in the 2012 election. Conversely, in 47 of the 50 least-educated counties, her vote share decreased. It would be tempting to extrapolate from this data, therefore, that the less-educated were voting for Trump. However, this would be an example of an ecological fallacy, where an analysis of a group is used to draw conclusions about an individual. In the case of the 2016 election, things are more complicated than they initially seem, with race, income, geography, and media consumption all showing significant correlations with voting patterns.

We need to beware of similarly fallacious extrapolations when betting.


THE DARREN WEIR EFFECT

Horse racing can provide an example. Across the world, large, successful training and breeding operations tend to dominate the world’s biggest races. Because of this, it can be easy to overvalue runners from those types of stables when pricing up individual races. That’s not to say that horses representing large training operations don’t have some advantages over other horses – the greater experience and wealth of their handlers may afford them better travel arrangements, for example – but it’s likely that such advantages are only marginal, and that individual differences in the ability and form of horses, rather than aggregate differences between their stables, are going to be the determining factors in deciding which horse wins. Therefore, it is past form that needs to be at the centre of pricing up these races, not the colour of the silks on the jockey’s back.

Similarly, how often are New Zealand rugby union players overvalued? Whilst it is true (although this might be difficult for any Australian to accept) that New Zealand is the best rugby-playing nation in the world – three Rugby World Cups and a near-80% strike-rate is impressive for a nation of fewer than 5 million people – it is false to assume that individual New Zealand players are therefore necessarily better than their international contemporaries. Yet Northern Hemisphere clubs will pay high premiums for players from the South. How much of this value is down to the cachet they bring from their home nations, though? As punters, we need to beware of falling foul of the same aggregation bias. How often have we overvalued a team because they have a new, star player from a glittering background?


CONCLUSION

Whenever we are assessing information as punters, we need to be aware of how that information is being framed in terms of the level-of-aggregation. Often, seemingly insignificant data is crucial when extrapolated over time. For example, an F1 car that can lap a tenth-of-a-second quicker will be 8 seconds ahead after 80 laps. Conversely, though, data presented to us in its aggregate form can seem significant in predicting events, when it is of little value. For example, draw and track biases can often dominate pre-race discussion, drowning out analysis of relative horse form.

Successful punters are always aware of the type of data they are dealing with, and will constantly question the extent to which it is relevant in their decision-making. Often, being able to identify where others are over-reacting to some of the aggregation biases above, and simply betting against the crowd, is enough to ensure a profit.

In the first two pieces, we have looked at how the language that is used, and its level-of-aggregation, can significantly affect the way we process information.


Both articles were written in the last week, and so have been at the forefront of my mind when punting. I’ve been betting on US Politics – specifically Donald Trump’s exit date – and have been careful not to over-interpret the highly-political language used around the Russia investigation, and to rely on longer-term and aggregated ratings of his job approval, rather than daily polls that are potentially politically biased and oversensitive to whatever Twitter-feud is dominating the headlines in any hour.

However, I’ve been much less aware of my tendency towards overconfidence when it comes to betting on politics (I’ve got a degree in it, why shouldn’t I be confident?), as well as my tendency to give too much weight to certain sources of authority (I think talk-show host Seth Meyers is brilliant, so I tend to agree with whatever he says).

But then it’s been nearly a year since I wrote about the overconfidence effect and our tendency to obey authority, so why would I be thinking about them?

Like all humans, my brain tends towards the recent, and this recency bias can lead to poor judgement.

University of Cambridge economist Ha-Joon Chang neatly explores this when he asks students to choose the more important invention: the internet or the washing machine. Most, as you’d expect, choose the internet. This is because, Chang argues, humans tend to overvalue the significance of more recent events: it feels as if the internet has revolutionised our lives, but compared to household appliances, which allowed a whole gender the time to find paid employment outside the home, the small efficiencies the web allows us seem insignificant.

This tendency to weigh more heavily recent information has been repeatedly demonstrated by psychologists (they refer to it as the “availability heuristic”) and it’s important to recognise this in-built bias if we are to be better punters.

Markets are often skewed because of it – in betting and finance. In 2010, after the BP oil spill in the Gulf of Mexico, the company’s share price dropped to less-than-half of its pre-spill value, as traders overemphasised news of the spill when compared to the context of the company as a whole.

Now, it’s easy to see this with the benefit of hindsight – at the time, there were suggestions that regulatory fines would be so punitive that the company might face bankruptcy – but it serves as a reminder that data that is new is no more important or valuable than that which already existed.

So, when a rugby team unexpectedly lose, or a horse produces a career-best performance, or a tennis player bombs out in the first round, or Donald Trump takes to Twitter to attack a Republican senator, we shouldn’t overreact. The information is important – of course it is – but it needs to be assessed in a measured way as part of everything you know, rather than being given special prominence.

The medical profession is well aware of the dangers of information order: how the sequence in which things are presented can bias us towards certain pieces of information.


LET’S LOOK AT ANOTHER EXAMPLE

A son brings his ageing father to the hospital and says his dad is having a heart attack. The patient is triaged and prioritised as having a suspected heart attack. A junior doctor orders tests to confirm the diagnosis. They then present the results to a consultant, who is unsure, as several of the tests are inconclusive. A final test is more convincing, though, and the consultant treats the patient as if they are having a heart attack.

In this scenario, four errors in thinking occur.

First, the consultant is guilty of the same recency bias that Ha-Joon Chang outlines with his washing machine versus the internet dilemma. The last medical test should be objectively assessed in conjunction with all the other tests, but, in this case, it is given greater prominence.

Second, the medical professionals are swayed by the initial assumption that the man is having a heart attack, and this directs much of the action they take. This is known as the primacy effect, or anchoring, where the first piece of information we receive is given greater prominence in our thinking. Although subsequent information may cause us to adjust our thinking, we tend to only adjust it in relation to the initial anchor, but find it very hard to rid ourselves of the anchor altogether.

The cognitive bias of anchoring has been repeatedly demonstrated in different environments, from how much we are prepared to pay for goods in negotiation to how we judge other people. Most stark, though, is the work of University of Virginia researchers Wilson et al., who found that, even when participants were specifically told about anchoring and the effect it would have on their judgement, their estimates of how many doctors lived locally were still dramatically affected by how large or small their initial anchor was.

Third, the junior doctor exhibits a confirmation bias, a desire in all of us to corroborate what we already believe to be true, which we have explored in more detail elsewhere on this site.

And fourth, the consultant undervalues the tests that were done in the middle of the diagnostic process. Work on memory, specifically on what is known as the serial position effect, suggests this may be because we struggle to recall information from the middle of a list. In the case of the consultant, they have clear recall of both the anchor and the most recent test, but the results and significance of the interim tests, since displaced from working memory, cease to be given the same weight as when they were causing the consultant to doubt the initial diagnosis.


BACK TO BETTING

These information-order effects are crucial for punters to be aware of. If a cricket match is billed as a grudge match between one team’s star batsman and the other team’s star bowler, and this discussion dominates much of the pre-match media coverage, it is easy for that information to become the anchor by which we assess the likely outcome of the match. If we go on to look at a wider range of information, but that is then supplanted by news that the pitch is deteriorating and will favour the bowler, we can easily make judgements that are wildly distorted from reality.

As with most cognitive biases, the best way to combat these kinds of framing errors is to implement fixed, unmodifiable processes by which we bet. Medical professionals are encouraged, when presenting any case to a colleague, to vary the order in which they provide the information they have, and would-be successful punters can borrow the technique.

One way to do this is to write out the information upon which you are basing any bet in a circle, in the same way that you might write down the letters of an anagram when trying to solve it. This information should be both positive (data encouraging you to make the bet) and negative (data discouraging you from making the bet). Once done, run through the reasoning for your bet several times, starting at different places in the circle and heading in different directions each time. If you consistently find the bet to be good value, no matter your starting-point or direction-of-travel, you can be more confident that your thinking is clear and sound.
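For those who like their routines explicit, the rotation can even be mechanised. A toy sketch in Python, with invented evidence items:

```python
evidence = [
    "pitch is deteriorating",                    # hypothetical items, for illustration
    "star bowler in career-best form",
    "batsman averages 60 against this attack",
    "visitors have lost three straight away games",
]

n = len(evidence)
for start in range(n):
    for step in (1, -1):                         # clockwise and anticlockwise
        order = [evidence[(start + step * i) % n] for i in range(n)]
        print(" -> ".join(order))
# If the bet still looks like value in every ordering, no single
# anchor or recency effect is doing the work.
```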

Another benefit of this technique, especially for punters who are new to trying to make the game pay, is that you will find it illuminates the way your mind works, especially when it can be trusted and when it can’t. This is likely to make you more skeptical of your own ability, and more cautious as a result, which, as we will see in an upcoming article, is no bad thing.

