As winter approaches in the Northern Hemisphere, we offer up a think piece to get us all considering the decisions we make in the hills.
We are a data-driven society, in the grip of a technological determinism that insists everything should be quantified and optimized. The language of work and leisure is suffused with the vocabulary of technology, and yet, when we move through the mountains, we are operating in the domain of uncertainty with limited physical and cognitive resources. The confidence that weather and avalanche forecasting give us, the comforting glow of smartphone technology, the satisfying vocabulary of scientific and statistical terms—all of these combine to obscure the reality that the natural world, while completely indifferent to us, is also wildly unpredictable. However, many are determined to push the square peg of scientism into the round hole of reality.
This was evident in a viral social media post castigating the climbing community for its apparent poor communication and a general lack of honest appraisal and language around hazards. The post began with an equation which many people would assume has some basis in statistics, and then continued with a statistical comparison with driving, which you would also expect to have some empirical basis. While I think the spirit behind the post—that we shouldn’t minimize the danger of mountain pursuits—is a valid one, the appeal to the authority of data is alluring and misleading. Although this might play nicely in a keynote address or on Instagram, it has limited utility. It might sound like the right kind of claim, and scientism often has the character and form of an empirical idea, but not all bird-shaped things are birds.
Most skiers and climbers would correctly assume avalanches pose the greatest danger in backcountry travel. Yet if you were to ask at what rate avalanches kill people, I am sure that many would overstate the number. According to European Avalanche Warning Services (EAWS) data, deaths by avalanche are trending down despite an increase in backcountry travelers. U.S. data shows a similar trend but includes mechanized backcountry travel, which makes comparison with EAWS data more difficult. There are various estimates of backcountry participants, some placing the numbers in the millions. Whatever number you choose to believe, this reduction in mortality has coincided with an enormous increase in riders. It’s hard to draw any conclusion other than this: education and forecasting are behind the reduction in avalanche mortality. Advances in our understanding of snowpacks and the mechanisms of avalanches have made the public communication of avalanche danger more salient and accurate. What we have, then, is a paradox for the marginal participant: skiers and climbers overstate avalanche danger but, at the same time, are lulled into a false sense of security by avalanche forecasting.
As beginners we are often told by friends, mentors and Reddit strangers that doing an introductory course in avalanche safety is de rigueur before we set out on our human-powered turn career. We aren’t told why, however. Why should we prioritize analytical knowledge? Why should we prioritize practical skills around safety like snowpack sampling and crevasse rescue? And even if we have ticked all the pre-skinning safety boxes, we often aren’t told that this knowledge, and the skills associated with it, degrade almost immediately after we have assimilated it. Perhaps there is an assumption that the avalanche forecasters have our backs. But our safety in avalanche terrain is contingent on forecasters and our own analytical work and experience. Successful crevasse rescue is contingent on the abilities of the people at both ends of the rope. Analytics and forecasting are contingent on each other because the mountains are not orderly and predictable places.
We like to talk about risk and safety, but we make a category error when we describe the danger of climbing or backcountry skiing as risk. What we are dealing with is uncertainty. Risk and uncertainty aren’t interchangeable. Risk can be expressed as a number, like odds at the roulette table, where it is a calculation based on the known and fixed outcomes in the game. This may seem like a needless distinction or a distinction without a difference, but it informs our behavior and how we assess uncertainty. When it comes to the natural world, our data about dangers in the mountains suffers from problems of propensity and frequency. The heterogeneous nature of terrain (propensity) and the lack of sufficient data (frequency) mean that the actual probability of something happening, whether it be an avalanche or a rockfall, might be unknowable.
Frank Knight, an economist and one of the founders of the Chicago school, offered a helpful typology for assessing the unknown. The first type is an a priori probability, which corresponds to propensities like the roulette table. The second is based on collecting empirical data and experimental evidence to determine statistical likelihood. The data on seatbelt efficacy is an example of the second type—billions of miles of driving data over decades, and the extensive use of crash test dummies in repeatable and replicable experiments. The third type, the type germane to what we face in the mountains, is where our actions are based on an estimate of danger. The first type is a deductive process, the second type is an inductive one, and the third involves forming an opinion on an estimate.
The latter is where intuition lies, and intuition is a function of experience and knowledge. There is no special learning involved in rolling dice, nor reasons to doubt buckling your seatbelt, but forming an opinion on an estimate of danger requires practice—with a capital P. Think of Practice as the Zen master thinks of Practice, or in the sense of a doctor’s Practice—the lifelong accumulation of knowledge through theoretical study, mentorship, and analysis.
Ever since Adam Smith’s ideas (about humans being rational actors who seek to maximize self-interest) were first challenged, psychologists and cognitive scientists have presented the world with a dim view of human rationality. Again and again, calamities and crises are framed in terms of poor decision making and the inability of humans to live up to a Platonic or Smithian ideal.
According to the cognitive scientist Berndt Brehmer, the pessimism regarding our abilities as decision makers is due to a focus on comparing actual decisions with normative models, i.e., what we did compared with what we should have done. Modern cognitive science seems determined to keep sticking the knife in. Amos Tversky and Daniel Kahneman, who challenged rational choice theory, famously identified cognitive biases displayed by respondents in their experiments. These biases are mental traps: systematic error patterns among cohorts of people that deviate from normative models. Take, for example, availability bias, where we are more likely to overestimate the chance of something occurring if it can easily be brought to mind. Overestimating avalanche deaths after a spate of fatalities in the news is an example.
This is related to recency bias, where we might overestimate the chance of something happening if we’ve recently learned of a similar event. While Kahneman and Tversky did Nobel Prize-worthy work to take decision making away from the rigid confines of mathematical economics, they have been criticized for not thoroughly explaining how decision-making works in reality.
After reading their research, you’d be amazed that humans can navigate the world at all. But we don’t live in a normative decision model world; we live in a world of naturalistic decision making. We live in a world of imperfect knowledge, often relying on a best-fit scenario to navigate complex environments. The same process that makes us susceptible to cognitive biases—our ongoing attempts to parse the world’s complexity—also equips us to navigate this environment with limited cognitive resources. Despite our technological determinism and the thoroughgoing push to quantify everything, our judgements about safety in uncertain environments are not nearly as objective and sophisticated as we believe. And this is ok, but it creates an imperative for us to make safe travel in cold and steep places a lifelong practice.
There is no way to accurately reframe the uncertainty of being in the mountains. Even our best attempts to quantify avalanche danger only produce a set of guidelines that are an adjunct to, rather than a replacement of, informed analysis and decision making. We can make good judgments absent all the available information and make good inferences about avalanche likelihood within the context of great uncertainty. We don’t have to satisfy conditions of propensity and frequency because moving safely in the mountains is not a matter of statistical probability or actuarial tables but rather the best use of readily available heuristics and the most efficient use of limited cognitive resources. Navigating glaciated terrain with avalanche risk is a scenario of such complexity and embodied uncertainty that it requires a leap of faith and a best-fit model of decision making. Despite what you may have heard from gloomy psychologists, humans are very good at parsing their environments—walking and chewing gum. We aren’t unique in this process; other animals, notably higher-order primates, do it equally well. Heuristics are strategies deployed to make simple and accurate judgements around complex problems.
These “fast and frugal” heuristics, or rules of thumb, build on Herbert A. Simon’s concept of Bounded Rationality: this is the idea that rationality is limited during naturalistic decision making, and individuals will select a satisfactory rather than optimal solution. It is the triumph of the adequate over the ideal, and we are natural satisficers because we are cognitive misers.
These frugalities are obvious in the real world—we don’t need complex cognitive strategies or to solve differential equations in our heads to determine the trajectory of a ball we are trying to catch. The gaze heuristic explains how we catch a ball, search for a good hold, or pick a line down a couloir. The key thing to remember is that complex judgements don’t necessarily require complex thinking, and this claim is at the heart of one of the competing ideas about rationality. Don’t mistake this frugality for flippancy—that somehow, our intuition will always serve us well. There are the (aforementioned) cognitive biases we all fall victim to, and it might help to think of intuition like we do fitness—the steady accumulation of strain (experience and knowledge) that allows us to move efficiently and quickly (effectively and safely).
While Kahneman and Tversky focused on the pitfalls of naturalistic decision making, Herbert Simon and Gerd Gigerenzer had a less pessimistic view of humans, insisting that behavior is a function of individual cognition and the environment. According to Simon, our decision-making performance functions like a pair of scissors “whose two blades are the structure of the task environment and the computational capabilities of the actor.” Gigerenzer developed this into a concept of ecological rationality, claiming that we have specialized heuristics for specific tasks depending on the context. For example, the take-the-best heuristic is a strategy for deciding between two alternatives based on the first in an ordered set of cues ranked by validity. In deciding whether to dig a snowpit, these cues might be recent snow, wind loading, aspect, or slope angle.
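To make the take-the-best idea concrete, here is a minimal sketch of a lexicographic rule that stops at the first discriminating cue. The cue names, their ordering, their thresholds, and the snowpit decision are illustrative assumptions drawn from the examples above, not a validated avalanche tool.

```python
# Sketch of a take-the-best-style lexicographic decision rule.
# Cue names, thresholds, and ordering are hypothetical illustrations.

def take_the_best(observations, cues):
    """Return the decision of the first (most valid) cue that discriminates.

    observations: dict of field observations, e.g. {"recent_snow_cm": 25}
    cues: list of (name, predicate, decision) tuples, ordered by validity.
    """
    for name, predicate, decision in cues:
        if predicate(observations):
            return decision, name  # stop at the first cue that fires
    return "no_cue_fired", None

# Hypothetical cues ranked by (assumed) validity, per the text's examples:
CUES = [
    ("recent snow", lambda o: o.get("recent_snow_cm", 0) >= 20, "dig a pit"),
    ("wind loading", lambda o: o.get("wind_loaded", False), "dig a pit"),
    ("aspect", lambda o: o.get("aspect") in {"N", "NE", "E"}, "dig a pit"),
    ("slope angle", lambda o: o.get("slope_deg", 0) >= 30, "dig a pit"),
]

decision, cue = take_the_best({"recent_snow_cm": 25, "slope_deg": 35}, CUES)
print(decision, cue)  # -> dig a pit recent snow
```

The point of the sketch is the frugality: the rule inspects cues one at a time in a fixed order and never combines them, yet in cue-ordered environments this can match far more elaborate weighting schemes.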
The main tools available to the authorities responsible for avalanche safety are forecasting models, decision making frameworks (DMFs), and practical education. Of those three, government agencies are mainly concerned with forecasting and tend to outsource the DMF and education parts to NGOs (like AIARE) or the private sector. The EAWS has a standardized danger scale in its forecasts based on the likelihood and size of an avalanche in a particular region (at least 100 km²). DMFs use a probabilistic approach that applies a harm-reduction method to minimize avalanche hazards, weighing factors such as slope angle, aspect, and group size alongside the terrain and elevations specified in avalanche bulletins. Like forecasting, DMFs rely on statistical data and pre-defined rules. Studies that have attempted to evaluate forecasting and DMFs using historical data suffer from reduced ecological validity (mountains aren’t the same as other mountains, or even as themselves from year to year). Still, it is accepted that current forecasting and decision aids would have prevented most historically reported accidents. While we have collected a great deal of avalanche data, problems remain due to the fundamental nature of the environment:
- Avalanche data is often incomplete, with many unreported or unobserved events, especially in remote areas. This makes it difficult to establish reliable probability distributions.
- Avalanche hazard varies greatly depending on location, aspect, elevation, weather patterns, and other factors that change over space and time. Probabilities derived from limited data may not accurately represent this variability.
- Avalanche risk is heavily influenced by human decisions like route selection, group size, and risk mitigation measures. These human factors are difficult to quantify probabilistically.
- The processes leading to avalanche release are highly complex, involving intricate snowpack properties, terrain interactions, weather and seasonal factors. Simple probability models may oversimplify this complexity.
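The rule-based, probabilistic flavor of DMFs can be sketched in the spirit of Werner Munter’s elementary reduction method: divide a danger potential (which doubles per danger level) by the product of reduction factors earned by conservative choices, and accept the tour only if the residual risk falls to 1 or below. The specific factor values below are simplified illustrations, not the published method.

```python
# Sketch of a reduction-method-style DMF. Danger potentials and
# reduction factors here are simplified illustrations of the approach,
# not the published values, and no substitute for in-situ analysis.

DANGER_POTENTIAL = {"low": 2, "moderate": 4, "considerable": 8, "high": 16}

def residual_risk(danger_level, reduction_factors):
    """Divide the danger potential by the product of the reduction factors.

    Conventionally, a tour is deemed acceptable when the result is <= 1.
    """
    product = 1
    for factor in reduction_factors:
        product *= factor
    return DANGER_POTENTIAL[danger_level] / product

# Example: 'considerable' danger (potential 8), staying below 35 degrees
# (illustrative factor 4) and avoiding wind-loaded slopes (factor 2):
risk = residual_risk("considerable", [4, 2])
print(risk, risk <= 1)  # -> 1.0 True
```

Note how crude the arithmetic is: the method compresses heterogeneous terrain and weather into a handful of multipliers, which is exactly why the bullet points above limit what such pre-defined rules can capture.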
Therefore, while probabilistic data can provide high-level guidance, an overreliance on quantitative hazard values derived from limited avalanche records could lead to overconfidence. Plenty of people go into the backcountry because they have seen a mild forecast; they are the green-equals-go crowd or those who fail to realize that a moderate danger level still accounts for 30% of all fatalities.
The best forecasting and DMFs still emphasize knowledge-based assessment and situational analysis. They require that we read the whole report rather than going off the color-coded warnings as if they were a traffic light, and that we treat the report as the preamble to analyzing the snowpack where we intend to travel. It is also instructive to ask mountain professionals what they do, because, if it isn’t apparent, doing something for a living means doing it safely. Returning to our idea of practice, a lifetime of safe travel requires good decisions almost all of the time.
A 2020 study led by Markus Landrø examined how experts make decisions in the mountains. Participants were aware of forecasts and the most common DMFs, but they favored an analytical approach rather than the probabilistic approach of most DMFs. Experts performed snowpack tests, favored intuition, and used decisive factors at the right time and place. This would confirm that the idea of a probabilistic risk assessment by avalanche forecasters, while helpful, has less utility than in-situ analysis of the snowpack, especially in remote areas. We reduce uncertainty as we travel, and if we can’t reduce uncertainty by analysis, then the choice to turn around becomes the obvious one. We can think of the ideal process in a three-step way:
- We are primed by the information in the avalanche forecast.
- We perform in-situ analysis that provides greater ecological validity.
- We always rely on our intuition, which is a function of experience.
In his book Staying Alive in Avalanche Terrain, which, in my opinion, is required reading, Bruce Tremper identified the gap between avalanche skills and sports skills. There is also a gap between Adam Smith’s ideal rational decision making and decision making in practice. Running parallel to this is the epistemic gap between probability and uncertainty. These gaps are populated with fast and frugal heuristics and their obverse, cognitive biases. If we are merely “overclocked primates,” as Kevin Drum suggested, let’s use our thin layer of cognition to maximize the cutting performance of Herbert Simon’s metaphorical decision-making scissors.
We spend hours obsessing over gear, trawling websites and Reddit for boot and ski reviews. What’s the perfect layering system? How light can I make these crampons before they fall apart? The same effort goes into optimizing training and nutrition. It’s hard to devote similar intensity to safety, even though it should be a similar obsession, integral to the practice of skiing, especially when thinking about safety forces us to contemplate existential danger.
Avalanche, glacier and terrain safety require continual attention and the habitual process of learning by doing. The lifetime likelihood of getting caught in an avalanche doesn’t reset to zero each time we arrive in the backcountry. It is the sum of all the decisions we make. We will make mistakes, but hopefully, those mistakes will be within the margins of safety set by experience and learning. As the American poet James Russell Lowell said, “Mishaps are like knives, that either serve us or cut us, as we grasp them by the blade or the handle.” Making safe decisions means knowing ourselves and learning about the wild and strange places we travel through. The study of snow and terrain, aspect and slope, is the most sincere way to appreciate the beauty of the mountains: to love their indifference and the moments of solitude they afford, and to realize that being out and exposed in the mountains is as much a journey inwards as it is outwards. To complete that journey, we must make sure that we return home.