Risk Homeostasis and the Paradox of Warning

Over the years, people have pointed me to the theory of risk homeostasis, as put forth by Dr. Gerald Wilde, Professor Emeritus of Psychology, Queen’s University.

How do we balance risk and safety? The synopsis of Wilde’s theory is that if you perceive a change as making you safer, then you may be prone to take more risk, negating the actual risk reduction. However, if you genuinely want to be safer, then you will make real, tangible reductions in risk. I have two thoughts that immediately come to mind when I hear this kind of discussion coming my way:

  1. If the risk reduction is in fact effective, then it is effective, and you might want to take on that additional risk. That is to say, if you increase the capacity of your risk “cup”, so to speak, then you are indeed able to take on more risk beyond the level you were at prior to the increased capacity. It is a mistake to say “see, I still got hurt” without factoring in the level of hurt you would have suffered without the risk reduction (a toy numeric sketch after this list illustrates the point). Soldiers do not wear armor because they want to put themselves more in harm’s way; they are forced into danger regardless of any perception of safety, and thus desire better protection.
  2. Measuring perception is like measuring taste. Maybe people in one sample group are all accustomed to pumpkin and associate it with spending comforting fall evenings with family eating pie, while another sample group has never tasted such orange goop before and knows only jack-o-lanterns their neighbors leave rotting outside to be scary. Which group’s perception, when measured, is going to provide a reliable indicator of the next sample group? Both, neither…? Exposure (time) and culture are definitely factors that can skew measures of perception.
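
To make the first point concrete, here is a minimal sketch of my own (not Wilde’s model): treat expected harm as exposure times incident probability times severity per incident. All numbers and the expected_harm helper are invented purely for illustration.

```python
# Toy model (illustration only, not Wilde's): expected harm = exposure * p_incident * severity.
# Shows the "risk cup" point: an effective mitigation that cuts severity leaves room
# to take on more exposure while still ending up better off than the unprotected baseline.

def expected_harm(exposure: float, p_incident: float, severity: float) -> float:
    """Expected harm for a given level of exposure (assumed multiplicative model)."""
    return exposure * p_incident * severity

P_INCIDENT = 0.01          # assumed chance of an incident per unit of exposure
BASE_SEVERITY = 100.0      # assumed harm per incident without protection
MITIGATED_SEVERITY = 40.0  # assumed harm per incident with effective protection

baseline = expected_harm(exposure=1.0, p_incident=P_INCIDENT, severity=BASE_SEVERITY)
protected_same = expected_harm(1.0, P_INCIDENT, MITIGATED_SEVERITY)
protected_double = expected_harm(2.0, P_INCIDENT, MITIGATED_SEVERITY)  # "compensating" with more exposure

print(f"unprotected, baseline exposure: {baseline:.2f}")          # 1.00
print(f"protected, baseline exposure:   {protected_same:.2f}")    # 0.40
print(f"protected, doubled exposure:    {protected_double:.2f}")  # 0.80
# Even after doubling exposure, expected harm stays below the unprotected baseline,
# so "I still got hurt" proves nothing without the counterfactual.
```

In this toy setup the protected agent who doubles their exposure still fares better than the unprotected agent at baseline, which is the sense in which a larger “cup” genuinely absorbs more risk.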

At the end of the day it seems Wilde is suggesting that the only accurate measure for reduction of risk is an agent’s personal desire to be safe.

This is a dangerous problem, especially in any major domain shift in engineering, where customers have no idea how to assess technology risk. Wants become more like cult thinking or mysticism, which gets in the way of scientifically measured safety.

Someone wanting a “safe” ride isn’t at all the same as someone wanting a “safe robot” ride, because the latter often ends up resting on an unhinged belief about robotic capability (e.g. lacking the skill to audit defects), whereas anyone can measure the basic safety of a ride (e.g. zero crashes).

The more you want something, apparently in Wilde’s world, the more likely you are to get it, and perhaps vice versa. Yet he confesses that the problem with wants is that their definition hinges on proper information and a rational actor who knows how to decipher the data and make a proper decision instead of relying on mere “belief”.

We want to eat without making ourselves ill, but do we have reliable enough data in hand to know whether an industrialized burger from an industrialized ingredient packing plant will increase our risk disproportionately to other lunch options, including a butcher’s hand-made patty? (Hint: automation technology lacking transparency often is fraud at high speed and scale, as predicted and documented for over a century.)

Wilde’s writing is full of insightful examples and anecdotes and definitely worth reviewing. Here’s a sample from chapter six that discusses “Intervention by education”:

Other victims of the “lulling effect” have been reported, e.g. children under the age of five. In 1972, the Food and Drug Administration in the USA ordered manufacturers of painkillers and other selected drugs to equip their bottles with “child-proof” lids. These are difficult to open for children (and sometimes for adults as well) and often go under the name of “safety caps,” a misleading name, as we will see. Their introduction was followed by a substantial increase in the per capita rate of fatal accidental poisonings in children. It was concluded that the impact of the regulation was counterproductive, “leading to 3,500 additional (fatal plus non-fatal) poisonings of children under age 5 annually from analgesics”.[17] These findings were explained as the result of parents becoming less careful in the handling and storing of the “safer” bottles. “It is clear that individual actions are an important component of the accident-generating process. Failure to take such behavior into account will result in regulations that may not have the intended impact”. Indeed, safety is in people, or else it is nowhere.

If parents can be blamed for the lack of effectiveness of safety caps, does a government that passes such near-sighted safety legislation go guilt-free? Does an educational agency that instills a feeling of overconfidence in learner drivers go guilt-free? Does a traffic engineering department that gives pedestrians a false sense of safety remain blameless; or a government that requires driver education at a registered driving school before one is allowed to take the licensing test? Is it responsible to call a seatbelt a “safety belt”, to propagate through the media such slogans as “seatbelts save lives”, “speed kills”, “to be sober is to be safe”, “use condoms for safe sex”, or others of the same ilk?

In any event, it is interesting to note that accident countermeasures sometimes may increase danger, rather than diminish it. If stop signs are installed at junctions in residential areas and at all railway crossings that have no other protection, if flashing lights appear at numerous intersections, if warning labels are attached to the majority of consumer products, these measures will eventually lose their salience and their credibility. They amount to crying wolf when no such beast is in the area. And in the rare event it is, the warning will no longer be received and there may be a victim.

This is why over-use of warnings may be dangerous. A warning that is not perceived as needed will not be heeded–even when it is needed. “A warning can only diminish danger as long as there is danger.” This is the paradox of warning. It sounds puzzling, but what it means is that warning signs can only make people behave more cautiously if they agree that their behaviour would probably have been more risky if they had not seen the warning sign.

Over-use of warnings may be dangerous.

It is important to consider this when technology companies caught harming people say “we posted warnings”. Maybe their warnings were used in ways that increased risk by simultaneously making customers falsely believe precautions weren’t necessary: the most dangerous version of the paradox, with more risks taken than “if they had not seen the warning sign”.
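
One rough way to put numbers on the “crying wolf” dynamic in the passage above is a small Bayes sketch of my own (not Wilde’s): as warnings get attached to nearly everything, the probability that real danger is present when you see one collapses toward the base rate, and the warning stops carrying information. The rates below are assumptions for illustration.

```python
# Rough Bayes sketch (my framing, not Wilde's): a warning is informative only if it
# raises the probability of real danger above the base rate. Indiscriminate labeling
# (a high false-alarm rate) drives that probability back down toward the base rate.

def p_danger_given_warning(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """Posterior probability that real danger is present, given a warning is shown."""
    p_warning = hit_rate * base_rate + false_alarm_rate * (1.0 - base_rate)
    return (hit_rate * base_rate) / p_warning

BASE_RATE = 0.01  # assumed prior probability that a real hazard is present
HIT_RATE = 0.95   # assumed probability that a real hazard carries a warning

for false_alarm_rate in (0.01, 0.20, 0.80):
    posterior = p_danger_given_warning(BASE_RATE, HIT_RATE, false_alarm_rate)
    print(f"warnings on {false_alarm_rate:.0%} of safe items -> "
          f"P(danger | warning) = {posterior:.3f}")
# Prints roughly 0.490, 0.046, 0.012: once nearly everything is labeled, a warning
# tells you almost nothing beyond the 1% base rate -- the "crying wolf" failure mode.
```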
