Feb 14, by smartai.info

Bias toward previous outcomes may lead to "disaster"

Imagine a pilot flying a regular route. During the flight the weather deteriorates, and although she knows the dangers of passing through a storm, and her training tells her to deviate from the flight path or take an alternative route, she tells herself that she has flown this path before in similar conditions and nothing happened in the end. Does she continue on the same path or change it?

If you think there is no harm in continuing, you have fallen into the cognitive error known as "outcome bias". Studies have shown that we often judge a decision or behavior by its final result, ignoring the various factors that contributed to success or failure, which blinds us to errors in our thinking that may lead to disaster.

In the previous example, the decision to continue the flight carried great risks, and the pilot may have escaped imminent disaster only thanks to luck. But outcome bias blinds her to those risks and suggests that the danger was exaggerated, or that her superior skill was what got the flight completed, making her likely to take more risks the next time. The luckier she is, the less she worries about the danger.

Bias toward previous results makes us more reckless in our own decisions on the one hand, and less attentive to the consequences of incompetence and unethical behavior in our colleagues on the other.

The consequences can be truly frightening: studies suggest that this bias contributed to several famous disasters, including the loss of the space shuttle Columbia and the "Deepwater Horizon" oil spill.


Is it always the outcome that counts?

Researchers first identified outcome bias in the 1980s, thanks to a groundbreaking study of medical decisions.

The researchers gave participants descriptions of several scenarios covering the risks and benefits of different medical interventions, then asked them to judge whether the doctor's decision had been correct.


For example, some participants were told that a doctor had decided to perform a heart operation that would give the patient many years of good health, with a small chance that the patient would die during the operation, and that the patient died; others were told that the patient lived, even though the benefits and risks were identical in both cases.

Outcome bias has a powerful grip on our judgment, which makes it easy to understand why participants felt the doctor should be punished for the patient's death, even though this feeling has no basis in logic: at the time of the operation the doctor had no better way to weigh the odds of success and failure. Yet as soon as participants learned of the patient's death, the idea that the doctor had made a mistake took hold, and they questioned his competence.


"The brain finds it difficult to separate random things, along with decision quality, that jointly contribute to the outcome," says Krishna Savani of Nanyang Technological University in Singapore.

Several studies have replicated the findings of that 1988 study, confirming that negative results lead us to blame a person for events beyond his control, even when we know all the facts that justify his decision.

Studies have also shown that the same holds for success: a positive result may lead us to overlook bad decisions that should never have been taken, and thus to tolerate behavior that ought to have been rejected.

In an experiment by Francesca Gino at Harvard Business School, participants were told that a scientist had manipulated results to prove the effectiveness of a drug under testing. Gino observed that participants were more forgiving of the scientist's behavior when the drug turned out to be effective and safe than when it caused serious side effects, although the judgment of the behavior should be the same in both cases: an employee acted irresponsibly in a way that could lead to serious consequences, if not now then in the future.

The seriousness of this flaw shows up in decisions about promoting employees, or in rewarding an investor whose investments succeeded by sheer chance despite clear evidence of incompetence or unethical behavior, because the manager cannot separate the quality of the decision from the result that followed. Conversely, failure may damage someone's reputation even when the available information offers no evidence of misconduct.

"It is unfortunate that people are either praised or blamed for things by chance, and it is also the case for the policies that governments and businesses make and for various decisions," says Savani.

Outcome bias also affects sports, as shown by a recent study by Arturo Rodriguez of the University of Chile on soccer experts' ratings of players on Goal.com. The experts let the final few minutes of a match weigh heavily in their judgment of a player's performance across the whole game.

The experts' evaluations were also biased against players who did not score goals despite playing well. As Rodriguez puts it, "The effect of penalty kicks on the evaluation of players was great, even for those who did not take part in the kicks."

A narrow escape

But the most serious effect of this bias is a blindness to the risks we take. One aviation study, for example, examined pilots' assessments of flying in dangerous weather with poor visibility.

The study found that pilots tended to underestimate the risk of a flight if they had just heard that another pilot had completed the same flight in the same conditions, when in reality one flight's success is no guarantee of another's: the first pilot may have escaped by a miracle. Nevertheless, the pilots overlooked the danger because they were biased toward the final result.

Catherine Tinsley, a researcher at Georgetown University, observed a similar pattern in how people deal with natural disasters such as hurricanes. Someone who comes through a storm undamaged becomes less willing to insure against risks such as flooding before the next disaster.

Subsequent research by Tinsley indicated that the phenomenon has contributed to disasters and failures at the institutional level. NASA's space shuttle Columbia was destroyed after pieces of insulation broke away from the external tank during launch and punctured part of the shuttle. Pieces of insulating material had broken away on previous flights without anything happening, until the one time the consequences had not been accounted for.

Building on these results, Tinsley's team asked participants to judge the competence of the person responsible for a hypothetical mission that nearly failed before a lucky escape.

Tinsley found that emphasizing factors such as safety and transparency made people better at spotting signs of danger, and participants became more alert to risk when told they would later have to explain their decision to a higher authority. The conclusion is that institutions should make individuals responsible for monitoring areas of risk and reward those who uncover them.

Savani affirms that, as individuals, we can protect ourselves from outcome bias. Prompting people to think more deliberately about the circumstances of a decision or behavior has been found to guard against the bias; the aim is to identify the factors that led to an outcome, including the role of chance or luck.

Savani recommends, when evaluating our own performance or that of others, asking what would have happened if the result had been different and what factors could have produced it. If the outcome had been different, would we have judged the decision the same way?

In the example of the scientist who manipulated the drug results, even if the drug was eventually shown to be safe, imagining the worst-case scenario, in which many patients died, would have made one judge the behavior more attentively. The same exercise might prevent an accident in the example of the pilot.

Whether one is an investor, a pilot, or a NASA scientist, awareness of outcome bias keeps us from dismissing a risk simply because it happened to turn out well. Life is never free of risk, and outcomes cannot always be predicted, but recognizing this error in perception protects us from being misled by our bias toward the results of previous experiences.