
Monday, November 29, 2010

Distortions and deceptions in strategic decisions

Companies are vulnerable to misconceptions, biases, and plain old lies. But not hopelessly vulnerable.
Dan P. Lovallo and Olivier Sibony
2006 Number 1
The chief executive of a large multinational was trying to decide whether to undertake an enormous merger, one that would not only change the direction of his company but also transform its whole industry. He had gathered his top team for a final discussion. The most vocal proponent of the deal, the executive in charge of the company's largest division, extolled its purported strategic advantages, perhaps not coincidentally: if the deal went through, he would run an even larger division and be able to position himself as the CEO's undisputed successor. The CFO, by contrast, argued that the underlying forecasts were highly uncertain and that the merger's strategic rationale wasn't financially convincing. Other members of the top team said very little. Given more time to make the decision and less worry that news of the deal might leak out, the CEO doubtless would have requested additional analysis and opinion. Time, however, was tight, and in the end the CEO sided with the division head, a longtime protégé, and proposed the deal to his board, which approved it. The result was a massive destruction of value when the strategic synergies failed to materialize.
Does this composite of several real-life examples sound familiar? These circumstances certainly were not ideal for basing a strategic decision on objective data and sound business judgment. Despite the enormous resources that corporations devote to strategic planning and other decision-making processes, CEOs must often make judgments they cannot reduce to indisputable financial calculations. Much of the time such big decisions depend, in no small part, on the CEO's trust in the people making the proposals.
Strategic decisions are never simple to make, and they sometimes go wrong because of human shortcomings. Behavioral economics teaches us that a host of universal human biases, such as overoptimism about the likelihood of success, can affect strategic decisions. Such decisions are also vulnerable to what economists call the "principal-agent problem": when the incentives of certain employees are misaligned with the interests of their companies, they tend to look out for themselves in deceptive ways.
Most companies know about these pitfalls. Yet few realize that principal-agent problems often compound cognitive imperfections to form intertwined and harmful patterns of distortion and deception throughout the organization. Two distinct approaches can help companies come to grips with these patterns. First, managers can become more aware of how biases can affect their own decision making and then endeavor to counter those biases. Second, companies can better avoid distortions and deceptions by reviewing the way they make decisions and embedding safeguards into their formal decision-making processes and corporate culture.
Distortions and deceptions
Errors in strategic decision making can arise from the cognitive biases we all have as human beings. These biases distort the way people collect and process information. Errors can also arise from interactions in organizational settings, where judgment may be colored by self-interest that leads employees to perpetrate more or less conscious deceptions (Exhibit 1).
Distortions
Of all the documented cognitive distortions, overoptimism and loss aversion (the human tendency to experience losses more acutely than gains) are the most likely to lead people who make strategic decisions astray, because decisions with an element of risk, which is to say all strategic ones, have two essential components. The first is a judgment about the likelihood of a given outcome; the second is a value or utility placed on it.
When judging the likelihood of potentially positive outcomes, human beings have an overwhelming tendency to be overoptimistic or overconfident: they think that the future will be great, especially for them. Almost all of us believe ourselves to be in the top 20 percent of the population when it comes to driving, pleasing a partner, or managing a business. In the making of strategic decisions, optimism not only generates unrealistic forecasts but also leads managers to underestimate future challenges more subtly, for instance by ignoring the risk of a clash between corporate cultures after a merger.
When probabilities are based on repeated events and can therefore often be well defined, optimism is less of a factor. But loss aversion is still a concern. Research shows that if a 50-50 gamble could cost the gambler $1,000, most people, given an objective assessment of the odds, would demand an upside of $2,000 to $2,500.
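To see what that figure implies, the short Python sketch below works through the arithmetic under an assumed piecewise-linear value function in which losses are felt about 2.25 times as strongly as gains (a coefficient inferred from the $2,000-to-$2,500 range above, not a number from the article). With that weighting, a 50-50 gamble that risks $1,000 only starts to feel acceptable once the upside reaches roughly $2,250.

# Minimal sketch of loss aversion, assuming a piecewise-linear value function:
# gains count at face value, losses count LAMBDA times as much.
# LAMBDA = 2.25 is an illustrative assumption implied by the $2,000-$2,500 figure.

LAMBDA = 2.25

def felt_value(outcome):
    """Subjective value of a monetary outcome for a loss-averse decision maker."""
    return outcome if outcome >= 0 else LAMBDA * outcome

def gamble_appeal(upside, downside, p_win=0.5):
    """Expected felt value of a gamble; positive means it feels worth taking."""
    return p_win * felt_value(upside) + (1 - p_win) * felt_value(-downside)

for upside in (1_000, 2_000, 2_500):
    print(upside, gamble_appeal(upside, 1_000))
# A $1,000 upside feels like -625 and $2,000 still feels like -125;
# only around $2,250 does the gamble break even subjectively.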

Overoptimism affects judgments of probability and tends to produce overcommitment. Loss aversion influences outcome preferences and leads to inaction and undercommitment. But the fact that overoptimism and loss aversion represent opposing tendencies doesn't mean that they always counteract each other.
Loss aversion wouldn't have such a large effect on decisions made in times of uncertainty if people viewed each gamble not in isolation but as one of many taken during their own lives or the life of an organization. But executives, like all of us, tend to evaluate every option as a change from a reference point (usually the status quo), not as one of many possibilities for gains and losses over time across the organization. From the latter perspective, it makes sense to take more risks. Most of the phenomena commonly grouped under the label of risk aversion actually reflect loss aversion, for if we integrated most gambles into a broader set, we would end up risk neutral for all but the largest risks. This truth has important implications for strategic decision making.
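The value of seeing gambles in aggregate can be illustrated with a small simulation (a hypothetical Python sketch; the bet terms and counts are assumptions, not figures from the article). A single 50-50 bet with a modest positive edge loses money about half the time, but a portfolio of fifty such independent bets loses money far less often, which is why near risk neutrality is a sensible stance for all but the largest decisions.

import random

# Illustrative assumption: each bet wins $1.5M or loses $1.0M with equal probability.
WIN, LOSS = 1.5, -1.0
N_BETS, N_TRIALS = 50, 10_000
random.seed(0)

def portfolio_outcome(n_bets):
    """Total payoff of n independent 50-50 bets."""
    return sum(WIN if random.random() < 0.5 else LOSS for _ in range(n_bets))

single_loss = sum(portfolio_outcome(1) < 0 for _ in range(N_TRIALS)) / N_TRIALS
portfolio_loss = sum(portfolio_outcome(N_BETS) < 0 for _ in range(N_TRIALS)) / N_TRIALS

print(f"Chance a single bet loses money:       {single_loss:.0%}")    # roughly 50%
print(f"Chance a 50-bet portfolio loses money: {portfolio_loss:.0%}")  # well under 10%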
Deceptions
The strategic decisions that companies make result from interactions among their executives: a manager proposes an investment, for example, and an executive committee reviews and evaluates it. In this kind of setting, a conflict of interest often arises between an "agent" (in this case, the manager) and the "principal" (the corporation) on whose behalf the agent acts.

Such "agency problems," which occur when the agent's incentives aren't perfectly aligned with the principal's interests, can lead to more or less intentional deceptions-misleading information provided to others-that compound the problem of the agent's unintentional distortions. Recall the CEO who was grappling with the big merger decision: trusting the protégé (the head of the largest division) exposed the CEO to the risk that the merger's proponent was not only over optimistic but also attempting to further his own career by exaggerating the deal's upside or underestimating its risks.
When companies evaluate strategic decisions, three conditions frequently create agency problems. One is the misalignment of time horizons between individuals and corporations. Several consumer goods companies, for example, have noted that brand managers who rotate quickly in and out of their jobs tend to favor initiatives (such as introducing new product variants) with a short-term payback. These managers' deception, intentional or not, is to advance only certain projects: those aligned with their interests. The development of radically new products or other important projects with longer payback times can rarely succeed without a senior sponsor who is likely to be around longer.
Another problem that can generate harmful deceptions is the differing risk profiles of individuals and organizations. Consider a real-life example. A midlevel executive at a large manufacturing company decided not to propose a capital investment that had a 50-50 chance of either losing the entire $2 million investment or returning $10 million. Even allowing for his natural loss aversion, the chance of a 5:1 gain should have enticed him to accept the bet, and his superiors, for the same reason, would have deemed it attractive. Instead, he worried that if the investment failed, his reputation and career prospects would take a blow, though he didn't anticipate being punished if the investment was forgone. As a result, he decided not to recommend it and thus in effect acted deceptively by not promoting an attractive investment. This asymmetry between the consequences of action and inaction is called the "omission bias," and here it magnified the executive's loss aversion.
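The arithmetic behind this example is worth spelling out (a back-of-the-envelope Python calculation; the loss-aversion multiplier is the same illustrative assumption used earlier, not a figure from the example). The investment's expected value is strongly positive, and it remains attractive even after weighting the potential loss more heavily, so the executive's silence is explained by the career asymmetry rather than by the economics.

# Back-of-the-envelope check of the capital investment described above.
# Terms from the example: a 50-50 chance of losing $2M or gaining $10M.
P_WIN, GAIN, LOSS = 0.5, 10.0, 2.0   # in $ millions
LAMBDA = 2.25                        # assumed loss-aversion multiplier (illustrative)

expected_value = P_WIN * GAIN - (1 - P_WIN) * LOSS
loss_averse_value = P_WIN * GAIN - (1 - P_WIN) * LAMBDA * LOSS

print(f"Expected value:            ${expected_value:.2f}M")      # $4.00M
print(f"Value under loss aversion: ${loss_averse_value:.2f}M")   # $2.75M, still positive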
The final agency issue arises from the likelihood that a subordinate knows much more than a superior does about a given issue. Higher-ranking executives must therefore make judgments about not just the merits of a proposal but also their trust in the person advancing it. This is unavoidable and usually acceptable: after all, what more important decision do CEOs make than choosing their closest associates? The tendency, however, is to rely too much on signals based on a person's reputation precisely when they are least likely to be predictive: in novel, uncertain environments such as that of the multinational that went ahead with the megamerger. We call the tendency to place too much weight on a person's reputation, thereby increasing the exposure to deception, the "champion bias."
Furthermore, the multinational's merger decision exhibited an element of "sunflower management": the inclination of people in organizations to align themselves with the leader's real or assumed viewpoint. The CEO had expected to find dissenting voices among his senior executives. But except for the CFO, they believed that the CEO favored the deal and that the merger would proceed no matter what they said, so they kept their doubts to themselves for fear of harming their careers. In effect, they misled the CEO by suppressing what they really thought about the deal.
Improving individual decisions
Knowing that human nature may lead decision making astray, wise executives can use this insight to fortify their judgment when they make important decisions. To do so, however, they must know which bias is most likely to affect the decision at hand. Exhibit 2 offers a road map for the types of decisions where overoptimism or excessive risk aversion is likely to be the determining factor.
In general, the key to reducing overoptimism is to improve the learning environment by generating frequent, rapid, and unambiguous feedback. In the absence of such an environment (for instance, when companies face rare and unusual decisions, which, unfortunately, are the most important ones), there is a bias toward optimistic judgments of the odds. The size of a decision determines the appropriate degree of risk aversion. For major decisions, a certain amount of it makes sense: nobody wants to bet the farm. For smaller ones, it doesn't, though it often prevails for reasons we'll soon explore. Companies should see minor decisions as part of a long-term, diversified (and thus risk-mitigating) strategy.
As Exhibit 2 shows, companies don't always rationally factor risk into their decisions. In the large, infrequent ones represented in the exhibit's upper-left quadrant (for instance, the industry-transforming merger that went horribly wrong), there is a tendency to take an overly optimistic view. In essence, faulty judgments lead executives to take risks they would have avoided if they had judged the odds accurately. Since executives facing such a rare decision can't benefit from their own experience, they should learn from the experience of other companies by collecting case studies of similar decisions to provide a reference class for comparison.
