Sunday, August 21, 2011

Baseball and Linguistic Uncertainty

In my youth I played an inordinate amount of baseball, collected baseball cards, and idolized baseball players. I've outgrown all that but when I'm in the States during baseball season I do enjoy watching a few innings on the TV.

So I was watching a baseball game recently and the commentator was talking about the art of pitching. Throwing a baseball, he said, is like shooting a shotgun. You get a spray. As a pitcher, you have to know your spray. You learn to control it, but you know that it is there. The ball won't always go where you want it. And furthermore, where you want the ball depends on the batter's style and strategy, which vary from pitch to pitch for every batter.

That's baseball talk, but it stuck in my mind. Baseball pitchers must manage uncertainty! And it is not enough to reduce it and hope for the best. Suppose you want to throw a strike. It's not a good strategy to aim directly at, say, the lower outside corner of the strike zone, because of the spray of the ball's path and because the batter's stance can shift. Especially if the spray is skewed down and out, you'll want to move up and in a bit.
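Here's a toy way to see it (a back-of-the-envelope simulation; the strike-zone dimensions, spray size, and skew below are invented purely for illustration): if the spray is skewed down and away, shifting the aim point up and in raises the chance of a strike.

    import numpy as np

    # Toy illustration with invented numbers: a pitch "sprays" around the aim point,
    # skewed down and away. Compare aiming at the corner vs. aiming up-and-in a bit.
    rng = np.random.default_rng(1)
    n = 100_000

    def strike_prob(aim_x, aim_z):
        # Rough strike zone (feet): x in [-0.83, 0.83] horizontally, z in [1.5, 3.5] vertically.
        x = aim_x + rng.normal(-0.15, 0.30, n)   # spray skewed away (negative x)
        z = aim_z + rng.normal(-0.20, 0.35, n)   # spray skewed down
        return np.mean((np.abs(x) <= 0.83) & (z >= 1.5) & (z <= 3.5))

    corner = (-0.83, 1.5)                        # aim at the low outside corner
    adjusted = (-0.55, 1.95)                     # aim up and in a bit

    print("aim at the corner :", round(strike_prob(*corner), 3))
    print("aim up and in     :", round(strike_prob(*adjusted), 3))

The particular offsets don't matter; what matters is that the aim point is chosen with the spray in mind, not at the nominal target.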

This is all very similar to the ambiguity of human speech when we pitch words at each other. Words don't have precise meanings; meanings spread out like the pitcher's spray. If we want to communicate precisely we need to be aware of this uncertainty, and manage it, taking account of the listener's propensities.

Take the word "liberal" as it is used in political discussion.

For many decades, "liberals" have tended to support high taxes to provide generous welfare, public medical insurance, and low-cost housing. They advocate liberal (meaning magnanimous or abundant) government involvement for the citizens' benefit.

A "liberal" might also be someone who is open-minded and tolerant, who is not strict in applying rules to other people, or even to him or herself. Such a person might be called "liberal" (meaning advocating individual rights) for opposing extensive government involvement in private decisions. For instance, liberals (in this second sense) might oppose high taxes since they reduce individuals' ability to make independent choices. As another example, John Stuart Mill opposed laws which restricted the rights of women to work (at night, for instance), even though these laws were intended to promote the welfare of women. Women, insisted Mill, are intelligent adults and can judge for themselves what is good for them.

Returning to the first meaning of "liberal" mentioned above, people of that strain may support restrictions of trade to countries which ignore the health and safety of workers. The other type of "liberal" might tend to support unrestricted trade.

Sending out words and pitching baseballs are both like shooting a shotgun: meanings (and baseballs) spray out. You must know what meaning you wish to convey, and what other meanings the word can have. The choice of the word, and the crafting of its context, must manage the uncertainty of where the word will land in the listener's mind.


Let's go back to baseball again.

If there were no uncertainty in the pitcher's pitch and the batter's swing, then baseball would be a dreadfully boring game. If the batter knows exactly where and when the ball will arrive, and can completely control the bat, then every swing will be a homer. Or conversely, if the pitcher always knows exactly how the batter will swing, and if each throw is perfectly controlled, then every batter will strike out. But which is it? Whose certainty dominates? The batter's or the pitcher's? It can't be both. There is some deep philosophical problem here. Clearly there cannot be complete certainty in a world which has some element of free will, or surprise, or discovery. This is not just a tautology, a necessary result of what we mean by "uncertainty" and "surprise". It is an implication of limited human knowledge. Uncertainty - which makes baseball and life interesting - is inevitable in the human world.

How does this carry over to human speech?

It is said of the Wright brothers that they thought so synergistically that one brother could finish an idea or sentence begun by the other. If there is no uncertainty in what I am going to say, then you will be bored with my conversation, or at least, you won't learn anything from me. It is because you don't know what I mean by, for instance, "robustness", that my speech on this topic is enlightening (and maybe interesting). And it is because you disagree with me about what robustness means (and you tell me so), that I can perhaps extend my own understanding.

So, uncertainty is inevitable in a world that is rich enough to have surprise or free will. Furthermore, this uncertainty leads to a process - through speech - of discovery and new understanding. Uncertainty, and the use of language, leads to discovery.

Isn't baseball an interesting game?

Friday, August 12, 2011

(Even) God is a Satisficer

To 'satisfice' means "To decide on and pursue a course of action that will satisfy the minimum requirements necessary to achieve a particular goal." (Oxford English Dictionary). Herbert Simon (1978 Nobel Prize in Economics) was the first to use the term in this technical sense; the word itself is an old alteration of the ordinary English word "satisfy". Simon wrote (Psychological Review, 63(2), 129-138 (1956)) "Evidently, organisms adapt well enough to 'satisfice'; they do not, in general, 'optimize'." Agents satisfice, according to Simon, because of limitations of their information, understanding, and cognitive or computational ability. These limitations, which Simon called "bounded rationality", force agents to look for solutions which are good enough, though not necessarily optimal. The optimum may exist but it cannot be known by the resource- and information-limited agent.

There is a deep psychological motivation for satisficing, as Barry Schwartz discusses in The Paradox of Choice: Why More Is Less. "When people have no choice, life is almost unbearable." But as the number and variety of choices grows, the challenge of deciding "no longer liberates, but debilitates. It might even be said to tyrannize." (p.2) "It is maximizers who suffer most in a culture that provides too many choices" (p.225) because their expectations cannot be met, they regret missed opportunities, worry about social comparison, and so on. Maximizers may acquire or achieve more than satisficers, but satisficers will tend to be happier.

Psychology is not the only realm in which satisficing finds its roots. Satisficing - as a decision strategy - has systemic or structural advantages that suggest its prevalence even in situations where the complexity of the human psyche is irrelevant. We will discuss an example from the behavior of animals.

Several years ago an ecological colleague of mine at the Technion, Prof. Yohay Carmel, posed the following question: Why do foraging animals move from one feeding site to another later than strategies aimed at maximizing caloric intake would suggest? Of course, animals have many goals in addition to foraging. They must keep warm (or cool), evade predators, rest, reproduce, and so on. Many mathematical models of foraging by animals attempt to predict "patch residence times" (PRTs): how long the animal stays at one feeding patch before moving to the next one. A common conclusion is that patch residence times are under-predicted when the model assumes that the animal tries to maximize caloric intake. Models do exist which "patch up" the PRT paradox, but the quandary persists.

Yohay and I wrote a paper in which we explored a satisficing - rather than maximizing - model for patch residence time. Here's the idea. The animal needs a critical amount of energy to survive until the next foraging session. More food might be nice, but it's not necessary for survival. The animal's foraging strategy must maximize the confidence in achieving the critical caloric intake. So maximization is taking place, but not maximization of the substantive "good" (calories); rather, maximization of the confidence (or reliability, or likelihood, though these are more technical terms) of meeting the survival requirement. We developed a very simple foraging model based on info-gap theory. The model predicts PRTs for a large number of species - including invertebrates, birds and mammals - that tend to be longer (and thus more realistic) than those predicted by energy-maximizing models.
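Here is a minimal numerical sketch of that logic. To be clear, this is not the model from our paper; the gain curve, travel time, critical intake, and uncertainty model below are invented purely to show the mechanism: if the animal demands robustness to error in its estimated gain rate, its predicted residence time stretches beyond the calorie-rate-maximizing departure time.

    import numpy as np

    # Illustrative sketch only, not the model from the paper; all numbers are invented.
    G, r_hat, tau, C = 100.0, 0.15, 10.0, 75.0   # patch size, estimated gain rate, travel time, critical intake

    def gain(t, r):
        # Cumulative caloric gain after t minutes in a patch (diminishing returns).
        return G * (1.0 - np.exp(-r * t))

    t = np.linspace(0.1, 60.0, 600)

    # 1. Calorie-rate maximizer: leave when gain(t)/(t + tau) peaks (marginal-value-theorem logic).
    t_max = t[np.argmax(gain(t, r_hat) / (t + tau))]

    # 2. Robust-satisficer: the gain rate is uncertain and may fall short of r_hat by a fraction alpha.
    #    Robustness of staying t minutes = largest alpha for which the intake still reaches C:
    #    alpha_hat(t) = 1 - ln(G/(G-C)) / (r_hat * t).  It grows with t, so demanding
    #    robustness means staying longer.
    alpha_hat = np.maximum(0.0, 1.0 - np.log(G / (G - C)) / (r_hat * t))
    t_robust = t[np.argmax(alpha_hat >= 0.3)]    # first residence time that tolerates a 30% rate error

    print(f"rate-maximizing residence time: {t_max:4.1f} min")
    print(f"robust-satisficing residence:   {t_robust:4.1f} min")

With these made-up numbers the robust-satisficing forager stays noticeably longer than the rate maximizer, which is the direction the field data point in.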

This conclusion - that satisficing predicts observed foraging times better than maximizing - is tentative and preliminary (like most scientific conclusions). Nonetheless, it seems to hold a grain of truth, and it suggests an interesting idea. Consider the following syllogism.

1. Evolution selects those traits that enhance the chance of survival.

2. Animals seem to have evolved strategies for foraging which satisfice (rather than maximize) the energy intake.

3. Hence satisficing seems to be competitively advantageous. Satisficing seems to be a better bet than maximizing.

Unlike my psychologist colleague Barry Schwartz, we are not talking about happiness or emotional satisfaction. We're talking about survival of dung flies or blue jays. It seems that aiming to do good enough, but not necessarily the best possible, is the way the world is made.

And this brings me to the suggestion that (even) God is a satisficer. The word "good" appears quite early in the Bible: in the 4th verse of the 1st chapter of Genesis, the very first book: "And God saw the light [that had just been created] that it was good...". At this point, when the world is just emerging out of tohu v'vohu (chaos), we should probably understand the word "good" as a binary category, as distinct from "bad" or "chaos". The meaning of "good" is subsequently refined through examples in the coming verses. God creates dry land and oceans and sees that it is good (1:10). Grass and fruit trees are seen to be good (1:12). The sun and moon are good (1:16-18). Swarming sea creatures, birds, and beasts are good (1:20-21, 25).

And now comes a real innovation. God reviews the entire creation and sees that it is very good (1:31). It turns out that goodness comes in degrees; it's not simply binary: good or bad. "Good" requires judgment; ethics is born. But what particularly interests me here is that God's handiwork isn't excellent. Shouldn't we expect the very best? I'll leave this question to the theologians, but it seems to me that God is a satisficer.

Tuesday, August 9, 2011

No-Failure Design and Disaster Recovery: Lessons from Fukushima

One of the striking aspects of the early stages of the nuclear accident at Fukushima-Daiichi last March was the nearly total absence of disaster recovery capability. For instance, while Japan is a super-power of robotic technology, the nuclear authorities had to import robots from France for probing the damaged nuclear plants. Fukushima can teach us an important lesson about technology.

The failure of critical technologies can be disastrous. The crash of a civilian airliner can cause hundreds of deaths. The meltdown of a nuclear reactor can release highly toxic isotopes. Failure of flood protection systems can result in vast death and damage. Society therefore insists that critical technologies be designed, operated and maintained to extremely high levels of reliability. We benefit from technology, but we also insist that the designers and operators "do their best" to protect us from their dangers.

Industries and government agencies who provide critical technologies almost invariably act in good faith for a range of reasons. Morality dictates responsible behavior, liability legislation establishes sanctions for irresponsible behavior, and economic or political self-interest makes continuous safe operation desirable.

The language of performance-optimization - not only doing our best, but also achieving the best - may tend to undermine the successful management of technological danger. A probability of severe failure of one in a million per device per year is exceedingly - and very reassuringly - small. When we honestly believe that we have designed and implemented a technology to have vanishingly small probability of catastrophe, we can honestly ignore the need for disaster recovery.

Or can we?

Let's contrast this with an ethos that is consistent with a thorough awareness of the potential for adverse surprise. We now acknowledge that our predictions are uncertain, perhaps highly uncertain on some specific points. We attempt to achieve very demanding outcomes - for instance vanishingly small probabilities of catastrophe - but we recognize that our ability to reliably calculate such small probabilities is compromised by the deficiency of our knowledge and understanding. We robustify ourselves against those deficiencies by choosing a design which would be acceptable over a wide range of deviations from our current best understanding. (This is called "robust-satisficing".) Not only does "vanishingly small probability of failure" still entail the possibility of failure, but our predictions of that probability may err.

Acknowledging the need for disaster recovery capability (DRC) is awkward and uncomfortable for designers and advocates of a technology. We would much rather believe that DRC is not needed, that we have in fact made catastrophe negligible. But let's not conflate good-faith attempts to deal with complex uncertainties, with guaranteed outcomes based on full knowledge. Our best models are in part wrong, so we robustify against the designer's bounded rationality. But robustness cannot guarantee success. The design and implementation of DRC is a necessary part of the design of any critical technology, and is consistent with the strategy of robust satisficing.

One final point: moral hazard and its dilemma. The design of any critical technology entails two distinct and essential elements: failure prevention and disaster recovery. What economists call a "moral hazard" exists since the failure prevention team might rely on the disaster-recovery team, and vice versa. Each team might, at least implicitly, depend on the capabilities of the other team, and thereby relinquish some of its own responsibility. Institutional provisions are needed to manage this conflict.

The alleviation of this moral hazard entails a dilemma. Considerations of failure prevention and disaster recovery must be combined in the design process. The design teams must be aware of each other, and even collaborate, because a single coherent system must emerge. But we don't want either team to relinquish any responsibility. On the one hand we want the failure prevention team to work as though there is no disaster recovery, and the disaster recovery team should presume that failures will occur. On the other hand, we want these teams to collaborate on the design.

This moral hazard and its dilemma do not obviate the need for both elements of the design. Fukushima has taught us an important lesson by highlighting the special challenge of high-risk critical technologies: design so failure cannot occur, and prepare to respond to the unanticipated.

The Innovation Dilemma

"If it ain't broken, don't fix it."Sound advice, but limited to situations where "fixing it" only entails restoring past performance. In contrast, innovations entail substantive improvements over the past. Innovations are not just corrections of past mistakes, but progress towards a better future.

However, innovations often present a challenging dilemma to decision makers. Many decisions require choosing between options, one of which is potentially better in outcome but also markedly more uncertain. In these situations the decision maker faces an "innovation dilemma."

The innovation dilemma arises in many contexts. Here are a few examples.

Technology. New and innovative technologies are often advocated because of their purported improvements on existing products or methods. However, what is new is usually less well-known and less widely tested than what is old. The range of possible adverse (or favorable) surprises of an innovative technology may exceed the range of surprise for a tried-and-true technology. The analyst who must choose between innovation and convention faces an innovation dilemma.

Investment. The economic investor faces an innovation dilemma when choosing between investing in a promising but unknown new start-up and investing in a well-known existing firm.

Auction. "Nothing ventured, nothing gained" is the motto of the risk-taker, while the risk-avoider responds: "Nothing ventured, nothing lost". The innovation dilemma is embedded in the choice between these two strategies. Consider for example the "winner's curse" in auction theory. You can make a financial bid for a valuable piece of property, which will be sold to the highest bidder. You have limited information about the other bidders and about the true value of the property. If you bid high you might win the auction but you might also pay more than the property is worth. Not bidding is risk-free because it avoids the purchase. The choice between a high bid and no bid is an innovation dilemma.

Employer decision. An employer must decide whether or not to replace a current satisfactory employee with a new candidate whose score on a standardized test was high. A high score reflects great ability. However, the score also contains a random element, so a high score may result from chance, and not reflect true ability. The innovation dilemma is embedded in the employer's choice between the current adequate employee and a high-scoring new candidate.

Natural resource exploitation. Permitting the extraction of offshore petroleum resources may be productive in terms of petroleum yield but may also present officials with significant uncertainty about environmental consequences.

Public health. Implementation of a large-scale immunization program may present policy officials with worries about uncertain side effects.

Agricultural policy. New technologies promise improved production efficiency or new consumer choices, but with uncertain benefits and costs and potential unanticipated adverse effects resulting from use of manufactured inputs such as fertilizers, pesticides, and machinery, and, more recently, genetically engineered seed varieties and information technology. (I am indebted to L. Joe Moffitt and Craig Osteen for these examples in natural resources, public health and agriculture.)

An essay like this one should - according to custom - end with a practical prescription: What to do about the innovation dilemma? You need to make a decision - a choice between options - and you face an innovation dilemma. How to choose? All I'll say is that the first step is to identify what you need to achieve from this decision. Recognizing the vast uncertainties which accompany the decision, choose the option which achieves the required outcome over the largest range of uncertain contingencies.

If you want more of an answer than that, consult your favorite decision theory (like info-gap theory, for instance).
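For a rough feel of what that prescription does, here is a sketch (invented numbers and an info-gap-style robustness function; not a result from any particular study). An innovative option has the higher estimated outcome but the larger uncertainty; a conventional option has a lower estimate but is better understood. Which option is more robust depends on how much you require.

    # Sketch of a robust-satisficing comparison (invented numbers, info-gap-style robustness).
    # Each option has an estimated outcome x_hat and an uncertainty weight s; the true outcome
    # may fall below the estimate. Robustness of an option for requirement q:
    #     alpha_hat(q) = max{alpha : x_hat - alpha * s >= q} = (x_hat - q) / s   (0 if negative)

    options = {
        "innovation": {"x_hat": 10.0, "s": 4.0},   # better estimate, more uncertain
        "convention": {"x_hat":  7.0, "s": 1.0},   # worse estimate, better understood
    }

    def robustness(x_hat, s, q):
        return max(0.0, (x_hat - q) / s)

    for q in (4.0, 6.5, 8.0):                      # required outcome (critical level)
        best = max(options, key=lambda k: robustness(**options[k], q=q))
        row = ", ".join(f"{k}: {robustness(**options[k], q=q):4.2f}" for k in options)
        print(f"required outcome {q}:  {row}  ->  prefer {best}")

The point is the reversal, not the numbers: for modest requirements the conventional option can absorb more error, while only the innovation can plausibly deliver very demanding outcomes.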

I will conclude by drawing a parallel between the innovation dilemma and one of the oldest quandaries in political philosophy. In The Evolution of Political Thought C. Northcote Parkinson explains the historically recurring tension between freedom and equality.

Freedom. People have widely varying interests and aptitudes. Hence a society that offers broad freedom for individuals to exploit their abilities, will also develop a wide spread of wealth, accomplishment, and status. Freedom enables individuals to explore, invent, discover, and create. Freedom is the recipe for innovation. Freedom induces both uncertainty and inequality.

Equality. People have widely varying interests and aptitudes. Hence a society that strives for equality among its members can achieve this by enforcing conformity and by transferring wealth from rich to poor. The promise of a measure of equality is a guarantee of a measure of security, a personal and social safety net. Equality reduces both uncertainty and freedom.

The dilemma is that a life without freedom is hardly human, but freedom without security is the jungle. And life in the jungle, as Hobbes explained, is "solitary, poor, nasty, brutish and short".

Monday, August 8, 2011

Doing Our Best: Economics and Optimization

According to leading economic scholars, economic agents are optimizers. Rational behavior is maximization in the pursuit of self interest. Firms try to maximize profits, households try to maximize utility, governments try to maximize social welfare. Those who fall short of the optimum are weeded out by competition. Optimization has its own moral imperative: we should all do our best, and if we don't, then it's the other guy's turn at bat.

Is this true? Should we rely on economic models which adopt maximization as an axiom?

Biological evolution is a powerful metaphor for economics. Consider a squirrel nibbling acorns, and noticing a stand of fine oaks in the distance. There are probably better acorns there, but also other squirrels and predators. How long should the squirrel forage here before moving there? What strategy should guide the decision? The squirrel needs a critical amount of energy to survive the night. Maximizing caloric intake is not necessary. Maximizing the reliability of achieving the critical intake is necessary. What is maximized is not the substantive "good" (calories), but confidence in satisfying a critical requirement.
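A toy comparison (all numbers invented) makes the distinction concrete: one patch is richer on average but erratic, the other poorer on average but dependable, and the dependable patch wins on the probability of reaching the critical intake.

    import numpy as np

    # Toy illustration with invented numbers: two patches, one richer on average but erratic,
    # one poorer on average but dependable.  The critical nightly requirement is 50 calories.
    rng = np.random.default_rng(0)
    critical = 50.0

    patch_rich     = rng.normal(loc=60.0, scale=30.0, size=100_000)   # high mean, high variance
    patch_reliable = rng.normal(loc=55.0, scale=5.0,  size=100_000)   # lower mean, low variance

    print("mean intake:     rich =", patch_rich.mean().round(1),
          "  reliable =", patch_reliable.mean().round(1))
    print("P(intake >= 50): rich =", (patch_rich >= critical).mean().round(3),
          "  reliable =", (patch_reliable >= critical).mean().round(3))
    # The rich patch maximizes expected calories; the reliable patch maximizes the
    # confidence of meeting the critical requirement.  The satisficing squirrel picks it.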

Fifty years ago, Herbert Simon (Nobel Prize in economics, 1978) advanced the idea that economic agents lack information, understanding, and the ability to process data. These deficiencies, which he called their "bounded rationality", force agents to look for solutions which are good enough, though not necessarily optimal. The optimum may exist but it cannot be known by the resource- and information-limited agent. "Satisficing" is what Simon called this strategy of settling for a solution which is good enough, as opposed to optimizing.

But academic economists seem to take scant notice of Simon's work. Like the weather in Twain's quip, satisficing is something they all talk about but nobody does anything about. Rationality, we learn, is optimization of profit or utility.

This very conventional attitude to rationality may be related to the long list of unresolved economic paradoxes. One example began with a 1985 article by Mehra and Prescott (the latter won the 2004 Nobel Prize in economics) entitled "The equity premium: A puzzle". A decade later Kocherlakota published "The equity premium: It's still a puzzle". There are many theoretical explanations of the equity premium puzzle (EPP), but no consensus. In fact, there is no consensus that a long-term equity premium even exists.

What is the EPP, and what can we learn about optimization in economics?

Stocks are riskier than US government bonds, so the average return to stocks should be higher. Otherwise who would look at stocks? This is sound common sense, and many economists have shown that the annual return to stocks is higher than to bonds, typically by 7%, sometimes by as much as 20% or as little as 0.3%. Does this "risk premium" for stocks make sense in terms of rational (read: maximizing) behavior? The puzzle (assuming the premium is real) is that standard asset pricing models can explain the EPP only by assuming that investors are implausibly averse to risk. The observed behavior of investors in other risky settings suggests that they would be willing to accept a much lower equity premium for stocks.

There are, as noted, many attempts to resolve the EPP, including that it is a statistical chimera. But what these explanations have usually not challenged is the assumption of optimization. Explanations of the EPP usually assume that investors try to maximize their returns rather than trying to achieve adequately large returns (e.g., larger than the competition).

Robustness to uncertainty is the key to understanding the role of satisficing in explaining the EPP.

Suppose we hear that so-and-so offers higher returns than anyone else, though there are many risks involved. Such large returns would be welcome, but the investor would be satisfied with lower returns, and lower returns are offered by more than one alternative. This means that by relinquishing the goal of maximizing (even on average), the investor is able to choose among more alternatives, all of which are equivalent in terms of expected average return. The investor who satisfices (rather than maximizes) can choose the alternative which would yield the required return over the greatest range of uncertain future scenarios. That is, the investor foregoes some aspiration for profit in exchange for some immunity against surprise. In other words, satisficing is more robust to uncertainty than optimizing. If satisficing - rather than maximizing - is in some sense a better bet, then it will tend to persist under uncertain competition.
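A small sketch (invented numbers, the same info-gap-style robustness as before) shows the exchange: a maximizer ranks assets by estimated return, while a satisficer who needs only a modest return ranks them by how much estimation error each can tolerate while still delivering that return.

    # Sketch with invented numbers: three assets, their estimated annual returns, and an
    # info-gap-style uncertainty weight s (how badly the estimate might err).
    assets = {
        "hot fund":    {"x_hat": 0.12, "s": 0.100},  # highest estimate, least well understood
        "index fund":  {"x_hat": 0.08, "s": 0.040},
        "bond ladder": {"x_hat": 0.05, "s": 0.008},  # lowest estimate, best understood
    }

    def robustness(x_hat, s, q):
        # Largest error horizon (in units of s) at which the return still meets the requirement q.
        return max(0.0, (x_hat - q) / s)

    q_required = 0.04                                # the return the investor actually needs
    maximizer  = max(assets, key=lambda k: assets[k]["x_hat"])
    satisficer = max(assets, key=lambda k: robustness(**assets[k], q=q_required))

    for name, a in assets.items():
        print(f"{name:12s} estimate {a['x_hat']:.1%}   robustness at 4%: {robustness(**a, q=q_required):.2f}")
    print("return-maximizer picks :", maximizer)
    print("robust-satisficer picks:", satisficer)

Lowering the required return enlarges the set of acceptable assets and increases the robustness of the best of them; that is the exchange of aspiration for immunity described above.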

Now we can understand that equity premia should be lower than predicted by theories which assume that investors try to maximize their returns. Satisficers have to satisfy their requirements (or those of their clients). This may mean beating the competition over the long run, but it does not mean being the best that is conceivable. Satisficing strategies tend to persist because they robustly meet critical requirements. And since satisficing entails a preference for less-than-maximal options, we should expect the market to lower the equity premium for risk.

There is a broader policy implication of the logic of satisficing. Going after critical requirements is a better bet for "survival" than going after what seems optimal. This is true of all decisions under uncertainty: monetary and fiscal policy, resource economics, pollution control, climate change, etc. We must ask what outcomes are critical, not what are the best predicted outcomes. Critical requirements are usually more modest than the best anticipated outcome, so there will usually be numerous ways to achieve them. We should choose the option which will lead to the required outcome most robustly.

Reasoned pursuit of self-interest? Yes. Use of data, models and theories to inform our decisions? Yes. But economists who equate optimization of outcomes with economic rationality are mistaken. And regarding scholars in general, let's recall the advice of the ancient Jewish sage Akiva: "Don't live in a town whose mayor is a scholar" (Talmud, Pesachim, ch. 10, 112a).