Monday, October 31, 2011

The Language of Science and the Tower of Babel


And God said: Behold one people with one language for them all ... and now nothing that they venture will be kept from them. ... [And] there God mixed up the language of all the land. (Genesis, 11:6-9)

"Philosophy is written in this grand book the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and to read the alphabet in which it is composed. It is written in the language of mathematics." Galileo Galilei

Language is power over the unknown. 

Mathematics is the language of science, and computation is the modern voice in which this language is spoken. Scientists and engineers explore the book of nature with computer simulations of swirling galaxies and colliding atoms, crashing cars and wind-swept buildings. The wonders of nature and the powers of technological innovation are displayed on computer screens, "continually open to our gaze." The language of science empowers us to dispel confusion and uncertainty, but only with great effort do we change the babble of sounds and symbols into useful, meaningful and reliable communication. How we do that depends on the type of uncertainty against which the language struggles.

Mathematical equations encode our understanding of nature, and Galileo exhorts us to learn this code. One challenge here is that a single equation represents an infinity of situations. For instance, the equation describing a flowing liquid captures water gushing from a pipe, blood coursing in our veins, and a droplet splashing from a puddle. Gazing at the equation is not at all like gazing at the droplet. Understanding grows by exposure to pictures and examples. Computations provide numerical examples of equations that can be realized as pictures. Computations can simulate nature, allowing us to explore at our leisure.

Two questions face the user of computations: Are we calculating the correct equations? Are we calculating the equations correctly? The first question expresses the scientist's ignorance - or at least uncertainty - about how the world works. The second question reflects the programmer's ignorance or uncertainty about the faithfulness of the computer program to the equations. Both questions deal with the fidelity between two entities. However, the entities involved are very different and the uncertainties are very different as well.

The scientist's uncertainty is reduced by the ingenuity of the experimenter. Equations make predictions that can be tested by experiment. For instance, Galileo predicted that small and large balls will fall at the same rate, as he is reported to have tested from the tower of Pisa. Equations are rejected or modified when their predictions don't match the experimenter's observation. The scientist's uncertainty and ignorance are whittled away by testing equations against observation of the real world. Experiments may be extraordinarily subtle or difficult or costly because nature's unknown is so endlessly rich in possibilities. Nonetheless, observation of nature remorselessly cuts false equations from the body of scientific doctrine. God speaks through nature, as it were, and "the Eternal of Israel does not deceive or console." (1 Samuel, 15:29). When this observational cutting and chopping is (temporarily) halted, the remaining equations are said to be "validated" (but they remain on the chopping block for further testing).

The programmer's life is, in one sense, more difficult than the experimenter's. Imagine a huge computer program containing millions of lines of code, the accumulated fruit of thousands of hours of effort by many people. How do we verify that this computation faithfully reflects the equations that have ostensibly been programmed? Of course they've been checked again and again for typos or logical faults or syntactic errors. Very clever methods are available for code verification. Nonetheless, programmers are only human, and some infidelity may slip through. What remorseless knife does the programmer have with which to verify that the equations are correctly calculated? Testing computation against observation does not allow us to distinguish between errors in the equations, errors in the program, and compensatory errors in both.

The experimenter compares an equation's prediction against an observation of nature. Like the experimenter, the programmer compares the computation against something. However, for the programmer, the sharp knife of nature is not available. In special cases the programmer can compare against a known answer. More frequently the programmer must compare against other computations which have already been verified (by some earlier comparison). The verification of a computation - as distinct from the validation of an equation - can only use other high-level human-made results. The programmer's comparisons can only be traced back to other comparisons. It is true that the experimenter's tests are intermediated by human artifacts like calipers or cyclotrons. Nonetheless, bedrock for the experimenter is the "reality out there". The experimenter's tests can be traced back to observations of elementary real events. The programmer does not have that recourse. One might say that God speaks to the experimenter through nature, but the programmer has no such Voice upon which to rely.

The tower built of old would have reached the heavens because of the power of language. That tower was never completed because God turned talk into babble and dispersed the people across the land. Scholars have argued whether the story prescribes a moral norm, or simply describes the way things are, but the power of language has never been disputed.

The tower was never completed, just as science, it seems, has a long way to go. Genius, said Edison, is 1 percent inspiration and 99 percent perspiration. A good part of the sweat comes from getting the language right, whether mathematical equations or computer programs.

Part of the challenge is finding order in nature's bubbling variety. Each equation captures a glimpse of that order, adding one block to the structure of science. Furthermore, equations must be validated, and validation is only ever a stop-gap. All blocks crumble eventually, and all equations are fallible and likely to be falsified.

Another challenge in science and engineering is grasping the myriad implications that are distilled into an equation. An equation compresses and summarizes, while computer simulations go the other way, restoring detail and specificity. The fidelity of a simulation to the equation is usually verified by comparing against other simulations. This is like the dictionary paradox: using words to define words.

It is by inventing and exploiting symbols that humans have constructed an orderly world out of the confusing tumult of experience. With symbols, like with blocks in the tower, the sky is the limit.

Monday, October 24, 2011

The End of Science?


Science is the search for and study of patterns and laws in the natural and physical worlds. Could that search become exhausted, like an over-worked coal vein, leaving nothing more to be found? Could science end? After briefly touching on several fairly obvious possible end-games for science, we explore how the vast Unknown could undermine - rather than underlie - the scientific enterprise. The possibility that science could end is linked to the reason that science is possible at all. The path we must climb in this essay is steep, but the (in)sight is worth it.

Science is the process of discovering unknowns, one of which is the extent of Nature's secrets. It is possible that the inventory of Nature's unknowns is finite or conceivably even nearly empty. However, a look at open problems in science, from astronomy to zoology, suggests that Nature's storehouse of surprises is still chock full. So, from this perspective, the answer to the question 'Could science end?' is conceivably 'Yes', but most probably 'No'.

Another possible 'Yes' answer is that science will end by reaching the limit of human cognitive capability. Nature's storehouse of surprises may never empty out, but the rate of our discoveries may gradually fall, reaching zero when scientists have figured out everything that humans are able to understand. Possible, but judging from the last 400 years, it seems that we've only begun to tap our mind's expansive capability.

Or perhaps science - a product of human civilization - will end due to historical or social forces. The simplest such scenario is that we blow ourselves to smithereens. Smithereens can't do science. Another more complicated scenario is Oswald Spengler's theory of cyclical history, whereby an advanced society - such as Western civilization - decays and disappears, science disappearing with it. So again a tentative 'Yes'. But this might only be an interruption of science if later civilizations resume the search.

We now explore the main mechanism by which science could become impossible. This will lead to deeper understanding of the delicate relation between knowledge and the Unknown and to why science is possible at all.

One axiom of science is that there exist stable and discoverable laws of nature. As the philosopher A.N. Whitehead wrote in 1925: "Apart from recurrence, knowledge would be impossible; for nothing could be referred to our past experience. Also, apart from some regularity of recurrence, measurement would be impossible." (Science and the Modern World, p.36). The stability of phenomena is what allows a scientist to repeat, study and build upon the work of other scientists. Without regular recurrence there would be no such thing as a discoverable law of nature.

However, as David Hume explained long ago in An Enquiry Concerning Human Understanding, one can never empirically prove that regular recurrence will hold in the future. By the time one tests the regularity of the future, that future has become the past. The future can never be tested, just as one can never step on the rolled up part of an endless rug unfurling always in front of you.

Suppose the axiom of Natural Law turns out to be wrong, or suppose Nature comes unstuck and its laws start "sliding around", changing. Science would end. If regularity, patterns, and laws no longer exist, then scientific pursuit of them becomes fruitless.

Or maybe not. Couldn't scientists search for the laws by which Nature "slides around"? Quantum mechanics seems to do just that. For instance, when a polarized photon impinges on a polarizing crystal, the photon will either be entirely absorbed or entirely transmitted, as Dirac explained. The photon's fate is not determined by any law of Nature (if you believe quantum mechanics). Nature is indeterminate in this situation. Nonetheless, quantum theory very accurately predicts the probability that the photon will be transmitted, and the probability that it will be absorbed. In other words, quantum mechanics establishes a deterministic law describing Nature's indeterminism.
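The photon example can be made concrete with a toy simulation. The sketch below assumes Malus's law, by which a photon polarized at angle theta to the crystal's axis is transmitted with probability cos²(theta); each photon's individual fate is drawn at random, yet the frequency of transmission obeys a fixed, deterministic law:

```python
import math
import random

def transmission_probability(theta):
    """Malus's law: a photon polarized at angle theta to the crystal's
    axis is transmitted with probability cos^2(theta)."""
    return math.cos(theta) ** 2

def simulate_photons(theta, n, rng):
    """Each individual photon is either wholly transmitted or wholly
    absorbed - indeterminate - but the frequency is lawful."""
    p = transmission_probability(theta)
    transmitted = sum(1 for _ in range(n) if rng.random() < p)
    return transmitted / n

rng = random.Random(0)
theta = math.pi / 3  # 60 degrees, so the lawful probability is 0.25
print(simulate_photons(theta, 100_000, rng))  # close to 0.25
```

No law dictates any single photon's fate, yet the simulated frequency converges to cos²(theta): a deterministic law describing Nature's indeterminism.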

Suppose Nature's indeterminism itself becomes lawless. Is that conceivable? Could Nature become so disorderly, so confused and uncertain, so "out of joint: O, cursed spite", that no law can "set it right"? The answer is conceivably 'Yes', and if this happens then scientists are all out of a job. To understand how this is conceivable, one must appreciate the Unknown at its most rambunctious.

Let's take stock. We can identify attributes of Nature that are necessary for science to be possible. The axiom of Natural Law is one necessary attribute. The successful history of science suggests that the axiom of Natural Law has held firmly in the past. But that does not determine what Nature will be in the future.

In order to understand how Natural Law could come unstuck, we need to understand how Natural Law works (today). When a projectile, say a baseball, is thrown from here to there, its progress at each point along its trajectory is described, scientifically, in terms of its current position, direction of motion, and attributes such as its shape, mass and surrounding medium. The Laws of Nature enable the calculation of the ball's progress by solving a mathematical equation whose starting point is the current state of the ball.

We can roughly describe most Laws of Nature as formulations of problems - e.g. mathematical equations - whose input is the current and past states of the system in question, and whose solution predicts an outcome: the next state of the system. What is law-like about this is that these problems - whose solution describes a progression, like the flight of a baseball - are constant over time. The scientist calculates the baseball's trajectory by solving the same problem over and over again (or all at once with a differential equation). Sometimes the problem is hard to solve, so scientists are good mathematicians, or they have big computers, (or both). But solvable they are.
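A minimal sketch of the first kind of Law, assuming the simplest textbook model (a ball in uniform gravity, no air resistance): the scientist applies one fixed update rule, over and over, to march the state forward.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def step(state, dt):
    """One application of the fixed 'problem': given the current state
    (x, y, vx, vy), the same unchanging rule returns the next state."""
    x, y, vx, vy = state
    return (x + vx * dt, y + vy * dt, vx, vy - G * dt)

def trajectory(state, dt, n):
    """Solve the same problem over and over to trace the flight."""
    states = [state]
    for _ in range(n):
        states.append(step(states[-1], dt))
    return states

# A ball thrown at 20 m/s, 45 degrees: the rule never changes mid-flight.
v = 20.0
path = trajectory((0.0, 0.0, v * math.cos(math.pi / 4),
                   v * math.sin(math.pi / 4)), 0.01, 300)
```

The point of the sketch is that `step` is the same function at every instant of the flight; that constancy is what makes it a Law.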

Let's remember that Nature is not a scientist, and Nature does not solve a problem when things happen (like baseballs speeding to home plate). Nature just does it. The scientist's Law is a description of Nature, not Nature itself.

There are other Laws of Nature for which we must modify the previous description. In these cases, the Law of Nature is, as before, the formulation of a problem. Now, however, the solution of the problem not only predicts the next state of the system, but it also re-formulates the problem that must be solved at the next step. There is sort of a feedback: the next state of the system alters the rule by which subsequent progress is made. For instance, when an object falls towards earth from outer space, the law of nature that determines the motion of the object depends on the gravitational attraction. The gravitational attraction, in turn, increases as the object gets closer. Thus the problem to be solved changes as the object moves. Problems like these tend to be more difficult to solve, but that's the scientist's problem (or pleasure).
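The second kind of Law can be sketched the same way, using the standard inverse-square form of gravitation (the gravitational parameter below is the textbook value for Earth): here the rule applied at each step is re-formed from the current state.

```python
G_M = 3.986e14     # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # Earth's radius, m

def accel(r):
    """The 'problem' depends on the state: gravitational acceleration
    grows as the falling object gets closer (inverse square)."""
    return G_M / r ** 2

def fall(r0, dt, steps):
    """Drop an object from radius r0; at each step the rule itself
    is recomputed from the new position."""
    r, v = r0, 0.0
    for _ in range(steps):
        a = accel(r)   # the problem to be solved changes as the object moves
        v += a * dt
        r -= v * dt
        if r <= R_EARTH:
            break
    return r, v
```

Unlike the baseball sketch, `accel(r)` is different at every step: the solution of each problem re-formulates the next problem.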

Now we can appreciate how Nature might become lawlessly unstuck. Let's consider the second type of Natural Law, where the problem - the Law itself - gets modified by the evolving event. Let's furthermore suppose that the problem is not simply difficult to solve, but that no solution can be obtained in a finite amount of time (mathematicians have lots of examples of problems like this). As before, Nature itself does not solve a problem; Nature just does it. But the scientist is now in the position that no prediction can be made, no trajectory can be calculated, no model or description of the phenomenon can be obtained. No explicit problem statement embodying a Natural Law exists. This is because the problem to be solved evolves continuously from previous solutions, and none of the sequence of problems can be solved. The scientist's profession will become frustrating, futile and fruitless.

Nature becomes lawlessly unstuck, and science ends, if all Laws of Nature become of the modified second type. The world itself will continue because Nature solves no problems, it just does its thing. But the way it does this is now so raw and unruly that no study of nature can get to first base.

Sound like science fiction (or nightmare)? Maybe. But as far as we know, the only thing between us and this new state of affairs is the axiom of Natural Law. Scientists assume that Laws exist and are stable because past experience, together with our psychological makeup (which itself is evolutionary past experience), very strongly suggests that regular recurrence can be relied upon. But if you think that the scientists can empirically prove that the future will continue to be lawful, like the past, recall that all experience is past experience. Recall the unfurling-rug metaphor (by the time we test the future it becomes the past), and make an appointment to see Mr Hume.

Is science likely to become fruitless or boring? No. Science thrives on an Unknown that is full of surprises. Science - the search for Natural Laws - thrives even though the existence of Natural Law can never be proven. Science thrives precisely because we can never know for sure that science will not someday end. 

Sunday, October 9, 2011

Squirrels and Stock Brokers, Or: Innovation Dilemmas, Robustness and Probability

Decisions are made in order to achieve desirable outcomes. An innovation dilemma arises when a seemingly more attractive option is also more uncertain than other options. In this essay we explore the relation between the innovation dilemma and the robustness of a decision, and the relation between robustness and probability. A decision is robust to uncertainty if it achieves required outcomes despite adverse surprises. A robust decision may differ from the seemingly best option. Furthermore, robust decisions are not based on knowledge of probabilities, but can still be the most likely to succeed.

Squirrels, Stock-Brokers and Their Dilemmas




Decision problems.
Imagine a squirrel nibbling acorns under an oak tree. They're pretty good acorns, though a bit dry. The good ones have already been taken. Over in the distance is a large stand of fine oaks. The acorns there are probably better. But then, other squirrels can also see those trees, and predators can too. The squirrel doesn't need to get fat, but a critical caloric intake is necessary before moving on to other activities. How long should the squirrel forage at this patch before moving to the more promising patch, if at all?

Imagine a hedge fund manager investing in South African diamonds, Australian uranium, Norwegian kroner and Singapore semiconductors. The returns have been steady and good, but not very exciting. A new hi-tech start-up venture has just turned up. It looks promising, has solid backing, and could be very interesting. The manager doesn't need to earn boundless returns, but it is necessary to earn at least a tad more than the competition (who are also prowling around). How long should the manager hold the current portfolio before changing at least some of its components?

These are decision problems, and like many other examples, they share three traits: critical needs must be met; the current situation may or may not be adequate; other alternatives look much better but are much more uncertain. To change, or not to change? What strategy to use in making a decision? What choice is the best bet? Betting is a surprising concept, as we have seen before; can we bet without knowing probabilities?

Solution strategies.
The decision is easy in either of two extreme situations, and their analysis will reveal general conclusions.

One extreme is that the status quo is clearly insufficient. For the squirrel this means that these crinkled rotten acorns won't fill anybody's belly even if one nibbled here all day long. Survival requires trying the other patch regardless of the fact that there may be many other squirrels already there and predators just waiting to swoop down. Similarly, for the hedge fund manager, if other funds are making fantastic profits, then something has to change or the competition will attract all the business.

The other extreme is that the status quo is just fine, thank you. For the squirrel, just a little more nibbling and these acorns will get us through the night, so why run over to unfamiliar oak trees? For the hedge fund manager, profits are better than those of any credible competitor, so uncertain change is not called for.

From these two extremes we draw an important general conclusion: the right answer depends on what you need. To change, or not to change, depends on what is critical for survival. There is no universal answer, like, "Always try to improve" or "If it's working, don't fix it". This is a very general property of decisions under uncertainty, and we will call it preference reversal. The agent's preference between alternatives depends on what the agent needs in order to "survive".

The decision strategy that we have described is attuned to the needs of the agent. The strategy attempts to satisfy the agent's critical requirements. If the status quo would reliably do that, then stay put; if not, then move. Following the work of Nobel Laureate Herbert Simon, we will call this a satisficing decision strategy: one which satisfies a critical requirement.

"Prediction is always difficult, especially of the future." - Robert Storm Petersen

Now let's consider a different decision strategy that squirrels and hedge fund managers might be tempted to use. The agent has obtained information about the two alternatives by signals from the environment. (The squirrel sees grand verdant oaks in the distance, the fund manager hears of a new start up.) Given this information, a prediction can be made (though the squirrel may make this prediction based on instincts and without being aware of making it). Given the best available information, the agent predicts which alternative would yield the better outcome. Using this prediction, the decision strategy is to choose the alternative whose predicted outcome is best. We will call this decision strategy best-model optimization. Note that this decision strategy yields a single universal answer to the question facing the agent. This strategy uses the best information to find the choice that - if that information is correct - will yield the best outcome. Best-model optimization (usually) gives a single "best" decision, unlike the satisficing strategy that returns different answers depending on the agent's needs.

There is an attractive logic - and even perhaps a moral imperative - to use the best information to make the best choice. One should always try to do one's best. But the catch in the argument for best-model optimization is that the best information may actually be grievously wrong. Those fine oak trees might be swarming with insects who've devoured the acorns. Best-model optimization ignores the agent's central dilemma: stay with the relatively well known but modest alternative, or go for the more promising but more uncertain alternative.

"Tsk, tsk, tsk" says our hedge fund manager. "My information already accounts for the uncertainty. I have used a probabilistic asset pricing model to predict the likelihood that my profits will beat the competition for each of the two alternatives."

Probabilistic asset pricing models are good to have. And the squirrel similarly has evolved instincts that reflect likelihoods. But a best-probabilistic-model optimization is simply one type of best-model optimization, and is subject to the same vulnerability to error. The world is full of surprises. The probability functions that are used are quite likely wrong, especially in predicting the rare events that the manager is most concerned to avoid.

Robustness and Probability

Now we come to the truly amazing part of the story. The satisficing strategy does not use any probabilistic information. Nonetheless, in many situations, the satisficing strategy is actually a better bet (or at least not a worse bet), probabilistically speaking, than any other strategy, including best-probabilistic-model optimization. We have no probabilistic information in these situations, but we can still maximize the probability of success (though we won't know the value of this maximum).

When the satisficing decision strategy is the best bet, this is, in part, because it is more robust to uncertainty than any other strategy. A decision is robust to uncertainty if it achieves required outcomes even if adverse surprises occur. In many important situations (though not invariably), more robustness to uncertainty is equivalent to being more likely to succeed or survive. When this is true we say that robustness is a proxy for probability.

A thorough analysis of the proxy property is rather technical. However, we can understand the gist of the idea by considering a simple special case.

Let's continue with the squirrel and hedge fund examples. Suppose we are completely confident about the future value (in calories or dollars) of not making any change (staying put). In contrast, the future value of moving is apparently better though uncertain. If staying put would satisfy our critical requirement, then we are absolutely certain of survival if we do not change. Staying put is completely robust to surprises so the probability of success equals 1 if we stay put, regardless of what happens with the other option. Likewise, if staying put would not satisfy our critical requirement, then we are absolutely certain of failure if we do not change; the probability of success equals 0 if we stay, and moving cannot be worse. Regardless of what probability distribution describes future outcomes if we move, we can always choose the option whose likelihood of success is greater (or at least not worse). This is because staying put is either sure to succeed or sure to fail, and we know which.
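The special case can be sketched in a few lines. The numbers and the distribution below are hypothetical placeholders; the argument holds for any distribution of the uncertain "move" outcome:

```python
import random

def robust_choice(stay_value, requirement):
    """Satisficing rule for the special case in the text: the value of
    staying put is known with certainty, moving is uncertain."""
    return "stay" if stay_value >= requirement else "move"

def success_prob(choice, stay_value, requirement, move_draws):
    """Empirical probability of meeting the requirement under a choice."""
    if choice == "stay":
        return 1.0 if stay_value >= requirement else 0.0
    return sum(v >= requirement for v in move_draws) / len(move_draws)

# Hypothetical uncertain 'move' outcomes (any distribution would do).
rng = random.Random(1)
move_draws = [rng.gauss(6.0, 3.0) for _ in range(50_000)]

# Case 1: staying put meets the requirement; the robust choice is certain.
print(success_prob(robust_choice(5.0, 4.0), 5.0, 4.0, move_draws))  # 1.0
# Case 2: staying put fails the requirement; moving cannot be worse.
print(success_prob(robust_choice(3.0, 4.0), 3.0, 4.0, move_draws))
```

Note that `robust_choice` never consults `move_draws`: the satisficing decision is made without any probabilistic information, yet its probability of success is maximal (1 in case 1) or no worse than the alternative (case 2).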

This argument can be extended to the more realistic case where the outcome of staying put is uncertain and the outcome of moving, while seemingly better than staying, is much more uncertain. The agent can know which option is more robust to uncertainty, without having to know probability distributions. This implies, in many situations, that the agent can choose the option that is a better bet for survival.

Wrapping Up

The skillful decision maker not only knows a lot, but is also able to deal with conflicting information. We have discussed the innovation dilemma: When choosing between two alternatives, the seemingly better one is also more uncertain.

Animals, people, organizations and societies have developed mechanisms for dealing with the innovation dilemma. The response hinges on tuning the decision to the agent's needs, and robustifying the choice against uncertainty. This choice may or may not coincide with the putative best choice. But what seems best depends on the available - though uncertain - information.

The commendable tendency to do one's best - and to demand the same of others - can lead to putatively optimal decisions that may be more vulnerable to surprise than other decisions that would have been satisfactory. In contrast, the strategy of robustly satisfying critical needs can be a better bet for survival. Consider the design of critical infrastructure: flood protection, nuclear power, communication networks, and so on. The design of such systems is based on vast knowledge and understanding, but also confronts bewildering uncertainties and endless surprises. We must continue to improve our knowledge and understanding, while also improving our ability to manage the uncertainties resulting from the expanding horizon of our efforts. We must identify the critical goals and seek responses that are immune to surprise. 

Thursday, October 6, 2011

Beware the Rareness Illusion When Exploring the Unknown

Here's a great vacation idea. Spend the summer roaming the world in search of the 10 lost tribes of Israel, exiled from Samaria by the Assyrians 2700 years ago (2 Kings 17:6). Or perhaps you'd like to search for Prester John, the virtuous ruler of a kingdom lost in the Orient? Or would you rather trace the gold-laden kingdom of Ophir (1 Kings 9:28)? Or do you prefer the excitement of tracking the Amazons, that nation of female warriors? Or perhaps the naval power mentioned by Plato, operating from the island of Atlantis? Or how about unicorns, or the fountain of eternal youth? The Unknown is so vast that the possibilities are endless.

Maybe you don't believe in unicorns. But Plato evidently "knew" about the island of Atlantis. The conquest of Israel is known from Assyrian archeology and from the Bible. Does the fact that you've never seen a Reubenite or a Naphtalite (or a unicorn) mean that they don't exist?

It is true that when something really does not exist, one might spend a long time futilely looking for it. Many people have spent enormous energy searching for lost tribes, lost gold, and lost kingdoms. Why is it so difficult to decide that what you're looking for really isn't there? The answer, ironically, is that the world has endless possibilities for discovery and surprise.

Let's skip vacation plans and consider some real-life searches. How long should you (or the Libyans) look for Muammar Qaddafi? If he's not in the town of Surt, maybe he's in Bani Walid, or Algeria, or Timbuktu? How do you decide he cannot be found? Maybe he was pulverized by a NATO bomb. It's urgent to find the suicide bomber in the crowded bus station before it's too late - if he's really there. You'd like to discover a cure for AIDS, or a method to halt the rising global temperature, or a golden investment opportunity in an emerging market, or a proof of the parallel postulate of Euclidean geometry.

Let's focus our question. Suppose you are looking for something, and so far you have only "negative" evidence: it's not here, it's not there, it's not anywhere you've looked. Why is it so difficult to decide, conclusively and confidently, that it simply does not exist?

This question is linked to a different question: how to make the decision that "it" (whatever it is) does not exist. We will focus on the "why" question, and leave the "how" question to students of decision theories such as statistics, fuzzy logic, possibility theory, Dempster-Shafer theory and info-gap theory.

Answers to the "why" question can be found in several domains.

Psychology provides some answers. People can be very goal oriented, stubborn, and persistent. Marco Polo didn't get to China on a 10-hour plane flight. The round trip took him 24 years, and he didn't travel business class.

Ideology is a very strong motivator. When people believe something strongly, it is easy for them to ignore evidence to the contrary. Furthermore, for some people, the search itself is valued more than the putative goal.

The answer to the "why" question that I will focus on is found by contemplating The Endless Unknown. It is so vast, so unstructured, so, well ..., unknown, that we cannot calibrate our negative evidence to decide that whatever we're looking for just ain't there.

I'll tell a true story.

I was born in the US and my wife was born in Israel, but our life-paths crossed, so to speak, before we were born. She had a friend whose father was from Europe and lived for a while - before the friend was born - with a cousin of his in my home town. This cousin was - years later - my 3rd grade teacher. My school teacher was my future wife's friend's father's cousin.

Amazing coincidence. This convoluted sequence of events is certainly rare. How many of you can tell the very same story? But wait a minute. This convoluted string of events could have evolved in many many different ways, each of which would have been an equally amazing coincidence. The number of similar possible paths is namelessly enormous, uncountably humongous. In other words, potential "rare" events are very numerous. Now that sounds like a contradiction (we're getting close to some of Zeno's paradoxes, and Aristotle thought Zeno was crazy). It is not a contradiction; it is only a "rareness illusion" (something like an optical illusion). The specific event sequence in my story is unique, which is the ultimate rarity. We view this sequence as an amazing coincidence because we cannot assess the number of similar sequences. Surprising strings of events occur not infrequently because the number of possible surprising strings is so unimaginably vast. The rareness illusion is the impression of rareness arising from our necessary ignorance of the vast unknown. "Necessary" because, by definition, we cannot know what is unknown. "Vast" because the world is so rich in possibilities.
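The rareness illusion has a familiar quantitative cousin in the birthday paradox, which the following sketch simulates: any *specific* coincidence (two particular people sharing a birthday) is rare, yet *some* coincidence (some pair in a room sharing a birthday) is common, because the number of possible coincidences is large.

```python
import random

def specific_pair_prob(n_days, trials, rng):
    """Probability that two *particular* people share a birthday: rare."""
    return sum(rng.randrange(n_days) == rng.randrange(n_days)
               for _ in range(trials)) / trials

def any_pair_prob(n_people, n_days, trials, rng):
    """Probability that *some* pair among n_people shares a birthday:
    common, because the number of possible coincidences is so large."""
    hits = 0
    for _ in range(trials):
        bdays = [rng.randrange(n_days) for _ in range(n_people)]
        if len(set(bdays)) < n_people:
            hits += 1
    return hits / trials

rng = random.Random(2)
print(specific_pair_prob(365, 100_000, rng))  # about 1/365, i.e. 0.003
print(any_pair_prob(23, 365, 20_000, rng))    # about 0.5
```

With only 23 people in the room, some shared birthday is already an even bet, even though each particular match is a "rare" event. The wedding-of-possibilities in the story above works the same way, only with unimaginably more possible paths.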

The rareness illusion is a false impression, a mistake. For instance, it leads people to wrongly goggle at strings of events - rare in themselves - even though "rare events" are numerous and "amazing coincidences" occur all the time. An appreciation of the richness and boundlessness of the Unknown is an antidote for the rareness illusion.

Recognition of the rareness illusion is the key to understanding why it is so difficult to confidently decide, based on negative evidence, that what you're looking for simply does not exist.

One might be inclined to reason as follows. If you're looking for something, then look very thoroughly, and if you don't find it, then it's not there. That is usually sound and sensible advice, and often "looking thoroughly" will lead to discovery.

However, the number of ways that we could overlook something that really is there is enormous. It is thus very difficult to confidently conclude that the search was thorough and that the object cannot be found. Take the case of your missing house keys. They dropped from your pocket in the car, or on the sidewalk and somebody picked them up, or you left them in the lock when you left the house, or or or .... Familiarity with the rareness illusion makes it very difficult to decide that you have searched thoroughly. If you think that the only contingencies not yet explored are too exotic to be relevant (a raven snatched them while you were daydreaming about that enchanting new employee), then think again, because you've been blinded by a rareness illusion. The number of such possibilities is so vastly unfathomable that you cannot confidently say that all of them are collectively negligible. Recognition of the rareness illusion prevents you from confidently concluding that what you are seeking simply does not exist.

Many quantitative tools grapple with the rareness illusion. We mentioned some decision theories earlier. But because the rareness illusion derives from our necessary ignorance of the vast unknown, one must always beware.

Looking for an exciting vacation? The Endless Unknown is the place to go.