Decision theory in economics, psychology, philosophy, mathematics, and statistics is concerned with identifying the values, uncertainties and other issues relevant to a given decision, with the rationality of that decision, and with the resulting optimal decision. It is closely related to the field of game theory.
Normative and descriptive decision theory
Most of decision theory is normative or prescriptive, i.e., it is concerned with identifying the best decision to take, assuming an ideal decision maker who is fully informed, able to compute with perfect accuracy, and fully rational. The practical application of this prescriptive approach (how people ought to make decisions) is called decision analysis, and is aimed at finding tools, methodologies and software to help people make better decisions. The most systematic and comprehensive software tools developed in this way are called decision support systems.
Since people usually do not behave in ways consistent with axiomatic rules (often even their own), leading to violations of optimality, there is a related area of study, called positive or descriptive decision theory, which attempts to describe what people actually do. Because the normative, optimal decision often generates hypotheses that can be tested against actual behaviour, the two fields are closely linked. Furthermore, it is possible to relax the assumptions of perfect information, rationality and so forth in various ways, producing a series of different prescriptions or predictions about behaviour and allowing further tests of the kind of decision-making that occurs in practice.
In recent decades, there has been increasing interest in what is sometimes called ‘behavioral decision theory’ and this has contributed to a re-evaluation of what rational decision-making requires (see for instance Anand, 1993).
What kinds of decisions need a theory?
Choice under uncertainty
This area represents the heart of decision theory. The procedure now referred to as expected value was known from the 17th century. Blaise Pascal invoked it in his famous wager (see below), which is contained in his Pensées, published in 1670. The idea of expected value is that, when faced with a number of actions, each of which could give rise to more than one possible outcome with different probabilities, the rational procedure is to identify all possible outcomes of each course of action, determine their values (positive or negative) and their probabilities, and multiply the two to give an expected value; the action to be chosen is the one with the highest total expected value. In 1738, Daniel Bernoulli published an influential paper entitled Exposition of a New Theory on the Measurement of Risk, in which he uses the St. Petersburg paradox to show that expected value theory must be normatively wrong. He also gives an example in which a Dutch merchant is trying to decide whether to insure a cargo being sent from Amsterdam to St. Petersburg in winter, when it is known that there is a 5% chance that the ship and cargo will be lost. In his solution, he defines a utility function and computes expected utility rather than expected financial value.
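As a rough illustration of the contrast Bernoulli drew, the sketch below compares ranking by expected monetary value with ranking by expected utility under a logarithmic utility function, which is the form Bernoulli proposed. The wealth, cargo value, and premium figures are assumptions made for the example; only the 5% loss probability comes from the text above.

```python
import math

# Illustrative (assumed) figures for the merchant's problem.
WEALTH = 3000.0      # the merchant's other wealth
CARGO = 10000.0      # value of the cargo if it arrives safely
PREMIUM = 800.0      # assumed cost of insuring the cargo
P_LOSS = 0.05        # 5% chance the ship and cargo are lost

def expected_value(outcomes):
    """Expected monetary value of (probability, final wealth) pairs."""
    return sum(p * w for p, w in outcomes)

def expected_log_utility(outcomes):
    """Expected utility with a logarithmic utility of final wealth."""
    return sum(p * math.log(w) for p, w in outcomes)

# Final wealth under each action and each state of the world.
uninsured = [(1 - P_LOSS, WEALTH + CARGO), (P_LOSS, WEALTH)]
insured = [(1.0, WEALTH + CARGO - PREMIUM)]   # the insurer bears the loss

print("expected value:  ", expected_value(uninsured), expected_value(insured))
print("expected utility:", expected_log_utility(uninsured), expected_log_utility(insured))
# With these numbers, going uninsured has the higher expected monetary value,
# but insuring has the higher expected log-utility.
```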
In the 20th century, interest was reignited by Abraham Wald’s 1939 paper pointing out that the two central procedures of sampling-distribution-based statistical theory, namely hypothesis testing and parameter estimation, are special cases of the general decision problem. Wald’s paper renewed and synthesized many concepts of statistical theory, including loss functions, risk functions, admissible decision rules, antecedent distributions, Bayesian procedures, and minimax procedures. The phrase “decision theory” itself was used in 1950 by E. L. Lehmann.
The revival of subjective probability theory, from the work of Frank Ramsey, Bruno de Finetti, Leonard Savage and others, extended the scope of expected utility theory to situations where subjective probabilities can be used. Around the same time, von Neumann and Morgenstern’s theory of expected utility proved that expected utility maximization followed from basic postulates about rational behavior.
The work of Maurice Allais and Daniel Ellsberg showed that human behavior has systematic and sometimes important departures from expected-utility maximization. The prospect theory of Daniel Kahneman and Amos Tversky renewed the empirical study of economic behavior with less emphasis on rationality presuppositions. Kahneman and Tversky found three regularities in actual human decision-making: “losses loom larger than gains”; persons focus more on changes in their utility states than on absolute utilities; and the estimation of subjective probabilities is severely biased by anchoring.
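The first two regularities can be made concrete with the piecewise value function used in Tversky and Kahneman’s later (1992) parameterization of prospect theory; the parameter values below are their published estimates, and the gain and loss amounts are arbitrary illustrations.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of a change x relative to a reference point.

    Outcomes are valued as gains or losses from the reference point
    (reference dependence), and losses are scaled by lam > 1 (loss aversion).
    Parameter values are the estimates reported by Tversky and Kahneman (1992).
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# A loss of 100 "looms larger" than a gain of the same size.
print(prospect_value(100))    # about  57.5
print(prospect_value(-100))   # about -129.5
```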
Castagnoli and LiCalzi (1996) and Bordley and LiCalzi (2000) showed that maximizing expected utility is mathematically equivalent to maximizing the probability that the uncertain consequences of a decision are preferable to an uncertain benchmark (e.g., the probability that a mutual fund strategy outperforms the S&P 500, or that a firm outperforms the uncertain future performance of a major competitor). This reinterpretation relates to psychological work suggesting that individuals have fuzzy aspiration levels (Lopes & Oden), which may vary from choice context to choice context; hence it shifts the focus from utility to the individual’s uncertain reference point.
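A compact way to see the equivalence, under the assumption that the benchmark B is independent of the decision’s outcome X, is to take the utility function to be the benchmark’s cumulative distribution function:

```latex
% Assume U(x) = F_B(x) = P(B \le x), with benchmark B independent of outcome X.
\mathbb{E}[U(X)] \;=\; \mathbb{E}[F_B(X)] \;=\; \Pr(B \le X)
% Maximizing expected utility is then the same as maximizing the probability
% that the outcome X meets or beats the uncertain benchmark B.
```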
Pascal’s Wager is a classic example of a choice under uncertainty. The uncertainty, according to Pascal, is whether or not God exists. Belief or non-belief in God is the choice to be made. However, the reward for belief in God if God actually does exist is infinite. Therefore, however small the probability of God’s existence, the expected value of belief exceeds that of non-belief, so it is better to believe in God. (There are several criticisms of the argument.)
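In expected-value terms, the wager can be sketched as follows, where p > 0 is the probability that God exists and the finite terms c stand for whatever worldly costs and benefits are involved (a stylized rendering, not Pascal’s own notation):

```latex
\mathrm{EV}(\text{believe}) = p \cdot \infty + (1-p)\, c_1 = \infty, \qquad
\mathrm{EV}(\text{not believe}) = p \cdot c_2 + (1-p)\, c_3 < \infty
% For any p > 0, belief has the higher expected value.
```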
Intertemporal choice
This area is concerned with choices where different actions lead to outcomes that are realised at different points in time. If someone received a windfall of several thousand dollars, they could spend it on an expensive holiday, giving them immediate pleasure, or they could invest it in a pension scheme, giving them an income at some time in the future. What is the optimal thing to do? The answer depends partly on factors such as the expected rates of interest and inflation, the person’s life expectancy, and their confidence in the pensions industry. However, even with all those factors taken into account, human behavior again deviates greatly from the predictions of prescriptive decision theory, leading to alternative models in which, for example, objective interest rates are replaced by subjective discount rates.
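One way to see the deviation is to compare standard exponential discounting at an objective interest rate with a simple hyperbolic discount function of the kind used in the alternative models mentioned above. All amounts, horizons and rates below are assumptions chosen for illustration.

```python
def pv_exponential(amount, years, annual_rate):
    """Present value under standard exponential discounting."""
    return amount / (1 + annual_rate) ** years

def pv_hyperbolic(amount, years, k):
    """Present value under a simple hyperbolic discount function 1 / (1 + k*t)."""
    return amount / (1 + k * years)

HOLIDAY_NOW = 5000.0          # assumed value of spending the windfall today
PENSION_PAYOUT = 20000.0      # assumed pension income at retirement
YEARS = 30

# At a 3% objective rate the pension is worth more than the holiday today...
print(pv_exponential(PENSION_PAYOUT, YEARS, 0.03))   # about 8240
# ...but a strongly present-biased hyperbolic discounter values it at less.
print(pv_hyperbolic(PENSION_PAYOUT, YEARS, 0.25))    # about 2353
```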
Competing decision makers
Some decisions are difficult because of the need to take into account how other people in the situation will respond to the decision that is taken. The analysis of such social decisions is more often treated under the label of game theory rather than decision theory, though it involves the same mathematical methods. From the standpoint of game theory, most of the problems treated in decision theory are one-player games (or the one player is viewed as playing against an impersonal background situation). In the emerging field of socio-cognitive engineering, research focuses especially on the different types of distributed decision-making in human organizations, in normal and abnormal/emergency/crisis situations.
Signal detection theory is based on decision theory.
Complex decisions
Other areas of decision theory are concerned with decisions that are difficult simply because of their complexity, or the complexity of the organization that has to make them. In such cases the issue is not the deviation between real and optimal behaviour, but the difficulty of determining the optimal behaviour in the first place. The Club of Rome, for example, developed a model of economic growth and resource usage that helps politicians make real-life decisions in complex situations.
Paradox of choice
A paradox observed in many cases is that more choices may lead to a poorer decision, or to a failure to make a decision at all. This is sometimes attributed to analysis paralysis, real or perceived, or to rational ignorance. A number of researchers, including Sheena S. Iyengar and Mark R. Lepper, have published studies on this phenomenon. The analysis was popularized by Barry Schwartz in his 2004 book, The Paradox of Choice.
Statistical decision theory
Several statistical tools and methods are available to organize evidence, evaluate risks, and aid in decision making. The risks of type I and type II errors can be quantified (estimated probability, cost, expected value, etc.), improving rational decision making.
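A toy quantification along these lines is sketched below; the error probabilities and costs are assumptions, and the calculation simply weights the cost of each kind of error by its probability so that two candidate decision rules can be compared.

```python
# Assumed error rates for two candidate decision rules (e.g., two thresholds).
# alpha: probability of a type I error (acting when we should not)
# beta:  probability of a type II error (failing to act when we should)
rules = {
    "lenient threshold": {"alpha": 0.10, "beta": 0.05},
    "strict threshold": {"alpha": 0.01, "beta": 0.20},
}
COST_TYPE_I = 100.0    # assumed cost of a type I error
COST_TYPE_II = 500.0   # assumed cost of a type II error

for name, r in rules.items():
    expected_cost = r["alpha"] * COST_TYPE_I + r["beta"] * COST_TYPE_II
    print(name, "expected error cost:", expected_cost)
# The rule with the lower expected error cost is the rational choice
# under these assumed probabilities and costs.
```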
Probability theory
Advocates of probability theory point to:
- the work of Richard Threlkeld Cox for justification of the probability axioms,
- the Dutch book paradoxes of Bruno de Finetti as illustrative of the theoretical difficulties that can arise from departures from the probability axioms, and
- the complete class theorems, which show that all admissible decision rules are equivalent to a Bayesian decision rule for some utility function and some prior distribution (or for the limit of a sequence of prior distributions). Thus, for every decision rule, either the rule may be reformulated as a Bayesian procedure (as in the sketch after this list), or there is a (perhaps limiting) Bayesian rule that is sometimes better and never worse.
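The sketch below is a minimal numerical illustration of a Bayesian decision rule of the kind the complete class theorems describe: with an assumed prior, assumed likelihoods, and an assumed loss table, the rule chooses the action with the smallest posterior expected loss.

```python
# Two states of the world, an assumed prior, and assumed likelihoods of an
# observed datum under each state.
prior = {"state_A": 0.7, "state_B": 0.3}
likelihood = {"state_A": 0.2, "state_B": 0.6}   # P(observation | state)

# Assumed loss table: loss[action][state].
loss = {
    "act_1": {"state_A": 0.0, "state_B": 10.0},
    "act_2": {"state_A": 4.0, "state_B": 1.0},
}

# Posterior over states by Bayes' rule.
evidence = sum(prior[s] * likelihood[s] for s in prior)
posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}

# The Bayes rule picks the action that minimizes posterior expected loss.
risk = {a: sum(posterior[s] * loss[a][s] for s in posterior) for a in loss}
best = min(risk, key=risk.get)
print(posterior)      # {'state_A': 0.4375, 'state_B': 0.5625}
print(best, risk)     # act_2, with the smaller posterior expected loss
```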
Alternatives to probability theory
The proponents of fuzzy logic, possibility theory, Dempster–Shafer theory and info-gap decision theory maintain that probability is only one of many alternatives, and point to many examples where non-standard alternatives have been implemented with apparent success. Notably, probabilistic decision theory is sensitive to assumptions about the probabilities of various events, while non-probabilistic rules such as minimax are robust in that they do not make such assumptions.
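The contrast can be made concrete with an assumed payoff table: the expected-utility choice changes with the probabilities assigned to the states of the world, while the minimax-style (maximin over payoffs) choice uses only the payoffs themselves.

```python
# Assumed payoffs for two actions under three possible states of the world.
payoff = {
    "risky": {"s1": 100.0, "s2": 50.0, "s3": -80.0},
    "safe": {"s1": 20.0, "s2": 20.0, "s3": 10.0},
}

def expected_payoff_choice(payoff, probs):
    """Pick the action with the highest probability-weighted payoff."""
    return max(payoff, key=lambda a: sum(probs[s] * payoff[a][s] for s in probs))

def maximin_choice(payoff):
    """Pick the action whose worst-case payoff is best; no probabilities used."""
    return max(payoff, key=lambda a: min(payoff[a].values()))

print(expected_payoff_choice(payoff, {"s1": 0.5, "s2": 0.4, "s3": 0.1}))  # risky
print(expected_payoff_choice(payoff, {"s1": 0.1, "s2": 0.2, "s3": 0.7}))  # safe
print(maximin_choice(payoff))  # safe, regardless of any probability assumptions
```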
General criticism
A general criticism of decision theory based on a fixed universe of possibilities is that it considers the “known unknowns”, not the “unknown unknowns”: it focuses on expected variations, not on unforeseen events, which some argue (as in black swan theory) have outsized impact and must be considered – significant events may be “outside model”. This line of argument, called the ludic fallacy, is that there are inevitable imperfections in modeling the real world by particular models, and that unquestioning reliance on models blinds one to their limits.
For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday (1987), but might not model the market breakdowns following the September 11 attacks.