Brian Weatherson

Abstract

Orthodox Bayesian decision theory requires that an agent’s beliefs be representable by a real-valued function, ideally a probability function. Many theorists have argued that this is too restrictive; it can be perfectly reasonable to have indeterminate degrees of belief, so doxastic states are ideally representable by a set of probability functions. One consequence of this is that the expected value of a gamble will be imprecise. This paper looks at attempts to extend Bayesian decision theory to deal with such cases, and concludes that all proposals advanced thus far have been incoherent. A more modest, but coherent, alternative is proposed.

Keywords: imprecise probabilities, Arrow’s theorem.

1. Introduction

Orthodox Bayesian decision theory requires agents’ doxastic states to be represented by a probability function, the so-called ‘subjective probabilities’, and their desires to be represented by a real-valued utility function. Once these idealisations are in place, decision theory becomes relatively straightforward: the best choice is the one with the highest expected utility according to the probability function. Because of Newcomb-like problems there is little consensus on how we ought to formalise ‘expected utility according to a probability function’, but in the vast bulk of cases the different approaches yield equivalent results.

The main problem for orthodoxy is that the idealisations made at the start are highly questionable. Many writers have thought that it is no requirement of rationality that agents’ epistemic states be representable by a single probability function. Others have thought that even if this is an ideal, it is so demanding that we cannot expect humans to reach it. One attractive amendment to orthodoxy is to permit an agent’s epistemic state to be represented by a set of probability functions. This idea was first suggested by two economists, Gerhard Tintner (1941) and A. G. Hart (1942). It has since been rediscovered and popularised by Smith (1961), Levi (1974, 1980), Williams (1976, 1978), Jeffrey (1983) and van Fraassen (1990, 1995). An almost identical proposal is worked out in great detail in Walley (1991). There are many motivations for this amendment, not least that it allows agents to be completely represented by a finite number of constraints and that it allows a consistent representation of ignorance.

The set of probability functions representing an agent’s epistemic state is conveniently called her representor. We say that an agent’s degree of belief in p is vague over the set of values Pr(p) takes for each element Pr of her representor. Once we make this amendment, however, our neat decision theory vanishes.
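The idea of a representor can be illustrated with a small computational sketch. The numbers here are invented for illustration: a representor is modelled as a set of probability assignments over the same propositions, and an agent's degree of belief in a proposition is vague over the range of values the members of the representor assign to it.

```python
# A toy representor: a set of probability functions over the same
# propositions. All values are invented purely for illustration.
representor = [
    {"p": 0.3, "q": 0.6},
    {"p": 0.5, "q": 0.6},
    {"p": 0.7, "q": 0.6},
]

def credence_range(representor, prop):
    """Return the (min, max) of Pr(prop) across the representor."""
    values = [pr[prop] for pr in representor]
    return min(values), max(values)

# Belief in p is vague over [0.3, 0.7]; belief in q is precise at 0.6.
print(credence_range(representor, "p"))  # (0.3, 0.7)
print(credence_range(representor, "q"))  # (0.6, 0.6)
```

When every member of the representor agrees on a proposition, as with q above, the agent's credence in it is sharp; disagreement among the members, as with p, is what vagueness of degree of belief comes to on this picture.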
Even assuming Newcomb-like problems to be resolved, all expected utility calculations tell us is the utility of each decision according to each probability function. In other words, different functions in a representor will usually produce different expected utilities for a choice, so the expected utility of an action is not a number but a set. For simplicity, I will assume that these sets form intervals; on most of the theories mentioned above this follows from the way representors are constructed. I will also assume, somewhat arbitrarily, that the sets are closed intervals; nothing turns on this and it does simplify the presentation. The important point is that these intervals may overlap. When they do, what ought an agent choose? This question has been addressed by many authors, as will be clear from the discussions below, but none has provided a satisfactory answer. Much of the discussion has taken place in the economics literature, so the focus has been on trades. This is of more than cosmetic importance: it has meant that the decision situations discussed contain a crucial asymmetry. Because there is a default position, refraining from trade, we can formulate a clear distinction between acts and...
