Using probability to describe the outcomes of real-life phenomena has been an invaluable tool in many different fields. The concern of the present discussion is blackjack. Though to some a seemingly trivial topic, the use of probabilistic strategies in blackjack and other gambling games has earned many players a fair amount of reward (Thompson, 2009). Indeed, some of the earliest applications of probability were motivated by gambling games (Jardine, 2000). In blackjack, probability underlies the popular strategy of card counting, in which one keeps track of the probabilities governing the next deal as cards are played (Thompson, 2009; Berlekamp, 2005). Hence, the present discussion will first briefly overview probability in relation to random events and then present its applications to blackjack.

Drawing connections between mathematics (more specifically, probability) and blackjack first requires defining how probability relates to the outcomes of random events. Probability can be defined as the way in which mathematics describes randomness. A phenomenon is considered random if its individual outcomes are uncertain but a pattern emerges in the long run, such that the overall distribution of outcomes can be predicted. Hence, a random event is an individual event whose outcome is uncertain. The use of probability is essentially not to predict the outcome of any specific random event, but rather to predict the frequency with which different outcomes occur over a large aggregate of events. Thus, the probability of a random event is the proportion of occurrences of that event out of the total number of times it could have occurred (Garfunkel, 2000).
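This frequency definition can be sketched in a few lines of code. The die-rolling example below is a hypothetical illustration of my own, not drawn from the source: the probability of an event is estimated as the proportion of trials in which it occurs.

```python
import random

# A minimal sketch of the frequency definition: estimate the probability
# of rolling a six as the proportion of occurrences over many trials.
# (The die example is a hypothetical illustration, not from the source.)
random.seed(42)

trials = 100_000
sixes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
estimate = sixes / trials  # proportion of occurrences over total trials

print(round(estimate, 3))  # settles near the true value of 1/6
```

Note that the estimate only approximates the true probability; the approximation improves as the number of trials grows, which is exactly the "long run" the definition refers to.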

It is indeed a curious truth that the total frequency distribution of the outcomes of many thousands of chance events can be discerned with near-perfect certainty. One comes to know it by observing the pattern of outcomes over many, many repetitions of a chance event. The simplest and perhaps most essential example is the coin flip. The outcome of each individual flip cannot be known with certainty; it is a chance event whether one side comes up rather than the other. Yet if you continually flip a coin and record the outcomes, you will see that a pattern emerges. The proportion of flips yielding heads will approach the proportion yielding tails; in other words, the probability of heads approaches 0.5, as does the probability of tails. Although this might seem obvious from the nature of the coin itself, having only two sides, one cannot always trust one's intuitions about probability. By definition, probability is a truly empirical idea and practice. Rather than theorizing about outcomes, one can only truly discern probabilities through direct observation of long runs of events (Garfunkel, 2000).
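The coin-flip example above can be simulated directly. This is a minimal sketch, assuming a fair coin modeled as a uniform random draw; it tracks the running proportion of heads at a few checkpoints to show the long-run pattern emerging.

```python
import random

# Simulate repeated flips of a fair coin and watch the running
# proportion of heads settle toward 0.5 as the number of flips grows.
random.seed(7)

flips = 100_000
heads = 0
for n in range(1, flips + 1):
    heads += random.random() < 0.5  # True counts as 1 (a head)
    if n in (10, 1_000, 100_000):
        print(f"{n:>7} flips: proportion of heads = {heads / n:.4f}")

proportion = heads / flips
```

Early checkpoints can stray noticeably from 0.5; only over long runs does the proportion reliably close in on it, which is the empirical character of probability the paragraph describes.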

Because probability is by definition empirical, the assignment of probabilities to a phenomenon requires adherence to several rules. These rules are not arbitrarily imposed; rather, they are simply natural features of describing the frequency of outcomes of random events. Probabilities can only take values between 0 and 1, inclusive: a probability of 0 means the event never occurs, and a probability of 1 means it always occurs. Since each random phenomenon has a set of possible outcomes, the probabilities of all possible outcomes must sum to 1. The probability that an event does not occur is 1 (the sum over all possible outcomes) minus the probability that the event does occur. If two events are independent of each other (that is, the outcome of one event has no causal influence on the other), then the probability of both events occurring together is the probability of one event multiplied by the probability of the other (Garfunkel, 2000).
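These rules can be checked with simple arithmetic. The sketch below uses a fair six-sided die as a hypothetical example (the die is my choice of illustration, not the source's) to walk through the complement rule, the sum-to-one rule, and the multiplication rule for independent events.

```python
# Arithmetic sketch of the probability rules, using a fair six-sided die
# as a hypothetical example.
p_six = 1 / 6

# Complement rule: P(not A) = 1 - P(A).
p_not_six = 1 - p_six

# The probabilities of all possible outcomes sum to 1.
total = sum(1 / 6 for _ in range(6))

# Multiplication rule for independent events: P(A and B) = P(A) * P(B),
# e.g. two separate dice both showing a six.
p_two_sixes = p_six * p_six

print(p_not_six, total, p_two_sixes)
```

Note that every value produced here lies between 0 and 1, as the first rule requires; the multiplication rule applies only because the two dice have no influence on one another.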