Probability and Risk

From Math Images
Revision as of 19:32, 18 December 2012 by Reogura (talk | contribs)

Basics

Risk exposes people to the possibility of loss, and it cannot be reduced by how lucky someone feels on a particular day. It can, however, be quantified through mathematical analysis, which makes probability a critical tool for understanding risk. Before examining how risk is quantified, it is important to understand three basic features of probability that arise when two or more events are involved: events may be independent, non-independent, or mutually exclusive.

  • Two events are independent if the occurrence of one does not affect the probability of the next event occurring. This is what many lottery players neglect to understand—a set of numbers being the winning lottery numbers in one event does not affect the probability of those same numbers being selected in the next lottery event. When two events are independent, the probability of both of them occurring is the product of the probability of each occurring: P (A and B) = P (A) * P (B) (Jones, Stats: Probability Rules).

Independence plays an essential role in assessing risk. For example, after a long run of heads in a coin toss game, many are inclined to bet on tails, believing that it is ‘due.’ This belief, known as the gambler’s fallacy, is completely misguided. The probability of a coin toss ending up tails is always 1/2, regardless of how many times the coin has been tossed, since each toss is independent of the previous one (Bammer and Smithson, 94). The fallacy blinds players to the fact that probability dictates risk, not the player’s belief that heads is ‘due.’
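The fallacy can be checked empirically with a short simulation. The sketch below (in Python; the run length of three heads and the sequence length are arbitrary illustrative choices) counts how often tails follows a run of three heads. If the gambler’s fallacy were right, the frequency would exceed 1/2; independence says it stays near 1/2.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Simulate many short coin-toss sequences and record the flip that
# follows every run of three consecutive heads.
trials = 100_000
after_run_tails = 0
after_run_total = 0

for _ in range(trials):
    flips = [random.choice("HT") for _ in range(10)]
    for i in range(3, len(flips)):
        if flips[i - 3:i] == ["H", "H", "H"]:
            after_run_total += 1
            if flips[i] == "T":
                after_run_tails += 1

# The observed frequency of tails after a run of heads stays
# close to 0.5, as independence predicts.
print(after_run_tails / after_run_total)
```

With roughly 87,000 qualifying runs in 100,000 sequences, the sampling error is tiny, so the printed frequency lands within a fraction of a percent of 0.5.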

  • Non-independent events are quantified using conditional probabilities. A conditional probability measures the likelihood of an outcome given the outcome of a previous event. For example, if each winning number in a four-digit lottery can be selected only once, then the probability of any particular number landing in the fourth slot of the sequence, p4, depends on the outcome of the third draw, p3; likewise p3 depends on p2, and p2 on p1. If the numbers are drawn from 1 through 9, then p1 = 1/9 because there are nine possible outcomes, p2 = 1/8 because the number selected for the first slot has been eliminated, p3 = 1/7 because two numbers have been eliminated from the original nine, and p4 = 1/6 because three have been eliminated.
  • Two events are mutually exclusive if they cannot occur at the same time. Rolling a die demonstrates this well: the outcome must be exactly one of the six faces, so it cannot simultaneously be 1 and 6. When events are mutually exclusive, the probability of one or the other occurring is the sum of the individual probabilities: P (A or B) = P (A) + P (B) (Jones, Stats: Probability Rules).
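Both rules above can be verified with exact arithmetic. The following sketch (in Python, using exact fractions; the digit pool of 1 through 9 follows the lottery example in the text) multiplies the conditional probabilities of the four draws and adds the probabilities of two mutually exclusive die faces.

```python
from fractions import Fraction

# Drawing digits 1-9 without replacement: each draw's probability is
# conditional on the previous draws, since chosen digits are removed
# from the pool.
p1 = Fraction(1, 9)   # nine digits available
p2 = Fraction(1, 8)   # first digit eliminated
p3 = Fraction(1, 7)   # two digits eliminated
p4 = Fraction(1, 6)   # three digits eliminated

# Probability of matching one particular four-digit sequence.
p_sequence = p1 * p2 * p3 * p4
print(p_sequence)   # 1/3024

# Mutually exclusive events: P(A or B) = P(A) + P(B),
# e.g. rolling a 1 or a 6 on a fair die.
p_1_or_6 = Fraction(1, 6) + Fraction(1, 6)
print(p_1_or_6)     # 1/3
```

Using `Fraction` rather than floating point keeps the results exact, which is convenient when chaining many small probabilities.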

Bayesian decision theory

Bayesian decision theory is a common form of risk assessment and a direct application of probability. Most of us take for granted the smoothness with which many hazardous industries, such as nuclear power, oil drilling, and Internet privacy, function. These fields carry high risk, and the people who keep that risk in check are quantitative analysts, whose job is to quantify and manage it. Risk assessment is the process of identifying possible hazards and analyzing the probability that each hazard will occur (Ready.gov, Risk Assessment). Bayesian decision theory is a form of risk assessment that quantitative analysts use to identify uncertainties in a given situation and find the optimal decision through probabilistic means. Consider the Bayesian Doctor example: a patient isn’t feeling well and is examined by the doctor (University of Haifa, Bayesian Decision Theory Tutorial). Assume that there are two states of nature—the person either has a cold (w1) or a fatal infection (w2). The doctor knows from past trials that the probability of each is:   P (w1) = 0.9 and P (w2) = 0.1 .

This set of probabilities is called the prior probabilities because it is obtained from data observed in the past, before examining the current patient. The doctor can choose between two actions—either prescribe ibuprofen or prescribe antibiotics. Considering only the prior probabilities, the doctor would diagnose the patient as having a cold. However, this diagnosis is incomplete, as the doctor also needs to quantify risk in order to make the optimal decision. Although P (w1) = 0.9, the doctor is still risking a 10% chance of losing a patient to a fatal infection. Consider the two possible actions, a1 = prescribing ibuprofen and a2 = prescribing antibiotics, and their relationship to w1 and w2.

[Image: loss matrix L]

We construct a loss matrix (L), which quantifies the loss incurred by making each decision under each state of nature (University of Haifa, Bayesian Decision Theory Tutorial). There are four possibilities. First, L (a1, w1) = 0, because the doctor is prescribing the right treatment (ibuprofen) to a patient with a cold. Next, assume that L (a1, w2) = 10, because the doctor is prescribing the wrong treatment (ibuprofen) to a patient with a fatal infection. Then, assume that L (a2, w1) = 1, since prescribing antibiotics to a patient with a cold is far less harmful than prescribing only ibuprofen to a patient with a fatal infection. Finally, L (a2, w2) = 0, because the right medication (antibiotics) is being prescribed to a patient with an infection. The expected risk (R) of an action measures the total loss that action can incur, weighted by the probability of each state of nature: R (a) = L (a, w1) * P (w1) + L (a, w2) * P (w2). In the Bayesian Doctor’s case, choosing a1 gives R (a1) = (0.9 * 0) + (0.1 * 10) = 1, while choosing a2 gives R (a2) = (0.9 * 1) + (0.1 * 0) = 0.9.

Therefore, the optimal decision is to give antibiotics to the patient.
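The doctor’s calculation can be reproduced directly. The sketch below (in Python; the dictionary layout and the `expected_risk` helper are illustrative choices, not part of the original tutorial) encodes the priors and the loss matrix from the example and selects the action with the smaller expected risk.

```python
# Expected risk for the Bayesian Doctor example:
#   R(a) = L(a, w1) * P(w1) + L(a, w2) * P(w2)
priors = {"w1": 0.9, "w2": 0.1}   # cold vs. fatal infection
loss = {
    ("a1", "w1"): 0,   # ibuprofen for a cold: correct treatment
    ("a1", "w2"): 10,  # ibuprofen for an infection: very costly
    ("a2", "w1"): 1,   # antibiotics for a cold: mildly costly
    ("a2", "w2"): 0,   # antibiotics for an infection: correct treatment
}

def expected_risk(action):
    """Sum each possible loss weighted by its prior probability."""
    return sum(loss[(action, w)] * p for w, p in priors.items())

for action in ("a1", "a2"):
    print(action, expected_risk(action))
# a1 -> 1.0, a2 -> 0.9, so a2 (antibiotics) minimizes expected risk
```

Because the loss matrix and priors sit in plain dictionaries, the same two-line loop scales unchanged to more actions or more states of nature.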

As seen in the example above, probability plays a significant role in Bayesian decision theory, which is used to minimize risk. Similar to a cost-benefit analysis, the Bayesian approach measures the risk involved in deciding between two mutually exclusive states of nature. The theory also depends on conditional reasoning when finding the expected risks, because the loss incurred depends on which action is taken, prescribing ibuprofen or prescribing antibiotics. By taking advantage of this decision theory and the laws of probability, the doctor is able to save more lives than if she based her decision on prior probabilities alone.