
A second derivation of Bayes' theorem gives the rule for updating belief in a hypothesis A (i.e., the probability of A) given additional evidence B. This is shown in equation 2.
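A standard form of this update rule, written in the notation defined just below, is:

P(A|B) = P(B|A) P(A) / [ P(B|A) P(A) + P(B|A*) P(A*) ]     (equation 2)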

where A* denotes the negation of the hypothesis A (that is, A being false). The left-hand term, P(A|B), is called the posterior probability; it gives the probability of the hypothesis A after the evidence B has been taken into account. P(B|A) is called the likelihood; it gives the probability of observing the evidence B assuming that the hypothesis A and the background information are true.

In many situations, predictions of outcomes involve probabilities: one theory might predict that a certain outcome has a twenty percent chance of happening, while another may predict a sixty percent chance of the same outcome. In these situations, the actual outcome would tend to shift our degree of belief from one theory to the other. As previously noted, Bayes' theorem gives a way to calculate this revised "degree of belief" (Logue 1995, pg. 95). To construct an example of Bayes' theorem, one begins by designing a set of hypotheses that is mutually exclusive and all-inclusive, that is, a set of hypotheses that between them cover every possible case. Next, one spreads the degree of belief among them by assigning each hypothesis a probability based on what we believe to be true. Each probability must lie between zero and one, and together they must sum to one. This is unlike the often misused everyday sense of probability, as when a person says, "I am behind you one hundred and ten percent"; such a statement is not only physically impossible but falls outside the requirement of being all-inclusive. If one has no prior basis, in either experience or observation, for favoring any hypothesis, one simply spreads the probability evenly among the hypotheses.

The next step in setting up the equation is to construct a list of possible outcomes. The list of possible outcomes, like the list of hypotheses, should be mutually exclusive and all-inclusive. Each hypothesis is then paired with a conditional probability (based on prior knowledge, or spread evenly if no prior knowledge is available) for each of the possible outcomes. This step simply assigns the probability of observing each outcome if that particular hypothesis is true. The unique part of Bayes' theorem, and what was new in the 18th century, is that one then notes which outcome actually occurred and can compute revised probabilities for the hypotheses (see equation 2) based on that actual outcome.
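As a sketch of this procedure, the short program below assigns priors to two hypotheses, gives each a conditional probability for each possible outcome, and then revises the priors once an outcome is observed. The hypotheses, outcomes, and numbers are invented purely for illustration.

```python
# Sketch of the update procedure described above.
# Hypotheses, outcomes, and probabilities are invented for illustration.

# Prior beliefs over a mutually exclusive, all-inclusive set of hypotheses
# (the priors must sum to 1).
priors = {"H1": 0.5, "H2": 0.5}

# Conditional probability of each possible outcome under each hypothesis.
likelihoods = {
    "H1": {"outcome_A": 0.2, "outcome_B": 0.8},
    "H2": {"outcome_A": 0.6, "outcome_B": 0.4},
}

def update(priors, likelihoods, observed):
    """Return revised (posterior) probabilities after observing an outcome."""
    unnormalised = {h: priors[h] * likelihoods[h][observed] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Suppose outcome_A actually occurs: belief shifts toward H2, the hypothesis
# that gave outcome_A the higher probability.
print(update(priors, likelihoods, "outcome_A"))   # {'H1': 0.25, 'H2': 0.75}
```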

CHAPTER IV MEDICAL USE OF BAYES THEOREM

Suppose you undergo a medical test for a relatively rare cancer. Your doctor tells you the cancer has an incidence of 1% in the general population. In other words, the chance of your having the cancer is one in one hundred, i.e. a probability of 0.01. The test is known to be 75% reliable. That is, the test will never fail to find the cancer when it is present, but it will give a positive result in 25% of the cases where no cancer is present; this is known as a false positive.

When you are tested, the test yields a positive result. The question is: given the result of the test, what is the probability that you have the cancer? It is easy to assume that if the test is 75% accurate and you test positive, then the likelihood that you have the cancer is about 75%. That assumption is way off. The actual likelihood that you have the cancer is merely 3.9% (i.e., the probability is 0.039). Three point nine percent is still something to worry about with cancer, but it is hardly as daunting as 75%. The point is that the 75% reliability factor for the test has to be balanced against the fact that only 1% of the entire population has the cancer. Using Bayes' method ensures you make proper use of the information available.

As I have discussed, Bayes' method allows you to calculate the probability of a certain event C (in this example, having the cancer), based on evidence (e.g. the result of the test), when you know (or can estimate):

(1) The probability of C in the absence of any evidence;

(2) The evidence for C;

(3) The reliability of the evidence (i.e., the probability that the evidence is correct).

In this example, the probability in (1) is 0.01, the evidence in (2) is that the test came out positive, and the probability in (3) has to be computed from the 75% figure given. All three pieces of information are highly relevant, and to evaluate the probability that you have the cancer you have to combine them in the right manner. Bayes' theorem allows us to do this.

To simplify the illustration, assume a population of 10,000; since we are only interested in percentages, the reduction in population size will not affect the outcome. Thus, in a population of 10,000, 100 people will have the cancer and 9,900 will not. Bayes' method, as previously mentioned, is about improving an initial estimate after you have obtained new evidence. In the absence of the test, all you could say about the likelihood of having the cancer is that there is a 1% chance that you do. Then you take the test, and it shows positive. How do you revise the probability that you have the cancer?

There are, we know, 100 people in the population who do have the cancer, and for all of them the test will show a positive result. But what of the 9,900 people who do not have the cancer? For 25% of them the test will incorrectly give a positive result, thereby identifying 9,900 × 0.25 = 2,475 people as having the cancer when they actually do not. Thus, overall, the test identifies a total of 100 + 2,475 = 2,575 people as having the cancer. Having tested positive, you are among that group (the evidence tells you this). The question is whether you are in the group that has the cancer or the group that does not but tested as if they did (the false positives). Of the 2,575 who tested positive, 100 really do have the cancer. To calculate the probability that you really have the cancer, you take the number who really do (100) and divide it by the total number who tested positive (2,575): 100/2,575 ≈ 0.039. In other words, there is a 3.9% chance that you have the cancer (see equation 4).
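A short calculation, using the same numbers, makes the counting argument explicit; this is only a sketch of the arithmetic above.

```python
# Counting argument for the cancer example, using the numbers above.
population = 10_000
incidence = 0.01             # 1% of the population has the cancer
false_positive_rate = 0.25   # 25% of healthy people nevertheless test positive

with_cancer = population * incidence                      # 100 people
without_cancer = population - with_cancer                 # 9,900 people

true_positives = with_cancer * 1.0                        # the test never misses a real case
false_positives = without_cancer * false_positive_rate    # 2,475 people

p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(round(p_cancer_given_positive, 3))                  # 0.039, i.e. about 3.9%
```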

This calculation shows why it is important to take account of the overall incidence of the cancer in the population. This, in the Bayesian way of thinking, is known as the prior probability. Being able to calculate results based on this prior probability, whether it is known, merely conjectured, or simply spread evenly in the absence of information, is the advantage of using Bayes' theorem. In our case, in a population of 10,000 with a cancer having an incidence rate of 1%, a test reliability of 75% will produce 2,475 false positives. This far outweighs the number of actual cases, which is only 100. As a result, if your test comes back positive, the chances are overwhelming that you are in the false-positive group. Placed in Bayes' theorem, the data flow like this. Let P(H) represent the probability that the hypothesis is correct in the absence of any evidence, the prior probability. Here H is the hypothesis that you have the cancer, and P(H) is 0.01. You then take the test and get a positive result; this evidence of cancer we will call C. Let P(H|C) be the probability that H is correct given the evidence C. This is the revised estimate we want Bayes' theorem to calculate. Let P(C|H) be the probability that C would be observed if H were true. In our example, the test always detects the cancer when it is present, so P(C|H) = 1. To find the new estimate, you also have to calculate P(H-wrong), the probability that H is false, which is 0.99 in this case. Finally, you have to calculate P(C|H-wrong), the probability that the evidence C (a positive test) would be found even though H is false (i.e., you do not have the cancer), which is 0.25 in the example. In equation 3, Bayes' theorem states:
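With the quantities just defined, the standard statement of the theorem is:

P(H|C) = P(C|H) P(H) / [ P(C|H) P(H) + P(C|H-wrong) P(H-wrong) ]     (equation 3)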

Using the formula for our example in equation 4:
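Substituting P(C|H) = 1, P(H) = 0.01, P(C|H-wrong) = 0.25, and P(H-wrong) = 0.99:

P(H|C) = (1 × 0.01) / (1 × 0.01 + 0.25 × 0.99) = 0.01 / 0.2575 ≈ 0.039     (equation 4)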

A quantity such as P(H|C) is known as a conditional probability: the probability of H occurring given the evidence C.

CHAPTER V BAYES AND THE LAW

Bayes' theorem is used in mathematics and, as I previously mentioned, in many professions. Among them is the practice of law. There have been instances where lawyers have taken advantage of the lack of mathematical sophistication among judges and juries by deliberately confusing the two conditional probabilities P(G|E), the probability that the defendant is guilty given the evidence, and P(E|G), the probability that the evidence would be found assuming the defendant is guilty. Intentional misuse of probabilities has been known to occur where scientific evidence such as DNA testing is involved, for example in paternity suits and in rape and murder cases. In such cases, prosecuting attorneys may provide the court with a figure for P(E), the probability that the evidence would be found in the general population, whereas the figure of relevance in deciding guilt is P(G|E). As Bayes' formula shows, the two values can be very different, with P(G|E) generally much lower than one might assume from P(E) alone. Unless there is other evidence that puts the defendant into the group of possible suspects, such use of P(E) is highly suspect. The reason is that, as with the cancer test example, it ignores the initially low prior probability that a person chosen at random is guilty of the crime in question.
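To see the size of the effect, here is a sketch with purely hypothetical figures: a piece of evidence that matches one person in a million by coincidence, and a pool of one million people who could in principle have committed the crime. No actual case is being described.

```python
# Prosecutor's-fallacy illustration with purely hypothetical numbers.
p_match_innocent = 1e-6      # P(E | not G): chance of a coincidental match
pool_size = 1_000_000        # people who could plausibly have committed the crime

prior_guilt = 1 / pool_size  # P(G) for a person chosen at random from the pool
p_match_guilty = 1.0         # P(E | G): the true culprit matches with certainty

# Bayes' theorem: P(G | E)
posterior = (p_match_guilty * prior_guilt) / (
    p_match_guilty * prior_guilt + p_match_innocent * (1 - prior_guilt)
)
print(round(posterior, 2))   # about 0.5, despite the "one in a million" match
```

Even though the match probability sounds overwhelming, on these assumptions the probability of guilt given the evidence alone is only about one half.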

Instructing the court in the proper use of Bayesian inference was the winning strategy used by American runner Mary Slaney's lawyers when they succeeded in having her 1996 performance ban overturned. Slaney failed a routine test for performance-enhancing steroids at the 1996 Olympic Games, resulting in the United States athletic authorities banning her from future competitions. Her lawyers demonstrated that the test did not take proper account of the prior probability and thus made a tacit initial assumption of guilt.

In addition to its use, or misuse, in court cases, Bayesian inference methods lie behind a number of new products on the market. One example is the paperclip advisor that pops up on the screen for users of Microsoft Office: the system monitors the user's actions and uses Bayesian inference to predict likely future actions and offer appropriate advice accordingly. For another example, chemists can take advantage of a software system that uses Bayesian methods to improve the resolution of nuclear magnetic resonance (NMR) spectrum data. Chemists use such data to work out the molecular structure of the substances they wish to analyze. The system uses Bayes' formula to combine new data from the NMR device with existing NMR data, a procedure that can improve the resolution of the data by several orders of magnitude.

Other recent uses of Bayesian inference include the evaluation of new drugs and medical treatments, the analysis of human DNA to identify particular genes, and the analysis of police arrest data to see whether any officers have been targeting one particular ethnic group.

CHAPTER VI BOMB BAYES OPEN

Another particularly fine example of Bayes' theorem in action is the following. On January 17, 1966, while attempting a mid-air refueling at 30,000 feet off the coast of Palomares, Spain, a Strategic Air Command B-52 Stratofortress bomber (see illustration 1) collided with a KC-135 Stratotanker refueling aircraft. The collision killed all four of the crew members on the KC-135 and three of the seven crew members on the B-52.

When the collision occurred, the B-52 was carrying four hydrogen bombs. As a result of the collision, all four bombs fell from the aircraft. Three of the four bombs were recovered almost immediately along the shore of Spain (see illustration 2). However, search teams could not locate the fourth bomb. It was lost and had presumably fallen to the bottom of the Mediterranean Sea.

1966 was a time of tremendous strain in relations between the then Soviet Union and the United States of America. Tensions were running high, partly because of the ongoing conflict in Vietnam and the failed Bay of Pigs invasion of Cuba. The cold war was at its peak, and competition for superior nuclear weapons design between the United States and the Soviet Union was fierce. When the United States notified the government of Spain of the incident, the Spanish government, fearing a radiation leak, demanded a cleanup and assurances from the United States that the bomb would be recovered. The United States military knew the Soviets were aware of the accident and, in an effort to retrieve the bomb and glean secrets from its design and construction, were looking for it too. United States president Lyndon Johnson refused to believe his military's assertions that there was a good probability the bomb could not and would never be recovered, by either side, because of its presumed depth and condition. The president demanded its recovery.

A team was assembled to try to pinpoint the location for a search and to attempt to retrieve the weapon once (and if) it was located. The group turned to Bayes' theorem. A group of mathematicians was assembled to construct a map of the sea bottom off Palomares, Spain. Once the map was completed, the U.S. Navy assembled a group of submarine and salvage experts to place probabilities, which Sontag describes as "Las Vegas-style bets" [pg. 63], on each of the different scenarios (outcomes) that might describe how the bomb was lost and what happened to it once it left the aircraft. Each scenario left the bomb not just in a different place but in a wide variety of places. Then each possible location (the set being all-inclusive) was considered using the formula, based on the probabilities created in the initial phase, that is, the phase that set the initial probabilities.

The theory created by the 18th-century theologian and amateur mathematician provided a way to apply quantitative reasoning to what we normally think of as the scientific method. That is, when several alternative theories about an outcome exist, as in the case of the missing H-bomb, you can test them by conducting experiments to see whether their predicted consequences actually occur. Put another way, if an idea predicts that something should happen, and it does happen, that strengthens belief in the validity of the idea. It acts as a spoiler too: if an actual outcome contradicts the idea, it weakens belief in the idea.

After a betting round to assign probabilities to the possible locations of the bomb, the locations were plotted again, sometimes at great distances from where logic and acoustic science would have placed them. The bomb, according to S.D. Bono, the historian of the National Atomic Museum in Albuquerque, NM, had been fitted with two parachutes designed by the Sandia Corporation. "Sandia was the general contractor of the weapon system itself as well as the parachutes" (S.D. Bono, personal communication, February 14, 2001). The parachutes complicated the issue further because no one knew whether they had functioned. Whether they did, and what condition they were in, could have had a dramatic effect on the bomb's ultimate resting place. As part of applying Bayes' theorem, the researchers asked the experts individually how they expected the event had unfolded, going over each part of the event with each participant. The team of mathematicians wrote out possible endings to the crash story and took bets (set probabilities) on which ending the experts believed to be most likely. After the betting was over, they used the resulting odds to assign probabilities to the several locations identified by the betting. The site was mapped again, but this time the most probable locations were marked according to their relevance.
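As a rough sketch of how such bets can be combined into a probability map, the program below weights each scenario's likely resting places by the probability assigned to that scenario. The scenarios, grid cells, and numbers are invented for illustration and are not the Navy's actual figures.

```python
# Sketch: turning expert "bets" on crash scenarios into a probability map
# of search locations. All scenarios, cells, and numbers are hypothetical.

# Probability the experts assign to each scenario (must sum to 1).
scenario_probs = {
    "chutes_opened": 0.3,
    "chutes_failed": 0.5,
    "broke_up_in_air": 0.2,
}

# For each scenario, where the bomb would most likely have ended up
# (probabilities over map grid cells; each row sums to 1).
location_given_scenario = {
    "chutes_opened":   {"cell_A": 0.1, "cell_B": 0.6, "cell_C": 0.3},
    "chutes_failed":   {"cell_A": 0.7, "cell_B": 0.2, "cell_C": 0.1},
    "broke_up_in_air": {"cell_A": 0.3, "cell_B": 0.3, "cell_C": 0.4},
}

# Total probability for each cell: sum over scenarios of
# P(scenario) * P(cell | scenario).
cells = {"cell_A": 0.0, "cell_B": 0.0, "cell_C": 0.0}
for scenario, p_s in scenario_probs.items():
    for cell, p_c in location_given_scenario[scenario].items():
        cells[cell] += p_s * p_c

print(cells)   # {'cell_A': 0.44, 'cell_B': 0.34, 'cell_C': 0.22}
# The highest-probability cell would be searched first.
```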
