# Confidence interval

In statistics, confidence intervals are the most prevalent form of interval estimation. If U and V are statistics (i.e., "observable" random variables) whose probability distribution depends on some unobservable parameter θ, and the relation

P(U < θ < V) = 0.9

holds, then the random interval (U, V) is a "90% confidence interval for θ".

## How to misunderstand confidence intervals

It is very tempting to misunderstand this statement in the following way. We used capital letters U and V for random variables; it is conventional to use lower-case letters u and v for their observed values in a particular instance. The misunderstanding is the conclusion that

P(u < θ < v) = 0.9,

so that after the data has been observed, a conditional probability distribution of θ, given the data, is inferred. For example, suppose X is normally distributed with expected value θ and variance 1. (It is grossly unrealistic to take the variance to be known while the expected value must be inferred from the data, but it makes the example simple.) The random variable X is observable. (The random variable X − θ is not observable, since its value depends on θ.) Then X − θ is normally distributed with expectation 0 and variance 1; therefore

P( - 1.645 < X - θ < 1.645) = 0.9.

Consequently

P(X - 1.645 < θ < X + 1.645) = 0.9,

so the interval from X − 1.645 to X + 1.645 is a 90% confidence interval for θ. But when X = 82 is observed, can we then say that

$P(82-1.645<\theta<82+1.645)=0.9\ \mbox{?}$

This conclusion does not follow from the laws of probability because θ is not a "random variable"; i.e., no probability distribution has been assigned to it. Confidence intervals are generally a frequentist method, i.e., employed by those who interpret "90% probability" as "occurring in 90% of all cases". Suppose, for example, that θ is the mass of the planet Neptune, and the randomness in our measurement error means that 90% of the time our statement that the mass is between this number and that number will be correct. The mass is not what is random. Therefore, given that we have measured it to be 82 units, we cannot say that in 90% of all cases, the mass is between 82 − 1.645 and 82 + 1.645. There are no such cases; there is, after all, only one planet Neptune.
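The frequentist claim can be checked by simulation: fix a hypothetical value of θ, repeat the measurement many times, and count how often the random interval (X − 1.645, X + 1.645) covers it. A minimal sketch in Python, taking θ = 82 purely for illustration:

```python
import random

random.seed(0)
theta = 82.0        # hypothetical true value; unknown in practice
trials = 100_000
covered = 0
for _ in range(trials):
    x = random.gauss(theta, 1.0)        # one observation X ~ N(theta, 1)
    if x - 1.645 < theta < x + 1.645:   # does the random interval cover theta?
        covered += 1
print(covered / trials)                 # close to 0.90
```

The 90% attaches to the interval-generating procedure over many repetitions, not to any single realized interval such as (80.355, 83.645).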

But if probabilities are construed as degrees of belief rather than as relative frequencies of occurrence of random events, i.e., if we are Bayesians rather than frequentists, can we then say we are 90% sure that the mass is between 82 − 1.645 and 82 + 1.645? Many answers to this question have been proposed, and are philosophically controversial. The answer will not be a mathematical theorem, but a philosophical tenet. Less controversial are Bayesian credible intervals, in which one starts with a prior probability distribution of θ, and finds a posterior probability distribution, which is the conditional probability distribution of θ given the data.
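For contrast, the Bayesian computation in this example is short. Under a flat (improper) prior on θ — an assumption introduced here for illustration, not stated in the text — the posterior given X = 82 is N(82, 1), so the 90% central credible interval happens to coincide numerically with the confidence interval:

```python
from statistics import NormalDist

x_obs = 82.0
posterior = NormalDist(mu=x_obs, sigma=1.0)  # assumes a flat prior on theta
lo, hi = posterior.inv_cdf(0.05), posterior.inv_cdf(0.95)
print(lo, hi)  # approximately 80.355 and 83.645, i.e. 82 +/- 1.645
```

The numerical agreement is special to this setup; with an informative prior the credible interval and the confidence interval would differ.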

For users of frequentist methods, the explanation of a confidence interval can amount to something like: "The confidence interval represents values for the population parameter for which the difference between the parameter and the observed estimate is not statistically significant at the 10% level". Critics of frequentist methods suggest that this hides the real and, to the critics, incomprehensible frequentist interpretation which might be expressed as: "If the population parameter in fact lies within the confidence interval, then the probability that the estimator either will be the estimate actually observed, or will be closer to the parameter, is less than or equal to 90%". Users of Bayesian methods, if they produced a confidence interval, might by contrast say "My degree of belief that the parameter is in fact in the confidence interval is 90%". Disagreements about these issues are not disagreements about solutions to mathematical problems. Rather they are disagreements about the ways in which mathematics is to be applied.


## Concrete practical example

Here is one of the most familiar realistic examples. Suppose X1, ..., Xn form an independent sample from a normally distributed population with mean μ and variance σ². Let

$\overline{X}=(X_1+\cdots+X_n)/n,$
$S^2=\frac{1}{n-1}\sum_{i=1}^n\left(X_i-\overline{X}\,\right)^2.$

Then

$T=\frac{\overline{X}-\mu}{S/\sqrt{n}}$

has a Student's t-distribution with n − 1 degrees of freedom. Note that the distribution of T does not depend on the values of the unobservable parameters μ and σ²; i.e., it is a pivotal quantity. If c is the 95th percentile of this distribution, then

$P\left(-c<T<c\right)=0.9.$

(Note: "95th" and "0.9" are both correct: with 5% of the probability in each tail, the two-sided probability is 0.90. This is a frequent occasion for careless mistakes.)

Consequently

$P\left(\overline{X}-cS/\sqrt{n}<\mu<\overline{X}+cS/\sqrt{n}\right)=0.9$

and we have a 90% confidence interval for μ.
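Putting the pieces together, the interval can be computed from a sample as follows. This is a sketch with invented data; n = 10, and c = 1.833 is the tabulated 95th percentile of the t-distribution with 9 degrees of freedom:

```python
import math
import statistics

# Hypothetical sample of n = 10 measurements (invented for illustration)
data = [81.2, 83.0, 82.5, 80.9, 82.1, 83.4, 81.7, 82.8, 80.5, 82.9]
n = len(data)

xbar = statistics.mean(data)   # sample mean, X-bar above
s = statistics.stdev(data)     # sample standard deviation S (n - 1 denominator)
c = 1.833                      # 95th percentile of t with n - 1 = 9 df (table value)

half_width = c * s / math.sqrt(n)
lo, hi = xbar - half_width, xbar + half_width
print(f"90% confidence interval for mu: ({lo:.2f}, {hi:.2f})")
```

Note that `statistics.stdev` already divides by n − 1, matching the definition of S² above; using the population form (`statistics.pstdev`) here would be a bug.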