Saturday, August 25, 2007

Binomial distribution

From Wikipedia, the free encyclopedia
[Figure: Probability mass function of the binomial distribution. The lines connecting the dots are added for clarity.]
[Figure: Cumulative distribution function of the binomial distribution. Colors match the pmf figure above.]
Parameters: n \geq 0, the number of trials (integer); 0 \leq p \leq 1, the success probability (real)
Support: k \in \{0,\dots,n\}
Probability mass function (pmf): {n\choose k} p^k (1-p)^{n-k}
Cumulative distribution function (cdf): I_{1-p}(n-\lfloor k\rfloor, 1+\lfloor k\rfloor)
Mean: np
Median: one of \{\lfloor np\rfloor-1, \lfloor np\rfloor, \lfloor np\rfloor+1\}
Mode: \lfloor (n+1)p\rfloor
Variance: np(1-p)
Skewness: \frac{1-2p}{\sqrt{np(1-p)}}
Excess kurtosis: \frac{1-6p(1-p)}{np(1-p)}
Entropy: \frac{1}{2} \ln\left(2 \pi n e p (1-p)\right) + O\left(\frac{1}{n}\right)
Moment-generating function (mgf): (1-p + pe^t)^n
Characteristic function: (1-p + pe^{it})^n

In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p. Such a success/failure experiment is also called a Bernoulli experiment or Bernoulli trial. In fact, when n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.
Contents

* 1 Examples
* 2 Specification
  o 2.1 Probability mass function
  o 2.2 Cumulative distribution function
* 3 Mean, variance, and mode
* 4 Explicit derivations of mean and variance
  o 4.1 Mean
  o 4.2 Variance
* 5 Relationship to other distributions
  o 5.1 Sums of binomials
  o 5.2 Normal approximation
  o 5.3 Poisson approximation
* 6 Limits of binomial distributions
* 7 References
* 8 See also
* 9 External links

Examples

An elementary example is this: Roll a standard die ten times and count the number of sixes. The distribution of this random number is a binomial distribution with n = 10 and p = 1/6.

As another example, assume that 5% of a very large population is green-eyed. You pick 500 people at random. The number of green-eyed people you pick is a random variable X that follows a binomial distribution with n = 500 and p = 0.05.
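
To make these examples concrete, here is a minimal numerical sketch in Python; SciPy is an assumption of this illustration, not something the article prescribes.

    # Numerical check of the two examples; SciPy is an assumption of this sketch.
    from scipy.stats import binom

    # Ten rolls of a fair die, counting sixes: n = 10, p = 1/6.
    sixes = binom(10, 1/6)
    print(sixes.pmf(2))              # P(exactly two sixes) ~ 0.291
    print(sixes.mean())              # expected number of sixes, 10/6 ~ 1.67

    # 500 people drawn from a population that is 5% green-eyed.
    green = binom(500, 0.05)
    print(green.mean(), green.std()) # ~25 expected, standard deviation ~4.87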

Specification

Probability mass function

In general, if the random variable K follows the binomial distribution with parameters n and p, we write K ~ B(n, p). The probability of getting exactly k successes is given by the probability mass function:

f(k;n,p)={n\choose k}p^k(1-p)^{n-k}

for k = 0, 1, 2, ..., n and where

{n\choose k}=\frac{n!}{k!(n-k)!}

is the binomial coefficient (hence the name of the distribution) "n choose k", also denoted C(n, k) or nCk. The formula can be understood as follows: we want k successes (probability p^k) and n − k failures (probability (1 − p)^{n−k}). However, the k successes can occur anywhere among the n trials, and there are C(n, k) different ways of distributing k successes in a sequence of n trials.
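
As a sketch of how the pmf formula translates into code, the following Python snippet (assuming Python 3.8+ for math.comb) computes f(k; n, p) directly and checks that it sums to one:

    # Direct transcription of the pmf; math.comb needs Python 3.8+.
    from math import comb

    def binom_pmf(k, n, p):
        """Probability of exactly k successes in n independent trials."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    n, p = 10, 1/6
    print(binom_pmf(2, n, p))                              # ~0.2907
    print(sum(binom_pmf(k, n, p) for k in range(n + 1)))   # sums to 1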

Reference tables for binomial probabilities are usually filled in only up to n/2, because for k > n/2 the probability can be calculated from its complement as

f(k;n,p)=f(n-k;n,1-p).\,\!

So one looks up a different k and a different p (the binomial distribution is not symmetric in general).
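
A quick numerical check of this complement identity, in a self-contained Python sketch with the illustrative values n = 12, p = 0.3:

    # Check of the complement identity f(k; n, p) = f(n - k; n, 1 - p).
    from math import comb

    def f(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    n, p = 12, 0.3
    assert all(abs(f(k, n, p) - f(n - k, n, 1 - p)) < 1e-12 for k in range(n + 1))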

Cumulative distribution function

The cumulative distribution function can be expressed in terms of the regularized incomplete beta function, as follows:

F(k;n,p) = \Pr(X \le k) = I_{1-p}(n-k, k+1) \!

provided k is an integer and 0 ≤ k ≤ n. If x is not necessarily an integer or not necessarily positive, one can express it thus:

F(x;n,p) = \Pr(X \le x) = \sum_{j=0}^{\lfloor x\rfloor} {n\choose j}p^j(1-p)^{n-j}
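
The two expressions for the cdf can be checked against each other numerically. The sketch below uses scipy.special.betainc(a, b, x), which evaluates the regularized incomplete beta function I_x(a, b); SciPy and the values n = 20, p = 0.3 are assumptions of this illustration.

    # Compare the direct sum of pmf terms with the regularized incomplete beta form.
    from math import comb, floor
    from scipy.special import betainc

    def cdf_sum(x, n, p):
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(floor(x) + 1))

    def cdf_beta(k, n, p):
        # I_{1-p}(n - k, k + 1), valid for integer 0 <= k < n (for k >= n the cdf is 1).
        return betainc(n - k, k + 1, 1 - p)

    n, p = 20, 0.3
    for k in range(n):
        assert abs(cdf_sum(k, n, p) - cdf_beta(k, n, p)) < 1e-12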

For k ≤ np, upper bounds for the lower tail of the distribution function can be derived. In particular, Hoeffding's inequality yields the bound

F(k;n,p) \leq \exp\left(-2 \frac{(np-k)^2}{n}\right), \!

and Chernoff's inequality can be used to derive the bound

F(k;n,p) \leq \exp\left(-\frac{1}{2\,p} \frac{(np-k)^2}{n}\right). \!
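
The following sketch checks numerically (it is not a proof) that both quoted bounds dominate the exact lower-tail cdf for k ≤ np; the values n = 100, p = 0.4 are arbitrary illustrative choices.

    # Sanity check of the Hoeffding and Chernoff upper bounds on the lower tail.
    from math import comb, exp

    def cdf(k, n, p):
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

    n, p = 100, 0.4
    for k in range(int(n * p) + 1):
        hoeffding = exp(-2 * (n * p - k)**2 / n)
        chernoff = exp(-((n * p - k)**2) / (2 * p * n))
        assert cdf(k, n, p) <= hoeffding + 1e-12
        assert cdf(k, n, p) <= chernoff + 1e-12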

Mean, variance, and mode

If X ~ B(n, p) (that is, X is a binomially distributed random variable), then the expected value of X is

\operatorname{E}(X)=np\,\!

and the variance is

\operatorname{Var}(X)=np(1-p).\,\!

This fact is easily proven as follows. Suppose first that we have exactly one Bernoulli trial. We have two possible outcomes, 1 and 0, with the first having probability p and the second having probability 1 − p; the mean for this trial is given by μ = p. Using the definition of variance, we have

\sigma^2= \left(1 - p\right)^2p + (0-p)^2(1 - p) = p(1-p).

Now suppose that we want the variance for n such trials (i.e. for the general binomial distribution). Since the trials are independent, we may add the variances for each trial, giving

\sigma^2_n = \sum_{k=1}^n \sigma^2 = np(1 - p). \quad

The mode of X is the greatest integer less than or equal to (n + 1)p; if m = (n + 1)p is an integer, then m − 1 and m are both modes.
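
These closed forms can be checked against a direct computation from the pmf; the values n = 15, p = 0.37 in the Python sketch below are an arbitrary illustration.

    # Check the closed forms for mean, variance, and mode against the pmf.
    from math import comb, floor

    n, p = 15, 0.37
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

    mean = sum(k * pmf[k] for k in range(n + 1))
    var = sum(k**2 * pmf[k] for k in range(n + 1)) - mean**2
    mode = max(range(n + 1), key=lambda k: pmf[k])

    print(mean, n * p)              # both ~5.55
    print(var, n * p * (1 - p))     # both ~3.4965
    print(mode, floor((n + 1) * p)) # both 5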

Explicit derivations of mean and variance

We derive these quantities from first principles. Certain particular sums occur in both derivations; we rearrange the sums and terms so that the sums that arise run over complete binomial probability mass functions (pmfs), which always sum to unity:

\sum_{k=0}^n \operatorname{Pr}(X=k) = \sum_{k=0}^n {n\choose k}p^k(1-p)^{n-k} = 1

Mean

We apply the definition of the expectation value of a discrete random variable to the binomial distribution

\operatorname{E}(X) = \sum_k x_k \cdot \operatorname{Pr}(x_k) = \sum_{k=0}^n k \cdot \operatorname{Pr}(X=k) = \sum_{k=0}^n k \cdot {n\choose k}p^k(1-p)^{n-k}

The first term of the series (with index k = 0) is zero, since its first factor, k, is zero. It may thus be discarded, i.e. we can change the lower limit to k = 1:

\operatorname{E}(X) = \sum_{k=1}^n k \cdot \frac{n!}{k!(n-k)!} p^k(1-p)^{n-k} = \sum_{k=1}^n k \cdot \frac{n\cdot(n-1)!}{k\cdot(k-1)!(n-k)!} \cdot p \cdot p^{k-1}(1-p)^{n-k}

We've pulled factors of n and k out of the factorials, and one power of p has been split off. We are preparing to redefine the indices.

\operatorname{E}(X) = np \cdot \sum_{k=1}^n \frac{(n-1)!}{(k-1)!(n-k)!} p^{k-1}(1-p)^{n-k}

We rename m = n - 1 and s = k - 1. The value of the sum is not changed by this, but it now becomes readily recognizable

\operatorname{E}(X) = np \cdot \sum_{s=0}^m \frac{m!}{s!(m-s)!} p^s(1-p)^{m-s} = np \cdot \sum_{s=0}^m {m\choose s} p^s(1-p)^{m-s}

The ensuing sum is a sum over a complete binomial pmf (of one order lower than the initial sum, as it happens). Thus

\operatorname{E}(X) = np \cdot 1 = np

Variance

It can be shown (see the computational formula for the variance) that the variance is equal to:

\operatorname{Var}(X) = \operatorname{E}(X^2) - (\operatorname{E}(X))^2.

In using this formula we see that we now also need the expected value of X^2, which is

\operatorname{E}(X^2) = \sum_{k=0}^n k^2 \cdot \operatorname{Pr}(X=k) = \sum_{k=0}^n k^2 \cdot {n\choose k}p^k(1-p)^{n-k}.

We can use our experience gained above in deriving the mean. We know how to process one factor of k. This gets us as far as

\operatorname{E}(X^2) = np \cdot \sum_{s=0}^m k \cdot {m\choose s} p^s(1-p)^{m-s} = np \cdot \sum_{s=0}^m (s+1) \cdot {m\choose s} p^s(1-p)^{m-s}

(again, with m = n - 1 and s = k - 1). We split the sum into two separate sums and we recognize each one

\operatorname{E}(X^2) = np \cdot \bigg( \sum_{s=0}^m s \cdot {m\choose s} p^s(1-p)^{m-s} + \sum_{s=0}^m 1 \cdot {m\choose s} p^s(1-p)^{m-s} \bigg).

The first sum is identical in form to the one we calculated in the Mean (above). It sums to mp. The second sum is unity.

\operatorname{E}(X^2) = np \cdot ( mp + 1) = np((n-1)p + 1) = np(np - p + 1).

Using this result in the expression for the variance, along with the Mean (E(X) = np), we get

\operatorname{Var}(X) = \operatorname{E}(X^2) - (\operatorname{E}(X))^2 = np(np - p + 1) - (np)^2 = np(1-p).
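
A numerical spot check of the intermediate result E(X^2) = np(np − p + 1) and of the final variance, using the arbitrary illustrative values n = 12, p = 0.25:

    # Spot check of E(X^2) = np(np - p + 1) and Var(X) = np(1 - p).
    from math import comb

    n, p = 12, 0.25
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    ex2 = sum(k**2 * pmf[k] for k in range(n + 1))

    print(ex2, n * p * (n * p - p + 1))       # both 11.25
    print(ex2 - (n * p)**2, n * p * (1 - p))  # both 2.25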

Relationship to other distributions

Sums of binomials

If X ~ B(n, p) and Y ~ B(m, p) are independent binomial variables, then X + Y is again a binomial variable; its distribution is

X+Y \sim B(n+m, p).\,
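
One way to see this numerically is to convolve the two pmfs and compare the result with the pmf of B(n + m, p); NumPy and the values n = 7, m = 5, p = 0.4 are assumptions of this sketch.

    # Convolving the pmfs of B(n, p) and B(m, p) reproduces the pmf of B(n + m, p).
    import numpy as np
    from math import comb

    def pmf(n, p):
        return np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])

    n, m, p = 7, 5, 0.4
    assert np.allclose(np.convolve(pmf(n, p), pmf(m, p)), pmf(n + m, p))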

Normal approximation
[Figure: Binomial pmf and normal approximation for n = 6 and p = 0.5.]

If n is large enough, the skew of the distribution is not too great, and a suitable continuity correction is used, then an excellent approximation to B(n, p) is given by the normal distribution

\operatorname{N}(np, np(1-p)).\,\!

Various rules of thumb may be used to decide whether n is large enough. One rule is that both np and n(1 − p) must be greater than 5. However, the specific number varies from source to source, and depends on how good an approximation one wants; some sources give 10. Another commonly used rule holds that the above normal approximation is appropriate only if

\mu \pm 3 \sigma = np \pm 3 \sqrt{np(1-p)} \in [0,n].

The following is an example of applying a continuity correction: Suppose one wishes to calculate Pr(X ≤ 8) for a binomial random variable X. If Y has a distribution given by the normal approximation, then Pr(X ≤ 8) is approximated by Pr(Y ≤ 8.5). The addition of 0.5 is the continuity correction. Warning: The normal approximation gives inaccurate results unless a continuity correction is used.
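
The article fixes no particular n and p for this worked example; the Python sketch below assumes X ~ B(20, 0.5) purely for illustration (SciPy assumed) and compares the exact probability with the corrected and uncorrected normal approximations.

    # Continuity correction for Pr(X <= 8), assuming X ~ B(20, 0.5).
    from math import sqrt
    from scipy.stats import binom, norm

    n, p = 20, 0.5
    mu, sigma = n * p, sqrt(n * p * (1 - p))

    exact = binom.cdf(8, n, p)
    corrected = norm.cdf(8.5, loc=mu, scale=sigma)    # Pr(Y <= 8.5)
    uncorrected = norm.cdf(8.0, loc=mu, scale=sigma)  # noticeably worse

    print(exact, corrected, uncorrected)  # ~0.252, ~0.251, ~0.186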

This approximation is a huge time-saver (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1733. Nowadays, it can be seen as a consequence of the central limit theorem since B(n, p) is a sum of n independent, identically distributed 0-1 indicator variables.

For example, suppose you randomly sample n people out of a large population and ask them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If you sampled groups of n people repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation \sigma = \sqrt{p(1-p)/n}. Large sample sizes n are good because the standard deviation gets smaller, which allows a more precise estimate of the unknown parameter p.

Poisson approximation

The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np remains fixed. Therefore the Poisson distribution with parameter λ = np can be used as an approximation to the binomial distribution B(n, p) if n is sufficiently large and p is sufficiently small. According to two rules of thumb, this approximation is good if n ≥ 20 and p ≤ 0.05, or if n ≥ 100 and np ≤ 10.[1]
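
A brief Python comparison of the binomial and Poisson pmfs under the second rule of thumb; SciPy and the values n = 100, p = 0.05 (so np = 5) are assumptions of this sketch.

    # Binomial vs. Poisson pmf under the rule of thumb n >= 100, np <= 10.
    from scipy.stats import binom, poisson

    n, p = 100, 0.05
    for k in range(11):
        print(k, binom.pmf(k, n, p), poisson.pmf(k, n * p))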

Limits of binomial distributions

* As n approaches ∞ and p approaches 0 with np fixed at λ > 0 (or at least with np approaching λ > 0), the Binomial(n, p) distribution approaches the Poisson distribution with expected value λ.

* As n approaches ∞ while p remains fixed, the distribution of

{X-np \over \sqrt{np(1-p)\ }}

approaches the normal distribution with expected value 0 and variance 1 (this is just a specific case of the Central Limit Theorem).

References

1. ^ NIST/SEMATECH, '6.3.3.1. Counts Control Charts', e-Handbook of Statistical Methods, [accessed 25 October 2006]

* Abdi, H. (2007). "Binomial Distribution: Binomial and Sign Tests". In N.J. Salkind (Ed.), Encyclopedia of Measurement and Statistics. Thousand Oaks, CA: Sage.

* Luc Devroye, Non-Uniform Random Variate Generation, New York: Springer-Verlag, 1986. See especially Chapter X, Discrete Univariate Distributions.

* Voratas Kachitvichyanukul and Bruce W. Schmeiser, Binomial random variate generation, Communications of the ACM 31(2):216–222, February 1988. DOI:10.1145/42372.42381

See also

* Bean machine / Galton board
* Beta distribution
* Hypergeometric distribution
* Multinomial distribution
* Negative binomial distribution
* Poisson distribution
* SOCR

External links

* Binomial Probability Distribution Calculator
* Binomial Probabilities Simple Explanation
* SOCR Binomial Distribution Applet
* CAUSEweb.org Many resources for teaching Statistics including Binomial Distribution
