The binomial distribution belongs to probability theory in mathematics. It is a discrete probability distribution, frequently used to model the number of successes in a sample of size ‘n’ drawn with replacement from a population of size ‘N’.

P_p(n | N) = C(N, n) p^n q^(N − n), where q = 1 − p.

Here C(N, n) is known as the binomial coefficient and can be written in mathematical form as

C(N, n) = N! / ( n! (N − n)! ),

so that the distribution written out in full is

P_p(n | N) = [ N! / ( n! (N − n)! ) ] p^n (1 − p)^(N − n).
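This formula can be checked numerically; a minimal sketch in Python using the standard library's math.comb (the function name binomial_prob is illustrative):

```python
from math import comb

def binomial_prob(n, N, p):
    """P_p(n | N): probability of exactly n successes in N independent
    draws, each succeeding with probability p."""
    q = 1 - p
    return comb(N, n) * p**n * q**(N - n)

# Probability of exactly 3 heads in 10 fair coin flips: C(10,3)/2^10
print(binomial_prob(3, 10, 0.5))
```

Summing the function over all n from 0 to N gives 1, as any probability distribution must.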

The probability of obtaining more than n successes in a binomial distribution is written as

P = ∑ (k = n+1 to N) C(N, k) p^k (1 − p)^(N − k) = I_p(n + 1, N − n),

where I_p is the regularized incomplete beta function, defined for two parameters a and b as

I_x(a, b) = B(x; a, b) / B(a, b),

where B(a, b) is the beta function and B(x; a, b) is the incomplete beta function.
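The identity between the binomial tail sum and the regularized incomplete beta function can be verified numerically. A Python sketch, approximating the incomplete beta integral with a simple midpoint rule (the step count is an arbitrary choice of this sketch):

```python
from math import comb, gamma

def binomial_tail(n, N, p):
    """P(more than n successes) = sum over k = n+1 .. N of C(N,k) p^k (1-p)^(N-k)."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(n + 1, N + 1))

def regularized_incomplete_beta(x, a, b, steps=200000):
    """I_x(a, b) = B(x; a, b) / B(a, b), with the incomplete beta integral
    approximated by midpoint integration and B(a, b) via the gamma function."""
    h = x / steps
    integral = h * sum(((i + 0.5) * h)**(a - 1) * (1 - (i + 0.5) * h)**(b - 1)
                       for i in range(steps))
    beta_ab = gamma(a) * gamma(b) / gamma(a + b)
    return integral / beta_ab

n, N, p = 3, 10, 0.4
print(binomial_tail(n, N, p))                        # direct tail sum
print(regularized_incomplete_beta(p, n + 1, N - n))  # I_p(n+1, N-n), should agree
```

The two printed values agree to many decimal places, illustrating the identity without relying on a statistics library.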

Let's talk about the mean of the binomial distribution. It is defined in mathematical terms as

μ = np

The mean of n independent trials equals the sum of the means of the individual trials, i.e.

μ_n = ∑ (k = 1 to n) p = np

Example: the binomial distribution for a random variable Y having parameters n and p represents the sum of n independent variables Z, each of which takes the value 1 with probability p and the value 0 otherwise. The mean of each such variable is therefore

1 · p + 0 · (1 − p) = p.

When each repetition of an experiment randomly ends in either success or failure, the repetitions are known as Bernoulli trials. To describe Bernoulli trials formally, we define a collection of random variables X_j whose values depend on the outcomes: X_j = 1 if the jth outcome is a success, and X_j = 0 if it is a failure. We then set S_n = X_1 + X_2 + ….....+ X_n, where S_n is the number of successes in n trials. If p is the probability of success, and q = 1 − p, then the expected value can be calculated as

E(S_n) = E(X_1) + E(X_2) + · · · + E(X_n) = np,

and the central limit theorem for Bernoulli trials states that

lim (n → ∞) P( a ≤ (S_n − np) / √(npq) ≤ b ) = ∫ (a to b) f(x) dx,

where a and b are two fixed numbers and f(x) is the standard normal density.
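This limit can be illustrated by simulation; a Python sketch (the seed, the number of trials per experiment, and the number of experiments are arbitrary choices of this sketch):

```python
import random
from math import erf, sqrt

def std_normal_cdf(x):
    # Standard normal CDF, expressed via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

random.seed(0)                     # fixed seed so the run is reproducible
n, p = 500, 0.5                    # trials per experiment (illustrative values)
q = 1 - p
a, b = -1.0, 1.0
experiments = 2000
hits = 0
for _ in range(experiments):
    s = sum(random.random() < p for _ in range(n))   # S_n for one experiment
    z = (s - n * p) / sqrt(n * p * q)                # standardized count
    hits += (a <= z <= b)

print(hits / experiments)                        # simulated P(a <= Z <= b)
print(std_normal_cdf(b) - std_normal_cdf(a))     # limiting value, about 0.6827
```

The simulated frequency lands close to the normal-integral value, as the central limit theorem predicts.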

Suppose we roll a die once and count an outcome of 1 as a success. Then we can analyze this Bernoulli trial as shown below.

The total number of possible outcomes of rolling a die is 6, and success here means rolling a 1, so the probability of success is

P(s) = 1/6, and the probability of failure in this event is P(f) = 5/6.

So the probabilities of success and failure in this Bernoulli trial are 1/6 and 5/6, respectively.
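The arithmetic above can be reproduced with exact fractions; a minimal Python sketch:

```python
from fractions import Fraction

p_success = Fraction(1, 6)    # probability of rolling a 1
p_failure = 1 - p_success     # probability of any other face

print(p_success, p_failure)   # 1/6 5/6
```

Because success and failure are the only two outcomes, the two probabilities sum to exactly 1.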

This is all about Bernoulli trials.

The binomial distribution is related to probability theory in mathematics. It is the discrete probability distribution of the number of successes in a sequence of ‘n’ independent experiments, each of which has probability of success ‘p’. If n = 1, the binomial distribution is known as the Bernoulli distribution.

The binomial distribution explains the behavior of a count variable if the variable satisfies the following conditions:

· The number of trials ‘n’ must be fixed.

· Each trial must be independent of the others.

· Each trial has only two possible outcomes: success (1) or failure (0).

· The probability of success ‘p’ is the same for every trial.

The sampling distribution of a count variable is well approximated by the binomial distribution only when the population size is much larger than the sample size. Applying the binomial distribution to observations obtained from simple random samples is not appropriate unless the population is at least 10 times larger than the sample.
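The 10x rule of thumb can be illustrated by comparing the exact without-replacement probabilities (the hypergeometric distribution) against the binomial approximation; a Python sketch with illustrative parameter values:

```python
from math import comb

def hypergeom_pmf(k, n, K, N):
    # Exact pmf for k successes when drawing n items WITHOUT replacement
    # from a population of N items that contains K successes
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
errors = {}
for N in (50, 100, 1000):          # population sizes; 100 is 10x the sample
    K = int(N * p)                 # number of successes in the population
    errors[N] = max(abs(hypergeom_pmf(k, n, K, N) - binom_pmf(k, n, p))
                    for k in range(n + 1))
    print(N, round(errors[N], 4))
```

The largest pmf discrepancy shrinks as the population grows relative to the sample, which is exactly what the rule of thumb expresses.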

Probability mass function (mean and variance of the binomial distribution): Let's assume a random variable ‘M’. If ‘M’ follows the binomial distribution with parameters ‘n’ (the number of trials) and ‘p’ (the probability of success), then the probability of getting exactly ‘k’ successes in ‘n’ trials is given by the probability mass function

f(k; n, p) = P(M = k) = C(n, k) p^k (1 − p)^(n − k),

where C(n, k) is defined as

C(n, k) = n! / ( k! (n − k)! ).

This is the binomial coefficient, in which ‘k’ denotes the number of successes and (n − k) the number of failures.
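The binomial coefficient can be built directly from factorials, exactly as in the formula above; a small Python sketch (the helper names are illustrative):

```python
from math import comb, factorial

def binom_coeff(n, k):
    # n! / (k! (n - k)!): the number of ways to place k successes in n trials
    return factorial(n) // (factorial(k) * factorial(n - k))

def pmf(k, n, p):
    return binom_coeff(n, k) * p**k * (1 - p)**(n - k)

print(binom_coeff(10, 3))    # 120, same as math.comb(10, 3)
print(pmf(3, 10, 0.5))       # 0.1171875
```

In practice math.comb computes the coefficient without the intermediate factorials, but the factorial form makes the formula explicit.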

If ‘Y’ is a binomially distributed random variable, then the expected value of ‘Y’ is

E[Y] = np,

and the variance of ‘Y’ is

Var[Y] = np(1 − p).
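Both identities can be checked by computing the mean and variance directly from the probability mass function; a Python sketch with illustrative parameter values:

```python
from math import comb

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 12, 0.25                       # illustrative parameters
mean = sum(k * pmf(k, n, p) for k in range(n + 1))
var = sum((k - mean)**2 * pmf(k, n, p) for k in range(n + 1))

print(mean, n * p)              # direct mean vs np
print(var, n * p * (1 - p))     # direct variance vs np(1 - p)
```

The direct sums reproduce np = 3 and np(1 − p) = 2.25 up to floating-point rounding.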

Suppose each trial of an experiment has two possible outcomes, ‘1’ and ‘0’. The former has probability p and the latter has probability (1 − p). Since the general binomial distribution is a sum of ‘n’ independent trials, its mean and variance equal the sums of the means and variances of the individual trials, i.e.

μ_n = ∑ (k = 1 to n) μ = np,

σ²_n = ∑ (k = 1 to n) σ² = np(1 − p),

where ‘μ_n’ is the mean of the ‘n’ independent trials and ‘σ²_n’ is their combined variance, with μ = p and σ² = p(1 − p) for a single trial.

The covariance between two variables ‘x’ and ‘y’ (a measure of how much the two variables change together) can, in the case n = 1, be written from the definition of covariance as

Cov(x, y) = E(xy) − E(x)E(y).

This is all about the variance of the binomial distribution.
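The covariance definition can be evaluated on a small joint distribution of two 0/1 variables; a Python sketch (the joint probabilities below are made-up illustrative numbers):

```python
# Cov(x, y) = E(xy) - E(x)E(y), checked on an illustrative joint
# distribution over pairs (x, y) of 0/1 values
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

ex = sum(x * pr for (x, y), pr in joint.items())     # E(x)
ey = sum(y * pr for (x, y), pr in joint.items())     # E(y)
exy = sum(x * y * pr for (x, y), pr in joint.items())  # E(xy)
cov = exy - ex * ey

print(ex, ey, cov)
```

Here E(x) = 0.5, E(y) = 0.4 and E(xy) = 0.3, so the covariance is 0.3 − 0.2 = 0.1; a positive value, meaning the two indicators tend to be 1 together.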

Binomial distribution problems are based on Bernoulli trials, which must satisfy three important conditions, given as follows:

1. Each trial results in exactly one of two possible outcomes, called "success" or "failure".

2. The probability of success remains the same from one trial to the next.

3. The trials are mutually exclusive, i.e. independent of one another.

Let us understand this through an example: suppose in a game a person throws a die and wins if he gets 4, 5, or 6; otherwise he loses.

So we have just two possible outcomes: the person either wins or loses.

The probability of winning is 3 / 6 = 0.5, which remains fixed for every further throw (it neither increases nor decreases). Thus all the conditions mentioned above are met.

The general formula for the binomial probability distribution can be given as follows:

Suppose there are ‘n’ Bernoulli trials and the probability of success on each trial equals ‘p’. Then the probability of obtaining ‘X’ successes is

P(X) = C(n, X) p^X (1 − p)^(n − X) = [ n! / ( X! (n − X)! ) ] p^X (1 − p)^(n − X),

for X = 0, 1, 2, 3, …, n.

Here P(X) represents the probability that the number of successes achieved in the sample equals ‘X’. So ‘X’ is the binomial random variable, whereas n and p are constants in the calculation; only the value of ‘X’ varies throughout the probability distribution.

A binomial distribution chart for the possible values of ‘X’ can be given as follows, depicting the different probability values:
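A chart of this kind can be tabulated programmatically; a minimal Python sketch that prints the probability of each value of X together with a simple text bar (the parameter values are illustrative):

```python
from math import comb

n, p = 5, 0.5                          # illustrative parameters
probs = {}
for x in range(n + 1):
    probs[x] = comb(n, x) * p**x * (1 - p)**(n - x)
    # one '#' per percentage point of probability
    print(f"X = {x}: P = {probs[x]:.4f}  " + "#" * round(probs[x] * 100))
```

For a symmetric case like p = 0.5 the bars form the familiar bell-like shape centered at np.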