
Random Variable


A random variable is a variable used in probability theory to predict the occurrence of events. It can be defined as a variable that obtains its value from a measurement on a system whose components can vary; equivalently, it is a numerical description of the result of an event. The value of a random variable is not fixed: it can take different values depending on the outcome of the underlying experiment.
The possible values of a random variable represent the possible outcomes of an event that will occur in the future. Random variables are of two types: discrete and continuous.
A random variable usually takes real values, but it can also take complex numbers, Boolean values and other types. Real-valued random variables are the ones most often used in scientific research and experiments for making predictions. Random variables are also known as stochastic variables; in short, a random variable is the numerical outcome of an event.
Discrete random variable - a random variable that can take only a finite set of values is called a discrete random variable.
Continuous random variable - a random variable that can take any value in a continuous range is called a continuous random variable.

Probability Distribution

The concept of probability is very useful in business, economics and other fields for predicting the future to some extent. Words such as "probably" and "likely" in everyday language express that the happening of an event is not certain. Probability, then, is the chance of an event happening, expressed quantitatively.
Probability measures the degree of uncertainty, and equally the degree of certainty.
Mathematically, the probability of an event A is: P(A) = m/n,
where 'm' = number of possible outcomes favorable to the occurrence of A,
n = total number of possible outcomes,
and n − m = number of possible outcomes unfavorable to the occurrence of A.
We also notice that P(A) + P(A') = m/n + (n − m)/n = 1, that is,
P(A) + P(not A) = 1.
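The complement rule above can be checked with a short Python sketch; the particular counts (m = 2 favorable outcomes out of n = 6) are illustrative assumptions, not taken from the text:

```python
# Complement rule: P(A) + P(not A) = 1.
# Illustrative counts: rolling a die, A = "roll a 1 or 2".
n = 6                  # total possible outcomes
m = 2                  # outcomes favorable to A
p_a = m / n            # P(A)
p_not_a = (n - m) / n  # P(not A)
print(p_a + p_not_a)   # 1.0
```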
An experiment whose outcome is known in advance is called a deterministic experiment. If an experiment (performed under identical conditions) can result in two or more outcomes, it is called a random experiment, and a variable whose value is determined by the outcome of a random experiment is called a random variable. Random variables are the building blocks of probability distributions.
Now we discuss probability distributions; to understand this, let's take a few examples.
Example 1: Find the probability of getting a number less than 6 when throwing a die once.
Solution: We use the following steps -
Step 1: When a die is thrown, there are 6 possible outcomes:
S = {1, 2, 3, 4, 5, 6}
So, the total number of outcomes is n(S) = 6.
Step 2: Let A be the favorable event "a number less than 6", so A = {1, 2, 3, 4, 5}.
So, the number of favorable outcomes is n(A) = 5. Therefore the probability of getting less than 6 when throwing one die is P(A) = n(A)/n(S) = 5/6.
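The counting in this example can be reproduced in a few lines of Python, counting favorable outcomes against the whole sample space:

```python
# P(roll less than 6) on one fair die: count favorable outcomes.
sample_space = [1, 2, 3, 4, 5, 6]
favorable = [x for x in sample_space if x < 6]   # the event A
probability = len(favorable) / len(sample_space)
print(probability)  # 5/6 ≈ 0.833
```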
Example 2: Find the probability distribution of the number of heads when two coins are tossed at the same time.
Solution: We use the following steps -
Step 1: When two coins are tossed at the same time, the sample space of possible outcomes is:
S = {HH, HT, TH, TT}
So, n(S) = 4.
Step 2: Let the random variable X represent the number of heads. Then X can take the values 0, 1, 2. X is 0 when the outcome is two tails (TT), X is 1 when the outcome is one head and one tail (HT or TH), and X is 2 when the outcome is two heads (HH). Now we calculate the probability distribution:
When X is 0, the probability is P(X = 0) = 1/4 = 0.25
When X is 1, the probability is P(X = 1) = 2/4 = 0.50
When X is 2, the probability is P(X = 2) = 1/4 = 0.25
These examples show how we calculate probability distributions.
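The two-coin distribution can be derived by brute-force enumeration of the sample space, which is a useful sanity check for small examples like this:

```python
from itertools import product
from collections import Counter

# Enumerate all outcomes of tossing two coins and count heads in each.
outcomes = list(product("HT", repeat=2))        # HH, HT, TH, TT
heads = Counter(o.count("H") for o in outcomes)  # X = number of heads
for x in sorted(heads):
    print(x, heads[x] / len(outcomes))
# 0 0.25
# 1 0.5
# 2 0.25
```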

Discrete Random Variable

A random variable, also named a stochastic variable, is defined as a numerical description of the outcome of an experiment. Since it is a variable, its value is not fixed; its value is one of the possible outcomes of the performed experiment. Random variables are usually real-valued functions, but they can also take values of other types such as Boolean values, complex numbers, matrices, sequences or trees.
Random variables are used to make predictions based on the data obtained from experiments; historically, they were developed for the analysis of games of chance. A random variable can be thought of as a function on the sample space of the experiment to be performed.
Discrete random variables are obtained by counting and take integer values. Random variables are generally denoted by upper-case (capital) letters, and they are described by their probabilities.
For example, suppose a random variable 'Y' takes six values with probabilities 0.05, 0.10, 0.20, 0.40, 0.15 and 0.10; then
∑ P[Y = y] = 0.05 + 0.10 + 0.20 + 0.40 + 0.15 + 0.10 = 1.
The notation P(y) may also be used for P[Y = y]; this is the probability function, also named the probability mass function. The cumulative probabilities are given by F(y) = ∑ i≤y P(i). Here F(y) is the probability that Y will take a value less than or equal to y; the function F(y) is named the cumulative distribution function (CDF).
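Building the CDF from a probability mass function is just a running sum over the values in order; the pmf below reuses the illustrative probabilities from the example above:

```python
# Cumulative distribution: F(y) = sum of P(i) for all i <= y.
pmf = {0: 0.05, 1: 0.10, 2: 0.20, 3: 0.40, 4: 0.15, 5: 0.10}
cdf = {}
running = 0.0
for y in sorted(pmf):
    running += pmf[y]
    cdf[y] = running
print(cdf)  # F reaches 1 at the largest value of Y
```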
For a random variable 'Y' to be called discrete, it must assume only a finite number of values. For example, if two bulbs are selected from a certain lot, then the number of defective bulbs can be 0, 1 or 2. Here the range of the variable is 0 to 2, and the random variable can take only the specific values 0, 1 and 2 in that range, not values such as 1.1 or 0.2.
A discrete probability distribution is characterized by a probability mass function; a random variable with such a distribution is a discrete random variable.
A continuous probability distribution is described by a probability density function. If the distribution of 'Y' is continuous, then 'Y' is called a continuous random variable; it has a continuous range of values.
The concepts of probability distributions and random variables describe the mathematical discipline of probability theory. When any quantity is measured from a population, such as the height of people, the durability of a material, or traffic flow, there are chances of error, and numbers are used to describe the quantity. Probability distributions are classified as discrete or continuous.
Let's have some examples to understand discrete and continuous variables.
· Assume a fire department mandates that all firefighters must weigh between 100 and 200 pounds. The weight of a firefighter is then an example of a continuous variable, because the weight can take any value between 100 and 200 pounds.
· Assume the other example of flipping a coin repeatedly. The number of heads can be any integer value: 0, 1, 2 and so on, with no upper bound. Since it can take only discrete values, this is an example of a discrete variable.

Mean of Discrete Random Variable

The mean of a discrete random variable, also known as the expected value, the expectation or the first moment, is a central notion in probability and statistics. The mean of a random variable is basically the weighted average of all the values the random variable can take. For a continuous random variable, densities play the role that the probability weights play in the discrete case. The expected value can also be defined as the integral of the random variable with respect to its probability measure. To get the mean of a plain data set, we just add all the values and divide the result by the total number of values in the data set.
We may also understand the expected value through the law of large numbers. The law of large numbers says that as the number of trials or experiments increases, the mean of these trials becomes closer to the true mean, or expected value, of the distribution, denoted by 'µ': if the expected value exists, the sample mean converges to it as the sample size grows to infinity. Note that the expected value need not be attainable in any single trial; for example, the expected value of a die roll is 3.5, much like an expected value of 5.5 children.
For heavy-tailed distributions like the Cauchy distribution, the mean does not exist and cannot be calculated.
The expected value of the indicator function of an event equals the probability of that event: the indicator is 1 if the event occurs and 0 otherwise. This is how the law of large numbers justifies estimating probabilities by frequencies.
The mean of a discrete random variable 'Y' is the weighted average of all the possible values of the random variable, where each value yi is weighted by its probability pi. We use 'µ' to denote the expected value of 'Y'.
Then the expected value or mean value is defined as:
µ = y1p1 + y2p2 + y3p3 + . . . . . . . . + ynpn,
that is, µ = ∑ yipi.
The mean of a random variable is simply a way to summarize the average of the variable over many trials or observations.
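The weighted-average formula µ = ∑ yipi is straightforward to compute; the sketch below uses a fair die as the example distribution:

```python
# Mean of a discrete random variable: mu = sum of y_i * p_i.
values = [1, 2, 3, 4, 5, 6]    # outcomes of a fair die
probs = [1 / 6] * 6            # each outcome equally likely
mu = sum(y * p for y, p in zip(values, probs))
print(mu)  # ≈ 3.5
```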
A useful property: if a random variable 'Y' is adjusted by multiplying by a value 'c' and adding a value 'e', then the mean is adjusted in the same way:
µ(e + cY) = e + c·µ(Y), where µ(Y) is the mean of Y.
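This linearity property can be verified numerically; the constants c and e below are arbitrary illustrative choices, and the distribution is again a fair die:

```python
# Linearity of expectation: if Z = e + c*Y, then mu_Z = e + c*mu_Y.
values = [1, 2, 3, 4, 5, 6]
p = 1 / 6
c, e = 2.0, 3.0                                   # arbitrary constants
mu_y = sum(y * p for y in values)                 # mean of Y (3.5)
mu_z = sum((e + c * y) * p for y in values)       # mean of Z, computed directly
print(mu_z, e + c * mu_y)  # the two agree (≈ 10.0)
```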
The main objective of the mean is to find a single representative value for a number of values or measurements in a given data set or sample. Related measures are the mode, arithmetic mean, geometric mean, harmonic mean, root mean square, etc.
So this is all about the mean of discrete random variables.

Variance of a Discrete Random Variable

In probability, the collection of all possible outcomes of an event is called the sample space, and a function defined on this sample space is called a random variable function; its values are the values of the random variable. For example, if we select a random player from an American football team and measure his domestic record, then the collection of players is the sample space and the domestic record is the random variable. From a mathematical point of view, when we roll two dice together, we can describe the sum of the two outcomes as follows -
Here the sample space is S = {(i, j) : 1 ≤ i, j ≤ 6} and the random variable function is F(i, j) = i + j.
A random variable that can take only a finite number of values is called a discrete random variable. For example, if we select four bulbs from a given set, the number of defective bulbs may be 0, 1, 2 or 3; since the range of defective bulbs is known and finite, this is a discrete random variable. Now we will discuss how to calculate the variance of a discrete random variable:
We will use following steps to find out the variance of discrete random variable -
Step 1: First of all we calculate the expected value r = E(X) of the random variable, where X takes the values 1, 2, 3, ..., n with probabilities m(x).
Step 2: After finding the expected value, we calculate the squared deviation (x − r)² for each value, using the following table -
x m(x) (x − r)²
1 - -
2 - -
3 - -
.
.
n - -
Step 3: After the above two steps, we calculate the variance of the discrete random variable with the following formula -
V(X) = ∑ (x − r)² m(x).
Suppose we throw a die once and let 'X' be the number obtained; then we can calculate the variance V(X) as follows -
Step 1: First of all we find the expected value of rolling a die.
The possible numbers are 1, 2, 3, 4, 5, 6, each with probability 1/6.
So, the expected value of 'X' is
r = E(X) = 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6)
= 21/6 = 7/2
Step 2: Now we will calculate the square of expected value of random variable by using the following table -
x m(x) (x – 7/2)2
1 1/6 25/4
2 1/6 9/4
3 1/6 1/4
4 1/6 1/4
5 1/6 9/4
6 1/6 25/4
Step 3: Using the above table we can calculate the variance of the discrete random variable -
V(X) = ∑ (x − r)² m(x)
= 1/6[(25/4) + (9/4) + (1/4) + (1/4) + (9/4) + (25/4)]
= (1/6)(70/4)
= 35/12
So, the variance of the discrete random variable equals 35/12.
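The three steps of the worked example translate directly into code: compute the mean, then the probability-weighted sum of squared deviations:

```python
# Variance of a die roll: V(X) = sum of (x - r)^2 * m(x).
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
r = sum(x * p for x, p in zip(values, probs))              # Step 1: r = 7/2
var = sum((x - r) ** 2 * p for x, p in zip(values, probs))  # Steps 2-3
print(var)  # ≈ 35/12 ≈ 2.9167
```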

Geometric Probability Distribution

Geometric probability, in the geometric sense, encompasses distributions of measurements related to various known figures such as the circle, square, rectangle and parallelogram. The measurements used are length, surface area, perimeter and volume under given conditions. The calculations are the same as in ordinary probability problems, except that instead of counting total and favorable outcomes we compare geometrical measures: for example, the total area of a shape versus the area of a specific region within it, computed with the formulas defined for each shape. That is why it is called geometric probability. When might you use a geometric probability? Consider an example involving area: if an arbitrary point is chosen at random in a square, what is the probability that it lies inside a given triangle ABC inscribed in the square? The answer is the ratio of the triangle's area to the square's area.

Statistically, geometric probability refers to the geometric probability distribution. For example, suppose you flip a coin repeatedly: what is the probability that the first head appears on the third trial? This kind of probability is termed a geometric probability and is written G(x; p), where 'x' is the number of the trial on which the first success occurs and 'p' is the probability of success on a single trial. The standard formula is: if each trial succeeds with probability p and fails with probability q = 1 − p, then the probability that the first success occurs on trial x is G(x; p) = p · q^(x − 1).
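The geometric formula p · q^(x − 1) is a one-liner to implement; the coin example from the text (first head on the third toss) gives 0.5 · 0.5² = 0.125:

```python
# Geometric probability: first success on trial x,
# G(x; p) = p * q**(x - 1), where q = 1 - p.
def geometric_pmf(x, p):
    q = 1 - p
    return p * q ** (x - 1)

# Probability that the first head appears on the third coin toss:
print(geometric_pmf(3, 0.5))  # 0.125
```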