
Linear Function


Calculus is built upon linear functions. Linear functions are functions whose graphs are straight lines throughout the function's domain.
In statistics, a linear function can be defined as a function that has ‘x’ as the input variable, where ‘x’ appears only with an exponent of 1.
Linear functions can refer to the following concepts:
· A first-degree polynomial function of one variable.
· A map between two vector spaces that preserves vector addition and scalar multiplication.
Linear functions are said to be linear because the graphs of these functions in the Cartesian coordinate plane are straight lines.
For example, here are some functions whose graphs are straight lines:
· g (x) = 2x + 4
· g (x) = x/2 - 3
Linear functions can be written in the following formats:
f (x) = mx + b,
(y – y1) = m (x - x1),
0 = Ax + By + C,
In vector algebra, a linear function means a linear map, i.e. a map between two vector spaces that preserves vector addition and scalar multiplication.
The linear functions are those functions ‘f’ that can be expressed as,
f (x) = Kx,
where ‘K’ is a matrix.
A function g (x) = mx + b is called a linear map if and only if b = 0.
The form y = mx + b is called the 'slope-intercept form' of a linear function. On an (x, y) graph it is usually written as y = mx + b, and in a formal function definition a linear function is written as f (x) = mx + b.
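As a quick illustration, here is a small Python sketch (the numbers are assumed example values) of the slope-intercept form f (x) = mx + b and of the linear-map condition b = 0 mentioned above:

def make_linear(m, b):
    # Return the linear function f(x) = m*x + b.
    return lambda x: m * x + b

f = make_linear(2, 4)             # corresponds to g(x) = 2x + 4 above
print(f(0), f(1), f(3))           # 4 6 10

# The linear-map property f(x + y) = f(x) + f(y) holds only when b = 0.
g = make_linear(2, 0)
print(g(3 + 5) == g(3) + g(5))    # True  (b = 0)
print(f(3 + 5) == f(3) + f(5))    # False (b = 4)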
We can derive this form for linear functions. Suppose a linear function takes the value g (c1) at c1 and the value g (c2) at c2. Then it is given by the following formula:
g (x) = g (c1) (x – c2) / (c1 – c2) + g (c2) (x – c1) / (c2 – c1).
The first term is zero when ‘x’ is ‘c2’ and equals g (c1) when ‘x’ is ‘c1’, while the second term is zero when ‘x’ is ‘c1’ and equals g (c2) when ‘x’ is ‘c2’.
A more convenient form for this function is
g (x) = x [g (c2) – g (c1)] / (c2 – c1) + [c2 g (c1) – c1 g (c2)] / (c2 – c1),
which can be abbreviated as
g (x) = mx + c,
Here ‘m’ is the slope of the line and ‘c’ is the ‘y’-intercept, i.e. the ‘y’ coordinate at which the line crosses the ‘y’ axis.
The equation can be described as the set, or locus, of points (x, y) that lie along a straight line. In the form y = mx + b, the variable ‘m’ again refers to the slope of the line and the variable ‘b’ refers to the ‘y’ coordinate where the line crosses the ‘y’ axis, called the 'y-intercept'.
The point-slope form of a linear function gives the equation of a line, i.e.
(y – y1) = m (x - x1).
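A short Python sketch of the point-slope form, using two assumed example points to recover the slope and the ‘y’-intercept:

# Assumed example points on the line.
x1, y1 = 1.0, 6.0
x2, y2 = 4.0, 12.0

m = (y2 - y1) / (x2 - x1)    # slope of the line through the two points
b = y1 - m * x1              # expand (y - y1) = m(x - x1) and solve for the intercept

print("y =", m, "x +", b)    # y = 2.0 x + 4.0
assert abs((m * x2 + b) - y2) < 1e-12    # the second point satisfies the equation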

Statistical Modelling

A statistical model is the formalization of relationships between variables in the form of mathematical equations. It describes how one or more random variables are related to other random variables. A statistical model can be taken as a pair (Z, P), where ‘Z’ is the set of possible observations and ‘P’ is the set of possible probability distributions on ‘Z’. It is assumed that there is a distinct element of ‘P’ which generates the observed data.
Statistical tests can be described in terms of statistical models; there is a close similarity between tests and models.
More formally, a statistical model can be defined as follows:
"A statistical model ‘P’ is a collection of probability distribution functions or probability density functions, that is, a collection of distributions, each of which is indexed by a unique finite-dimensional parameter." The idea of a statistical model is that variability is represented using probability distributions, which form the building blocks from which the model is constructed.

Statistical modeling has two main purposes: prediction and explanation.
Prediction is the calculation of the output for a given set of input values, or the measurement of the change in the output when a particular input changes.
Explanation is the measurement of the relationships among the variables, i.e. how much of the variation in the output (the dependent variable) they account for.
There are several different statistical modeling techniques.
Regression model: A regression model estimates the mathematical relationship between one variable and one or more explanatory (independent) variables. The single variable is known as the response variable or dependent variable. One object of regression models is to forecast time-series data. Regression analysis is used for prediction and forecasting, and also to understand which of the independent variables are related to the dependent variable. How well regression analysis performs in practice depends on the form of the data-generating process; a minimal fitting sketch is given after the list of assumptions below.
To understand regression, some assumptions are made:
The sample is representative of the population for which the inference or prediction is made.
The error is a random variable with a mean of zero conditional on the explanatory variables.
The independent variables are measured with little or no error.
The predictors are linearly independent, i.e. it is not possible to express any predictor as a linear combination of the others.
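Here is a minimal Python sketch of a regression fit (the data values are made up for illustration), showing the prediction use described above:

import numpy as np

# Made-up data: x is the explanatory (independent) variable, y the response.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])

# Ordinary least-squares fit of a straight line y ≈ b0 + b1·x.
b1, b0 = np.polyfit(x, y, deg=1)
print("fitted line: y =", round(b0, 2), "+", round(b1, 2), "x")

# Prediction: the fitted output at a new input value.
x_new = 6.0
print("predicted y at x = 6:", round(b0 + b1 * x_new, 2))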
Non-parametric models are also statistical models, but they differ from parametric models because their structure is driven by the data rather than specified through parameters to be estimated.
Semi-parametric models:
An example of a semi-parametric model is a regression model with a smoother component added for greater flexibility.
Bayesian modeling is based on probabilities rather than frequencies, for example the probability of ‘A’ given ‘B’. It is based on Bayes' theorem, i.e.
P (A | B) = P (B | A) P (A) / P (B).
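A small numeric sketch of Bayes' theorem in Python, with assumed probabilities:

# Assumed example probabilities for Bayes' theorem P(A | B) = P(B | A) P(A) / P(B).
p_a = 0.01              # prior P(A)
p_b_given_a = 0.95      # likelihood P(B | A)
p_b_given_not_a = 0.10  # P(B | not A)

# Total probability of B, then the posterior P(A | B).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))    # about 0.0876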

Linear Correlation Coefficient

A linear correlation coefficient in statistics is a measure of the strength of association between two or more variables. The most commonly used coefficient for measuring the strength of the linear association between two variables is the Pearson product-moment correlation coefficient. Here we denote the linear correlation coefficient for a sample by ‘r’ and the linear correlation coefficient for a population by ‘R’.
The sign of the linear correlation coefficient defines the direction of the relation between the two variables, and its absolute value defines the magnitude. Here are some important facts about the linear correlation coefficient:
1. The range of the linear correlation coefficient is between -1 and 1.
2. The linear relationship between two variables is stronger when the absolute value of the coefficient is higher.
3. The strongest linear relationship is indicated by a linear correlation coefficient of either -1 or 1.
4. The weakest relationship is indicated by a linear correlation coefficient of 0.
5. If the linear correlation coefficient is positive, then when one variable is higher the other variable also tends to be higher.
6. Similarly, in case of a negative linear correlation coefficient, when one variable is bigger the other variable tends to be smaller.
The Pearson product-moment correlation coefficient measures the linearity of the relationship, so a coefficient of zero simply means that the relationship is not linear.
Here is the formula for calculating the linear correlation coefficient (r):
r = ∑ (a·b) / sqrt ((∑ a²) (∑ b²)),
where ∑ is a summation symbol, a stands for (ai – ā) and b stands for (bi – b̄).
Here ai and bi are the ith values of the observations of a and b respectively, and ā and b̄ are the means of all the values.
In the case of the population correlation coefficient, the formula is:
R = [1/n] * ∑ [(ai – µa) / σa] * [(bi – µb) / σb],
where n is the number of observations, ∑ is the summation symbol, ai and bi are the values of the ith observations, µa and µb are the population means of the variables a and b respectively, and σa and σb are the population standard deviations of a and b.
Similarly, in the case of the sample correlation coefficient, the formula is:
r = [1/(n – 1)] * ∑ [(ai – ā) / Sa] * [(bi – b̄) / Sb],
where n is the number of observations, ai and bi are the values of the ith observations, ā and b̄ are the sample means, Sa is the sample standard deviation of a and Sb is the sample standard deviation of b.
The sample correlation coefficient depends on how the sample data are collected. It is an estimate of the population correlation coefficient, and will generally differ slightly from it.
The sample and population formulas for the coefficient of linear correlation can both be rearranged into the first formula above. These days many software packages and graphing calculators compute correlation coefficients directly.
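The following Python sketch (with made-up observations) computes the linear correlation coefficient both with the deviation formula and with the sample formula above, and shows that they agree:

import numpy as np

# Assumed example observations of the two variables.
a = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
b = np.array([1.0, 3.0, 7.0, 9.0, 12.0])

# Deviation formula: r = ∑(a·b) / sqrt((∑a²)(∑b²)), with a and b taken as deviations.
da, db = a - a.mean(), b - b.mean()
r_dev = np.sum(da * db) / np.sqrt(np.sum(da**2) * np.sum(db**2))

# Sample formula: r = [1/(n – 1)] ∑ [(ai – ā)/Sa][(bi – b̄)/Sb].
n = len(a)
Sa, Sb = a.std(ddof=1), b.std(ddof=1)    # sample standard deviations
r_sample = np.sum((da / Sa) * (db / Sb)) / (n - 1)

print(r_dev, r_sample)    # both forms give the same value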

Linear Regression Coefficient

To find the relationship between two variables, we calculate the linear regression coefficient. This coefficient is useful for finding the strength and direction of the linear relationship between two variables, and we use the following formula for evaluating it:
Correlation coefficient r = [N ∑xy – (∑x)(∑y)] / √([N ∑x² – (∑x)²] · [N ∑y² – (∑y)²]),
where N = the number of data points in the set,
∑xy = the sum of all values of x·y,
∑x = the sum of all values of ‘x’,
∑y = the sum of all values of ‘y’,
∑x² = the sum of all values of x²,
∑y² = the sum of all values of y².
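A short Python sketch of this sums-based formula on assumed data, checked against numpy's built-in correlation function:

import numpy as np

# Assumed example data points.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.5, 5.5, 8.0, 10.5])
N = len(x)

# r = [N ∑xy – (∑x)(∑y)] / √([N ∑x² – (∑x)²][N ∑y² – (∑y)²])
numerator = N * np.sum(x * y) - np.sum(x) * np.sum(y)
denominator = np.sqrt((N * np.sum(x**2) - np.sum(x)**2) *
                      (N * np.sum(y**2) - np.sum(y)**2))
r = numerator / denominator

print(r, np.corrcoef(x, y)[0, 1])   # the two values should agree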
The linear regression coefficient has the following properties:
Property 1: The value of the linear correlation coefficient lies between -1 and 1, i.e.
-1 <= r <= 1. If r = -1 the correlation is called a negative correlation, and if r = 1 it is called a positive correlation.
Property 2: If the values of two variables ‘x’ and ‘y’ have a strong positive correlation, then the linear correlation coefficient is close to 1, meaning that when the value of ‘x’ increases, the value of ‘y’ also increases. If the linear correlation coefficient is exactly +1, it is called a perfect positive fit between the two variables.
Property 3: If the values of two variables ‘x’ and ‘y’ have a strong negative correlation, then the linear correlation coefficient is close to -1, meaning that when the value of ‘x’ increases, the value of ‘y’ decreases. If the linear correlation coefficient is exactly -1, it is called a perfect negative fit between the two variables.
Property 4: If the values of two variables ‘x’ and ‘y’ have no linear correlation, then the linear correlation coefficient is close to 0, meaning that the value of ‘y’ does not depend linearly on the value of ‘x’. If the linear correlation coefficient is exactly 0, there is no linear correlation between the two variables.
Property 5: If the linear correlation coefficient is +1 or -1, then all the data points lie on a straight line; the slope of this line is positive when the linear correlation coefficient is exactly +1 and negative when it is exactly -1.
Property 6: The linear correlation coefficient is a dimensionless quantity, so it is not expressed in any units.
Property 7: When the absolute value of the linear correlation coefficient is greater than 0.5, the correlation is usually described as strong, and when it is less than 0.5 the correlation is described as weak.
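Finally, a small Python sketch (with assumed data sets) illustrating perfect positive, perfect negative, and weak linear correlation:

import numpy as np

# Assumed example data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y_pos = 2 * x + 1                                    # perfect positive fit: r = +1
y_neg = -3 * x + 10                                  # perfect negative fit: r = -1
y_weak = np.array([5.0, 1.0, 4.0, 2.0, 6.0, 3.0])    # no clear linear trend

for label, y in [("positive", y_pos), ("negative", y_neg), ("weak", y_weak)]:
    r = np.corrcoef(x, y)[0, 1]
    print(label, round(r, 2))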