# Correlation

**Correlation** is a measure of the relationship between two variables. Variables can be related in several ways, such as linearly or (more generally) non-linearly, and they can be correlated with each other to differing degrees. **Pearson's Correlation Coefficient**, or PCC, is the most common coefficient measuring the degree of linear correlation between two variables. The PCC between two given variables, denoted \(r\), is a number between \(-1\) and \(+1\) inclusive. It measures the degree to which the variables are positively correlated \((\text{close to}\hspace{1mm}+1)\), negatively correlated \((\text{close to}\hspace{1mm}-1)\), or not correlated \((\text{close to}\hspace{2mm}0)\). A value of \(+1\) means a **perfect positive** relation between the variables, that is, they are related by an increasing linear function. A value of \(-1\) means a **perfect negative** relation between the variables, that is, they are related by a decreasing linear function. Although possible, perfect correlations are extremely rare. A value of \(0\) means the variables have no linear predictive strength for one another.

Jim is not only an accomplished baseball player but also a talented tenth-grade mathematics student. His motivation to analyze performance first took shape when his physics teacher explained the optimal way to hit a ball for maximum distance. Since each baseball player has a unique set of skills, he wants to find out whether he is better suited to be a hitter or a slugger. As a wise mathematician/baseball player would, he kept data on each of his batting turns from last season: approximate batting strength and the outcome, either ground hit, home run, or out. He knows that consistent hits require more direction than strength, as opposed to home runs, so he builds three data sets and examines the graphs and PCCs of the following: strength vs. home runs, strength vs. hits, and strength vs. outs.

If his decision can rely on only one of these three comparisons, which one do you think will give him the best measure of his abilities?

## Definitions

**Pearson's Correlation Coefficient (General Definition):** The Pearson's Correlation Coefficient between two random variables \(X\) and \(Y\) with respective means \(\mu_X\) and \(\mu_Y\), and standard deviations \( \sigma_X \) and \( \sigma_Y \), is:

\[r_{XY} = \frac{E[(X-\mu_X)(Y-\mu_Y)]}{\sigma_X \sigma_Y}\]

such that \(E[\bullet]\) is the expected value function.

**Pearson's Correlation Coefficient (Sample Definition):** The Pearson's Correlation Coefficient between two samples of size \(n\), \(x_i\) and \(y_i\) where \(i=1, 2, \ldots ,n\), is:

\[r_{xy} = \frac{\sum{(x_i - \bar{x})(y_i - \bar{y})}}{\sqrt{\sum{(x_i - \bar{x})^2}} \sqrt{\sum{(y_i - \bar{y})^2}}}\]

such that \(\bar{x}\) is the average of the sample values \(x_i\), similarly for \(y\).
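
The sample definition translates directly into code. Here is a minimal sketch (the function name `pearson_r` is our own) that computes \(r_{xy}\) exactly as written above, from the deviations about the sample means:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # Numerator: sum of products of deviations from the means
    num = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    # Denominator: product of the square roots of the sums of squared deviations
    den = math.sqrt(sum((xi - x_bar) ** 2 for xi in x)) * \
          math.sqrt(sum((yi - y_bar) ** 2 for yi in y))
    return num / den

print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 4))  # 1.0  (perfect positive)
print(round(pearson_r([1, 2, 3, 4], [9, 7, 5, 3]), 4))  # -1.0 (perfect negative)
```

Note that the formula is undefined when either sample is constant, since the corresponding sum of squared deviations in the denominator is zero.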

The basic difference between these definitions is the knowledge assumed about the data. The general definition relates two random variables whose means and standard deviations are known; in the study of statistics, knowledge of the distribution of the values is often assumed, and when quantifying variables it is important to know certain statistical measures (parameters) in order to build models that fit the given data. The sample definition comes into play when only observed data are available and relationships among the variables must be estimated from them.

## Interpretation

A correlation coefficient that is closer to \(+1\) implies a strong **positive correlation** between the variables. Roughly, when one variable increases, the other increases with a similar ratio between consecutive values. A strong positive relation measured by the PCC does not imply cause and effect from one variable to the other, but it gives reason to further analyze a hypothesized statement. For example, if a PCC close to \(+1\) is found in a study of sugar intake (one variable) compared to body fat percentage (the other variable) taken from a group of one hundred randomly selected people, then there is a positive correlation. The PCC relates these variables, but does not imply that increases in body fat percentage are due to increases in sugar intake alone. Further, and stronger, tests would be needed to draw specific and actionable conclusions.

A correlation coefficient that is closer to \(-1\), a **negative correlation**, does not mean the variables are uncorrelated; on the contrary, the only difference lies in the sign of the ratio between consecutive values (as previously explained): when one variable increases, the other decreases. For example, if a PCC close to \(-1\) is found in a study of physical activity (one variable) compared to stress levels (the other variable) taken from a group of one hundred randomly selected people, then there is a negative correlation. The PCC relates these variables, but does not imply that decreases in stress levels are due to increases in physical activity alone. Further, and stronger, tests would be needed to draw specific and actionable conclusions.

A correlation coefficient that is closer to \(0\) implies the variables are **not correlated**. Having no correlation does not imply having no linear relation: if a horizontal linear pattern is observed, then increasing one variable leaves the other virtually unchanged. The PCC is not a measure of independence, but independent variables will have a PCC of zero. When variables exhibit erratic patterns, their PCC will be close to zero.

## Real Life Examples

Examples of positive correlation in real life

- The more a person studies, the better his/her grades are.
- The higher the temperature climbs, the higher ice cream sales are.
- The lower the volume my earphones are, the lower my risk of eardrum damage.
- (Perfect!) The more water that flows into a full bathtub, the more water there is on the bathroom floor.

Examples of negative correlation in real life

- The more stressed I feel, the less fun I have.
- The less unhealthy food I eat, the higher my good cholesterol levels are.
- (Perfect!) The more air I draw out of a straw, the less pressure inside the straw.
- (Perfect!) The less money I spend, the more money I have in the long run.

## Correlation vs Causation

It is important to stress that having strong correlation does not imply causation. The fact that two variables (observations) are strongly correlated via a PCC measure does not mean that a change in one variable will cause a change in the other as a consequence; the association can be purely coincidental. As an example, it can be assumed that, in a city, the number of pigeons at a given location is strongly correlated with the number of people at that location at noon. If the number of people at that location decreased dramatically, it can be expected that the number of pigeons would decrease as well; this analysis makes perfect sense. However, one way to explain this phenomenon might be the failure of the city to clean the streets and locations where people meet to have lunch. Another explanation could be that a high percentage of people feed the pigeons their leftover bread at noon. These arguments depend upon other complicated assumptions and not solely on the number of people at this location.

## Hypothesis Testing

Considering that correlation does not imply causation, if conclusions are to be drawn from a PCC or other measures of correlation, more significant statistical tests must be administered. Correlation can be thought of as a first step toward explaining observable phenomena. Since correlations are mostly used for modeling observable measures of influence among parameters, the observations may have been obtained from a sample that is small relative to the population. One way to overcome this obstacle is to perform a hypothesis test on the statistical significance of the PCC itself.

In statistical inference, properties (parameters) of the distribution of values of a population are analyzed by sampling data sets. Given assumptions on the distribution, i.e. a statistical model of the data, certain hypotheses can be deduced from the known behavior of the model. These hypotheses must be tested against sampled data from the population.

The null hypothesis (denoted \(H_0\)) is a statement that is assumed to be true. If the null hypothesis is rejected, then there is enough evidence (statistical significance) to accept the alternate hypothesis (denoted \(H_1\)). Before doing any test for significance, both hypotheses must be clearly stated and non-conflicting, i.e. mutually exclusive statements.

## Example

We would like to find evidence that there is a linear relation between Miami's annual average temperature (\(X\)) and the average number of Atlantic Basin hurricanes (\(Y\)). The table below is a sample of \(n = 14\) years taken from the period 1980 to 1999.

| Average Annual Temperature in Miami (°F) | Average Number of Annual Hurricanes |
| --- | --- |
| 69.4 | 5 |
| 69.8 | 5 |
| 69.9 | 9 |
| 70.1 | 7 |
| 70.2 | 3 |
| 70.4 | 4 |
| 70.5 | 4 |
| 70.9 | 9 |
| 71 | 7 |
| 71.2 | 6 |
| 71.7 | 5 |
| 71.9 | 4 |
| 72.5 | 10 |
| 72.6 | 8 |

Using this sample, a hypothesis test will be performed with the following setting, using the \(t\)-distribution with \(\alpha=0.05\).
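
The PCC for this sample, \(r = 0.3407\), can be verified directly from the table using the sample definition; a minimal sketch:

```python
import math

temps = [69.4, 69.8, 69.9, 70.1, 70.2, 70.4, 70.5,
         70.9, 71.0, 71.2, 71.7, 71.9, 72.5, 72.6]
hurricanes = [5, 5, 9, 7, 3, 4, 4, 9, 7, 6, 5, 4, 10, 8]

n = len(temps)
t_bar = sum(temps) / n
h_bar = sum(hurricanes) / n

# Sample PCC: sum of deviation products over the product of root sums of squares
num = sum((t - t_bar) * (h - h_bar) for t, h in zip(temps, hurricanes))
den = math.sqrt(sum((t - t_bar) ** 2 for t in temps)) * \
      math.sqrt(sum((h - h_bar) ** 2 for h in hurricanes))

r = num / den
print(round(r, 4))  # 0.3407
```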

\(H_0\): There is no linear relation between \(X\) and \(Y\). That is \(r_{xy}=0\).

\(H_1\): There is a linear relation between \(X\) and \(Y\). That is \(r_{xy} \neq 0\).

Compute the test statistic \(t^*=\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}\). Using the PCC \(r=0.3407\) with \(n=14\), we get \(t^* \approx 1.2553\).

Proceed to perform the hypothesis test using the \(t\)-distribution with \(n-2=12\) degrees of freedom and \(t\)-score \(1.2553\). We must obtain the \(p\)-value and compare it to the significance level \(\alpha=0.05\). In order to reject the null hypothesis, we must find the probability of obtaining a test statistic more extreme than \(1.2553\). From a \(t\)-distribution table, the probability of getting a test statistic smaller than \(1.2553\) is approximately \(0.883\), so the probability of getting one greater than \(1.2553\) is approximately \(0.117\). Since it is a two-tailed statement, we multiply by \(2\) to get a \(p\)-value of approximately \(0.234\). Since the \(p\)-value is greater than \(0.05\), we fail to reject the null hypothesis \(H_0\).
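
The same test can be reproduced without a \(t\)-table. In the sketch below, the helper `upper_tail` (our own name) approximates \(P(T > t^*)\) by numerically integrating the Student \(t\) density with the trapezoidal rule, so the tail probability is an approximation rather than a table lookup:

```python
import math

def t_pdf(t, df):
    """Density of Student's t-distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + t * t / df) ** (-(df + 1) / 2)

def upper_tail(t_star, df, upper=60.0, steps=50_000):
    """Approximate P(T > t_star) with the trapezoidal rule on [t_star, upper]."""
    h = (upper - t_star) / steps
    total = 0.5 * (t_pdf(t_star, df) + t_pdf(upper, df))
    for i in range(1, steps):
        total += t_pdf(t_star + i * h, df)
    return total * h

r, n = 0.3407, 14
t_star = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)  # test statistic
p_value = 2 * upper_tail(t_star, n - 2)               # two-tailed p-value

print(round(t_star, 4))  # 1.2553
print(p_value > 0.05)    # True -> fail to reject H0
```

Because the \(p\)-value exceeds \(\alpha = 0.05\), the test fails to reject \(H_0\), matching the conclusion below.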

**Conclusion:** There is not enough evidence from this sample to support the claim that there is a linear relation between Miami's annual average temperature and the average number of Atlantic Basin hurricanes.