Calculating Heteroscedasticity: Is Correlation Between Variables Strong/Weak?

  • MHB
  • Thread starter MarkFL
In summary, according to the data Mark passed along, there does appear to be a significant, though only medium-strength, correlation between x and y. Significant heteroscedasticity, had it been present, would have invalidated the results of the linear regression.
  • #1
MarkFL
Gold Member
MHB
Hello all,

A friend of mine on another forum, knowing I am involved in the math help community, approached me regarding a question in statistics. Here's what he said:

I'm involved in a debate.

I have a set of data x and y.

I did a simple linear regression using Excel, and it shows heteroscedasticity.

As it turns out, calculating heteroscedasticity is beyond me, since I'm not a statistician.

What I need to know is: with the heteroscedasticity and low R^2 value, can the correlation between the two variables be considered strong, medium, weak, or invalid?

The two variables are for social science.

I just learned that the thresholds for social science (like economics) and natural science (like engineering) are completely different. As it turns out, in social science an R^2 value of 0.25-0.3 (the maximum is 1) is acceptable, but in engineering that value is unacceptable.

To avoid ideological bias, I'm not going to tell what the two variables represent.

But the more statisticians assessing the data the better, since I'll be relying on what others say here.

I told him I would pass along his data, along with his question, to a site where I know several folks knowledgeable in statistics participate. Here's a link to the data:

raw data.xlsx (hosted on File Dropper)

Thanks to anyone who takes the time to visit the above link, download the data, and consider the question above.
 
  • #2
Hi friend of MarkFL, welcome to MHB if you make it here! ;)

I'm involved in a debate.

I have a set of data x and y.

I did a simple linear regression using Excel, and it shows heteroscedasticity.

How did you get heteroscedasticity from Excel?
I'm not aware of Excel having a test for heteroscedasticity.
Or are you using a special add-in?

As it turns out, calculating heteroscedasticity is beyond me, since I'm not a statistician.

Looking at your data visually (an advanced statistical technique also called eyeballing), it seems to me there is no heteroscedasticity.
Instead, it looks perfectly homoscedastic (constant variance across the range of x), as required for a linear regression.
To be fair, I do not have the tools readily available to run a test for heteroscedasticity.
What kind of significance value do you have for it?

What I need to know is: with the heteroscedasticity and low R^2 value, can the correlation between the two variables be considered strong, medium, weak, or invalid?

If there is significant heteroscedasticity, it is not valid to apply a linear regression, making an $R^2$ value invalid.
However, as I said, that does not seem to be the case here.

The two variables are for social science.

I just learned that the thresholds for social science (like economics) and natural science (like engineering) are completely different. As it turns out, in social science an R^2 value of 0.25-0.3 (the maximum is 1) is acceptable, but in engineering that value is unacceptable.

The $R^2$ value of $0.25$ that we have here is considered to show a medium correlation.

And a test to evaluate if there is a significant correlation says, yes, there is a significant correlation between x and y with a $p$-value of $1.88\cdot 10^{-12}$.
The $p$-value is the probability of seeing a correlation at least this strong when there is in fact none.
Both the social sciences and engineering generally ask for a $p$-value less than $0.05$ to be considered significant.
It's just that in the social sciences we must apply these statistical tests carefully, since it's often quite hard to achieve that level of significance, and it's much easier to make a subjective statement that is worthless without support from the numbers.
In engineering it's usually obvious that there is a correlation, so there's no real need to dive into careful significance tests.
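For anyone who wants to reproduce this kind of check, here's a minimal sketch in Python (assuming NumPy and SciPy are available; the data below is synthetic, generated to mimic an $R^2$ of about $0.25$, since I'm not re-hosting the actual file):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the real data: a true correlation of 0.5,
# i.e. a true R^2 of 0.25, with 160 observations.
rng = np.random.default_rng(42)
n = 160
x = rng.normal(size=n)
y = 0.5 * x + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

# Pearson correlation and its significance test.
r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, R^2 = {r**2:.2f}, p = {p:.1e}")
```

With a medium correlation like this, the p-value still comes out far below 0.05 once there are enough data points, which is exactly the situation here.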
 
  • #3
Hi I like Serena,

I've decided to just register instead of making Mark go back and forth with my questions. I've received several replies from other statisticians at another place that just confuse me even more.

The crux of the debate is whether x causes y, that is to say, whether an increase of x will decrease y. So it's not just about correlation but about causation.
 
  • #4
loot said:
Hi I like Serena,

I've decided to just register instead of making Mark go back and forth with my questions. I've received several replies from other statisticians at another place that just confuse me even more.

The crux of the debate is whether x causes y, that is to say, whether an increase of x will decrease y. So it's not just about correlation but about causation.

Hi loot,

I'm afraid that's a common pitfall in statistics.
Statistics does not say anything about cause and effect. It only says whether there's a correlation.
And a correlation can mean many things, such as:
  • x causes y.
  • y causes x.
  • some unknown z causes both x and y, but x and y do not cause each other.
  • both x and y cause some unknown z, which we accidentally conditioned on.
  • x and y mutually cause each other.
  • and so on with other possible causal relationships.
This is why we try to choose an x that is, as we call it, independent:
something that cannot possibly be caused by y.
For instance, the result of a math test in high school, which cannot possibly be caused by the result of a statistics test in college. The causality the other way around is quite plausible, though.
Or the result of a specific training for a test. The test result cannot cause the training, but the training might affect the test result.
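The third bullet (some unknown z driving both) is easy to demonstrate with a small simulation; this is a sketch with purely synthetic variables, assuming Python with NumPy and SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 500
z = rng.normal(size=n)              # hidden common cause
x = z + 0.5 * rng.normal(size=n)    # x is driven by z, not by y
y = -z + 0.5 * rng.normal(size=n)   # y is driven by z, not by x

r, p = stats.pearsonr(x, y)
# x and y come out strongly (negatively) correlated,
# even though neither causes the other.
```

Here the correlation is real and highly significant, yet no causal arrow runs between x and y at all.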
 
  • #5
Hi,

According to a certain ideology, x is dogmatically believed to be independent of y, and y to be a side effect of x.

I'm looking at the following possibilities:

1. x causes y
2. a known z causes both x and y, but x and y do not cause each other

I strongly believe in 2, but for now the debate can be temporarily settled as 1 if the correlation is very strong. The dogma itself asserts that x and y are strongly correlated, with x the cause of y.

I'm kinda confused by your explanation:

The R^2 value is 0.25, which is considered to show a medium correlation.

The p-value is 1.88⋅10^-12, so there is a significant correlation between x and y.

My questions are:

a. What does significant mean? Medium correlation (same as indicated by R^2) or strong correlation (much stronger than indicated by R^2)?

b. For the p-value: does less than 0.05 mean we're likely to be right that there is a strong correlation, and above that, that we're likely to be wrong and there is little to no correlation?

Thanks in advance for taking the time to explain these to me.
 
  • #6
loot said:
Hi,

According to a certain ideology, x is dogmatically believed to be independent of y, and y to be a side effect of x.

I'm looking at the following possibilities:

1. x causes y
2. a known z causes both x and y, but x and y do not cause each other

I strongly believe in 2, but for now the debate can be temporarily settled as 1 if the correlation is very strong. The dogma itself asserts that x and y are strongly correlated, with x the cause of y.

Just saying, we need to be a bit careful with dogmas.

After all, a typical causal chain is:
1. We start with a dogma.
2. We ignore all correlations that contradict the dogma.
3. We selectively pick the one correlation that is aligned with the cause-effect implied in the dogma.
4. We conclude that the causal relationship in the dogma is true.

I hope it is clear from my previous posts that this is wrong: we cannot conclude a causal relationship based on a correlation.
Not to mention that evidence to the contrary should not be ignored.

loot said:
I'm kinda confused by your explanation;

The R^2 value is 0.25, which is considered to show a medium correlation.

The p-value is 1.88⋅10^-12, so there is a significant correlation between x and y.

My questions are:

a. What does significant mean? Medium correlation (same as indicated by R^2) or strong correlation (much stronger than indicated by R^2)?

Significant means that, if the statement were actually false, the chance of seeing data like ours would be lower than (typically) 0.05.
We do have to take into account what the statement is, though.

In your case there is definitely a correlation, which is what the p-value shows beyond doubt.

However, the medium $R^2$ value indicates that we cannot accurately predict what, for instance, y will be based on x.
There is some 'noise': possibly other factors that we did not take into account that influence how x and y relate to each other, or perhaps we simply cannot measure x and y very accurately.
They are still definitely correlated though.

loot said:
b. For the p-value: does less than 0.05 mean we're likely to be right that there is a strong correlation, and above that, that we're likely to be wrong and there is little to no correlation?

There's a distinction between whether there is a correlation and how strong that correlation is.
As I said, there is a correlation, but it is of medium strength.
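To illustrate that distinction with a quick simulation (synthetic data, Python with NumPy/SciPy; the exact numbers depend on the random seed): the same medium underlying correlation gives a far less impressive p-value with few points and a vanishingly small one with many points, while r itself stays medium.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def draw(n, true_r=0.5):
    """Sample n points whose true correlation is true_r."""
    x = rng.normal(size=n)
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
    return stats.pearsonr(x, y)

r_small, p_small = draw(20)     # medium correlation, few points
r_large, p_large = draw(2000)   # same medium correlation, many points
# r is near 0.5 in expectation in both cases;
# the p-value is what changes dramatically with sample size.
```

So the p-value answers "is there a correlation at all?", while r (and hence R^2) answers "how strong is it?".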
 
  • #7
Thanks a lot for the great explanation!

Some background about the debate:

x is actually Economic Freedom Index https://www.heritage.org/index/ranking

y is actually Press Freedom Index https://rsf.org/en/ranking#

The [Right Wing] Libertarian ideology implies that the expansion of economic freedom also causes the expansion of free speech. Instinctively, I believe this implication to be false and that there is no actual causal relationship between the two.

While manually inputting the data, I intuitively realized that both x and y actually have a causative correlation with z (type of government), but this intuition needs to be proven empirically first.

The conclusion I derive from all this is that the 'economic freedom über alles' at the foundation of American Libertarianism does not guarantee freedom of speech, and thus the ideology is susceptible to becoming a tyranny - a result that completely contradicts the ideology's dogma of liberty.
 
  • #8
Just as an addition to the conversation: to determine causality, as has already been mentioned, there must be much more than correlation. In science in general, Mill's Methods are what are generally used to determine causation. What is common to all five of Mill's Methods is this: variable manipulation in carefully-designed controlled experiments. If you do not have that, or if you cannot do that, you do not have causation. You cannot get causation from observational studies, nor can you get causation from computer models.

But I will say this: if you want causation, correlations are absolutely the best place to start. You can think of them as clues.
 
  • #9
Here's a nice and surprising article about a chase for causation after establishing a correlation.
Moreover, the author is using only observational results.
Developers Who Use Spaces Make More Money Than Those Who Use Tabs

The author makes a careful analysis of the various possible confounding factors... and finds none.
Still, he is very careful in pointing out that correlation is not causation up to and including in his conclusion, as he should.
Either way, the evidence that he presents is compelling.

For the record, I work as a professional developer... and I intend to stick with spaces just to be sure. ;)
 
  • #10
Everyone, thanks very much!
I learned a lot from these exchanges (including the methods to determine causation).
 

FAQ: Calculating Heteroscedasticity: Is Correlation Between Variables Strong/Weak?

What is heteroscedasticity?

Heteroscedasticity is a statistical term that refers to unequal variance. It occurs when the variability of one variable is not constant across the range of values of another variable.

How is heteroscedasticity different from homoscedasticity?

Heteroscedasticity is the opposite of homoscedasticity, which refers to constant variance. Homoscedasticity occurs when the variability of one variable is consistent across the range of values of another variable.

Why is it important to detect heteroscedasticity?

Detecting heteroscedasticity is important because it violates the assumptions of many statistical tests, such as linear regression. If heteroscedasticity is present, it can lead to biased and unreliable results, making it difficult to draw accurate conclusions from the data.

How is heteroscedasticity measured?

Heteroscedasticity can be measured using various statistical tests, such as the Breusch-Pagan test, the White test, and the Goldfeld-Quandt test. These tests compare the variance of the residuals (the difference between the actual values and the predicted values) across different groups or levels of another variable.

Is a strong correlation between variables always indicative of heteroscedasticity?

No. Correlation strength and heteroscedasticity are separate properties of the data: two variables can be strongly correlated with or without heteroscedasticity, and heteroscedastic data can show anything from weak to strong correlation. It is therefore important to test for a correlation between the variables and for heteroscedasticity separately in order to fully understand their relationship.
