- #1
fog37
- TL;DR Summary
- Understanding the idea of "keeping the other predictors fixed" to interpret partial coefficients
Hello,
When there is NO multicollinearity in a linear regression model like ##Y=3 X_1+2 X_2##, the predictors ##X_1## and ##X_2## are not pairwise correlated.
- When ##X_1## changes by 1 unit, the dependent variable ##Y## changes by ##3## units, i.e. ##\Delta Y = 3##, while the other predictors are kept fixed/constant, i.e. they are not simultaneously changing with ##X_1## and contributing to the ##\Delta Y## of 3. By analogy, it is as if the predictors were working like "decoupled" gears.
- However, when multicollinearity is present (##X_1## and ##X_2## are correlated), the change ##\Delta Y = 3## that occurs when ##X_1## changes by 1 unit is no longer due solely to that unit change in ##X_1## with the other variables held fixed/constant. The number 3 is due to the explicit change of ##X_1## but also to the implicit change of ##X_2## (also by one unit?) caused by ##X_1##: changing ##X_1## automatically changes ##X_2##, which is not kept constant while ##X_1## changes.
It is like ##\Delta Y = (\Delta Y \text{ due to } X_1) + (\Delta Y \text{ due to } X_2)##.
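If it helps to see this numerically, here is a minimal Python/NumPy sketch (the sample size, the 0.8 dependence of ##X_2## on ##X_1##, and the noise level are made-up illustration values, and least squares is fit with `np.linalg.lstsq` just as a plain OLS solver). It compares the two situations described above: independent predictors versus an ##X_2## built to co-move with ##X_1##, and it also shows the slope obtained when ##Y## is regressed on ##X_1## alone, which is where the "implicit change of ##X_2##" shows up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Case 1: no multicollinearity -- X1 and X2 generated independently.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 3 * x1 + 2 * x2

# Fit Y on [X1, X2] by ordinary least squares (no intercept, as in Y = 3*X1 + 2*X2).
X = np.column_stack([x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("independent predictors, fitted coefficients:", coef.round(3))  # ~ [3, 2]

# Partial-coefficient reading: move X1 by 1 unit while X2 is literally held fixed.
delta_y = (3 * (x1 + 1) + 2 * x2) - (3 * x1 + 2 * x2)
print("Delta Y with X2 held fixed:", delta_y.mean())  # exactly 3

# Case 2: multicollinearity -- X2 tends to move when X1 moves (X2 = 0.8*X1 + noise).
x2_c = 0.8 * x1 + 0.1 * rng.normal(size=n)
y_c = 3 * x1 + 2 * x2_c

Xc = np.column_stack([x1, x2_c])
coef_c, *_ = np.linalg.lstsq(Xc, y_c, rcond=None)
print("correlated predictors, fitted coefficients:", coef_c.round(3))  # still ~ [3, 2]

# A simple regression of Y on X1 alone also picks up the indirect path through X2:
slope_x1_only, *_ = np.linalg.lstsq(x1.reshape(-1, 1), y_c, rcond=None)
print("Y on X1 alone:", slope_x1_only.round(3))  # ~ 3 + 2*0.8 = 4.6
```

Since ##Y## is generated without noise here, the recovery of the coefficients is essentially exact; with a noise term added to ##Y##, the expected coefficients would stay the same, but their standard errors would grow as ##X_1## and ##X_2## become more strongly correlated, which is the usual practical symptom of multicollinearity.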
Thank you for any clarification.