maistral
- TL;DR Summary: Can someone give a link, or perhaps a summary, of the fundamental differences between MAPE and SSE, and how they behave as objective functions in minimization?
I am working on an equation that models two dependent variables Y and Z using four regression parameters a, b, c, and d and a single independent variable X. Given a set of values for X, I regress a, b, c, and d so that Ycalc and Zcalc fit Yexpt'l and Zexpt'l.
My problem is this: I tried using both MAPE and SSE (normalized by the standard deviation of each dependent variable) as objective functions:
$$\text{MAPE} = \frac{100}{n_X}\sum_i \frac{|Y_{i,\text{calc}} - Y_{i,\text{expt'l}}|}{Y_{i,\text{expt'l}}} + \frac{100}{n_X}\sum_i \frac{|Z_{i,\text{calc}} - Z_{i,\text{expt'l}}|}{Z_{i,\text{expt'l}}}$$
$$\text{SSE} = \sum_i \left(\frac{Y_{i,\text{calc}} - Y_{i,\text{expt'l}}}{\sigma_{Y,\text{calc}}}\right)^2 + \sum_i \left(\frac{Z_{i,\text{calc}} - Z_{i,\text{expt'l}}}{\sigma_{Z,\text{calc}}}\right)^2$$
All summations are from i = 1 to nX.
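For concreteness, the two objectives above can be written directly in NumPy. This is just a transcription of the formulas as stated (with σ taken as the standard deviation of the calculated values, as written), not the poster's actual code:

```python
import numpy as np

def mape(y_calc, y_expt, z_calc, z_expt):
    # Mean absolute percentage error, summed over both dependent variables
    n = len(y_expt)
    return (100.0 / n) * np.sum(np.abs(y_calc - y_expt) / np.abs(y_expt)) \
         + (100.0 / n) * np.sum(np.abs(z_calc - z_expt) / np.abs(z_expt))

def sse_normalized(y_calc, y_expt, z_calc, z_expt):
    # Sum of squared errors, each residual scaled by the standard
    # deviation of the corresponding calculated variable
    return np.sum(((y_calc - y_expt) / np.std(y_calc)) ** 2) \
         + np.sum(((z_calc - z_expt) / np.std(z_calc)) ** 2)
```

Note one structural difference visible even at this level: MAPE scales each residual by its own data point (so small-magnitude points are weighted heavily), while the normalized SSE scales every residual of a variable by a single global σ and squares it (so large residuals dominate).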
My issue is as follows: MAPE always (at least for my purposes) ends up doing a better job of determining the parameters a, b, c, and d in fitting Y and Z. Why is this so? What is the fundamental difference between the two, and when should I use MAPE rather than SSE (or vice versa)?