Machine Learning - Empirical Error

In summary, the conversation discusses the summation notation used to calculate the average error over a sample. The notation ##1_{h(x)\neq c(x)}## denotes the indicator function, which takes the value 1 if the event ##h(x) \neq c(x)## occurs and 0 otherwise. Since each sample point either is or is not an error, with no further quantification needed, averaging this indicator over the sample gives the empirical error rate.
  • #1
YoshiMoshi
Homework Statement
See Below
Relevant Equations
See below
[Attached image: Definition 2.2, the empirical error of a hypothesis ##h## on a sample of size ##m##: $$\hat{R}(h) = \frac{1}{m}\sum_{i=1}^{m} 1_{h(x_i)\neq c(x_i)}$$]

I understand everything in this equation except for the summation. I understand it's the average error over the sample, but why do we need the "1"? Moreover, wouldn't the error be the absolute value of the hypothesized value minus the concept value, i.e.
##| h( x_i ) - c( x_i ) |##,
since you have to take the difference between the two to get the error? The expression inside the summation just says that the two are not equal. How is that an error?

The above snippet comes from the book Foundations of Machine Learning by M. Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. It's available for free on Semantic Scholar, and this is from the beginning of Chapter 2.

https://www.semanticscholar.org/pap...e9239469aba4bccf3e36d1c27894721e8dbefc44?p2df
 
  • #2
I think the notation ##1_{h(x)\neq c(x)}## means it takes the value 1 if the subscript is true, i.e. ##h(x) \neq c(x)##, and 0 otherwise.

Since each data point is either an error or not, with no further quantification, this calculates the average error rate in your sample. In fact, when the labels are binary values in {0, 1}, the absolute difference ##|h(x_i) - c(x_i)|## equals the indicator ##1_{h(x_i) \neq c(x_i)}##, so the two notations agree; the indicator has the advantage of also working when the labels aren't numbers.
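
To make this concrete, here is a minimal Python sketch (my own, not from the book) of the empirical error as the fraction of sample points on which a hypothesis ##h## disagrees with the target concept ##c##. The particular functions and sample values are made up for illustration:

```python
def empirical_error(h, c, sample):
    """Average of the indicator 1_{h(x) != c(x)} over the sample."""
    mistakes = sum(1 if h(x) != c(x) else 0 for x in sample)
    return mistakes / len(sample)

# Hypothetical toy example: target concept c(x) = (x >= 0),
# hypothesis h(x) = (x >= 1), which is wrong exactly on [0, 1).
c = lambda x: x >= 0
h = lambda x: x >= 1
sample = [-2, -1, 0, 0.5, 1, 2]

print(empirical_error(h, c, sample))  # 2 mistakes out of 6 -> 0.333...
```

Note that no subtraction is needed: each point simply contributes 1 (misclassified) or 0 (correct), and the mean of those contributions is the error rate.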
 
  • #3
Hey thanks, that makes perfect sense.
 
  • #4
Office_Shredder said:
I think the notation ##1_{h(x)\neq c(x)}## means it takes the value 1 if the subscript is true, i.e. ##h(x) \neq c(x)##, and 0 otherwise.
That's in agreement with what's in the book. The authors call ##1_\omega## the "indicator function of the event ##\omega##."
 

FAQ: Machine Learning - Empirical Error

What is the definition of empirical error in machine learning?

Empirical error (also called empirical risk or training error) is the average error a model makes on a given sample: for classification, the fraction of sample points on which the model's prediction disagrees with the true label. It measures performance on the observed data, in contrast to the generalization error, which measures expected performance on new, unseen data.

How is empirical error calculated in machine learning?

Empirical error is calculated by averaging a loss function over the sample, where the loss measures the discrepancy between the predicted output and the actual output. For binary classification with the 0-1 loss, this gives the average error rate ##\hat{R}(h) = \frac{1}{m}\sum_{i=1}^{m} 1_{h(x_i)\neq c(x_i)}## discussed above. Other commonly used loss functions include mean squared error, mean absolute error, and cross-entropy.
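
As an illustration (a sketch in plain Python, not a library API; the sample values are made up), each of these losses is just a different per-point discrepancy averaged over the sample:

```python
import math

def mean_squared_error(y_true, y_pred):
    # Average of squared differences.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_absolute_error(y_true, y_pred):
    # Average of absolute differences.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, p_pred):
    # y_true in {0, 1}; p_pred are predicted probabilities of class 1.
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, p_pred)) / len(y_true)

y_true = [1, 0, 1, 1]
p_pred = [0.9, 0.2, 0.6, 0.8]
print(mean_squared_error(y_true, p_pred))
print(mean_absolute_error(y_true, p_pred))
print(binary_cross_entropy(y_true, p_pred))
```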

What is the relationship between empirical error and model complexity?

As model complexity increases, the empirical error tends to decrease, because more complex models can fit the training data more closely. However, if the model becomes too complex, it may start to overfit the training data: the empirical error keeps falling, but the model performs poorly on new data, so the generalization error rises.
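
A hypothetical illustration in Python (using NumPy; the synthetic data and polynomial degrees are made up for the demonstration): fitting polynomials of increasing degree to the same noisy sample drives the training error down, even though the high-degree fits are largely chasing noise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
# Noisy observations of a sine curve.
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.shape)

for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x, y, degree)                      # least-squares fit
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)  # empirical error
    print(f"degree {degree}: training MSE = {train_mse:.4f}")
```

The training MSE can only go down as the degree grows, since each larger polynomial family contains the smaller ones; a held-out test set would show the error rising again for the larger degrees.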

Can empirical error be reduced to zero?

On the training sample itself, empirical error often can be driven to zero, for example by a model flexible enough to memorize every data point, but this usually means the model has fit the noise rather than the signal. What cannot generally be reduced to zero is the generalization error on new data, because there will always be some level of noise or randomness in the data that no model can capture. The goal is therefore to minimize error on unseen data, not merely on the sample.

How can empirical error be improved in machine learning?

Empirical error can be improved by using techniques such as feature selection, regularization, and cross-validation. Additionally, using more data and fine-tuning model hyperparameters can help. Note that some of these techniques, regularization in particular, deliberately accept a slightly higher empirical error in exchange for better performance on new data: the aim is to find the right balance between model complexity and generalization, not simply to drive the training error to its minimum.
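
For instance, a hedged sketch using scikit-learn (assuming it is installed; the dataset and model are illustrative choices, not a prescription) shows how cross-validation estimates error on held-out data rather than relying on the optimistic empirical error of the training set:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: each fold is scored on data the model
# was not trained on, giving an estimate of generalization accuracy.
scores = cross_val_score(model, X, y, cv=5)
print("held-out accuracy per fold:", scores)
print("mean held-out accuracy:", scores.mean())
```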
