fog37
- TL;DR Summary: understand how decision trees and their leaf nodes behave in the case of regression...
Hello.
Decision trees are really cool. They can be used for either regression or classification. They are built from nodes, and each internal node represents an if-then test that evaluates to either true or false. Does that mean an internal node always has exactly two outgoing edges/branches (leaf nodes have none)? Or are there situations in which a node can have more than two?
In classification trees, the leaf nodes are the output nodes, each producing a single class (there can be more leaf nodes than there are classes). How do the leaf nodes behave in regression trees? The goal is to predict a numerical output (e.g., the price of a house). How many leaf nodes are there? One for each possible numerical value? That would be impossible. I know the tree is trained on a finite number of examples/instances, and that training determines the tree structure and the decision tests...
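To make the question concrete, here is a toy sketch (the numbers and the split threshold are made up, not from any real dataset) of what a regression tree with a single split might look like: each leaf stores one number, the mean target value of the training examples that landed in it, and every new input falling into that leaf gets that one number as its prediction.

```python
# Hypothetical toy data: house sizes (sq ft) and prices (in $1000s).
sizes = [50, 60, 70, 120, 130, 140]
prices = [100, 110, 120, 300, 310, 320]

# One internal node: the test "size <= 100?" with exactly two branches.
threshold = 100

# Partition the training targets by the test outcome.
left = [p for s, p in zip(sizes, prices) if s <= threshold]
right = [p for s, p in zip(sizes, prices) if s > threshold]

# Each leaf holds a single constant: the mean of its training targets.
leaf_left = sum(left) / len(left)      # mean price of small houses
leaf_right = sum(right) / len(right)   # mean price of large houses

def predict(size):
    """Route the input down the tree and return the leaf's constant."""
    return leaf_left if size <= threshold else leaf_right
```

So the tree never needs a leaf per possible output value; it only needs a leaf per region of the input space, and every input in a region shares that region's constant prediction (e.g., `predict(65)` returns the small-house mean).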
Thank you for any clarification.