Random rotation ensembles
In machine learning, ensemble methods combine the predictions of multiple base learners to construct more accurate aggregate predictions. Established supervised learning algorithms inject randomness into the construction of the individual base learners in an effort to promote diversity within the resulting ensembles. An undesirable side effect of this approach is that it generally also reduces the accuracy of the base learners. In this paper, we introduce a method that is simple to implement yet general and effective in improving ensemble diversity with only modest impact on the accuracy of the individual base learners. By randomly rotating the feature space prior to inducing the base learners, we achieve favorable aggregate predictions on standard data sets compared to state-of-the-art ensemble methods, most notably for tree-based ensembles, which are particularly sensitive to rotation.
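The abstract describes the core idea only at a high level: each base learner is trained on a randomly rotated copy of the feature space, and the ensemble averages their predictions. The following is a minimal illustrative sketch, not the authors' implementation; it assumes rotations are drawn via QR decomposition of a Gaussian matrix and uses scikit-learn decision trees as base learners, details not specified in the abstract.

```python
# Illustrative sketch of a random rotation ensemble (hypothetical implementation,
# not the authors' code). Assumes: Haar-distributed rotations via QR decomposition
# of a Gaussian matrix, scikit-learn decision trees as base learners.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def random_rotation(d, rng):
    """Draw a random d x d rotation matrix (uniform via QR of a Gaussian matrix)."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Q = Q * np.sign(np.diag(R))   # fix column signs so Q is Haar-distributed
    if np.linalg.det(Q) < 0:      # ensure a proper rotation (determinant +1)
        Q[:, 0] = -Q[:, 0]
    return Q


class RandomRotationEnsemble:
    """Fit each base tree on a randomly rotated view of the features."""

    def __init__(self, n_estimators=100, random_state=0):
        self.n_estimators = n_estimators
        self.rng = np.random.default_rng(random_state)

    def fit(self, X, y):
        d = X.shape[1]
        self.members_ = []
        for _ in range(self.n_estimators):
            R = random_rotation(d, self.rng)                    # rotate the feature space ...
            tree = DecisionTreeClassifier().fit(X @ R, y)       # ... then induce the base learner
            self.members_.append((R, tree))
        return self

    def predict_proba(self, X):
        # Aggregate by averaging class-probability votes over the rotated trees.
        probs = [tree.predict_proba(X @ R) for R, tree in self.members_]
        return np.mean(probs, axis=0)

    def predict(self, X):
        return np.argmax(self.predict_proba(X), axis=1)
```

Because axis-aligned tree splits interact strongly with the orientation of the feature space, each rotated tree partitions the data differently, which is what the abstract refers to when it notes that tree-based ensembles are particularly sensitive to rotation.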
| Field | Value |
|---|---|
| Item Type | Article |
| Copyright holders | © 2016 Rico Blaser and Piotr Fryzlewicz. |
| Keywords | Feature rotation, ensemble diversity, smooth decision boundary |
| Departments | Statistics |
| Date Deposited | 04 Jun 2015 13:47 |
| URI | https://researchonline.lse.ac.uk/id/eprint/62182 |
Explore Further
- http://jmlr.org/papers/v17/blaser16a.html (Publisher)
- http://www.jmlr.org/ (Official URL)