Abstract
This paper presents a novel way to speed up the evaluation time of a boosting classifier. We make a shallow (flat) network deep (hierarchical) by growing a tree from the decision regions of a given boosting classifier. The tree provides many short paths for speeding up evaluation while preserving the reasonably smooth decision regions of the boosting classifier for good generalisation. To convert a boosting classifier into a decision tree, we formulate a Boolean optimisation problem, which has previously been studied for circuit design but limited to a small number of binary variables. In this work, a novel optimisation method is proposed, firstly for several tens of variables, i.e. the weak-learners of a boosting classifier, and then for any larger number of weak-learners by using a two-stage cascade. Experiments on synthetic and face image data sets show that the obtained tree achieves a significant speed-up over both a standard boosting classifier and Fast-exit, a previously described method for speeding up boosting classification, at the same accuracy. As a general meta-algorithm, the proposed method is also useful for a boosting cascade, where it speeds up the individual stage classifiers by different gains. The proposed method is further demonstrated on fast moving-object tracking and segmentation problems.
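The core idea above can be illustrated with a minimal sketch. All names and numbers below are hypothetical, and a simple greedy split ordered by boosting weight stands in for the paper's Boolean optimisation: a toy boosting classifier over three decision stumps is tabulated as a Boolean function of its weak-learner outputs, and a tree is grown whose pure branches terminate early, giving short evaluation paths that reproduce the boosted decisions exactly.

```python
import itertools

# Hypothetical toy boosting classifier: three decision stumps
# h_i(x) = [x > t_i] with weights alpha_i; the strong classifier is
# H(x) = [sum_i alpha_i * (2 h_i(x) - 1) > 0].
THRESHOLDS = [0.2, 0.5, 0.8]
ALPHAS = [1.0, 0.7, 0.4]

def weak_outputs(x):
    """Binary vector of weak-learner outputs for input x."""
    return tuple(int(x > t) for t in THRESHOLDS)

def boosted_label(bits):
    """Label the boosting classifier assigns to a Boolean region code."""
    score = sum(a * (2 * b - 1) for a, b in zip(ALPHAS, bits))
    return int(score > 0)

# Truth table over all 2^m region codes: the Boolean function of
# weak-learner outputs that the conversion operates on.
TRUTH = {bits: boosted_label(bits)
         for bits in itertools.product((0, 1), repeat=len(THRESHOLDS))}

def grow_tree(codes, order):
    """Greedy tree over weak learners; a branch stops once it is pure."""
    labels = {TRUTH[c] for c in codes}
    if len(labels) == 1:
        return labels.pop()                      # leaf: short path
    i = order[0]                                 # split on heaviest remaining learner
    left = [c for c in codes if c[i] == 0]
    right = [c for c in codes if c[i] == 1]
    return (i, grow_tree(left, order[1:]), grow_tree(right, order[1:]))

TREE = grow_tree(list(TRUTH),
                 sorted(range(len(ALPHAS)), key=lambda i: -ALPHAS[i]))

def tree_predict(x):
    """Evaluate only the weak learners on the root-to-leaf path.

    Returns (label, number_of_weak_learners_evaluated)."""
    node, evaluated = TREE, 0
    while isinstance(node, tuple):
        i, lo, hi = node
        evaluated += 1
        node = hi if x > THRESHOLDS[i] else lo
    return node, evaluated
```

With these toy weights, some inputs reach a leaf after evaluating only two of the three weak learners, while the flat boosting classifier always evaluates all three; the tree's predictions agree with the boosted decisions on every input.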
Kim, TK., Budvytis, I. & Cipolla, R. Making a Shallow Network Deep: Conversion of a Boosting Classifier into a Decision Tree by Boolean Optimisation. Int J Comput Vis 100, 203–215 (2012). https://doi.org/10.1007/s11263-011-0461-z