Abstract
The Mixture of Experts (ME) is one of the most popular ensemble methods used in pattern recognition and machine learning. This algorithm stochastically partitions the input space of the problem into a number of subspaces, with each expert becoming specialized on one subspace. The ME manages this process with a gating network, which is trained together with the experts. In this paper, we propose a modified version of the ME algorithm that first partitions the original problem into centralized regions and then uses a simple distance-based gating function to specialize the expert networks. Each expert's contribution to classifying an input sample is weighted by the distance between the input and a prototype embedded in that expert. As a result, an accurate classifier with shorter training time and a smaller number of parameters is achieved. Experimental results on a binary toy problem and selected datasets from the UCI machine learning repository show the robustness of the proposed method compared to the standard ME model.
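The distance-based gating idea described above can be illustrated with a minimal sketch. The choice of one random prototype per expert and a softmax over negative Euclidean distances is an assumption for illustration; the names (make_prototypes, gate_weights, mixture_predict) are hypothetical and not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_prototypes(X, n_experts, rng):
    """Pick one random prototype per expert from the training data (illustrative)."""
    idx = rng.choice(len(X), size=n_experts, replace=False)
    return X[idx]

def gate_weights(x, prototypes):
    """Weight each expert by its prototype's proximity to x:
    softmax of negative Euclidean distances (an assumed gating form)."""
    d = np.linalg.norm(prototypes - x, axis=1)
    w = np.exp(-d)
    return w / w.sum()

def mixture_predict(x, prototypes, experts):
    """Combine expert outputs, weighted by the distance-based gate."""
    w = gate_weights(x, prototypes)
    outputs = np.array([e(x) for e in experts])  # each expert returns class scores
    return (w[:, None] * outputs).sum(axis=0)

# Toy usage: two random linear "experts" on a 2-D binary problem.
X = rng.normal(size=(100, 2))
prototypes = make_prototypes(X, n_experts=2, rng=rng)
experts = [lambda x, W=rng.normal(size=(2, 2)): x @ W for _ in range(2)]
print(mixture_predict(X[0], prototypes, experts))
```

In this sketch the gate has no trainable parameters of its own: expert specialization is induced purely by the fixed prototypes, which is what allows the shorter training time and smaller parameter count claimed in the abstract.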
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Armano, G., Hatami, N. (2010). Mixture of Random Prototype-Based Local Experts. In: Graña Romay, M., Corchado, E., Garcia Sebastian, M.T. (eds) Hybrid Artificial Intelligence Systems. HAIS 2010. Lecture Notes in Computer Science, vol 6076. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13769-3_67
DOI: https://doi.org/10.1007/978-3-642-13769-3_67
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-13768-6
Online ISBN: 978-3-642-13769-3