Abstract
In this paper we examine the performance of both ranked-listed and categorized results in the context of known-item search (target testing). The performance of known-item search is straightforward to quantify in terms of the number of documents and class descriptions a user must examine. Results are reported on a subset of the Open Directory classification hierarchy, which enables us to control the error rate and investigate how performance degrades with error. Three types of simulated user model are identified, together with two operating scenarios: correct and incorrect classification. Extensive empirical testing reveals that in the ideal scenario, i.e. perfect classification by both human and machine, a category-based system significantly outperforms a ranked list for all but the best queries, i.e. queries for which the target document was initially retrieved in the top 5. When either human or machine error occurs and the user follows a search strategy that is exclusively category-based, performance is much worse than for a ranked list. Most interestingly, however, if the user follows a hybrid strategy of first looking in the expected category and then reverting to the ranked list if the target is absent, performance can remain significantly better than for a ranked list, even with misclassification rates as high as 30%. We also observe that with this hybrid strategy, performance degrades gracefully with the error rate.
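To make the evaluation measure concrete, the following minimal Python sketch gives one plausible reading of this cost model. It is an illustration under our own assumptions rather than the authors' implementation, and every function and variable name in it is hypothetical.

    # Minimal sketch (not the authors' code) of the search-cost model described
    # above: cost = number of documents and class descriptions a user examines
    # before reaching the known target item. All names here are illustrative.

    def ranked_list_cost(ranked_docs, target):
        # User scans the ranked list top-down; cost is the target's rank.
        return ranked_docs.index(target) + 1

    def category_cost(categories, predicted_label, target):
        # User reads one class description, then scans that category's documents.
        docs = categories.get(predicted_label, [])
        if target in docs:
            return 1 + docs.index(target) + 1
        return None  # misclassification: target is not in the expected category

    def hybrid_cost(categories, predicted_label, ranked_docs, target):
        # Hybrid strategy: look in the expected category first; if the target
        # is absent, revert to scanning the full ranked list.
        cost = category_cost(categories, predicted_label, target)
        if cost is not None:
            return cost
        wasted = 1 + len(categories.get(predicted_label, []))  # fruitless scan
        return wasted + ranked_list_cost(ranked_docs, target)

For example, a target ranked 7th in the list but placed second in a correctly predicted five-document category costs 7 examinations under the ranked list but only 3 under the category or hybrid strategy; under misclassification, the hybrid strategy pays for the fruitless category scan and then the full ranked-list cost, which is the graceful degradation the abstract describes.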
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Zhu, Z., Cox, I.J., Levene, M. (2008). Ranked-Listed or Categorized Results in IR: 2 Is Better Than 1. In: Kapetanios, E., Sugumaran, V., Spiliopoulou, M. (eds) Natural Language and Information Systems. NLDB 2008. Lecture Notes in Computer Science, vol 5039. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69858-6_12
DOI: https://doi.org/10.1007/978-3-540-69858-6_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-69857-9
Online ISBN: 978-3-540-69858-6