
Speech-based navigation and error correction: a comprehensive comparison of two solutions

  • Long Paper
Universal Access in the Information Society

Abstract

Speech-based navigation and error correction can serve as a useful alternative for individuals with disabilities that hinder the use of a keyboard and mouse, but existing solutions available in commercial software are still error-prone and time-consuming. This paper discusses two studies conducted with the goal of improving speech-based navigation and error correction techniques. The first study was designed to improve understanding of an innovative speech-based navigation technique: anchor-based navigation. The second study was longitudinal, spanning seven trials, and was intended to provide insights regarding the efficacy of both traditional target/direction-based navigation and anchor-based navigation. Building on earlier studies that employed similar methodologies and interaction solutions, this paper also provides an informal evaluation of a new correction dialogue. Although the two solutions resulted in the same level of efficiency, the underlying strategies adopted were different, and the anchor-based solution allowed participants to generate better-quality text and was perceived to be easier to use. These results suggest that the anchor-based solution could be a promising alternative, especially for novice users as they learn how to use speech-based dictation solutions. The findings of these studies need to be further validated with the involvement of users with disabilities.
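
The two navigation styles compared above can be made concrete with a small example. The paper does not publish TkTalk's source, so the following Python sketch is an illustration under stated assumptions: the function names, the toy word list, and the reading of an "anchor" as a neighboring word spoken to disambiguate the target are all hypothetical, not the authors' implementation. It shows why direction-based navigation costs one utterance per cursor step, while an anchor-based command can place the cursor in a single utterance even when the target word occurs more than once.

    # Illustrative sketch only; not the TkTalk implementation.
    DICTATED = "the quick brown fox jumps over the lazy dog".split()

    def direction_based(cursor: int, command: str) -> int:
        """Step the cursor one word per spoken command ('move left'/'move right')."""
        if command == "move left":
            return max(cursor - 1, 0)
        if command == "move right":
            return min(cursor + 1, len(DICTATED) - 1)
        return cursor

    def anchor_based(target: str, anchor: str) -> int:
        """Jump straight to `target`, disambiguated by the adjacent `anchor` word.
        With repeated words (the two 'the's here), the anchor selects the right one."""
        for i in range(len(DICTATED) - 1):
            if DICTATED[i] == target and DICTATED[i + 1] == anchor:
                return i
        raise ValueError(f"no occurrence of '{target} {anchor}' in the text")

    if __name__ == "__main__":
        cursor = 0
        for _ in range(5):                              # five separate utterances
            cursor = direction_based(cursor, "move right")
        print(DICTATED[cursor])                         # -> over

        print(DICTATED[anchor_based("the", "lazy")])    # -> second 'the', one utterance

Under these assumptions, the efficiency question studied in the paper reduces to how many utterances, and how many recognition errors, each style incurs for the same cursor movement.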




Acknowledgments

This material is based upon work supported by the National Science Foundation (NSF) under grant nos. IIS-9910607 and CNS-0619379 and by the National Institute on Disability and Rehabilitation Research (NIDRR) under grant no. H133G050354. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF or the NIDRR.

Author information


Corresponding author

Correspondence to Jinjuan Feng.

Appendices

Appendix A

See Table 1

Table 1 Speech-based commands available in TkTalk 4.0 TD

Appendix B

See Table 2

Table 2 Speech-based commands available in TkTalk 4.0 Anchor


About this article

Cite this article

Feng, J., Zhu, S., Hu, R. et al. Speech-based navigation and error correction: a comprehensive comparison of two solutions. Univ Access Inf Soc 10, 17–31 (2011). https://doi.org/10.1007/s10209-010-0185-9
