Abstract
To address a key obstacle to applying deep-learning-based monocular depth estimation in engineering equipment, namely that the computation and parameter counts of deep network architectures are too large, a lightweight end-to-end monocular depth estimation model is proposed. First, an improved feature extraction module is designed based on transfer learning, taking into account how scaling the model's depth, width, and input image resolution affects accuracy and computational cost; an optimized combination of these scaling factors maintains accuracy while saving computational resources. Second, a fused loss function is designed that fully exploits the relationship between predicted and ground-truth depth maps, reducing the model's storage requirements and computational complexity while preserving its accuracy at inference time. Experiments on the NYU Depth-V2 dataset show that the proposed method improves average accuracy under the 1.25 threshold ratio by 0.086 and the average structural similarity of the predicted depth maps by 0.006. The method uses only 5.9M parameters and 4.4G multiplication operations, which are 16.982M and 20.544G lower than those of the compared methods. Deployed on a mobile robot with a Raspberry Pi 4 (4 GB), it achieves an inference speed of 5 Hz, 2.86 times faster than the compared methods.
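The model-expansion strategy mentioned above can be made concrete with a short sketch. It follows the compound scaling rule popularized by EfficientNet, where network depth, channel width, and input resolution are scaled jointly by a single coefficient; the base values and per-dimension coefficients below are illustrative assumptions, not the configuration reported in this paper.

```python
import math

# Per-dimension scaling bases (the EfficientNet defaults; assumed here,
# not taken from this paper), chosen so that ALPHA * BETA**2 * GAMMA**2 ~ 2,
# i.e. each unit increase of phi roughly doubles the compute budget.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(base_depth, base_width, base_resolution, phi):
    """Scale layer count, channel width, and input size by coefficient phi."""
    depth = math.ceil(base_depth * ALPHA ** phi)              # more layers
    width = math.ceil(base_width * BETA ** phi)               # wider channels
    resolution = int(round(base_resolution * GAMMA ** phi))   # larger input
    return depth, width, resolution

# Example with an assumed small baseline: 16 layers, 32 channels, 224 px input.
print(compound_scale(16, 32, 224, phi=1))  # -> (20, 36, 258)
```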
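The fused loss is described only at a high level in the abstract. A common formulation in the lightweight depth estimation literature combines a pointwise L1 term, an image-gradient term, and a structural-similarity term; the PyTorch sketch below is a minimal version under that assumption, with placeholder weights and a simplified 3x3 uniform SSIM window rather than the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def ssim_index(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified SSIM with a 3x3 uniform window; inputs are (N, 1, H, W).
    mu_x = F.avg_pool2d(x, 3, 1, padding=1)
    mu_y = F.avg_pool2d(y, 3, 1, padding=1)
    var_x = F.avg_pool2d(x * x, 3, 1, padding=1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, padding=1) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, 3, 1, padding=1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return num / den

def fused_depth_loss(pred, target, w_depth=0.1, w_grad=1.0, w_ssim=1.0):
    """Hypothetical fused loss: pointwise L1 + gradient L1 + SSIM terms."""
    # Pointwise L1 depth error.
    l_depth = torch.mean(torch.abs(pred - target))
    # Gradient (edge) error keeps depth discontinuities sharp.
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dx_t = target[..., :, 1:] - target[..., :, :-1]
    dy_t = target[..., 1:, :] - target[..., :-1, :]
    l_grad = torch.mean(torch.abs(dx_p - dx_t)) + torch.mean(torch.abs(dy_p - dy_t))
    # Structural dissimilarity, mapped to [0, 1].
    l_ssim = torch.clamp((1.0 - ssim_index(pred, target).mean()) / 2.0, 0.0, 1.0)
    return w_depth * l_depth + w_grad * l_grad + w_ssim * l_ssim
```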
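The "accuracy with a threshold ratio of 1.25" figure refers to the standard delta metric for depth estimation: the fraction of pixels whose predicted depth is within a factor of 1.25 of the ground truth. A minimal sketch of that metric, assuming dense positive depth maps:

```python
import torch

def threshold_accuracy(pred, target, ratio=1.25, eps=1e-6):
    # Delta metric: fraction of pixels with max(pred/target, target/pred) < ratio.
    pred = pred.clamp(min=eps)
    target = target.clamp(min=eps)
    rel = torch.maximum(pred / target, target / pred)
    return (rel < ratio).float().mean().item()
```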
Acknowledgement
This work is supported by the National Natural Science Foundation of China (No. 61972091, No. 32171909, No. 51705365), the Natural Science Foundation of Guangdong Province of China (No. 2022A1515010101, No. 2021A1515012639), the Key Research Projects of Ordinary Universities in Guangdong Province (No. 2019KZDXM007, No. 2020ZDZX3049), the Scientific and Technological Innovation Project of Foshan City (No. 2020001003285), and the Featured Innovation Project of Foshan Education Bureau (No. 2022DZXX06).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Liu, X., et al. (2023). L-EfficientUNet: Lightweight End-to-End Monocular Depth Estimation for Mobile Robots. In: Yang, H., et al. (eds.) Intelligent Robotics and Applications. ICIRA 2023. Lecture Notes in Computer Science, vol. 14274. Springer, Singapore. https://doi.org/10.1007/978-981-99-6501-4_34
DOI: https://doi.org/10.1007/978-981-99-6501-4_34
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-6500-7
Online ISBN: 978-981-99-6501-4
eBook Packages: Computer Science; Computer Science (R0)