Abstract
Conventional imaging requires a line of sight to create accurate visual representations of a scene. In certain circumstances, however, obtaining a suitable line of sight may be impractical, dangerous, or even impossible. Non-line-of-sight (NLOS) imaging addresses this challenge by reconstructing the scene from indirect measurements. Recently, passive NLOS methods that use an ordinary photograph of the subtle shadow cast onto a visible wall by the hidden scene have gained interest. These methods are currently limited to 1D or low-resolution 2D color imaging, or to localizing a hidden object whose shape is approximately known. Here, we generalize this class of methods and demonstrate 3D reconstruction of a hidden scene from an ordinary NLOS photograph. To achieve this, we propose a novel reformulation of the light transport model that conveniently decomposes the hidden scene into light-occluding and non-light-occluding components, yielding a separable non-linear least squares (SNLLS) inverse problem. We develop two solutions: a gradient-based optimization method and a physics-inspired neural network approach, which we call Soft Shadow Diffusion (SSD). Despite the challenging, ill-conditioned inverse problem encountered here, our approaches are effective on numerous 3D scenes in real experimental scenarios. Moreover, SSD is trained in simulation but generalizes well to unseen object classes in simulation and to real-world NLOS scenes. SSD also shows surprising robustness to noise and ambient illumination.
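To make the separable structure concrete, the following is a minimal sketch, not the authors' implementation, of a generic separable non-linear least squares solve via variable projection. A hypothetical 1D box occluder, parameterized by a center and a width, enters the transport matrix non-linearly, while the hidden source intensities enter linearly and are eliminated in closed form at each step. The occluder model, the shadow profile, and all names below are illustrative assumptions.

```python
# Minimal SNLLS/variable-projection sketch (illustrative only, not the paper's code).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n_pix, n_src = 64, 8                       # wall pixels, hidden light sources
x = np.linspace(0.0, 1.0, n_pix)           # wall coordinate

def transport(theta):
    """Transport matrix A(theta): each column is the soft shadow one source
    casts on the wall through a toy box occluder with parameters theta = (c, w)."""
    c, w = theta
    shadow = 1.0 - np.exp(-((x - c) / (w + 1e-3)) ** 2)   # smooth occlusion dip
    offsets = np.linspace(-0.3, 0.3, n_src)               # per-source shadow shift
    return np.stack([np.roll(shadow, int(o * n_pix)) for o in offsets], axis=1)

# Synthesize a noisy measurement from ground-truth parameters.
theta_true = np.array([0.45, 0.08])
f_true = rng.uniform(0.2, 1.0, n_src)                     # source intensities (linear part)
y = transport(theta_true) @ f_true + 0.01 * rng.standard_normal(n_pix)

def projected_residual(theta):
    """Variable projection: solve the linear part in closed form, return the
    residual cost as a function of the non-linear occluder parameters only."""
    A = transport(theta)
    f_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ f_hat
    return 0.5 * r @ r

res = minimize(projected_residual, x0=np.array([0.5, 0.1]), method="Nelder-Mead")
theta_hat = res.x
f_hat, *_ = np.linalg.lstsq(transport(theta_hat), y, rcond=None)
print("estimated occluder params:", theta_hat, " intensities:", f_hat.round(2))
```

The design choice mirrors the decomposition described in the abstract: for fixed occluder parameters the problem is ordinary linear least squares, so a non-linear optimizer only has to search over the low-dimensional occluder geometry.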
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Raji, F., Murray-Bruce, J. (2025). Soft Shadow Diffusion (SSD): Physics-Inspired Learning for 3D Computational Periscopy. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15138. Springer, Cham. https://doi.org/10.1007/978-3-031-72989-8_22
DOI: https://doi.org/10.1007/978-3-031-72989-8_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72988-1
Online ISBN: 978-3-031-72989-8
eBook Packages: Computer Science, Computer Science (R0)