ABOUT ME
I am currently a ZJU100 Young Professor at Zhejiang University.
From 2022 to 2023, I was a Lecturer (a.k.a. Assistant Professor) and ARC DECRA Fellow at the ReLER Lab @ University of Technology Sydney.
From 2020 to 2022, I was a Research Fellow, working with Prof. Luc Van Gool in the Computer Vision Laboratory @ ETH Zurich.
From 2018 to 2019, I was a Research Scientist (2018.08-2019.06) and then a Senior Scientist (2019.06-2019.12) at IIAI.
From 2016 to 2018, I was a visiting Ph.D. student at the Center for Vision, Cognition, Learning and Autonomy, University of California, Los Angeles, under the supervision of Prof. Song-Chun Zhu.
From 2014 to 2018, I was a Ph.D. student at the Beijing Institute of Technology.
Email address: wenguanwang[dot]ai[at]gmail[dot]com
Other links: Google Scholar, ResearchGate, GitHub, LinkedIn
Ph.D. recruitment: Please contact me if you have an excellent background and motivation to undertake Ph.D. studies.
Collaborators and visiting students: I am always looking for excellent visiting students and long-term collaborators who share my research interests and academic outlook.
I apologize that I cannot always reply to every message, but rest assured that your message will catch my eye if you have strong research experience.
RESEARCH STATEMENT
My research interests lie at the intersection of computer vision, artificial intelligence, and cognition. The ultimate goal of my research is to develop a machine that can perceive, reason, and plan in real-world scenes as humans do.
Towards this goal, (1) during my graduate studies, I focused on building a perception model that learns to see the world by exploiting large-scale 2D-camera data and mimicking the human visual attention mechanism, a critical cognitive perception behavior. (2) After graduation, I strove to develop a more powerful recognition system that comprehensively understands this structured and human-centric visual world, based on bottom-up and top-down cognitive processing as well as grammar and graph models. (3) More recently, I have pursued an explainable and embodied AI machine that can actively interact with humans and environments and interpret its inherent decision-making process, building on classic cognitive theories, including embodied cognition, prototype theory, exemplar-based reasoning, abductive reasoning, and counterfactual thinking.
RESEARCH INTEREST
Data- and Knowledge-Driven AI: Neuro-Symbolic AI, Neural Logic
Autonomous Driving: 2D/3D semantic segmentation, point cloud object detection
Human-Centred AI: human parsing, gaze behavior analysis, nonverbal communication understanding
Embodied AI: visual navigation, human-machine dialog, command generation for navigation robotics
AI for Science: retrosynthesis prediction, protein function prediction, mental image reconstruction from human brain activity
Workshops & Challenges
Call for papers and participation: Vision Meets Drones 2023: A Challenge, ICCV 2023
Call for papers and participation: Vision Meets Drones 2021: A Challenge, ICCV 2021
Call for papers and participation: 3rd Person in Context Workshop Challenge, CVPR 2021
Call for papers and participation: Webly-Supervised Fine-Grained Recognition Workshop Challenge, ACCV 2020
Surveys
A Survey of World Models for Autonomous Driving [pdf(arxiv)]
T. Feng, W. Wang, Y. Yang, arXiv, 2025
A Survey on 3D Gaussian Splatting [pdf(arxiv)]
G. Chen, W. Wang†, arXiv, 2024
Towards Data- and Knowledge-Driven Artificial Intelligence: A Survey on Neuro-Symbolic Computing [pdf(arxiv)]
W. Wang, Y. Yang, F. Wu, TPAMI, 2024
Visual Knowledge in the Big Model Era: Retrospect and Prospect [pdf(arxiv)]
W. Wang, Y. Yang, Y. Pan, FITEE, 2025 (Front Cover)
A Survey on Deep Learning Technique for Video Segmentation [pdf(arxiv)] [website]
T. Zhou, F. Porikli, D. Crandall, L. Van Gool, W. Wang†. TPAMI, 2022
Salient Object Detection in the Deep Learning Era: An In-Depth Survey [pdf(arxiv)] [dataset&code&website]
W. Wang, Q. Lai, H. Fu, J. Shen, H. Ling, R. Yang. TPAMI, 2021
Preprints
Retrosynthesis Prediction Enhanced by In-silico Reaction Data Augmentation [pdf(arxiv)]
X. Zhang, Y. Mo, W. Wang†, Y. Yang†, arXiv, 2024
Segment and Track Anything [pdf(arxiv)] [code]
Y. Cheng, L. Li, Y. Xu, X. Li, Z. Yang, W. Wang, Y. Yang, arXiv, 2023
2025
Scene Map-based Prompt Tuning for Navigation Instruction Generation [pdf(arxiv)] [code]
S. Fan, R. Liu, W. Wang, Y. Yang, CVPR, 2025
DiffVsgg: Diffusion-based Online Video Scene Graph Generation [pdf(arxiv)] [code]
M. Chen, L. Li, W. Wang, Y. Yang, CVPR, 2025
TAGA: Self-supervised Learning for Template-free Animatable Gaussian Articulated Model [pdf(arxiv)] [code]
Z. Zhai, G. Chen, W. Wang, D. Zheng, J. Xiao, CVPR, 2025
LOGICZSL: Exploring Logic-induced Representation for Compositional Zero-shot Learning [pdf(arxiv)] [code]
P. Wu, X. Lu, H. Hao, Y. Xian, J. Shen, W. Wang, CVPR, 2025
Do as We Do, Not as You Think: the Conformity of Large Language Models [pdf(arxiv)] [code]
Z. Weng*, G. Chen*, W. Wang†, ICLR, 2025 (Oral)
Hydra-SGG: Hybrid Relation Assignment for One-stage Scene Graph Generation [pdf(arxiv)] [code]
M. Chen, G. Chen, W. Wang, Y. Yang, ICLR, 2025
Learning Clustering-based Prototypes for Compositional Zero-Shot Learning [pdf(arxiv)] [code]
H. Qu*, J. Wei*, X. Shu, W. Wang, ICLR, 2025
2024
Scene Graph Generation with Role-Playing Large Language Models [pdf] [code]
G. Chen, J. Li, W. Wang†, NeurIPS, 2024
Human-Object Interaction Detection Collaborated with Large Relation-driven Diffusion Models [pdf][code]
L. Li, W. Wang†, Y. Yang, NeurIPS, 2024
Vision-Language Navigation with Energy-Based Policy [pdf][code]
R. Liu, W. Wang, Y. Yang, NeurIPS, 2024
Nonverbal Interaction Detection [pdf(arxiv)] [code]
J. Wei*, T. Zhou*, Y. Yang, W. Wang†, ECCV, 2024
Navigation Instruction Generation with BEV Perception and Large Language Models [pdf(arxiv)] [code]
S. Fan, R. Liu, W. Wang†, Y. Yang, ECCV, 2024
Controllable Navigation Instruction Generation with Chain of Thought Prompting [pdf(arxiv)] [code]
X. Kong*, J. Chen*, W. Wang†, H. Su, X. Hu, Y. Yang, S. Liu†, ECCV, 2024
Mutual Learning for Acoustic Matching and Dereverberation via Visual Scene-driven Diffusion [pdf(arxiv)] [code]
J. Ma, W. Wang†, Y. Yang, F. Zheng, ECCV, 2024
General and Task-Oriented Video Segmentation [pdf(arxiv)] [code]
M. Chen, L. Li, W. Wang, R. Quan, Y. Yang, ECCV, 2024
Shape2Scene: 3D Scene Representation Learning Through Pre-training on Shape Data [pdf(arxiv)] [code]
T. Feng, W. Wang, R. Quan, Y. Yang, ECCV, 2024
MS2SL: Multimodal Spoken Data-Driven Continuous Sign Language Production [pdf(arxiv)] [code]
J. Ma, W. Wang, Y. Yang, F. Zheng, ACL, 2024
DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models (Exemplified as A Video Agent) [pdf(arxiv)] [code]
Z. Yang, G. Chen, X. Li, W. Wang, Y. Yang, ICML, 2024
ETPNav: Evolving Topological Planning for Vision-Language Navigation in Continuous Environments [pdf(arxiv)] [code]
D. An, H. Wang, W. Wang†, Z. Wang, Y. Huang†, K. He, L. Wang, TPAMI, 2024
Neural Clustering based Visual Representation Learning [pdf(arxiv)] [code]
G. Chen, X. Li, Y. Yang, W. Wang†, CVPR, 2024
Psychometry: An Omnifit Model for Image Reconstruction from Human Brain Activity [pdf(arxiv)] [code]
R. Quan, W. Wang†, Z. Tian, F. Ma, Y. Yang, CVPR, 2024
Clustering for Protein Representation Learning [pdf(arxiv)] [code]
R. Quan, W. Wang†, F. Ma, H. Fan, Y. Yang, CVPR, 2024
Poly Kernel Inception Network for Remote Sensing Detection [pdf(arxiv)] [code]
X. Cai, Q. Lai, Y. Wang, W. Wang†, Z. Sun, Y. Yao†, CVPR, 2024
IS-Fusion: Instance-Scene Collaborative Fusion for Multimodal 3D Object Detection [pdf(arxiv)] [code]
J. Yin, R. Chen, W. Li, R. Yang, P. Frossard, J. Shen, W. Wang†, CVPR, 2024 (Highlight)
Volumetric Environment Representation for Vision-Language Navigation [pdf(arxiv)] [code]
R. Liu, W. Wang, Y. Yang, CVPR, 2024 (Highlight)
Clustering Propagation for Universal Medical Image Segmentation [pdf(arxiv)] [code]
Y. Ding, L. Li, W. Wang, Y. Yang, CVPR, 2024
LSKNet: Towards Effective and Efficient 3D Perception with Large Sparse Kernels [pdf(arxiv)] [code]
T. Feng, W. Wang, F. Ma, Y. Yang, CVPR, 2024
2023
Neural-Logic Human-Object Interaction Detection [pdf(arxiv)] [code]
L. Li, J. Wei, W. Wang†, Y. Yang, NeurIPS, 2023
ClusterFormer: Clustering As A Universal Visual Learner [pdf(arxiv)] [code]
J. Liang, Y. Cui, Q. Wang, T. Geng, W. Wang, D. Liu, NeurIPS, 2023
LogicSeg: Parsing Visual Semantics with Neural Logic Learning and Reasoning [pdf(arxiv)] [code]
L. Li, W. Wang†, Y. Yang, ICCV, 2023 (Oral)
Large-Scale Person Detection and Localization using Overhead Fisheye Cameras [pdf(arxiv)] [code&dataset]
L. Yang*, L. Li*, X. Xin, Y. Sun, Q. Song, W. Wang†, ICCV, 2023 (Oral)
Omnidirectional Information Gathering for Knowledge Transfer-based Audio-Visual Navigation [pdf(arxiv)] [code]
J. Chen, W. Wang†, S. Liu, H. Li, Y. Yang, ICCV, 2023
DREAMWALKER: Mental Planning for Continuous Vision-Language Navigation [pdf(arxiv)] [code]
H. Wang, W. Liang, L. Van Gool, W. Wang†, ICCV, 2023
Bird's-Eye-View Scene Graph for Vision-Language Navigation [pdf(arxiv)] [code]
R. Liu, X. Wang, W. Wang†, Y. Yang, ICCV, 2023
Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [pdf(arxiv)] [code]
C. Liang, W. Wang, J. Miao, Y. Yang, ICCV, 2023
Clustering based Point Cloud Representation Learning for 3D Analysis [pdf(arxiv)] [code]
T. Feng, W. Wang, X. Wang, Y. Yang, Q. Zheng, ICCV, 2023
CLUSTSEG: Clustering for Universal Segmentation [pdf(arxiv)] [code]
J. Liang, T. Zhou, D. Liu, W. Wang†, ICML, 2023
LANA: A Language-Capable Navigator for Instruction Following and Generation [pdf(arxiv)] [code]
X. Wang, W. Wang, J. Shao, Y. Yang, CVPR, 2023
Unified Mask Embedding and Correspondence Learning for Self-Supervised Video Segmentation [pdf(arxiv)] [code]
L. Li, W. Wang†, T. Zhou, J. Li, Y. Yang, CVPR, 2023
Boosting Video Object Segmentation via Space-time Correspondence Learning [pdf(arxiv)] [code]
Y. Zhang*, L. Li*, W. Wang†, R. Xie, L. Song, W. Zhang, CVPR, 2023
Local-Global Context Aware Transformer for Language-Guided Video Segmentation [pdf(arxiv)] [code]
C. Liang, W. Wang, T. Zhou, J. Miao, Y. Luo, Y. Yang, TPAMI, 2023
Visual Recognition with Deep Nearest Centroids [pdf(arxiv)] [code]
W. Wang*†, C. Han*, T. Zhou*, D. Liu†, ICLR, 2023 (Spotlight)
2022
GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models [pdf(arxiv)] [code]
C. Liang*, W. Wang*, J. Miao, Y. Yang. NeurIPS, 2022 (Spotlight)
Towards Versatile Embodied Navigation [pdf(arxiv)] [code]
H. Wang, W. Liang, L. Van Gool, W. Wang†. NeurIPS, 2022 (Spotlight)
Learning Equivariant Segmentation with Instance-Unique Querying [pdf(arxiv)] [code]
W. Wang*, J. Liang*, D. Liu. NeurIPS, 2022 (Spotlight)
ProposalContrast: Unsupervised Pre-training for LiDAR-based 3D Object Detection [pdf(arxiv)] [code]
J. Yin, D. Zhou, L. Zhang, J. Fang, C.-Z. Xu, J. Shen†, W. Wang†. ECCV, 2022
Semi-supervised 3D Object Detection with Proficient Teachers [pdf(arxiv)] [code]
J. Yin*, J. Fang*, D. Zhou, L. Zhang, C.-Z. Xu, J. Shen†, W. Wang†. ECCV, 2022
Target-Driven Structured Transformer Planner for Vision-Language Navigation [pdf(researchgate)] [pdf(arxiv)] [code]
Y. Zhao*, J. Chen*, C. Gao, W. Wang†, L. Yang, H. Ren, H. Xia, S. Liu. ACMMM, 2022 (Oral)
Rethinking Semantic Segmentation: A Prototype View [pdf(researchgate)] [pdf(arxiv)] [code]
T. Zhou, W. Wang†, E. Konukoglu, L. Van Gool. CVPR, 2022 (Oral)
Deep Hierarchical Semantic Segmentation [pdf(researchgate)] [pdf(arxiv)] [code]
L. Li, T. Zhou, W. Wang†, J. Li, Y. Yang. CVPR, 2022
Counterfactual Cycle-Consistent Learning for Instruction Following and Generation in Vision-Language Navigation [pdf(researchgate)] [pdf(arxiv)] [code]
H. Wang, W. Liang, J. Shen, L. Van Gool, W. Wang†. CVPR, 2022
Locality-Aware Inter- and Intra-Video Reconstruction for Self-Supervised Correspondence Learning [pdf(researchgate)] [pdf(arxiv)] [code]
L. Li, T. Zhou, W. Wang†, L. Yang, J. Li, Y. Yang. CVPR, 2022
Visual Abductive Reasoning [pdf(researchgate)] [pdf(arxiv)] [dataset&code]
C. Liang, W. Wang, T. Zhou, Y. Yang. CVPR, 2022
2021
Segmenting Objects from Relational Visual Data [pdf(researchgate)] [code]
X. Lu, W. Wang†, J. Shen, D. Crandall, L. Van Gool. TPAMI, 2021
Exploring Cross-Image Pixel Contrast for Semantic Segmentation [pdf(researchgate)] [pdf(arxiv)] [code]
W. Wang*, T. Zhou*, F. Yu, J. Dai, E. Konukoglu, L. Van Gool. ICCV, 2021 (Oral)
Differentiable Multi-Granularity Human Representation Learning for Instance-Aware Human Semantic Parsing [pdf(researchgate)] [pdf(arxiv)] [code]
T. Zhou, W. Wang†, S. Liu, Y. Yang, L. Van Gool. CVPR, 2021 (Oral)
Face Forensics in the Wild [pdf(researchgate)] [arxiv] [code]
T. Zhou, W. Wang†, Z. Liang, J. Shen. CVPR, 2021 (Oral)
Structured Scene Memory for Vision-Language Navigation [pdf(researchgate)] [pdf(arxiv)] [code]
H. Wang, W. Wang†, W. Liang†, C. Xiong, J. Shen. CVPR, 2021
2020
Mining Cross-Image Semantics for Weakly Supervised Semantic Segmentation [pdf(researchgate)] [arxiv] [code]
G. Sun, W. Wang†, J. Dai, L. Van Gool. ECCV, 2020 (Oral)
(CVPR 2020 LID Workshop Best Paper; winner of the CVPR 2020 LID Challenge, Weakly-Supervised Semantic Segmentation Track)
Video Object Segmentation with Episodic Graph Memory Networks [pdf(researchgate)] [arxiv] [code]
X. Lu, W. Wang†, M. Danelljan, T. Zhou, J. Shen, L. Van Gool. ECCV, 2020 (Spotlight)
Active Visual Information Gathering for Vision-Language Navigation [pdf(researchgate)] [arxiv] [code]
H. Wang, W. Wang†, T. Shu, W. Liang, and J. Shen. ECCV, 2020
Weakly Supervised 3D Object Detection from Lidar Point Cloud [pdf(researchgate)] [arxiv] [code]
Q. Meng, W. Wang†, T. Zhou, J. Shen, L. Van Gool and D. Dai. ECCV, 2020
Hierarchical Human Parsing with Typed Part-Relation Reasoning [pdf(researchgate)] [arxiv] [code]
W. Wang*, H. Zhu*, J. Dai, Y. Pang, J. Shen and L. Shao. CVPR, 2020
Cascaded Human-Object Interaction Recognition [pdf(researchgate)] [arxiv] [code]
T. Zhou*, W. Wang*, S. Qi, H. Ling and J. Shen. CVPR, 2020
(Winners in the 2019 Person in Context (PIC) Challenge: Relation Segmentation Track and Human-Object Interaction Detection Track at ICCV 2019!)
Learning Video Object Segmentation from Unlabeled Videos [pdf(researchgate)] [arxiv] [code&results]
X. Lu, W. Wang†, J. Shen, Y.-W. Tai, D. Crandall and S. Hoi. CVPR, 2020
A Unified Object Motion and Affinity Model for Online Multi-Object Tracking [pdf(researchgate)] [arxiv] [code]
J. Yin, W. Wang†, Q. Meng, R. Yang and J. Shen. CVPR, 2020
2019
Comic-Guided Speech Synthesis [pdf(researchgate)] [website][demo]
Y. Wang, W. Wang, W. Liang and L.-F. Yu. SIGGRAPH Asia, 2019
Zero-Shot Video Object Segmentation via Attentive Graph Neural Networks [pdf(researchgate)] [code&results]
W. Wang*, X. Lu*, J. Shen, D. Crandall and L. Shao. ICCV, 2019 (Oral)
Learning Compositional Neural Information Fusion for Human Parsing [pdf(researchgate)] [code]
W. Wang*, Z. Zhang*, S. Qi, J. Shen, Y. Pang and L. Shao. ICCV, 2019
(3rd place in the CVPR 2019 Look into Person (LIP) Challenge: Single-Person Human Parsing Track)
Understanding Human Gaze Communication by Spatio-temporal Graph Reasoning [pdf(researchgate)] [dataset&code&website]
L. Fan*, W. Wang*, X. Tang, S. Huang and S.-C. Zhu. ICCV, 2019
Human-Aware Motion Deblurring [pdf(researchgate)] [dataset&code&website]
Z. Shen*, W. Wang*, X. Lu, J. Shen, H. Ling, T. Xu and L. Shao. ICCV, 2019
Reasoning visual dialogs with structural and partial observations [pdf(researchgate)] [pdf(arxiv)] [code]
Z. Zheng*, W. Wang*, S. Qi*, and S.-C. Zhu. CVPR, 2019 (Oral)
Learning unsupervised video object segmentation through visual attention [pdf(researchgate)] [code&data&results]
W. Wang*, H. Song*, S. Zhao, J. Shen, S. Zhao, S. Hoi, and H. Ling. CVPR, 2019
See more, know more: Unsupervised video object segmentation with co-attention siamese networks [pdf(researchgate)] [code&results]
X. Lu*, W. Wang*, C. Ma, J. Shen, L. Shao and F. Porikli. CVPR, 2019
Shifting more attention to video salient object detection [pdf(researchgate)] [code] [dataset: baidu (fetch code: ivzo)] [dataset: googleDrive]
D. Fan, W. Wang, M.-M. Cheng, and J. Shen. CVPR, 2019 (Oral & Best Paper Finalist)
An iterative and cooperative top-down and bottom-up inference network for salient object detection [pdf(researchgate)] [results]
W. Wang, J. Shen, M.-M. Cheng, and L. Shao. CVPR, 2019
Salient object detection with pyramid attention and salient edges [pdf(researchgate)] [code&results]
W. Wang*, S. Zhao*, S. Hoi, J. Shen, and A. Borji. CVPR, 2019
2018
Learning human-object interactions by graph parsing neural networks [pdf(researchgate)] [pdf(arxiv)] [code]
S. Qi*, W. Wang*, B. Jia, J. Shen, and S.-C. Zhu. ECCV, 2018
Pyramid dilated deeper convLSTM for video salient object detection [pdf(researchgate)] [code&results]
H. Song*, W. Wang*, S. Zhao, J. Shen, and K.-M. Lam. ECCV, 2018
Attentive fashion grammar network for fashion landmark detection and clothing category classification [pdf(researchgate)]
W. Wang, Y. Xu, J. Shen, and S.-C. Zhu. CVPR, 2018
Revisiting video saliency: A large-scale benchmark and a new model [pdf(researchgate)] [code&DHF1K(dataset)&otherdatasets]
[Leaderboards on DHF1K, Hollywood-2, UCF sports, DIEM, LEDOV datasets] [Evaluation Code]
W. Wang, J. Shen, F. Guo, M.-M. Cheng, and A. Borji. CVPR, 2018
Salient object detection driven by fixation prediction [pdf(researchgate)] [code&results]
W. Wang, J. Shen and A. Borji. CVPR, 2018
Inferring shared attention in social scene videos [pdf(researchgate)] [code&VideoCoAtt(dataset)]
L. Fan, Y. Chen, P. Wei, W. Wang, and S.-C. Zhu. CVPR, 2018
Learning pose grammar to encode human body configuration for 3D pose estimation [pdf(researchgate)] [code]
H.-S. Fang*, Y. Xu*, W. Wang*, X. Liu, and S.-C. Zhu. AAAI, 2018 (Oral)
Video salient object detection via fully convolutional networks [pdf(researchgate)] [code]
W. Wang, J. Shen, and L. Shao. IEEE TIP, 2018
Deep visual attention prediction [pdf(researchgate)] [code&results]
W. Wang, and J. Shen. IEEE TIP, 2018
2017
Saliency-aware video object segmentation [pdf(researchgate)] [code]
W. Wang, J. Shen, R. Yang, and F. Porikli. IEEE TPAMI, 2017
Deep cropping via attention box prediction and aesthetics assessment [pdf(researchgate)]
W. Wang, and J. Shen. ICCV, 2017
Super-trajectory for video segmentation [pdf(researchgate)]
W. Wang, J. Shen, J. Xie, and F. Porikli. ICCV, 2017
Selective video object cutout [pdf(researchgate)]
W. Wang, J. Shen, and F. Porikli. IEEE TIP, 2017
Video co-saliency guided co-segmentation [pdf(researchgate)] [code&ViCoSS(dataset)]
W. Wang, J. Shen, H. Sun, and L. Shao. IEEE TCSVT, 2017
2016
Real-time superpixel segmentation by DBSCAN clustering algorithm [pdf(researchgate)] [code]
J. Shen, X. Hao, Z. Liang, Y. Liu, W. Wang, and L. Shao. IEEE TIP, 2016
Stereoscopic thumbnail creation via efficient stereo saliency detection [pdf(researchgate)] [code]
W. Wang, J. Shen, Y. Yu, and K.-L. Ma. IEEE TVCG, 2016
Correspondence driven saliency transfer [pdf(researchgate)] [code]
W. Wang, J. Shen, L. Shao, and F. Porikli. IEEE TIP, 2016
Higher-order image co-segmentation [pdf(researchgate)] [code]
W. Wang and J. Shen. IEEE TMM, 2016
2015 and Before
Saliency-aware geodesic video object segmentation [pdf(researchgate)] [code]
W. Wang, J. Shen, and F. Porikli. CVPR, 2015
Consistent video saliency using local gradient flow optimization and global refinement [pdf(researchgate)] [code] [ViSal(dataset)]
W. Wang, J. Shen, and L. Shao. IEEE TIP, 2015
Robust video object co-segmentation [pdf(researchgate)] [code&VideoCoseg(dataset)]
W. Wang, J. Shen, X. Li, and F. Porikli. IEEE TIP, 2015