Robust Low-Rank Tensor Completion Based on Tensor Ring Rank via the ℓpϵ-Norm
Tensor completion aims to recover the missing entries of incomplete multi-dimensional data by exploiting prior low-rank information, and it has numerous applications because many real-world data can be modeled as low-rank tensors. Most existing methods are designed for noiseless or Gaussian-noise scenarios and are therefore not robust to outliers. One popular approach to resisting outliers is to employ the ℓp-norm, yet the nonsmoothness and nonconvexity of the ℓp-norm with 0 < p ≤ 1 bring challenges to optimization. In this paper, a new norm, named the ℓpϵ-norm, is devised, where ϵ > 0 adjusts its convexity. Compared with the ℓp-norm, the ℓpϵ-norm is smooth and convex even for 0 < p ≤ 1, which converts an intractable nonsmooth and nonconvex optimization problem into a much simpler convex and smooth one. Combining the tensor ring rank and the ℓpϵ-norm, a robust tensor completion formulation is then proposed, which achieves outstanding robustness. The resultant robust tensor completion problem is decomposed into a number of robust linear regression (RLR) subproblems, and two algorithms are devised to tackle RLR: the first adopts gradient descent, which has a low computational complexity, while the second employs the alternating direction method of multipliers (ADMM) to attain a fast convergence rate. Numerical simulations show that the two proposed methods outperform ℓp-norm-based RLR methods. Experimental results on image inpainting, video restoration and target estimation demonstrate that our robust tensor completion approach outperforms state-of-the-art methods in terms of recovery accuracy.
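To illustrate the gradient-descent RLR subproblem described above, the sketch below solves a robust linear regression with a smoothed ℓp-type loss. The abstract does not give the exact definition of the ℓpϵ-norm, so the form used here, Σᵢ (rᵢ² + ϵ)^(p/2), is only an assumed smooth surrogate for the ℓp-norm, and the step size, iteration count, and data are illustrative choices, not the paper's settings.

```python
import numpy as np

def smoothed_lp_grad(r, p=1.0, eps=1e-2):
    # Gradient of the assumed smoothed loss sum_i (r_i^2 + eps)^(p/2)
    # with respect to the residual vector r; eps > 0 removes the
    # nonsmoothness of |r|^p at the origin.
    return p * r * (r**2 + eps) ** (p / 2 - 1)

def robust_linear_regression(A, b, p=1.0, eps=1e-2, lr=1e-3, iters=3000):
    # Plain gradient descent on the smoothed robust loss ||Ax - b||,
    # measured by the assumed l_p,eps surrogate; low per-iteration cost
    # (one matrix-vector product with A and one with A^T).
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ x - b
        x -= lr * (A.T @ smoothed_lp_grad(r, p, eps))
    return x

# Usage: regression data contaminated by sparse, large outliers.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(100)
b[::10] += 10.0  # every tenth measurement is a gross outlier
x_hat = robust_linear_regression(A, b, p=1.0)
```

Because the loss grows only like |r|^p for large residuals, the outliers receive bounded gradient weight, so the estimate stays close to x_true where an ordinary least-squares fit would be pulled away.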
ieeexplore.ieee.org