Tianyuan Mathematical Center in Central China Academic Lecture | Prof. 陈昱鑫 (University of Pennsylvania)

Posted: 2024-06-12 09:10

Title: Taming Nonconvexity in Tensor Completion

Time: 2024-06-18, 16:00-17:00

Speaker: Prof. 陈昱鑫, University of Pennsylvania

Venue: Lecture Hall (Room 209), 2nd Floor, Northeast Building, School of Sciences

Abstract: Recent years have witnessed a flurry of activity in solving statistical estimation and learning problems via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple first-order optimization methods have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. This talk explores the effectiveness of nonconvex optimization for noisy tensor completion: the problem of reconstructing a low-CP-rank tensor from highly incomplete and randomly corrupted observations of its entries. While randomly initialized gradient descent suffers from a high-volatility issue in the sample-starved regime, we propose a two-stage nonconvex algorithm that is guaranteed to succeed, enabling linear convergence, minimal sample complexity, and minimax statistical accuracy all at once. In addition, we characterize the distribution of this nonconvex estimator down to fine scales, which in turn allows one to construct entrywise confidence intervals for both the unseen tensor entries and the unknown tensor factors. Our findings reflect the important role of statistical models in enabling efficient and guaranteed nonconvex statistical learning.
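To fix ideas, the following is a minimal sketch of a standard nonconvex formulation of noisy low-CP-rank tensor completion, assuming for illustration a symmetric order-3 tensor; the precise model and algorithm presented in the talk may differ. One observes noisy entries of $T^\star = \sum_{i=1}^{r} u_i^\star \otimes u_i^\star \otimes u_i^\star \in \mathbb{R}^{d \times d \times d}$ over a random index set $\Omega$,
$$ T_{jkl} = T^\star_{jkl} + E_{jkl}, \qquad (j,k,l) \in \Omega, $$
and estimates the CP factors $U = [u_1, \ldots, u_r] \in \mathbb{R}^{d \times r}$ by running gradient descent, after a suitable (e.g. spectral) initialization, on the nonconvex least-squares loss
$$ f(U) = \sum_{(j,k,l) \in \Omega} \Bigl( \sum_{i=1}^{r} U_{ji} U_{ki} U_{li} - T_{jkl} \Bigr)^{2}. $$
The two stages referenced in the abstract correspond to such an initialization step followed by iterative local refinement.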