【Source】: 2016 International Conference on Tensor and Matrix
Paper excerpt:
Computing a few eigenpairs of large-scale symmetric eigenvalue problems is far beyond the reach of classic eigensolvers when storing the eigenvectors in the classical way is impossible. We consider a tractable case in which both the coefficient matrix and its eigenvectors can be represented in the low-rank tensor train (TT) format. We propose a subspace optimization method combined with suitable truncation steps applied to the given low-rank TT representations. Its performance can be further improved if the alternating minimization method is used to refine the intermediate solutions locally. Preliminary numerical experiments show that our algorithm is competitive with state-of-the-art methods on problems arising from the discretization of the stationary Schrödinger equation.
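The overall scheme the abstract describes (enrich a subspace, truncate it back to low rank, then extract Ritz approximations) can be sketched schematically. In this minimal sketch, dense numpy arrays stand in for the paper's low-rank tensor-train representations, and the SVD-based truncation only loosely mirrors TT-rank truncation; the function name, its parameters, and the test matrix are illustrative assumptions, not the authors' algorithm or code.

```python
# Schematic sketch of subspace optimization with truncation for a few
# smallest eigenpairs of a symmetric matrix. Dense numpy arrays stand in
# for the paper's low-rank tensor-train (TT) representations; the
# SVD-based truncation only loosely mirrors TT rank truncation.
import numpy as np

def subspace_eigensolver(A, k, iters=50, max_rank=None):
    """Approximate the k smallest eigenpairs of the symmetric matrix A."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    X = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal start
    w = np.zeros(k)
    for _ in range(iters):
        H = X.T @ A @ X
        R = A @ X - X @ H              # block residual directions
        S = np.hstack([X, R])          # enriched subspace basis
        # Truncation step: keep only the leading singular directions
        # (in the TT setting this would be a TT-rank truncation).
        U, _, _ = np.linalg.svd(S, full_matrices=False)
        Q = U[:, : (max_rank or S.shape[1])]
        # Rayleigh-Ritz projection onto the truncated subspace
        w_all, V = np.linalg.eigh(Q.T @ A @ Q)
        w, X = w_all[:k], Q @ V[:, :k]
    return w, X

# Usage: a symmetric test matrix whose three smallest eigenvalues
# (1, 2, 3) are well separated from the rest of the spectrum.
rng = np.random.default_rng(1)
Qr, _ = np.linalg.qr(rng.standard_normal((100, 100)))
lams = np.concatenate([[1.0, 2.0, 3.0], np.linspace(10.0, 20.0, 97)])
A = (Qr * lams) @ Qr.T
vals, vecs = subspace_eigensolver(A, k=3)
# vals ≈ [1., 2., 3.]
```

The alternating-minimization refinement the abstract mentions would operate inside the TT representation itself (sweeping over TT cores), which this dense stand-in cannot show.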