We present a new perspective on graph-based
methods for collaborative ranking for recommender
systems. Unlike user-based or itembased
methods that compute a weighted average
of ratings given by the nearest neighbors, or lowrank
approximation methods using convex optimization
and the nuclear norm, we formulate
matrix completion as a series of semi-supervised
learning problems, and propagate the known ratings
to the missing ones on the user-user or item-item
graph globally. The semi-supervised learning
problems are expressed as Laplace-Beltrami
equations on a manifold, i.e., as harmonic
extension, and can be discretized by the point integral
method. We show that our approach does
not impose a low-rank Euclidean subspace on the
data points, but instead minimizes the dimension
of the underlying manifold. Our method, named
LDM (low dimensional manifold), turns out to be
particularly effective in generating rankings of
items, showing decent computational efficiency
and robust ranking quality compared to state-of-the-art methods.
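
The harmonic-extension view above can be illustrated on a toy item-item graph: known ratings are held fixed and propagated to the missing entries by solving a graph Laplacian system. This is a minimal sketch with an assumed similarity matrix, not the authors' implementation:

```python
import numpy as np

# Hypothetical item-item similarity graph (symmetric weights W)
# and a rating vector with two observed and two missing entries.
W = np.array([
    [0.0, 0.8, 0.1, 0.0],
    [0.8, 0.0, 0.5, 0.2],
    [0.1, 0.5, 0.0, 0.9],
    [0.0, 0.2, 0.9, 0.0],
])
ratings = np.array([5.0, np.nan, np.nan, 1.0])  # NaN = missing

labeled = ~np.isnan(ratings)
unlabeled = ~labeled

# Graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W

# Harmonic extension: the completed ratings satisfy (L f)_u = 0 on the
# unlabeled nodes, so solve L_uu f_u = -L_ul f_l with f_l fixed.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ ratings[labeled])

completed = ratings.copy()
completed[unlabeled] = f_u
print(completed)  # missing entries filled in, known entries untouched
```

By the maximum principle, the propagated ratings stay within the range of the observed ones, which is one reason the harmonic-extension formulation behaves well for ranking.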
The Poisson equation on point clouds with a Dirichlet boundary condition plays an important
role in many problems. In this paper, we use the volume constraint proposed by Du et al. to handle
the Dirichlet boundary condition in the point integral method for the Poisson equation on point clouds. We
prove that the solution given by volume constraint converges to the true solution as the point cloud
converges to the underlying smooth manifold.
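
In rough terms (our notation, stated here as a sketch rather than the paper's exact formulation): instead of imposing the Dirichlet condition only on the boundary, the volume constraint of Du et al. imposes it on a thin volumetric collar that the nonlocal integral operator can see:

```latex
% Local Dirichlet problem on the manifold M:
%   -\Delta_M u = f \ \text{in } M, \qquad u = g \ \text{on } \partial M.
% Nonlocal / point-integral analogue with a volume constraint,
% where \mathcal{V}_\delta is a collar of width \delta around \partial M:
\begin{aligned}
  \mathcal{L}_t u(x) &= f(x), && x \in M \setminus \mathcal{V}_\delta, \\
  u(x) &= g(x),               && x \in \mathcal{V}_\delta .
\end{aligned}
```

Because the constraint holds on a set of positive volume, it is visible to the integral operator at scale $t$, which is what makes the convergence analysis to the local Dirichlet problem possible.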
The Laplace-Beltrami operator (LBO) is a fundamental object associated to Riemannian
manifolds, which encodes all the intrinsic geometry of the manifold and has many desirable
properties. Recently, we proposed a novel numerical method, the point integral method (PIM), to
discretize the Laplace-Beltrami operator on point clouds. In this paper, we analyze the
convergence of the point integral method (PIM) for the Poisson equation with Neumann boundary
condition on submanifolds isometrically embedded in Euclidean spaces.
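
The core of PIM is an integral approximation of the Laplace–Beltrami operator. In the notation we assume here ($R_t$ a kernel of width $t$ and $\bar R_t$ the associated antiderivative kernel), the approximation has the schematic form

```latex
\int_{\mathcal M} \Delta_{\mathcal M} u(y)\,\bar R_t(x,y)\,\mathrm{d}y
\;\approx\;
-\frac{1}{t}\int_{\mathcal M} \big(u(x)-u(y)\big)\,R_t(x,y)\,\mathrm{d}y
\;+\; 2\int_{\partial \mathcal M} \frac{\partial u}{\partial \mathbf n}(y)\,\bar R_t(x,y)\,\mathrm{d}\tau_y .
```

For a Neumann problem the boundary term is supplied directly by the boundary data, and discretizing both integrals as sums over the point cloud turns the Poisson equation into a linear system, with no mesh required.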
In this paper, we propose a novel low dimensional manifold model (LDMM) and
apply it to some image processing problems. LDMM is based on the fact that the patch manifolds
of many natural images have low dimensional structure. Based on this fact, the dimension of the
patch manifold is used as a regularizer to recover the image. The key step in LDMM is to solve
a Laplace-Beltrami equation over a point cloud, which is done with the point integral method. The
point integral method enforces the sample point constraints correctly and gives better results than the
standard graph Laplacian. Numerical simulations in image denoising, inpainting and super-resolution
problems show that LDMM is a powerful method in image processing.
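
A useful identity behind LDMM, stated here in the form we understand it: for a smooth $k$-dimensional manifold $\mathcal M \subset \mathbb R^d$ with coordinate functions $\alpha_i(x) = x_i$,

```latex
\dim\big(\mathcal M\big) \;=\; \sum_{i=1}^{d} \big\| \nabla_{\mathcal M}\, \alpha_i(x) \big\|^2 ,
\qquad x \in \mathcal M .
```

Penalizing the manifold dimension therefore reduces to penalizing the Dirichlet energies of the coordinate functions, and the Euler–Lagrange equations of that penalty are exactly the Laplace–Beltrami equations that PIM is used to solve.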
We present the application of a low dimensional manifold model (LDMM) on hyperspectral
image (HSI) reconstruction. An important property of hyperspectral images is that the
patch manifold, which is sampled by the three-dimensional blocks in the data cube, is generally
low-dimensional. This generalizes low-rank models, in that hyperspectral images
with nonlinear mixing terms also fit into the framework. The point integral method (PIM) is used
to solve a Laplace-Beltrami equation over a point cloud sampling the patch manifold in LDMM.
Both numerical simulations and theoretical analysis show that the sample point constraint is correctly
enforced by PIM. The framework is demonstrated by experiments on the reconstruction of
both linear and nonlinear mixed hyperspectral images with a significant number of missing voxels
and several entirely missing spectral bands.
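
As a concrete picture of the point cloud sampling the patch manifold, each small three-dimensional block of the data cube can be flattened into one point. This is an illustrative sketch with an assumed block size and stride, not the authors' code:

```python
import numpy as np

def block_cloud(cube, s=4, b=4, stride=2):
    """Flatten each s x s x b block of an HSI cube of shape
    (height, width, bands) into one point of the patch point cloud.
    Illustrative helper; block size and stride are assumptions."""
    H, W, B = cube.shape
    points = [
        cube[i:i + s, j:j + s, k:k + b].ravel()
        for i in range(0, H - s + 1, stride)
        for j in range(0, W - s + 1, stride)
        for k in range(0, B - b + 1, b)
    ]
    return np.stack(points)

rng = np.random.default_rng(0)
hsi = rng.random((16, 16, 8))   # toy data cube
P = block_cloud(hsi)
print(P.shape)  # one row per 3-D block; each row is a point in R^(s*s*b)
```

Missing voxels simply become missing coordinates of these points, which is why the reconstruction can be phrased as completing a point cloud on a low-dimensional manifold.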
In this paper, we formulate the deep residual network (ResNet) as a control problem for a transport equation. In ResNet, the transport equation is solved along its characteristics. Based on this observation, deep neural networks are closely related to control problems for PDEs on manifolds. We propose several models based on the transport equation, the Hamilton-Jacobi equation, and the Fokker-Planck equation. The discretization of these PDEs on point clouds is also discussed.
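
The ResNet-as-characteristics observation can be sketched as follows (our notation): a residual block is a forward-Euler step of an ODE, and that ODE is the characteristic equation of a transport PDE,

```latex
x_{l+1} = x_l + f(x_l, \theta_l)
\quad\Longleftrightarrow\quad
\dot x(t) = v\big(x(t), t\big), \ \ \Delta t = 1 .
```

If $u$ solves the transport equation $\partial_t u + v(x,t)\cdot\nabla u = 0$, then along a characteristic $\frac{\mathrm d}{\mathrm dt}\,u(x(t),t) = \partial_t u + v\cdot\nabla u = 0$, so $u$ is constant along the flow: evaluating the trained network amounts to tracing a characteristic, and training the weights $\theta_l$ amounts to controlling the velocity field $v$.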
The p-th moment matrix is defined for a real random vector, generalizing the classical covariance matrix. Sharp inequalities relating the p-th moment and Rényi entropy are established, generalizing the classical inequality relating the second moment and the Shannon entropy. The extremal distributions for these inequalities are completely characterized.
The moment-entropy inequality shows that a continuous random variable with given second moment and maximal Shannon entropy must be Gaussian. Stam's inequality shows that a continuous random variable with given Fisher information and minimal Shannon entropy must also be Gaussian. The Cramér–Rao inequality is a direct consequence of these two inequalities.
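
For orientation, the classical one-dimensional statements are as follows, for a random variable with density, variance $\sigma^2$, Shannon entropy $h(X)$, and Fisher information $J(X)$:

```latex
\begin{aligned}
&\text{moment-entropy:} && h(X) \le \tfrac12 \log\!\big(2\pi e\,\sigma^2\big), \\
&\text{Stam:}           && J(X)\, e^{2h(X)} \ge 2\pi e, \\
&\text{Cram\'er--Rao:}  && \sigma^2 J(X) \ge 1,
\end{aligned}
```

with equality in the first two exactly for Gaussians. Combining them gives the third: Stam yields $J(X) \ge 2\pi e\, e^{-2h(X)}$, and the moment-entropy inequality yields $e^{-2h(X)} \ge 1/(2\pi e\,\sigma^2)$, hence $J(X) \ge 1/\sigma^2$.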
In this paper, the inequalities above are extended to Rényi entropy, p-th moment, and generalized Fisher information. Generalized Gaussian random densities are introduced and shown to be the extremal densities for the new inequalities. An extension of the Cramér–Rao inequality is derived as a consequence of these moment and Fisher information inequalities.
We explain how the classical notions of Fisher information of a random variable and Fisher information matrix of a random vector can be extended to a much broader setting. We also show that Stam's inequality for Fisher information and Shannon entropy, as well as the more generalized versions proved earlier by the authors, are all special cases of more general sharp inequalities satisfied by random vectors. The extremal random vectors, which we call generalized Gaussians, contain Gaussians as a limiting case but are noteworthy because they are heavy-tailed.