Juyong Zhang (University of Science and Technology of China), Bailin Deng (Cardiff University), Yang Hong (University of Science and Technology of China), Yue Peng (University of Science and Technology of China), Wenjie Qin (University of Science and Technology of China), Ligang Liu (University of Science and Technology of China)
Geometric Modeling and Processing (mathscidoc:2012.16002)
IEEE Transactions on Visualization and Computer Graphics, 25(4), April 2019
The joint bilateral filter, which enables feature-preserving signal smoothing using structural information from a guidance signal, has been applied to various tasks in geometry processing. Existing methods rely either on a static guidance, which may be inconsistent with the input and lead to unsatisfactory results, or on a dynamic guidance, which is automatically updated but sensitive to noise and outliers. Inspired by recent advances in image filtering, we propose a new geometry filtering technique called the static/dynamic filter, which utilizes both static and dynamic guidances to achieve state-of-the-art results. The proposed filter is based on a nonlinear optimization that enforces smoothness of the signal while preserving variations that correspond to features of certain scales. We develop an efficient iterative solver for the problem, which unifies existing filters based on static or dynamic guidances. The filter can be applied to mesh face normals, followed by a vertex position update, to achieve scale-aware and feature-preserving filtering of mesh geometry. It also works well for other types of signals defined on mesh surfaces, such as texture colors. Extensive experimental results demonstrate the effectiveness of the proposed filter for various geometry processing applications such as mesh denoising, geometry feature enhancement, and texture color filtering.
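As an illustration only (not the authors' implementation, which operates on mesh face normals), a minimal 1-D joint bilateral filter in Python shows how a guidance signal steers the range weights; the function name and parameter values are hypothetical:

```python
import numpy as np

def joint_bilateral_1d(signal, guidance, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Minimal 1-D joint bilateral filter: spatial weights come from
    sample distance, range weights from the *guidance* signal, and the
    resulting weighted average is applied to `signal`."""
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
             * np.exp(-((guidance[idx] - guidance[i]) ** 2) / (2 * sigma_r ** 2)))
        out[i] = np.dot(w, signal[idx]) / w.sum()
    return out
```

With a guidance containing a sharp step, the range weights suppress averaging across the step, so the discontinuity is preserved while each side is smoothed.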
Yue Peng (University of Science and Technology of China), Bailin Deng (Cardiff University), Juyong Zhang (University of Science and Technology of China), Fanyu Geng (University of Science and Technology of China), Wenjie Qin (University of Science and Technology of China), Ligang Liu (University of Science and Technology of China)
Geometric Modeling and Processing (mathscidoc:2012.16001)
ACM Transactions on Graphics (SIGGRAPH), 37(4), Article 42, August 2018
Many computer graphics problems require computing geometric shapes subject to certain constraints. This often results in non-linear and non-convex optimization problems with globally coupled variables, which pose great challenges for interactive applications. Local-global solvers developed in recent years can quickly compute an approximate solution to such problems, making them an attractive choice for applications that prioritize efficiency over accuracy. However, these solvers suffer from a lower convergence rate, and may take a long time to compute an accurate result. In this paper, we propose a simple and effective technique to accelerate the convergence of such solvers. By treating each local-global step as a fixed-point iteration, we apply Anderson acceleration, a well-established technique for fixed-point solvers, to speed up the convergence of a local-global solver. To address the stability issue of classical Anderson acceleration, we propose a simple strategy to guarantee the decrease of the target energy and ensure its global convergence. In addition, we analyze the connection between Anderson acceleration and quasi-Newton methods, and show that the canonical choice of its mixing parameter is suitable for accelerating local-global solvers. Moreover, our technique is effective beyond classical local-global solvers, and can be applied to iterative methods with a common structure. We evaluate the performance of our technique on a variety of geometry optimization and physics simulation problems. Our approach significantly reduces the number of iterations required to compute an accurate result, with only a slight increase of computational cost per iteration. Its simplicity and effectiveness make it a promising tool for accelerating existing algorithms as well as designing efficient new algorithms.
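As a sketch of the core idea (not the paper's code), the following Python applies classical Anderson acceleration with mixing parameter β = 1 to a generic fixed-point map g; all names are hypothetical, and the paper's energy-decrease safeguard is omitted:

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, iters=50, tol=1e-10):
    """Anderson acceleration of the fixed-point iteration x <- g(x).

    Keeps the last m residuals f_k = g(x_k) - x_k and combines the
    corresponding g-values by a least-squares fit of the residuals."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    X, F = [x], [gx - x]                 # histories of iterates and residuals
    x = gx
    for _ in range(iters):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:      # converged to a fixed point
            break
        X.append(x); F.append(f)
        X, F = X[-(m + 1):], F[-(m + 1):]
        dF = np.diff(np.array(F), axis=0).T                      # residual differences
        dG = np.diff(np.array([xi + fi for xi, fi in zip(X, F)]), axis=0).T
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)           # least-squares mix
        x = gx - dG @ gamma              # accelerated update (beta = 1)
    return x
```

On a simple contraction such as g(x) = cos(x), this reaches the fixed point in far fewer iterations than plain Picard iteration.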
In this paper, we develop a novel blind source separation (BSS) method for nonnegative and correlated data, particularly for nearly degenerate data. The motivation lies in nuclear magnetic resonance (NMR) spectroscopy, where multiple NMR spectra of mixtures are recorded to identify chemical compounds with similar structures (degeneracy).
Image matching is a fundamental problem in computer vision. One of the well-known techniques is SIFT (scale-invariant feature transform). SIFT searches for and extracts robust features in hierarchical image scale spaces for object identification. However, it often lacks efficiency, as it identifies many insignificant features, such as tree leaves and grass tips, in a natural building image. We introduce a content-adaptive image matching approach that preprocesses the image with a color-entropy-based segmentation and harmonic inpainting. Natural structures such as tree leaves have both high entropy and distinctive color, so the combined measure can be both discriminative and robust. The harmonic inpainting smoothly interpolates the image function over the tree regions, blurring those areas and thereby reducing the features and unnecessary matching there. Numerical experiments on building images show
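As a sketch of the inpainting step only (the full pipeline also involves the color-entropy segmentation and SIFT matching), harmonic inpainting can be approximated by relaxing the Laplace equation inside the masked region; the function name and iteration count are illustrative:

```python
import numpy as np

def harmonic_inpaint(img, mask, iters=500):
    """Fill masked pixels by Jacobi iterations on the Laplace equation,
    so the hole is interpolated smoothly from its known boundary."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()            # rough initial guess in the hole
    for _ in range(iters):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]                # relax only the unknown pixels
    return out
```

Because harmonic interpolation has no local extrema inside the hole, the filled region is smooth and featureless, which is exactly what suppresses spurious SIFT keypoints there.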
Rapid and reliable detection and identification of unknown chemical substances are critical to homeland security. It is challenging to identify chemical components from a wide range of explosives. There are two key steps involved. One is a non-destructive and informative spectroscopic technique for data acquisition. The other is an associated library of reference features along with a computational method for feature matching and meaningful detection within or beyond the library.
The human eye can perceive a vast array of colors. Whether light or dark, the colors our eyes can see seem unlimited. However, this is not the case. In reality, every image has two descriptors: a reflectance and an illumination. While the reflectance shows an image's true color, the illumination is what causes colors to appear different to the human eye. This effect, originally discovered by Helmholtz, is known as color constancy. Color constancy ensures that the color the human visual system (HVS) perceives is the true color of the image, regardless of illumination. Building on this effect, in 1971, Land and McCann created the Retinex theory. Using the pixels of the image, Land tried to estimate the reflectance values and thus reveal the true color of the image. The theory was essentially a color constancy algorithm that tried to explain why colors look different under different lighting. By computing over the pixels, Land was able to depict the sameness in a gradient of colors in an image. However, the algorithm is both inefficient and complicated. Following in their footsteps, many others have formulated new algorithms around Land's original Retinex algorithm. In this paper, different methods such as least squares and the discrete cosine transform are explained, as well as how to enhance images using both Land's idea and histogram equalization.
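As a minimal illustration of the histogram equalization mentioned above (not the paper's specific enhancement method), assuming an 8-bit grayscale image:

```python
import numpy as np

def equalize(img):
    """Histogram equalization of an 8-bit grayscale image: map each
    intensity through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    lut = np.floor(255 * cdf / cdf[-1]).astype(np.uint8)  # lookup table
    return lut[img]
```

The mapping stretches heavily populated intensity ranges apart, which is why equalization complements a Retinex-style reflectance estimate: one fixes the illumination bias, the other the contrast.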
Face tracking is an important computer vision technology that has been widely adopted in many areas, from cell phone applications to industrial robots. In this paper, we introduce a novel way to parallelize a face contour detection application based on the color-entropy preprocessed Chan-Vese model with a total variation G-norm. This application is a complicated, unsupervised computational method requiring a large amount of computation. Several core parts are difficult to parallelize due to heavily correlated data processing among iterations and pixels.
Speech signals are often produced or received in the presence of noise, which is known to degrade the performance of a speech recognition system. In this paper, a perception- and PDE-based nonlinear transformation was developed to process spoken words in noisy environments. Our goal is to distinguish essential speech features and suppress noise so that the processed words are better recognized by recognition software. The nonlinear transformation was applied to the spectrogram (short-term Fourier spectra) of the speech signal, which reveals the signal's energy distribution in time and frequency. The transformation reduces noise through time adaptation (attenuating temporally slowly varying portions of the spectra) and enhances spectral peaks (formants) by evolving a focusing quadratic fourth-order PDE. Short-term spectra of the speech signals were initially divided into three (low, mid, and high) frequency bands based
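As an illustrative sketch (not the paper's transformation itself), the spectrogram that the method operates on can be computed with a windowed FFT; the function name and window/hop sizes are assumptions:

```python
import numpy as np

def spectrogram(x, win=256, hop=128):
    """Power short-term Fourier spectra: the time-frequency energy map
    on which a transformation like the one above would operate."""
    w = np.hanning(win)                       # taper to reduce spectral leakage
    frames = np.array([x[i:i + win] * w
                       for i in range(0, len(x) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2   # one spectrum per frame
```

Each row is the energy spectrum of one short frame, so formant peaks appear as ridges along the time axis that a PDE-based sharpening can enhance.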
A model-based sound amplification method is proposed and studied to enhance hearing for the hearing impaired. The model consists of mechanical equations on the basilar membrane and the outer hair cells (OHCs). The OHC is described by a nonlinear, nonlocal feedforward model. In addition, a perceptive correction is defined to account for the lumped effect of higher-level auditory processing, motivated by the intelligibility function of the hearing impaired. The gain functions are computed by matching the impaired model output to the perceptively weighted normal output, and qualitative agreement with the NAL-NL1 prescription is achieved on clean signals. For noisy signals, an adaptive gain strategy is proposed based on the signal-to-noise ratios (SNRs) computed by the model. The adaptive gain functions provide less gain as SNRs decrease, so that intelligibility can be higher with the adaptivity.
A nonlocally weighted, soft-constrained natural gradient iterative method is introduced for robust blind separation in reverberant environments. The nonlocal weighting of the iterations promotes stability and convergence of the algorithm for long demixing filters. The scaling degree of freedom is controlled by soft constraints built into the auxiliary difference equations. The small-divisor problem of the iterations during silent portions of speech is resolved. Computations on synthetic speech mixtures based on measured binaural room impulse responses show that the algorithm achieves a higher signal-to-interference ratio improvement than an existing method (the natural gradient time-domain algorithm) in an office-sized room with a reverberation time over 0.5 seconds.
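For orientation only, the classical instantaneous (non-reverberant) natural-gradient BSS update that this method extends with long demixing filters, nonlocal weighting, and soft constraints can be sketched as follows; the names and the tanh score function (a common choice for super-Gaussian signals such as speech) are illustrative:

```python
import numpy as np

def natural_gradient_step(W, x, mu=0.05):
    """One batch natural-gradient update for instantaneous blind source
    separation: W <- W + mu * (I - f(y) y^T / T) W, with f = tanh."""
    y = W @ x                                  # current source estimates
    n, T = y.shape
    grad = np.eye(n) - (np.tanh(y) @ y.T) / T  # deviation from independence
    return W + mu * grad @ W                   # natural-gradient direction
```

Iterating this update on an instantaneous mixture drives the outputs toward statistical independence; the reverberant case replaces the matrix W with a bank of demixing filters, which is where the stability issues addressed above arise.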