In this paper, we develop a novel blind source separation (BSS) method for nonnegative and correlated data, particularly for nearly degenerate data. The motivation lies in nuclear magnetic resonance (NMR) spectroscopy, where multiple mixture NMR spectra are recorded to identify chemical compounds with similar structures (degeneracy).
Image matching is a fundamental problem in computer vision. One of the well-known techniques is SIFT (scale-invariant feature transform). SIFT searches for and extracts robust features in hierarchical image scale spaces for object identification. However, it often lacks efficiency because it identifies many insignificant features, such as tree leaves and grass tips, in a natural building image. We introduce a content-adaptive image matching approach that preprocesses the image with a color-entropy-based segmentation and harmonic inpainting. Natural structures such as tree leaves have both high entropy and distinctive color, so the combined measure can be both discriminative and robust. The harmonic inpainting smoothly interpolates the image functions over the tree regions, which blurs and reduces the features and their unnecessary matching there. Numerical experiments on building images show
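As a rough illustration of the preprocessing described in this abstract, the sketch below combines a windowed-entropy measure with a simple green-dominance test to flag vegetation-like regions, then fills those regions by harmonic (Laplace) interpolation. The window size, bin count, thresholds, and periodic boundary handling are illustrative assumptions for a minimal sketch, not the paper's actual parameters.

```python
import numpy as np

def local_entropy(gray, win=7):
    """Shannon entropy of intensity values in a sliding window
    (a hypothetical stand-in for the paper's entropy measure)."""
    pad = win // 2
    g = np.pad((gray * 15).astype(int), pad, mode='edge')  # 16 intensity bins
    H, W = gray.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = g[i:i + win, j:j + win].ravel()
            p = np.bincount(patch, minlength=16) / patch.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

def vegetation_mask(rgb, ent_thresh=2.0, green_thresh=0.1):
    """Flag pixels that are both high-entropy and green-dominant."""
    ent = local_entropy(rgb.mean(axis=2))
    greenness = rgb[..., 1] - 0.5 * (rgb[..., 0] + rgb[..., 2])
    return (ent > ent_thresh) & (greenness > green_thresh)

def harmonic_inpaint(gray, mask, iters=500):
    """Solve Laplace's equation on the masked region by Jacobi iteration,
    i.e. replace masked pixels with the harmonic interpolant of their
    surroundings (periodic wrap-around at borders, for brevity)."""
    u = gray.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]
    return u
```

By the maximum principle, the inpainted values stay within the range of the surrounding pixels, which is what makes the filled regions smooth and nearly featureless for the subsequent SIFT matching.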
Rapid and reliable detection and identification of unknown chemical substances are critical to homeland security. It is challenging to identify chemical components from a wide range of explosives. There are two key steps involved. One is a non-destructive and informative spectroscopic technique for data acquisition. The other is an associated library of reference features along with a computational method for feature matching and meaningful detection within or beyond the library.
The human eye can perceive a vast array of colors. Whether light or dark, the colors our eyes can see may seem unlimited, but this is not the case. In reality, every image has two descriptors: a reflectance and an illumination. While the reflectance shows an image's true color, the illumination is what causes colors to appear different to the human eye. This effect, originally discovered by Helmholtz, is known as color constancy. Color constancy ensures that the color the human visual system (HVS) perceives is the true color of a scene, regardless of illumination. Building on this effect, in 1971 Land and McCann proposed the Retinex theory. Using the pixel values of the image, Land tried to estimate the reflectances and thus reveal the true colors of the image. The theory is essentially a color constancy algorithm that explains why colors look different under different lighting. By processing the pixel values, Land was able to depict the sameness in a gradient of colors in an image. However, the original algorithm is both inefficient and complicated. Following in their footsteps, many others have formulated new algorithms around Land's original Retinex algorithm. In this paper, methods such as least squares and the discrete cosine transform are explained, along with how to enhance images using both Land's idea and histogram equalization.
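A minimal sketch of the Retinex idea, under common simplifying assumptions: the illumination is estimated by Gaussian smoothing (single-scale Retinex), and the log reflectance is the difference of the log image and the log illumination. Histogram equalization is included as the complementary enhancement mentioned above. The smoothing scale and bin count are illustrative, not the paper's choices.

```python
import numpy as np

def gaussian_blur(img, sigma=3.0):
    """Separable Gaussian blur, used here as a crude illumination estimate."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, tmp)

def single_scale_retinex(img, sigma=3.0):
    """log reflectance = log image - log smoothed illumination."""
    eps = 1e-6
    return np.log(img + eps) - np.log(gaussian_blur(img, sigma) + eps)

def hist_equalize(img, bins=256):
    """Classic histogram equalization for an image with values in [0, 1]."""
    h, edges = np.histogram(img.ravel(), bins=bins, range=(0, 1))
    cdf = h.cumsum() / img.size
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)
```

For a uniformly lit flat patch the estimated log reflectance is zero away from the borders, which matches the intuition that a constant image carries no reflectance detail.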
Face tracking is an important computer vision technology that has been widely adopted in many areas, from cell phone applications to industrial robots. In this paper, we introduce a novel way to parallelize a face contour detection application based on the color-entropy-preprocessed Chan-Vese model utilizing a total variation G-norm. This particular application is a complicated, unsupervised computational method requiring a large amount of computation. Several core parts are difficult to parallelize due to heavily correlated data processing across iterations and pixels.
Speech signals are often produced or received in the presence of noise, which is known to degrade the performance of a speech recognition system. In this paper, a perception- and PDE-based nonlinear transformation was developed to process spoken words in noisy environments. Our goal is to distinguish essential speech features and suppress noise so that the processed words are better recognized by speech recognition software. The nonlinear transformation was applied to the spectrogram (short-term Fourier spectra) of speech signals, which reveals the signal energy distribution in time and frequency. The transformation reduces noise through time adaptation (attenuating temporally slowly varying portions of the spectra) and enhances spectral peaks (formants) by evolving a focusing quadratic fourth-order PDE. Short-term spectra of speech signals were initially divided into three (low, mid and high) frequency bands based
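The time-adaptation step can be sketched as follows: compute the magnitude spectrogram, then subtract a running temporal mean in the log domain so that slowly varying (noise-like) content is suppressed while fast-varying speech structure survives. The window, hop, and averaging lengths are illustrative assumptions, and the fourth-order PDE formant-enhancement step is omitted here.

```python
import numpy as np

def stft_mag(x, win=256, hop=128):
    """Magnitude short-term Fourier spectra (spectrogram), freq x time."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def time_adapt(spec, avg_frames=20, floor=0.0):
    """Suppress temporally slowly varying spectral content by subtracting
    a running temporal mean of the log spectrogram; stationary noise is
    attenuated while rapidly varying speech features are kept."""
    logS = np.log(spec + 1e-8)
    kernel = np.ones(avg_frames) / avg_frames
    slow = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'), 1, logS)
    return np.maximum(logS - slow, floor)
```

A perfectly stationary spectrogram is mapped to (near) zero in the interior, which is the desired behavior for steady background noise.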
A model-based sound amplification method is proposed and studied to enhance hearing for the hearing impaired. The model consists of mechanical equations for the basilar membrane and the outer hair cells (OHCs). The OHC is described by a nonlinear, nonlocal feedforward model. In addition, a perceptive correction is defined to account for the lumped effect of higher-level auditory processing, motivated by the intelligibility function of the hearing impaired. The gain functions are computed by matching the impaired model output to the perceptively weighted normal output, and qualitative agreement is achieved with the NAL-NL1 prescription on clean signals. For noisy signals, an adaptive gain strategy is proposed based on signal-to-noise ratios (SNRs) computed by the model. The adaptive gain functions provide less gain as SNRs decrease, so that intelligibility can be higher with the adaptivity.
A nonlocally weighted, soft-constrained natural gradient iterative method is introduced for robust blind separation in reverberant environments. The nonlocal weighting of the iterations promotes stability and convergence of the algorithm for long demixing filters. The scaling degree of freedom is controlled by soft constraints built into the auxiliary difference equations. The small-divisor problem of the iterations during silent durations of speech is resolved. Computations on synthetic speech mixtures based on measured binaural room impulse responses show that the algorithm achieves higher signal-to-interference ratio improvement than the existing method (the natural gradient time-domain algorithm) in an office-size room with a reverberation time over 0.5 seconds.
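For readers unfamiliar with the baseline, here is the classical instantaneous natural-gradient ICA update (Amari's rule) that the convolutive time-domain algorithm generalizes. This is a simplified stand-in only: it handles instantaneous (not reverberant) mixing and omits the nonlocal weighting and soft constraints of the paper's method. The step size, iteration count, and tanh score function are conventional assumptions.

```python
import numpy as np

def natural_gradient_ica(X, mu=0.01, iters=2000, seed=0):
    """Batch natural-gradient ICA for instantaneous mixtures X = A S.
    Update: W <- W + mu * (I - f(Y) Y^T / n) W, with f = tanh, a common
    score function for super-Gaussian (speech-like) sources."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = np.eye(m) + 0.01 * rng.standard_normal((m, m))
    for _ in range(iters):
        Y = W @ X
        f = np.tanh(Y)
        W += (mu / n) * (np.eye(m) - f @ Y.T) @ W  # natural gradient step
    return W
```

The natural gradient multiplies the ordinary gradient by W^T W, which removes the matrix inversion of standard gradient ICA and is what the paper's iterative scheme builds on.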
Given a set of mixtures, blind source separation attempts to retrieve the source signals with little or no information about the mixing process. We present a geometric approach for blind separation of nonnegative linear mixtures termed <i>facet component analysis</i>. The approach is based on facet identification of the underlying cone structure of the data. Earlier works focus on recovering the cone by locating its vertices (vertex component analysis) based on a mutual sparsity condition, which requires each source signal to possess a stand-alone peak in its spectrum. We formulate alternative conditions so that enough data points fall on the facets of a cone instead of accumulating around the vertices. To find a regime of unique solvability, we make use of both geometric and density properties of the data points and develop an efficient facet identification method by combining data classification and linear
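The cone geometry behind this abstract can be illustrated with a small synthetic example: mixture points with one source coefficient equal to zero lie on a facet of the cone spanned by the mixing matrix columns, each facet normal is recoverable by SVD, and each cone edge (mixing column) is the intersection of two facets. The mixing matrix and the oracle grouping of points by facet are assumptions of this toy demo; in the paper the grouping is found by the data classification step.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.2],
              [0.2, 0.1, 1.0]])  # assumed mixing matrix; its columns are the cone edges

# Points on facet k: source coordinate k is zero, so A @ S lies in the
# span of the other two columns of A.
facet_pts = []
for k in range(3):
    S = rng.random((3, 200))
    S[k] = 0.0
    facet_pts.append(A @ S)

# The facet normal is the left-singular vector with (near-)zero singular value.
normals = [np.linalg.svd(X)[0][:, -1] for X in facet_pts]

# Cone edge k lies on both facets j != k, so it is recovered (up to sign)
# as the cross product of those facets' normals.
est = np.stack(
    [np.cross(normals[(k + 1) % 3], normals[(k + 2) % 3]) for k in range(3)],
    axis=1)
est /= np.linalg.norm(est, axis=0)
```

Locating facets rather than vertices is the point of the approach: the facet normals are estimable as long as enough points fall on each facet, with no stand-alone spectral peaks required.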
Motivated by the nuclear magnetic resonance (NMR) spectroscopy of biofluids (urine and blood serum), we present a recursive blind source separation (rBSS) method for nonnegative and correlated data. The BSS problem arises when one attempts to recover a set of source signals from a set of mixture signals without knowing the mixing process. Various approaches have been developed to solve BSS problems, relying on the assumption of statistical independence of the source signals. However, independence is not guaranteed in many real-world data sets, such as the NMR spectra of chemical compounds. The rBSS method introduced in this paper deals with the nonnegative and correlated signals arising in NMR spectroscopy of biofluids. The statistical independence requirement is replaced by a constraint which requires dominant interval(s) from each source signal over some of the other source signals in a
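The dominant-interval idea can be sketched on a two-source toy problem: over an interval where one source dominates, the mixture data points are nearly collinear with that source's mixing column, so the column is estimable by a rank-one (SVD) fit on that interval. The mixing matrix, source shapes, and interval locations below are assumptions of this sketch; the paper's method detects the intervals and proceeds recursively rather than taking them as known.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
# Two correlated nonnegative sources: Gaussian peaks plus shared baseline noise
s1 = np.zeros(n); s1[50:120] = np.exp(-0.5 * ((np.arange(50, 120) - 85) / 8.0) ** 2)
s2 = np.zeros(n); s2[250:320] = np.exp(-0.5 * ((np.arange(250, 320) - 285) / 8.0) ** 2)
s1 += 0.05 * rng.random(n)
s2 += 0.05 * rng.random(n)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])  # assumed mixing matrix
X = A @ S

def dominant_column(X, idx):
    """Estimate one mixing column from a dominant interval, where the data
    points are nearly collinear with that column."""
    U, _, _ = np.linalg.svd(X[:, idx])
    a = U[:, 0]
    return a * np.sign(a.sum())  # fix sign: data are nonnegative

a1 = dominant_column(X, slice(50, 120))    # interval where source 1 dominates
a2 = dominant_column(X, slice(250, 320))   # interval where source 2 dominates
A_est = np.column_stack([a1, a2])
S_est = np.maximum(np.linalg.solve(A_est, X), 0.0)  # nonnegative source estimates
```

Unlike independence-based methods, nothing here requires the sources to be uncorrelated; only the existence of intervals of dominance is used.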