In this paper, we develop a novel blind source separation (BSS) method for nonnegative and correlated data, particularly nearly degenerate data. The motivation lies in nuclear magnetic resonance (NMR) spectroscopy, where multiple mixture NMR spectra are recorded to identify chemical compounds with similar structures (degeneracy).
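To make the BSS setting concrete, the sketch below shows a generic nonnegative matrix factorization (NMF) baseline with Lee–Seung multiplicative updates, which factors mixtures X into nonnegative mixing weights A and source spectra S. This is a standard baseline for nonnegative BSS, not the novel method developed in the paper; the rank `r` and iteration count are illustrative assumptions.

```python
import numpy as np

def nmf_mu(X, r, iters=300, seed=0):
    """Baseline NMF via Lee-Seung multiplicative updates: X ~ A @ S,
    with A (mixing) and S (source spectra) kept entrywise nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    A = rng.random((m, r)) + 1e-3
    S = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        # Multiplicative updates preserve nonnegativity by construction
        S *= (A.T @ X) / (A.T @ A @ S + 1e-10)
        A *= (X @ S.T) / (A @ S @ S.T + 1e-10)
    return A, S
```

Plain NMF has no mechanism to resolve nearly degenerate (highly correlated) sources, which is precisely the gap the paper addresses.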
Image matching is a fundamental problem in computer vision. One well-known technique is SIFT (scale-invariant feature transform), which searches for and extracts robust features in hierarchical image scale spaces for object identification. However, it often lacks efficiency because it identifies many insignificant features, such as tree leaves and grass tips, in a natural image of a building. We introduce a content-adaptive image matching approach that preprocesses the image with a color-entropy-based segmentation and harmonic inpainting. Natural structures such as tree leaves have both high entropy and distinctive color, so the combined measure can be both discriminative and robust. The harmonic inpainting smoothly interpolates the image functions over the tree regions, blurring and reducing the features and their unnecessary matching there. Numerical experiments on building images show
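Harmonic inpainting amounts to solving the Laplace equation over the masked (e.g. tree) regions with the surrounding pixels as boundary data. Below is a minimal sketch, assuming a grayscale image and a boolean mask of the regions to smooth over; Jacobi relaxation of the discrete Laplace equation is one standard realization, not necessarily the exact solver used in the paper, and the mask is assumed to stay away from the image border (the `np.roll` neighbor lookup wraps around).

```python
import numpy as np

def harmonic_inpaint(img, mask, iters=2000):
    """Fill mask==True pixels by Jacobi relaxation of the discrete
    Laplace equation; unmasked pixels act as fixed boundary data."""
    u = img.astype(float).copy()
    u[mask] = u[~mask].mean()  # simple initialization inside the hole
    for _ in range(iters):
        # Average of the 4 neighbors (wraps at the border via roll)
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = avg[mask]
    return u
```

The harmonic solution is maximally smooth in the masked region, so scale-space extrema (and hence spurious SIFT keypoints) there are largely suppressed.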
Rapid and reliable detection and identification of unknown chemical substances are critical to homeland security. It is challenging to identify chemical components from a wide range of explosives. There are two key steps involved. One is a non-destructive and informative spectroscopic technique for data acquisition. The other is an associated library of reference features along with a computational method for feature matching and meaningful detection within or beyond the library.
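The second step, matching acquired features against a reference library, can be illustrated with a simple cosine-similarity baseline. This is a common spectral matching score, not the specific computational method referenced in the abstract; the `names` list and the row-per-spectrum library layout are illustrative assumptions.

```python
import numpy as np

def match_spectrum(query, library, names):
    """Rank reference spectra (rows of `library`) by cosine
    similarity to the query spectrum; return (name, score) pairs."""
    q = query / np.linalg.norm(query)
    L = library / np.linalg.norm(library, axis=1, keepdims=True)
    scores = L @ q
    order = np.argsort(scores)[::-1]  # best match first
    return [(names[i], float(scores[i])) for i in order]
```

A detection threshold on the top score then separates in-library identifications from substances beyond the library.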
The human eye can perceive a vast array of colors; whether light or dark, the colors our eyes can see seem unlimited. In reality, every image has two descriptors: a reflectance and an illumination. While the reflectance essentially carries an image's true color, the illumination is what causes colors to appear different to the human eye. This effect, originally observed by Helmholtz, is known as color constancy. Color constancy ensures that the color the human visual system (HVS) perceives is the true color of a scene, regardless of illumination. Motivated by this effect, Land and McCann created the Retinex theory in 1971. Using the pixel values of an image, Land tried to estimate the reflectance and thus reveal the image's true color. The theory is essentially a color constancy algorithm that tries to explain why colors look different under different lighting. By comparing pixel values, Land was able to show the sameness of perceived color across an illumination gradient in an image. However, the algorithm is both inefficient and complicated. Following in their footsteps, many others have formulated new algorithms around Land's original Retinex algorithm. In this paper, different methods such as least squares and the discrete cosine transform are explained, as well as how to enhance images using both Land's idea and histogram equalization.
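Since histogram equalization is named as one of the enhancement steps, here is a minimal textbook sketch for an 8-bit grayscale image; it illustrates the standard technique, not the paper's specific pipeline, and assumes a non-constant input image.

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image:
    remap intensities so the cumulative histogram becomes linear."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    # Normalize the CDF to [0, 1] (assumes img is not constant)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]  # apply the lookup table per pixel
```

Applied to the estimated reflectance, this stretches low-contrast intensity ranges across the full dynamic range.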
Face tracking is an important computer vision technology that has been widely adopted in many areas, from cell phone applications to industrial robots. In this paper, we introduce a novel way to parallelize a face-contour detection application based on the color-entropy preprocessed Chan-Vese model with a total variation G-norm. This application is a complicated, unsupervised computational method requiring a large amount of computation. Several core parts are difficult to parallelize due to heavily correlated data processing across iterations and pixels.
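For reference, the sketch below shows one gradient-descent step of the classical Chan-Vese model, which is the sequential core such a parallelization must target; it omits the color-entropy preprocessing and the G-norm term described in the abstract, and the parameter values are illustrative. The per-pixel update depends on global region averages (c1, c2) and on neighboring pixels through the curvature term, which is why iterations are heavily data-correlated.

```python
import numpy as np

def chan_vese_step(img, phi, mu=0.2, dt=0.5, eps=1.0):
    """One explicit gradient-descent step of the classical
    Chan-Vese level-set energy on a grayscale image."""
    # Smoothed Heaviside and delta functions of the level set phi
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    delta = (eps / np.pi) / (eps**2 + phi**2)
    # Global average intensities inside (c1) and outside (c2) the contour
    c1 = (img * H).sum() / (H.sum() + 1e-8)
    c2 = (img * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
    # Curvature of the level set: div(grad(phi) / |grad(phi)|)
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    curv = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
    # Descend the Chan-Vese energy: curvature vs. region-fitting forces
    return phi + dt * delta * (mu * curv - (img - c1)**2 + (img - c2)**2)
```

The global reductions for c1 and c2 and the stencil dependence of the curvature are the two coupling patterns that make naive per-pixel parallelization nontrivial.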