This paper gives a new way of constructing Landau-Ginzburg mirrors using the deformation theory of Lagrangian immersions, motivated by the works of Seidel, Strominger-Yau-Zaslow, and Fukaya-Oh-Ohta-Ono. Moreover, we construct a canonical functor from the Fukaya category to the mirror category of matrix factorizations. Under explicit assumptions, this functor yields homological mirror symmetry.
As an application, the construction is applied to spheres with three orbifold points to produce their quantum-corrected mirrors and derive homological mirror symmetry. Furthermore we discover an enumerative meaning of the (inverse) mirror map for elliptic curve quotients.
Morphing is the process of transforming one geometric model or image into another. The process generally involves rigid-body motions and non-rigid deformations. By the Riemann mapping theorem, a simply connected open surface admits a conformal mapping onto the unit disk, unique up to M\"obius transformations. On the other hand, a 3D deformable surface model can be built via various approaches, such as mutual parameterization from direct interpolation or surface matching using landmarks. In this paper, a numerical method for 3D surface morphing based on a deformable model and conformal mapping is demonstrated.
We take advantage of the unique representation of 3D surfaces by the mean curvature $H$ and the conformal factor $\lambda$ associated with the Riemann mapping, and we build the deformation model by consistently registering landmarks on the conformal parametric domains. As a result, a correspondence of $(H, \lambda)$ between two surfaces can be defined and a 3D deformation field can be reconstructed. Furthermore, by composing the M\"obius transformation with the 3D deformation field, a smooth morphing sequence can be generated over a consistent mesh structure via cubic spline homotopy. Several numerical experiments on face morphing are presented to demonstrate the robustness of our approach.
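The morphing-sequence step above can be illustrated with a minimal sketch. This is not the paper's actual pipeline (which interpolates a reconstructed deformation field after conformal registration); it only shows the cubic-homotopy idea on a consistent mesh, where corresponding vertex sets are blended by a cubic with zero end tangents so the morph eases in and out smoothly. The function name `morph_sequence` and the two-keyframe setup are illustrative assumptions.

```python
import numpy as np

def morph_sequence(V0, V1, num_frames=5):
    """Cubic (Hermite) homotopy between corresponding vertex sets.

    V0, V1: (n, 3) arrays of corresponding vertices on a consistent
    mesh (hypothetical inputs; in the paper these would come from the
    conformal registration).  Zero end tangents give ease-in/ease-out.
    Returns a list of (n, 3) arrays, one per frame.
    """
    frames = []
    for t in np.linspace(0.0, 1.0, num_frames):
        s = 3 * t**2 - 2 * t**3  # cubic blend: s(0)=0, s(1)=1, s'(0)=s'(1)=0
        frames.append((1 - s) * V0 + s * V1)
    return frames
```

With `num_frames=3`, the middle frame is the exact midpoint of the two vertex sets, since the cubic blend satisfies $s(1/2) = 1/2$.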
We present a novel solution to automatic semantic modeling of indoor scenes from a sparse set of low-quality RGB-D images. Such data presents challenges due to noise, low resolution, occlusion, and missing depth information. We exploit the knowledge in a scene database containing hundreds of indoor scenes with over 10,000 manually segmented and labeled mesh models of objects. In seconds, we output a visually plausible 3D scene, adapting these models and their parts to fit the input scans. Contextual relationships learned from the database are used to constrain the reconstruction, ensuring the semantic compatibility of both object models and parts. Small objects and objects with incomplete depth information, which are difficult to recover reliably, are handled with a two-stage approach: major objects are recognized first, providing a known scene structure, and 2D contour-based model retrieval is then used to recover smaller objects. Evaluations on our own data and on two public datasets show that our approach models typical real-world indoor scenes efficiently and robustly.