Existing domain adaptation methods aim to learn features that generalize across domains. These methods typically require updating the source classifier to adapt to the target domain and do not properly handle the trade-off between source-domain and target-domain performance. In this work, instead of training a classifier to adapt to the target domain, we use a separable component, called a data calibrator, to help the fixed source classifier recover its discriminative power in the target domain while preserving its performance on the source domain. When the difference between the two domains is small, the source classifier's representation is sufficient to perform well in the target domain, and our method outperforms GAN-based methods on digits. Otherwise, the proposed method can leverage synthetic images generated by GANs to boost performance, achieving state-of-the-art results on digits datasets and driving-scene semantic segmentation. Our results also empirically suggest a potential connection between domain adaptation and adversarial attacks.
Code release is available at https://github.com/yeshaokai/Calibrator-Domain-Adaptation
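As a minimal sketch of the idea (not the authors' released implementation), the calibrator can be viewed as a small network that applies a bounded perturbation to target-domain inputs before they reach the frozen source classifier, which is what links it to adversarial-attack machinery. The names `Calibrator`, `train_step`, and the pseudo-label training signal `pseudo_y` below are illustrative assumptions, not details from the abstract.

```python
import torch
import torch.nn as nn

class Calibrator(nn.Module):
    """Illustrative input-space calibrator: emits a bounded additive
    perturbation, reminiscent of adversarial-perturbation generators."""
    def __init__(self, channels=3, eps=0.1):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Bounded perturbation keeps calibrated inputs close to the originals.
        return x + self.eps * self.net(x)

def train_step(calibrator, source_classifier, target_x, pseudo_y, optimizer):
    # Only the calibrator is updated (optimizer holds its parameters);
    # the source classifier stays frozen, preserving source performance.
    source_classifier.eval()
    for p in source_classifier.parameters():
        p.requires_grad_(False)
    logits = source_classifier(calibrator(target_x))
    loss = nn.functional.cross_entropy(logits, pseudo_y)  # assumed signal
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```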
Chenglong Bao, Department of Mathematics, National University of Singapore, Singapore 119076
Hui Ji, Department of Mathematics, National University of Singapore, Singapore 119076
Yuhui Quan, Department of Mathematics, National University of Singapore, Singapore 119076
Zuowei Shen, Department of Mathematics, National University of Singapore, Singapore 119076
Sparse coding and dictionary learning have found applications in many vision tasks and are usually formulated as non-convex optimization problems. Many iterative methods have been proposed to tackle such problems. However, it remains open to find a method that is both practically fast and globally convergent. In this paper, we propose a fast proximal method for solving $\ell_0$-norm-based dictionary learning problems, and we prove that the whole sequence generated by the proposed method converges to a stationary point at a sub-linear convergence rate. The benefit of having a fast and convergent dictionary learning method is demonstrated in the applications of image recovery and face recognition.
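As a hedged illustration of the key computational step (not the paper's exact algorithm), the proximal map of the $\ell_0$ penalty is hard thresholding: the prox of $t\lambda\|x\|_0$ zeroes out entries with $|x_i| \le \sqrt{2t\lambda}$. The numpy sketch below alternates a proximal gradient step on the sparse codes with a projected gradient step on the unit-norm dictionary; the function names and step-size choices are assumptions for this sketch.

```python
import numpy as np

def hard_threshold(x, lam, step):
    # prox of step * lam * ||x||_0: zero out small-magnitude entries.
    x = x.copy()
    x[np.abs(x) <= np.sqrt(2.0 * step * lam)] = 0.0
    return x

def dictionary_learning_l0(Y, n_atoms, lam=0.1, n_iter=100, seed=0):
    """Proximal alternating sketch for
    min_{D,C} 0.5*||Y - D C||_F^2 + lam*||C||_0, unit-norm atoms."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    C = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        # Proximal gradient step on codes; 1/L step with L = ||D||_2^2.
        step_c = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)
        C = hard_threshold(C - step_c * D.T @ (D @ C - Y), lam, step_c)
        # Projected gradient step on the dictionary (unit-norm columns).
        step_d = 1.0 / (np.linalg.norm(C, 2) ** 2 + 1e-12)
        D -= step_d * (D @ C - Y) @ C.T
        D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D, C
```

Hard thresholding is what makes each iteration cheap here: unlike $\ell_1$ shrinkage, the $\ell_0$ prox simply keeps or kills each entry, with no bias on the surviving coefficients.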
Zihao Wang, BNRist, Department of Computer Science and Technology, RIIT, Institute of Internet Industry, Tsinghua University
Datong Zhou, Department of Mathematical Sciences, Tsinghua University
Ming Yang, Department of Computer Science and Technology, Tsinghua University
Yong Zhang, BNRist, Department of Computer Science and Technology, RIIT, Institute of Internet Industry, Tsinghua University
Chenglong Bao, Yau Mathematical Sciences Center, Tsinghua University
Hao Wu, Department of Mathematical Sciences, Tsinghua University
Computing the distance between linguistic objects is an essential problem in natural language processing. The word mover's distance (WMD) has been successfully applied to measure document distance by synthesizing low-level word similarity within the framework of optimal transport (OT). However, due to the global transportation nature of OT, the WMD may overestimate semantic dissimilarity when documents contain unequal amounts of semantic detail. In this paper, we propose to address this overestimation issue with a novel Wasserstein-Fisher-Rao (WFR) document distance grounded in unbalanced optimal transport theory. Compared to the WMD, the WFR document distance provides a trade-off between global transportation and local truncation, which leads to a better similarity measure for documents with unequal semantic details. Moreover, an efficient pruning strategy is specifically designed for the WFR document distance to facilitate top-k queries over large document collections. Extensive experimental results show that the WFR document distance achieves higher accuracy than the WMD and even its supervised variant S-WMD.
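To illustrate the unbalanced-transport mechanism behind this trade-off (not the paper's WFR formulation or its pruning strategy), the numpy sketch below runs the standard Sinkhorn scaling for entropy-regularized OT with KL-relaxed marginals (Chizat et al.): because the marginal constraints are only penalized, mass can be created or destroyed locally, so a document with extra semantic detail is not forced to transport all of it. The function name, parameters, and the word-frequency setup in the usage comment are assumptions for this sketch.

```python
import numpy as np

def sinkhorn_unbalanced(a, b, M, eps=0.05, rho=1.0, n_iter=500):
    """Sinkhorn scaling for entropy-regularized unbalanced OT with
    KL marginal penalties of weight rho. a, b: word-frequency vectors;
    M: pairwise word-embedding distance matrix of shape (len(a), len(b))."""
    K = np.exp(-M / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    f = rho / (rho + eps)  # exponent induced by the KL marginal penalty
    for _ in range(n_iter):
        u = (a / (K @ v)) ** f
        v = (b / (K.T @ u)) ** f
    P = u[:, None] * K * v[None, :]
    # Transport part of the cost only; the full unbalanced objective also
    # includes the KL marginal-deviation terms and the entropy term.
    return np.sum(P * M)

# Usage sketch (hypothetical data): M[i, j] is the Euclidean distance
# between the embedding of word i in doc 1 and word j in doc 2, while
# a and b are the documents' normalized word frequencies.
```

Smaller `rho` makes dropping unmatched mass cheaper (more local truncation), while `rho -> inf` recovers balanced OT and hence WMD-like global transport.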