Springer, 2020. — 148 p. — ISBN: 978-3-030-30670-0.
This unique volume reviews the latest advances in domain adaptation for training machine learning algorithms for visual understanding, offering valuable insights from an international selection of experts in the field. The text presents a diverse range of novel techniques, covering applications in object recognition, face recognition, and action and event recognition.
Topics and features:
- Reviews the domain adaptation-based machine learning algorithms available for visual understanding, and provides a deep metric learning approach;
- Introduces a novel unsupervised method for image-to-image translation, and a video segment retrieval model that utilizes ensemble learning;
- Proposes a unique way to determine which dataset is most useful in the base training, in order to improve the transferability of deep neural networks;
- Describes a quantitative method for estimating the discrepancy between the source and target data to enhance image classification performance;
- Presents a technique for multi-modal fusion that enhances facial action recognition, and a framework for intuition learning in domain adaptation;
- Examines an original interpolation-based approach to address the issue of tracking model degradation in correlation filter-based methods.
Domain Adaptation for Visual Understanding
M-ADDA: Unsupervised Domain Adaptation with Deep Metric Learning
XGAN: Unsupervised Image-to-Image Translation for Many-to-Many Mappings
Improving Transferability of Deep Neural Networks
Cross-Modality Video Segment Retrieval with Ensemble Learning
On Minimum Discrepancy Estimation for Deep Domain Adaptation
Multi-modal Conditional Feature Enhancement for Facial Action Unit Recognition
Intuition Learning
Alleviating Tracking Model Degradation Using Interpolation-Based Progressive Updating