32nd IEEE Conference on Signal Processing and Communications Applications, SIU 2024, Mersin, Türkiye, 15-18 May 2024
As deep learning becomes integral to an increasing variety of real-world applications, the importance of domain generalization and out-of-distribution (OOD) detection has grown. These capabilities are crucial for models not only to perform well across unseen domains but also to identify data that deviates significantly from the distribution encountered during training. In this setting, data augmentation emerges as a key component, enhancing the model's ability both to generalize and to detect OOD instances efficiently. In this work, we propose that the domains of unknown data ("known unknowns") and the target domain for generalization can be treated as separate entities with distinct boundaries in the embedding space. We argue that, through the careful selection and application of augmentations, the boundaries of the training data in this space can be altered. Specifically, augmentations can expand and shift these boundaries, distancing them from OOD regions while bringing them closer to the target domain. This strategic adjustment aims to improve the model's performance in both domain generalization and OOD detection. To address this challenge, we present a new approach for choosing the most effective augmentations based on their influence on the embedding space.
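To make the selection idea concrete, the sketch below illustrates one way such a criterion could be scored. It is not the paper's actual method: it assumes embeddings have already been extracted by some feature encoder, uses centroid distances in embedding space as a stand-in for the boundary shifts described above, and the function names (`augmentation_score`, `select_augmentations`) and the candidate augmentation names are purely illustrative.

```python
import numpy as np


def augmentation_score(aug_embeddings, target_embeddings, ood_embeddings):
    """Score an augmentation by how it moves the training data in embedding space.

    Higher scores favor augmentations whose embedded training data lies closer to
    the target-domain centroid and farther from the OOD ("known unknown") centroid.
    """
    aug_centroid = aug_embeddings.mean(axis=0)
    target_centroid = target_embeddings.mean(axis=0)
    ood_centroid = ood_embeddings.mean(axis=0)

    dist_to_target = np.linalg.norm(aug_centroid - target_centroid)
    dist_to_ood = np.linalg.norm(aug_centroid - ood_centroid)

    # Reward proximity to the target domain and distance from OOD regions.
    return dist_to_ood - dist_to_target


def select_augmentations(candidates, target_embeddings, ood_embeddings, top_k=3):
    """Rank candidate augmentations (name -> embeddings of augmented training data)."""
    scored = sorted(
        candidates.items(),
        key=lambda item: augmentation_score(item[1], target_embeddings, ood_embeddings),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]


# Toy usage with random vectors standing in for encoder embeddings.
rng = np.random.default_rng(0)
target = rng.normal(loc=1.0, size=(100, 64))
ood = rng.normal(loc=-1.0, size=(100, 64))
candidates = {
    "color_jitter": rng.normal(loc=0.8, size=(100, 64)),
    "gaussian_noise": rng.normal(loc=0.0, size=(100, 64)),
    "cutout": rng.normal(loc=-0.5, size=(100, 64)),
}
print(select_augmentations(candidates, target, ood, top_k=2))
```

Under these assumptions, an augmentation ranks highly exactly when it pulls the embedded training distribution toward the target domain while pushing it away from known-unknown regions, which mirrors the boundary-shifting intuition stated in the abstract.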