Original title: Dataset Distillation via the Wasserstein Metric
Authors: Haoyang Liu, Tiancheng Xing, Luwei Li, Vibhu Dalal, Jingrui He, Haohan Wang
In this article, the authors explore dataset distillation (DD), a computer vision technique that condenses a large dataset into a much smaller synthetic one while preserving the performance of models trained on it. The core objective of DD is to capture the essential representation of an extensive dataset in this compact form.
The authors propose a novel approach that uses the Wasserstein distance, a metric rooted in optimal transport theory, to enhance distribution matching in DD. By leveraging the Wasserstein barycenter, the distribution that minimizes the average Wasserstein distance to a given set of distributions, they capture the centroid of those distributions and quantify distributional differences in a geometrically meaningful way.
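To build intuition for the barycenter idea, here is a minimal one-dimensional sketch, not the authors' implementation: for equal-size empirical samples on the real line, the Wasserstein barycenter can be formed by averaging the sorted samples (i.e., averaging quantile functions) pointwise. The Gaussian samples and sample sizes below are illustrative choices.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two illustrative empirical distributions (hypothetical data).
rng = np.random.default_rng(0)
a = rng.normal(loc=-2.0, scale=1.0, size=500)
b = rng.normal(loc=+2.0, scale=1.0, size=500)

# 1-D barycenter of equal-size samples: pointwise average of sorted values
# (equivalently, averaging the empirical quantile functions).
barycenter = (np.sort(a) + np.sort(b)) / 2.0

# The barycenter lies "between" a and b in Wasserstein geometry:
# its distances to a and to b sum to the distance between a and b.
d_ab = wasserstein_distance(a, b)
d_a_bar = wasserstein_distance(a, barycenter)
d_b_bar = wasserstein_distance(b, barycenter)
print(abs((d_a_bar + d_b_bar) - d_ab) < 1e-8)  # True
```

In one dimension this midpoint property holds exactly because the barycenter's quantile function is the pointwise midpoint of the two input quantile functions; in higher dimensions computing a barycenter requires dedicated optimal-transport solvers.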
To improve learning, the authors perform distribution matching in the feature space of pretrained classification models, embedding the synthetic data there rather than in raw pixel space. They conduct extensive experiments on several high-resolution datasets and achieve state-of-the-art performance, highlighting the promise of the Wasserstein metric for dataset distillation.
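The feature-space matching idea can be sketched in a heavily simplified form. In the toy example below, a frozen random linear map stands in for a pretrained network's feature extractor, and the synthetic set is updated by gradient descent to match only the feature means of the real data; the real method matches full distributions under the Wasserstein metric, and all names, sizes, and the learning rate here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(loc=3.0, scale=1.0, size=(1000, 8))  # stand-in "real" data
synthetic = rng.normal(size=(10, 8))                   # small learnable synthetic set
W = rng.normal(size=(8, 16)) / np.sqrt(8)              # frozen stand-in feature map

def mean_matching_loss(S):
    # Squared gap between synthetic and real feature means.
    diff = (S.mean(axis=0) - real.mean(axis=0)) @ W
    return float(diff @ diff)

initial = mean_matching_loss(synthetic)
lr = 0.5
for _ in range(200):
    diff = (synthetic.mean(axis=0) - real.mean(axis=0)) @ W
    grad = (2.0 / len(synthetic)) * (W @ diff)  # gradient w.r.t. each synthetic row
    synthetic -= lr * grad                      # same update applied to every row
final = mean_matching_loss(synthetic)
print(final < initial)  # True: the synthetic set moves toward the real feature mean
```

Only the synthetic examples are optimized; the feature extractor stays fixed throughout, which mirrors the paper's use of frozen pretrained models as the embedding.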
Original article: https://arxiv.org/abs/2311.18531