Stella X. Yu : Papers / Google Scholar

Long-tailed Recognition by Routing Diverse Distribution-Aware Experts
Xudong Wang and Long Lian and Zhongqi Miao and Ziwei Liu and Stella X. Yu
International Conference on Learning Representations, Online, 4-8 May 2021
Paper | Slides | Code | arXiv

Abstract

Natural data are often long-tail distributed over semantic classes. Existing recognition methods tackle this imbalanced classification by placing more emphasis on the tail data, through class re-balancing/re-weighting or ensembling over different data groups, resulting in increased tail accuracies but reduced head accuracies.

We take a dynamic view of the training data and provide a principled model bias and variance analysis as the training data fluctuates: existing long-tail classifiers invariably increase the model variance, and the head-tail model bias gap remains large, owing to more frequent and more severe confusion with hard negatives for the tail classes.
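As a rough illustration of this kind of analysis, the sketch below estimates per-class bias and variance from several models trained on resampled data, using a standard 0-1 loss decomposition (bias as the error of the majority-vote prediction, variance as the average disagreement with it). The function name, the simulated predictions, and the majority-vote definition are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def bias_variance_per_class(all_preds, labels, num_classes):
    """all_preds: (K, N) predicted class ids from K models trained on K
    resampled training sets; labels: (N,) ground-truth class ids.
    Returns per-class bias (error of the majority-vote prediction) and
    variance (mean disagreement with the majority-vote prediction)."""
    # Majority vote across the K runs serves as the "main" prediction.
    main_pred = np.array(
        [np.bincount(col, minlength=num_classes).argmax() for col in all_preds.T]
    )
    bias = np.zeros(num_classes)
    var = np.zeros(num_classes)
    for c in range(num_classes):
        idx = labels == c
        if idx.sum() == 0:
            continue
        bias[c] = (main_pred[idx] != c).mean()              # main prediction wrong
        var[c] = (all_preds[:, idx] != main_pred[idx]).mean()  # runs disagree with main
    return bias, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 5, size=200)
    # Simulated predictions from 10 runs; "tail" classes (3, 4) get noisier predictions.
    noise = np.where(labels >= 3, 0.5, 0.1)
    all_preds = np.stack([
        np.where(rng.random(200) < noise, rng.integers(0, 5, 200), labels)
        for _ in range(10)
    ])
    bias, var = bias_variance_per_class(all_preds, labels, 5)
    print("per-class bias:", bias.round(2))
    print("per-class variance:", var.round(2))
```

In this toy setup the noisier tail classes show both higher bias and higher variance, which is the qualitative pattern the abstract describes for long-tailed classifiers.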

We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE). It reduces the model variance with multiple experts, reduces the model bias with a distribution-aware diversity loss, and reduces the computational cost with a dynamic expert routing module. RIDE outperforms the state of the art by 5% to 7% on the CIFAR100-LT, ImageNet-LT, and iNaturalist 2018 benchmarks. It is also a universal framework applicable to various backbone networks, long-tailed algorithms, and training mechanisms, with consistent performance gains. Our code is available at: https://github.com/frank-xwang/RIDE-LongTailRecognition.
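The sketch below is a minimal, hypothetical rendering of the three ingredients named above: a multi-expert head, a diversity regularizer, and a soft routing weight over experts. It is not the released RIDE implementation (see the repository linked above); names such as MultiExpertClassifier, diversity_regularizer, the sigmoid routing weights, and the temperature value are illustrative assumptions, and the regularizer here ignores class frequencies rather than being truly distribution-aware.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExpertClassifier(nn.Module):
    """Illustrative multi-expert head: each expert produces its own logits,
    and a lightweight router softly weights how much each expert contributes."""
    def __init__(self, feat_dim=64, num_classes=100, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_experts)]
        )
        # Router scores each expert per sample (soft stand-in for dynamic routing).
        self.router = nn.Linear(feat_dim, num_experts)

    def forward(self, feats):
        expert_logits = torch.stack([e(feats) for e in self.experts], dim=1)  # (B, E, C)
        routing_scores = torch.sigmoid(self.router(feats))                    # (B, E)
        # Weighted average of expert logits under the routing scores.
        fused = (expert_logits * routing_scores.unsqueeze(-1)).sum(dim=1)
        fused = fused / routing_scores.sum(dim=1, keepdim=True).clamp_min(1e-6)
        return expert_logits, fused


def diversity_regularizer(expert_logits, temperature=4.0):
    """Push each expert's predictive distribution away from the experts' mean
    (a simplified stand-in for a distribution-aware diversity loss)."""
    probs = F.softmax(expert_logits / temperature, dim=-1)   # (B, E, C)
    mean_probs = probs.mean(dim=1, keepdim=True)             # (B, 1, C)
    kl = (probs * (probs.clamp_min(1e-8).log()
                   - mean_probs.clamp_min(1e-8).log())).sum(-1)
    return -kl.mean()  # minimizing this increases expert disagreement


if __name__ == "__main__":
    model = MultiExpertClassifier()
    feats = torch.randn(8, 64)
    labels = torch.randint(0, 100, (8,))
    expert_logits, fused = model(feats)
    # Each expert is supervised individually, plus the diversity term.
    per_expert_ce = F.cross_entropy(
        expert_logits.reshape(-1, 100), labels.repeat_interleave(3)
    )
    loss = per_expert_ce + 0.2 * diversity_regularizer(expert_logits)
    print(fused.shape, loss.item())
```

At inference, a routing module like this can skip later experts for easy samples, which is how a multi-expert design keeps its added computation in check.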


Keywords
ensemble model, bias and variance, long-tail distribution