Abstract
Semantic segmentation datasets often exhibit two types of imbalance: class imbalance, where some classes appear more frequently than others, and size imbalance, where some objects occupy more pixels than others. This causes traditional evaluation metrics to be biased towards majority classes (e.g., overall pixel-wise accuracy) and large objects (e.g., mean pixel-wise accuracy and per-dataset mean intersection over union). To address these shortcomings, we propose the use of fine-grained mIoUs along with corresponding worst-case metrics, thereby offering a more holistic evaluation of segmentation techniques. These fine-grained metrics are less biased towards large objects, carry richer statistical information, and provide valuable insights for model and dataset auditing. Furthermore, we undertake an extensive benchmark study, in which we train and evaluate 15 modern neural networks with the proposed metrics on 12 diverse natural and aerial segmentation datasets. Our benchmark study highlights the necessity of not basing evaluations on a single metric and confirms that fine-grained mIoUs reduce the bias towards large objects. Moreover, we identify the crucial role played by architecture designs and loss functions, leading to best practices for optimizing fine-grained metrics. The code is available at https://github.com/zifuwanggg/JDTLosses.
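To make the contrast concrete, the sketch below compares the conventional per-dataset mIoU, which pools intersection and union counts over all images before dividing (so large objects dominate), with a per-image average of mIoUs, one of the fine-grained variants in the spirit of the abstract. The function names and the two-image toy setup are illustrative, not the paper's exact definitions or code.

```python
import numpy as np

def iou_counts(pred, gt, num_classes):
    """Per-class intersection and union pixel counts for one image."""
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for c in range(num_classes):
        p, g = pred == c, gt == c
        inter[c] = np.logical_and(p, g).sum()
        union[c] = np.logical_or(p, g).sum()
    return inter, union

def dataset_miou(preds, gts, num_classes):
    """Per-dataset mIoU: pool pixel counts over all images, then divide."""
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for pred, gt in zip(preds, gts):
        i, u = iou_counts(pred, gt, num_classes)
        inter += i
        union += u
    valid = union > 0  # ignore classes absent from the whole dataset
    return float((inter[valid] / union[valid]).mean())

def image_miou(preds, gts, num_classes):
    """Fine-grained variant: mIoU per image, then average over images."""
    scores = []
    for pred, gt in zip(preds, gts):
        i, u = iou_counts(pred, gt, num_classes)
        valid = u > 0  # ignore classes absent from this image
        scores.append((i[valid] / u[valid]).mean())
    return float(np.mean(scores))
```

With a dataset containing one large, correctly segmented object and one small, entirely missed object, the pooled per-dataset mIoU stays high while the per-image average drops sharply, illustrating the large-object bias the abstract describes.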
Authors
Zifu Wang*, Maxim Berman*, Amal Rannen-Triki, Philip H.S. Torr*, Devis Tuia*, Tinne Tuytelaars*, Luc Van Gool*, Jiaqian Yu*, Matthew Blaschko*
Venue
NeurIPS 2023 Datasets and Benchmarks