Bridging Annotation Gaps: Transferring Labels to Align Object Detection Datasets

We show the annotation differences between three road-based datasets and how their classes are semantically misaligned.

Abstract

Combining multiple object detection datasets offers a path to improved generalisation but is hindered by inconsistencies in class semantics and bounding box annotations. Some methods address this by assuming shared label taxonomies and resolving only spatial inconsistencies; others require manual relabelling or produce a unified label space, which may be unsuitable when a fixed target label space is required. We propose Label-Aligned Transfer (LAT), a label transfer framework that systematically projects annotations from diverse source datasets into the label space of a target dataset. LAT begins by training dataset-specific detectors to generate pseudo-labels, which are then combined with ground-truth annotations via a Privileged Proposal Generator (PPG) that replaces the region proposal network in two-stage detectors. To further refine region features, a Semantic Feature Fusion (SFF) module injects class-aware context and features from overlapping proposals using a confidence-weighted attention mechanism. This pipeline preserves dataset-specific annotation granularity while enabling many-to-one label space transfer across heterogeneous datasets, resulting in a semantically and spatially aligned representation suitable for training a downstream detector. LAT thus jointly addresses both class-level misalignment and bounding box inconsistencies without relying on shared label spaces or manual annotations. Across multiple benchmarks, LAT demonstrates consistent improvements in target-domain detection performance, achieving gains of up to +4.8 AP over semi-supervised baselines.
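To make the transfer step concrete, the sketch below is a minimal, hypothetical illustration of how pseudo-labels and ground truth could be merged into privileged proposals, assuming a detector trained on the target dataset pseudo-labels source-dataset images. All names (`Box`, `transfer_labels`, `class_map`) are illustrative, not the paper's API.

```python
# Hypothetical sketch of LAT's label-transfer step, assuming a detector
# trained on the target dataset pseudo-labels images from a source dataset.
from dataclasses import dataclass

@dataclass
class Box:
    xyxy: tuple   # (x1, y1, x2, y2) in image coordinates
    label: int    # class index; target label space after transfer
    score: float  # detector confidence; 1.0 for ground truth

def transfer_labels(image, gt_boxes, target_detector, class_map, thresh=0.5):
    """Project a source image's annotations into the target label space.

    gt_boxes:        ground-truth boxes from the source dataset
    target_detector: dataset-specific detector trained on the target
                     dataset, used here to pseudo-label the source image
    class_map:       many-to-one mapping from source to target classes
    """
    # Confident pseudo-labels already live in the target label space.
    pseudo = [b for b in target_detector(image) if b.score >= thresh]
    # Ground-truth boxes keep their accurate geometry; labels are
    # re-indexed where a semantic mapping to the target space exists.
    mapped_gt = [Box(b.xyxy, class_map.get(b.label, -1), 1.0)
                 for b in gt_boxes]
    # The union serves as privileged proposals fed to the detection head
    # in place of RPN proposals (the PPG's role).
    return pseudo + mapped_gt
```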

Framework

Overview of the LAT architecture. Dataset-specific pseudo-labels and ground-truth annotations are combined via the Privileged Proposal Generator (PPG), which replaces the region proposal network. A frozen Vision Foundation Model (VFM) extracts shared image features. The Semantic Feature Fusion (SFF) module then refines region features by injecting class-aware information using attention over overlapping proposals. We filter the classification output to compute the loss only on the current dataset's label space.
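The snippet below is a minimal PyTorch sketch of the two components described above: a confidence-weighted attention fusion over overlapping proposals in the spirit of SFF, and masking of the classification loss to the current dataset's label space. Layer sizes, the IoU threshold, and the exact weighting scheme are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import box_iou

class SemanticFeatureFusion(nn.Module):
    # Sketch of SFF-style fusion: class-aware context is added to RoI
    # features, then attention runs only across overlapping proposals,
    # with attention logits biased by each proposal's confidence.
    def __init__(self, feat_dim=256, num_classes=80, num_heads=8, iou_thresh=0.5):
        super().__init__()
        self.cls_embed = nn.Embedding(num_classes, feat_dim)  # class-aware context
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.iou_thresh = iou_thresh

    def forward(self, feats, boxes, labels, scores):
        # feats: (N, D) RoI features; boxes: (N, 4); labels, scores: (N,)
        q = feats + self.cls_embed(labels)         # inject class information
        iou = box_iou(boxes, boxes)                # (N, N) pairwise overlaps
        blocked = iou < self.iou_thresh            # attend only across overlaps
        blocked.fill_diagonal_(False)              # each proposal keeps itself
        # Confidence weighting: bias attention logits by each key's score.
        bias = scores.clamp_min(1e-6).log().unsqueeze(0).expand(len(feats), -1)
        bias = bias.masked_fill(blocked, float("-inf"))
        fused, _ = self.attn(q.unsqueeze(0), q.unsqueeze(0), q.unsqueeze(0),
                             attn_mask=bias)
        return feats + fused.squeeze(0)            # residual refinement

def masked_cls_loss(logits, targets, dataset_class_ids):
    # Restrict the classification head to the columns annotated in the
    # current dataset, so classes outside its label space get no gradient.
    sub_logits = logits[:, dataset_class_ids]      # (N, C_dataset)
    return F.cross_entropy(sub_logits, targets)    # targets index the subset
```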

Results

Qualitative

We show qualitative results of basic pseudo-labelling and LAT on the bottom row, compared to the ground truth.
LAT is able to recover information missing from the proposed pseudo-label in the target label space by utilising ground-truth information.


High-Low Class Setting Experiment

Large-Small Dataset Setting Experiment

Citation