Convolutional neural networks (CNNs) are now popular for the semantic segmentation (i.e., dense pixel-wise labeling) of remote sensing imagery, such as color or hyperspectral satellite imagery. In recent years a large number of hand-labeled datasets of overhead imagery have emerged, leading to breakthrough performance for CNNs. However, these datasets are typically used in isolation from one another because they are either (i) annotated with heterogeneous object type labels, or (ii) collected over different geographic areas. This severely limits the collective value of these datasets. In this work we propose a class-asymmetric loss function that, under certain common conditions, makes it possible to train a single multi-class segmentation network on multiple heterogeneously-labeled datasets, including datasets in which a target class is unlabeled. We show, for example, that it is possible to train a segmentation model for buildings, roads, and background using two datasets: one annotated only with buildings and one annotated only with roads.
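One plausible way to realize such a class-asymmetric loss (a sketch, not necessarily the exact formulation in this work) is to merge the predicted probability of any class known to be unlabeled in a given dataset into the background class before computing cross-entropy, so the model is not penalized for predicting that class on pixels labeled as background. The class indices and function names below are illustrative assumptions:

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def asymmetric_cross_entropy(logits, labels, unlabeled_classes=(), background=0):
    """Cross-entropy that does not penalize predictions of classes that are
    unlabeled in the current dataset: their probability mass is merged into
    the background class before taking the log.

    logits: (N, C) array of per-pixel class scores
    labels: (N,) array of integer class labels
    unlabeled_classes: classes absent from this dataset's annotations
                       (their true pixels appear as background)
    """
    probs = softmax(logits)          # (N, C)
    merged = probs.copy()
    for c in unlabeled_classes:
        # a "background" pixel may secretly belong to class c, so
        # predicting c there should be treated the same as background
        merged[:, background] += merged[:, c]
        merged[:, c] = 0.0
    p_true = merged[np.arange(len(labels)), labels]
    return -np.log(p_true).mean()

# Example: classes 0=background, 1=building, 2=road.
# A pixel confidently predicted "building" but labeled "background":
logits = np.array([[0.0, 5.0, 0.0]])
labels = np.array([0])

# On a roads-only dataset (buildings unlabeled), the loss is small;
# with full labels, the same prediction is heavily penalized.
loss_asym = asymmetric_cross_entropy(logits, labels, unlabeled_classes=(1,))
loss_full = asymmetric_cross_entropy(logits, labels)
```

For pixels carrying a real (non-background) label, the merged probabilities coincide with the plain ones, so the loss reduces to standard cross-entropy there; only background pixels are treated leniently toward the unlabeled class.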