Semantic segmentation has been successfully adopted in indoor, outdoor, urban, and synthetic scenes, but applications with scarce labeled data, such as search-and-rescue (SAR), have not been addressed. In this work, we propose a transfer learning approach in which a U-Net convolutional neural network incorporates ResNet-50 as its encoder for the segmentation of objects in SAR situations. First, the proposed model is trained and validated on the 19 classes of the Cityscapes dataset. Then we test the proposed approach by i) training the model on a subset of 14 Cityscapes classes with relevant similarities to SAR classes, and ii) applying transfer learning with our self-developed SAR dataset, which contains 349 semantically labeled SAR images. The results indicate good recognition of classes with a significant presence in the training images.
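As an illustration of how such an encoder-decoder model could be assembled and fine-tuned in two stages, the following is a minimal sketch using the third-party segmentation_models_pytorch library. The class count follows the 14-class Cityscapes subset described above, but the library choice, hyperparameters, and dataloader names (cityscapes_loader, sar_loader) are assumptions for illustration, not the paper's implementation.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a ResNet-50 encoder pre-trained on ImageNet
# (library choice is an assumption, not the paper's code).
model = smp.Unet(
    encoder_name="resnet50",
    encoder_weights="imagenet",
    in_channels=3,
    classes=14,  # subset of Cityscapes classes with similarities to SAR classes
)

criterion = torch.nn.CrossEntropyLoss(ignore_index=255)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_one_epoch(loader):
    """Run one training pass over a segmentation DataLoader."""
    model.train()
    for images, masks in loader:
        optimizer.zero_grad()
        logits = model(images)           # (B, 14, H, W) class scores
        loss = criterion(logits, masks)  # masks: (B, H, W) with class indices
        loss.backward()
        optimizer.step()

# Stage 1: train on the Cityscapes subset (cityscapes_loader is hypothetical).
# train_one_epoch(cityscapes_loader)

# Stage 2: transfer to the 349-image SAR dataset by reusing the learned weights
# and fine-tuning with a smaller learning rate (sar_loader is hypothetical).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
# train_one_epoch(sar_loader)
```

In this kind of setup, the second stage simply continues optimizing the same weights on the smaller SAR dataset, which is one common way to realize the transfer-learning step the abstract describes.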