Exploiting CNNs for Semantic Segmentation with Pascal VOC

Abstract

Author(s): Sourabh Prakash, Priyanshi Shah, Ashrya Agrawal

In this paper, we present a comprehensive study on semantic segmentation with the Pascal VOC dataset. The task is to label each pixel with a class, which in turn segments the entire image according to the objects/entities present. To tackle this, we first use a Fully Convolutional Network (FCN) baseline, which gave 71.31% pixel accuracy and 0.0527 mean IoU. We analyze its performance and behavior, and subsequently address the issues in the baseline with three improvements: a) cosine annealing learning rate scheduler (pixel accuracy: 72.86%, IoU: 0.0529), b) data augmentation (pixel accuracy: 69.88%, IoU: 0.0585), and c) class imbalance weights (pixel accuracy: 68.98%, IoU: 0.0596). Apart from these changes to the training pipeline, we also explore three different architectures: a) our proposed Advanced FCN model (pixel accuracy: 67.20%, IoU: 0.0602), b) transfer learning with ResNet (best performance; pixel accuracy: 71.33%, IoU: 0.0926), and c) U-Net (pixel accuracy: 72.15%, IoU: 0.0649). We observe that these improvements substantially boost performance, as reflected in both the metrics and the segmentation maps. Interestingly, among the improvements, data augmentation contributes the most. We also note that the transfer learning model performs best on the Pascal VOC dataset. We analyze the performance of these models using loss, accuracy, and IoU plots along with segmentation maps, which yield valuable insights into how the models work.