[DeepReader] SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers
#machinelearning #deeplearning #paperoverview #transformer #segformer #semanticsegmentation #visiontransformer
Paper:
Code:
Abstract:
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes, which leads to decreased performance when the testing resolution differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, and thus combines both local attention and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation.
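To make the second point concrete, here is a minimal sketch of how an all-MLP decoder can aggregate multiscale encoder features: each stage's feature map is projected to a common channel width, upsampled to the highest resolution, concatenated, and fused before a per-pixel classifier. All dimensions, weight shapes, and class counts below are hypothetical, and NumPy stands in for a deep-learning framework; this is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

def linear(x, w):
    # Channel-wise linear (MLP) layer: x is (H, W, C_in), w is (C_in, C_out).
    return x @ w

def upsample(x, factor):
    # Nearest-neighbor upsampling along both spatial axes.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def mlp_decoder(features, proj_ws, fuse_w, cls_w):
    # Project each multiscale feature to a common width, then upsample
    # everything to the resolution of the finest (first) feature map.
    target = features[0].shape[0]
    ups = []
    for f, w in zip(features, proj_ws):
        p = linear(f, w)
        ups.append(upsample(p, target // f.shape[0]))
    # Concatenate along channels and fuse with one more MLP layer.
    fused = linear(np.concatenate(ups, axis=-1), fuse_w)
    # Final per-pixel class logits.
    return linear(fused, cls_w)

rng = np.random.default_rng(0)
C = 8                     # hypothetical shared embedding width
chans = [4, 8, 16, 32]    # hypothetical per-stage channel counts
sizes = [16, 8, 4, 2]     # spatial sizes at strides 4/8/16/32 for a 64x64 input
feats = [rng.standard_normal((s, s, c)) for s, c in zip(sizes, chans)]
proj_ws = [rng.standard_normal((c, C)) for c in chans]
fuse_w = rng.standard_normal((4 * C, C))
cls_w = rng.standard_normal((C, 19))  # e.g. 19 classes, as in Cityscapes
logits = mlp_decoder(feats, proj_ws, fuse_w, cls_w)
print(logits.shape)  # (16, 16, 19): one class-score vector per pixel at 1/4 scale
```

Because every learned operation is a plain channel-wise linear map, the decoder adds very little compute; the mixing of local and global context is left to the hierarchical Transformer encoder that produced the features.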