[2nd] Oral Presentation
Workshop: Vision Transformers: Theory and applications

TPFNet: A Novel Text In-painting Transformer for Text Removal

Onkar Susladkar · Dhruv Makwana · Gayatri Deshmukh · Sparsh Mittal · Sai Chandra Teja R · Rekha Singhal


Text erasure from an image is helpful for various tasks, such as image editing and privacy preservation. In this paper, we present TPFNet, a novel one-stage (end-to-end) network for text removal from images. Our network has two parts. Since noise can be removed more effectively from low-resolution images, part 1 operates on a low-resolution image and produces a low-resolution text-free image. Part 2 uses the features learned in part 1 to predict a high-resolution text-free image. In part 1, we use the pyramidal vision transformer (PVT) as the encoder. Further, we use a novel multi-headed decoder that generates a high-pass filtered image and a segmentation map in addition to the text-free image. The segmentation branch helps locate the text precisely, and the high-pass branch helps the network learn the image structure. To locate the text precisely, TPFNet employs an adversarial loss conditioned on the segmentation map rather than on the input image. On the Oxford, SCUT, and SCUT-EnsText datasets, our network outperforms recently proposed networks on nearly all metrics.
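The abstract's high-pass branch predicts a high-pass filtered version of the image to emphasize structure (edges) over flat regions. A minimal sketch of how such a high-pass target could be computed is below, using a 3x3 Laplacian kernel; the specific kernel and target construction are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def high_pass(image: np.ndarray) -> np.ndarray:
    """Illustrative high-pass filter via a 3x3 Laplacian kernel.

    Flat regions produce zero response; edges produce strong responses,
    which is the kind of structural signal a high-pass branch targets.
    (The exact filter used by TPFNet is an assumption here.)
    """
    kernel = np.array([[ 0, -1,  0],
                       [-1,  4, -1],
                       [ 0, -1,  0]], dtype=float)
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")  # replicate border pixels
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

# A constant image gives zero response everywhere; a step edge does not.
flat = np.ones((5, 5))
step = np.zeros((5, 5))
step[:, 2:] = 1.0
print(np.abs(high_pass(flat)).max())  # 0.0 on a constant image
print(np.abs(high_pass(step)).max())  # nonzero at the edge
```

Since the kernel's entries sum to zero, any uniform region is suppressed exactly, leaving only structural detail for the branch to learn.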
