Fonts are ubiquitous across documents and come in a variety of styles. They are either represented in a native vector format or rasterized to produce fixed-resolution images. In the first case, the non-standard representation prevents benefiting from the latest network architectures for neural representations; in the latter case, the rasterized representation, when encoded via networks, loses data fidelity, as font-specific discontinuities like edges and corners are difficult to represent using neural networks. Based on the observation that complex fonts can be represented by a superposition of a set of simpler occupancy functions, we introduce multi-implicits to represent fonts as a permutation-invariant set of learned implicit functions, without losing features (e.g., edges and corners). However, while multi-implicits locally preserve font features, obtaining supervision in the form of ground-truth multi-channel signals is a problem in itself. Instead, we show how to train such a representation with only local supervision, while the proposed neural architecture directly finds globally consistent multi-implicits for font families. We extensively evaluate the proposed representation on various tasks, including reconstruction, interpolation, and synthesis, to demonstrate clear advantages over existing alternatives. Additionally, the representation naturally enables glyph completion, wherein a single characteristic font is used to synthesize a whole font family in the target style.
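To make the core idea concrete, below is a minimal sketch (not the authors' exact architecture) of a multi-implicit representation: a glyph is modeled as the pointwise maximum over a small set of simple occupancy MLPs, so sharp corners and edges arise where two smooth implicits intersect, and the max makes the composition permutation-invariant. Class names, layer sizes, and the toy supervision target are all illustrative assumptions.

```python
# Minimal sketch of a multi-implicit glyph (illustrative, not the paper's exact model).
import torch
import torch.nn as nn

class OccupancyMLP(nn.Module):
    """One simple implicit: maps a 2D point to a scalar occupancy logit."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return self.net(xy)  # (B, 1) occupancy logit

class MultiImplicitGlyph(nn.Module):
    """A set of implicits combined by a pointwise max (permutation-invariant)."""
    def __init__(self, num_implicits: int = 4):
        super().__init__()
        self.implicits = nn.ModuleList(OccupancyMLP() for _ in range(num_implicits))

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        # Stack per-implicit logits to (B, K); the max over K is symmetric in the
        # set, so reordering the implicits leaves the composed glyph unchanged.
        logits = torch.cat([f(xy) for f in self.implicits], dim=-1)
        return torch.sigmoid(logits.max(dim=-1).values)  # (B,) occupancy in [0, 1]

# Usage: query occupancy at sampled points and fit to per-point occupancy samples
# (local supervision) with a binary cross-entropy loss.
model = MultiImplicitGlyph(num_implicits=4)
xy = torch.rand(1024, 2)             # query coordinates in the unit square
occ = model(xy)                      # predicted occupancy, shape (1024,)
target = torch.rand(1024).round()    # placeholder ground-truth occupancy samples
loss = nn.functional.binary_cross_entropy(occ, target)
loss.backward()
```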
Author Information
Pradyumna Reddy (UCL)
Zhifei Zhang (Adobe Research)
Zhaowen Wang (Adobe Research)
Matthew Fisher (Adobe Research)
Hailin Jin (Adobe)
Niloy Mitra (University College London)
More from the Same Authors
- 2021 Spotlight: Look at What I’m Doing: Self-Supervised Spatial Grounding of Narrations in Instructional Videos »
  Reuben Tan · Bryan Plummer · Kate Saenko · Hailin Jin · Bryan Russell
- 2021 Poster: Look at What I’m Doing: Self-Supervised Spatial Grounding of Narrations in Instructional Videos »
  Reuben Tan · Bryan Plummer · Kate Saenko · Hailin Jin · Bryan Russell
- 2021 Poster: SketchGen: Generating Constrained CAD Sketches »
  Wamiq Para · Shariq Bhat · Paul Guerrero · Tom Kelly · Niloy Mitra · Leonidas Guibas · Peter Wonka
- 2021 Poster: MarioNette: Self-Supervised Sprite Learning »
  Dmitriy Smirnov · Michael Gharbi · Matthew Fisher · Vitor Guizilini · Alexei Efros · Justin Solomon
- 2020 Poster: Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction »
  Tong He · John Collomosse · Hailin Jin · Stefano Soatto
- 2020 Poster: BlockGAN: Learning 3D Object-aware Scene Representations from Unlabelled Images »
  Thu Nguyen-Phuoc · Christian Richardt · Long Mai · Yongliang Yang · Niloy Mitra
- 2019 Poster: Learning elementary structures for 3D shape generation and matching »
  Theo Deprelle · Thibault Groueix · Matthew Fisher · Vladimir Kim · Bryan Russell · Mathieu Aubry
- 2017 Poster: Universal Style Transfer via Feature Transforms »
  Yijun Li · Chen Fang · Jimei Yang · Zhaowen Wang · Xin Lu · Ming-Hsuan Yang