
A Multi-Implicit Neural Representation for Fonts
Pradyumna Reddy · Zhifei Zhang · Zhaowen Wang · Matthew Fisher · Hailin Jin · Niloy Mitra

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Fonts are ubiquitous across documents and come in a variety of styles. They are represented either in a native vector format or rasterized into fixed-resolution images. In the first case, the non-standard representation prevents benefiting from the latest network architectures for neural representations; in the latter case, the rasterized representation, when encoded via neural networks, suffers a loss of fidelity, as font-specific discontinuities such as edges and corners are difficult for networks to represent. Based on the observation that complex fonts can be represented by a superposition of a set of simpler occupancy functions, we introduce multi-implicits, which represent fonts as a permutation-invariant set of learned implicit functions without losing features (e.g., edges and corners). However, while multi-implicits locally preserve font features, obtaining supervision in the form of ground-truth multi-channel signals is a problem in its own right. Instead, we show how to train such a representation with only local supervision, while the proposed neural architecture directly finds globally consistent multi-implicits for font families. We extensively evaluate the proposed representation on tasks including reconstruction, interpolation, and synthesis, demonstrating clear advantages over existing alternatives. Additionally, the representation naturally enables glyph completion, wherein a single characteristic font is used to synthesize a whole font family in the target style.
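To make the core idea concrete, the following is a minimal sketch (not the paper's actual architecture) of a multi-implicit representation: several small implicit networks each model a simple occupancy function over 2D glyph coordinates, and the glyph is their union, taken here as a pointwise max. The network sizes, initialization, and the max-combination are illustrative assumptions; the max makes the combined occupancy invariant to the ordering of the implicits, i.e., they behave as a set.

```python
import numpy as np

def implicit_branch(xy, w1, b1, w2, b2):
    # One small MLP mapping 2D query points to a scalar occupancy value.
    h = np.tanh(xy @ w1 + b1)
    return h @ w2 + b2  # shape (n_points, 1)

def multi_implicit_occupancy(xy, branches):
    # Union of simple shapes: combine branch outputs with a pointwise max.
    # The max is symmetric in its arguments, so the result does not depend
    # on the order (permutation) of the branches -- set semantics.
    outs = np.stack([implicit_branch(xy, *params) for params in branches])
    return outs.max(axis=0)

# Toy setup with random (untrained) branch parameters, for illustration only.
rng = np.random.default_rng(0)
def init_branch(hidden=16):
    return (rng.normal(size=(2, hidden)), np.zeros(hidden),
            rng.normal(size=(hidden, 1)), np.zeros(1))

branches = [init_branch() for _ in range(4)]
pts = rng.uniform(-1.0, 1.0, size=(8, 2))  # query points in glyph coordinates
occ = multi_implicit_occupancy(pts, branches)
perm = multi_implicit_occupancy(pts, branches[::-1])  # permuted branch order
assert np.allclose(occ, perm)  # order of implicits is irrelevant
```

In a trained model, each branch would be fit so that its zero level set traces part of the glyph boundary, and the union reproduces the full outline; sharp corners arise where two implicits intersect, which is why the set representation can preserve discontinuities that a single smooth network blurs.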

Author Information

Pradyumna Reddy (UCL)
Zhifei Zhang (Adobe Research)
Zhaowen Wang (Adobe Research)
Matthew Fisher (Adobe Research)
Hailin Jin (Adobe)
Niloy Mitra (University College London)