Typeface design plays a vital role in graphic and communication design. Different fonts suit different scenarios and can convey different emotions and messages. Font design still relies on professional designers, who create individual font styles for particular requirements. Generative adversarial networks (GANs) have also been applied to font generation. However, font generation datasets demand extensive attribute annotations that are hard to acquire, and machine-generated fonts often fail to meet designers' requirements; the dataset annotations therefore restrict the variety of the generated fonts. Based on observations of current font generation models, we propose a simple solution to the font generation task. Instead of representing the font style vector with attributes annotated in the dataset, we introduce a transformer-based pre-trained language model into the font generation task to learn the mapping between a free-form font style description and the font style vector. We evaluated the proposed model on both existing font style descriptions and newly written ones. The generated fonts show that the proposed model can produce high-quality, patent-free fonts from the style description supplied by the designer.
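The central mechanism described above, mapping a free-form style description to a style vector that conditions the font generator, can be sketched as follows. This is a toy illustration under stated assumptions, not the paper's implementation: the hash-based "encoder" merely stands in for a pre-trained transformer text encoder, and the embedding and style dimensions, the `StyleMapper` class, and the linear projection are all hypothetical.

```python
import numpy as np

STYLE_DIM = 64  # assumed style-vector size; the paper's actual dimension is not given here


def encode_description(text: str, embed_dim: int = 128) -> np.ndarray:
    """Toy text encoder: hash each token to a pseudo-embedding and average.

    Stands in for a pre-trained transformer encoder; a real system would
    use the transformer's sentence-level output instead.
    """
    vecs = []
    for token in text.lower().split():
        # Seed a generator per token so the same token always maps
        # to the same pseudo-embedding within one run.
        rng = np.random.default_rng(abs(hash(token)) % (2**32))
        vecs.append(rng.standard_normal(embed_dim))
    return np.mean(vecs, axis=0)


class StyleMapper:
    """Hypothetical linear map from a text embedding to the generator's style vector."""

    def __init__(self, embed_dim: int = 128, style_dim: int = STYLE_DIM, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Random (untrained) projection; in the paper this mapping is learned.
        self.W = rng.standard_normal((embed_dim, style_dim)) / np.sqrt(embed_dim)

    def __call__(self, text: str) -> np.ndarray:
        z = encode_description(text, self.W.shape[0]) @ self.W
        return z / np.linalg.norm(z)  # unit-norm style vector for the generator


mapper = StyleMapper()
style = mapper("bold rounded sans-serif with a friendly feel")
print(style.shape)  # (64,)
```

The resulting vector would then replace the annotation-derived style attributes as the conditioning input to a GAN generator, which is the substitution the proposed method makes.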