By treating dance as a long sequence of tokenized human motion, we build a system that synthesizes novel dance motions. We train a transformer architecture on motion-capture data encoded as sequences of discrete, character-like tokens. By prompting the model with different sequences or task tokens, we can generate motion conditioned on the trajectory of a single joint or on a specific dance move.
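The prompting scheme described above can be sketched as follows. This is a minimal illustration, not the system's actual implementation: the task-token names, vocabulary layout, and quantization resolution are all assumptions made for the example.

```python
# Hypothetical token layout: a few reserved task tokens, then quantized motion tokens.
TASK_TOKENS = {"joint_conditioned": 0, "move_conditioned": 1}  # assumed names
NUM_BINS = 64                 # quantization resolution (assumed)
OFFSET = len(TASK_TOKENS)     # motion tokens start after the task tokens

def quantize(angle, lo=-3.14159, hi=3.14159, bins=NUM_BINS):
    """Map a continuous joint angle (radians) to a discrete token id."""
    angle = max(lo, min(hi, angle))
    bin_idx = int((angle - lo) / (hi - lo) * (bins - 1))
    return OFFSET + bin_idx

def build_prompt(task, joint_angles):
    """Prefix a task token, then the tokenized motion of the conditioning joint.

    The transformer would continue this sequence autoregressively,
    producing motion tokens consistent with the conditioning signal.
    """
    return [TASK_TOKENS[task]] + [quantize(a) for a in joint_angles]

prompt = build_prompt("joint_conditioned", [0.0, 0.5, -0.5])
```

Feeding such a prompt to the trained model and sampling continuations would yield full-body motion tokens that can be de-quantized back into joint angles.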