Recently, prompt-based continual learning has become a new state of the art by using small prompts to steer a large pre-trained model toward each target task. However, we find that these methods still suffer from a memory problem, as the number of prompts must grow with the number of tasks the model learns. To address this limitation, inspired by the human hippocampus, we propose Lightweight Prompt Learning with General Representation (LPG), a novel rehearsal-free continual learning method. Through extensive experiments and accompanying analyses, we demonstrate LPG's promising performance. We expect our approach to spotlight a novel continual learning paradigm that uses a single prompt to hedge against memory problems while sustaining strong performance.