We study the design of stabilizing policies for an airfoil under extreme turbulent flow dynamics. In practice, the standard approach for this task is to reactively correct deviations from the desired trajectory, e.g., via PID control, since learning the flow dynamics is usually challenging. Recent model-free reinforcement learning (RL) methods, which likewise do not require a dynamics model, have been shown to be promising alternatives to industry-standard controllers. However, these methods typically require vast numbers of samples and generalize poorly to new scenarios, which severely limits their applicability. In this work, by leveraging the domain knowledge that the underlying turbulent flow dynamics are well modeled in the frequency domain, we propose an efficient model-based RL framework, Fourier Adaptive Learning and CONtrol (FALCON). FALCON exploits this structure by learning the underlying system dynamics in a Fourier basis and deploying a model predictive control (MPC) approach for safe and sample-efficient control design. We show that FALCON quickly learns the fluid dynamics, adapts to changing flow conditions, and outperforms state-of-the-art methods while using an order of magnitude fewer samples than model-free methods. This makes FALCON the first model-based RL method deployed in real-world extreme turbulent environments. Moreover, we derive theoretical learning and performance guarantees for FALCON for a wide range of partially observable nonlinear dynamical systems.
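To make the core idea concrete, the following is a minimal toy sketch (not the authors' implementation) of the two ingredients the abstract names: fitting unknown dynamics with a Fourier feature basis via least squares, and using the learned model inside a simple one-step MPC loop. The dynamics function, feature map, horizon, and candidate-action grid are all illustrative assumptions.

```python
import numpy as np

K = 3  # number of Fourier harmonics in the feature map (assumed, for illustration)

def fourier_features(x, u):
    # Feature map: constant term, sin/cos harmonics of the state, and the control input
    feats = [np.ones_like(x)]
    for k in range(1, K + 1):
        feats.append(np.sin(k * x))
        feats.append(np.cos(k * x))
    feats.append(u)
    return np.stack(feats, axis=-1)

def true_dynamics(x, u):
    # Toy periodic dynamics the learner must identify (hypothetical, not from the paper)
    return 0.9 * np.sin(x) + 0.5 * u

# Collect a small batch of random transitions
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, 500)
u = rng.uniform(-1.0, 1.0, 500)
x_next = true_dynamics(x, u)

# Least-squares fit of the next state in the Fourier feature space
theta, *_ = np.linalg.lstsq(fourier_features(x, u), x_next, rcond=None)

def predict(x, u):
    # Learned model: linear in the Fourier features
    return fourier_features(np.atleast_1d(x), np.atleast_1d(u)) @ theta

def mpc_action(x0, candidates=np.linspace(-1.0, 1.0, 41)):
    # One-step MPC: pick the control that drives the predicted state closest to zero
    preds = predict(np.full(candidates.shape, x0), candidates)
    return candidates[int(np.argmin(preds ** 2))]

# Roll out the controller from an initial deviation; the state settles near zero
x_t = 2.0
for _ in range(20):
    x_t = true_dynamics(x_t, mpc_action(x_t))
```

Because the toy dynamics lie in the span of the chosen features, a few hundred samples suffice for an accurate fit, which is the sample-efficiency argument the abstract makes at a much smaller scale; the full method uses a multi-step MPC with safety constraints rather than this greedy one-step choice.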