In many applications of human-computer interaction, a prediction of the human's next intended action is highly valuable. During locomotion, a walking person relies on visual input obtained through eye and head movements to control the direction and orientation of the body, and an analysis of these parameters can be used to infer the walker's intended goal. However, predicting human locomotion intentions is a challenging task because the interactions between these parameters are non-linear and highly dynamic. Distinguishing gazes directed at future waypoints from other gazes can be a helpful source of information. We employed LSTM models to investigate whether gaze and walk data can be used to predict whether walkers are currently looking at locations along their future path or in a direction away from it. Our models were trained on egocentric data from a virtual reality experiment in which 18 participants walked freely through a virtual environment while performing various tasks (walking along a curved path, avoiding obstacles, and searching for a target). The dataset included only egocentric features (position, orientation, and gaze) and no information about the environment. These features were used to determine when gaze was directed at future waypoints and when it was not. The trained model achieved an overall accuracy of 80%. Biasing the model towards correct classification of gazes away from the path increased the detection rate of these gazes to 90%. An analysis of model performance across the different walking tasks showed that accuracy was highest (85%) for curved-path walking and lowest (73%) for the target-search task. We conclude that online gaze measurements during walking can be used to estimate a walker's intention and to determine whether they are looking at the target of their future trajectory or away from it.
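The approach described above can be illustrated with a minimal sketch: a single-layer LSTM over sequences of egocentric features, followed by a sigmoid head that outputs the probability that the current gaze lies on the future path, and a class-weighted loss that biases the model towards detecting away-from-path gazes. This is not the authors' implementation; the feature dimensionality, hidden size, and class weight are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMClassifier:
    """Minimal single-layer LSTM with a sigmoid head (forward pass only).

    Input: a sequence of egocentric feature vectors, e.g. concatenated
    position, body orientation, and gaze direction per time step
    (the 9-D layout here is an assumption, not the paper's exact format).
    Output: P(gaze is directed at a location on the future path).
    """

    def __init__(self, n_features, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked gate weights for input, forget, cell, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * n_hidden, n_features + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.w_out = rng.normal(0.0, 0.1, n_hidden)
        self.b_out = 0.0
        self.n_hidden = n_hidden

    def forward(self, seq):
        H = self.n_hidden
        h = np.zeros(H)
        c = np.zeros(H)
        for x in seq:
            z = self.W @ np.concatenate([x, h]) + self.b
            i = sigmoid(z[0:H])          # input gate
            f = sigmoid(z[H:2 * H])      # forget gate
            g = np.tanh(z[2 * H:3 * H])  # candidate cell state
            o = sigmoid(z[3 * H:4 * H])  # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
        return sigmoid(self.w_out @ h + self.b_out)

def weighted_bce(p, y, w_away=3.0):
    """Binary cross-entropy with extra weight on the away-from-path
    class (y == 0), mimicking the bias towards off-path gaze detection.
    The weight value 3.0 is an illustrative assumption."""
    eps = 1e-9
    return -(y * np.log(p + eps) + w_away * (1 - y) * np.log(1 - p + eps))

# Usage: score a 30-step sequence of 9-D egocentric features.
rng = np.random.default_rng(1)
model = TinyLSTMClassifier(n_features=9, n_hidden=16)
p = model.forward(rng.normal(size=(30, 9)))
loss = weighted_bce(p, y=0.0)  # penalise a missed away-from-path gaze
```

The class weight in `weighted_bce` is the lever corresponding to the biasing step in the abstract: increasing `w_away` trades overall accuracy for a higher detection rate on gazes away from the path.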