We revisit isotonic regression on linear orders, the problem of fitting monotonic functions that best explain the data, in online settings. It was previously shown that online isotonic regression is unlearnable in a fully adversarial model, and this led to its study in the fixed design model. Here we instead develop the more practical random permutation model. We show that the regret is bounded above by the excess leave-one-out loss, for which we develop efficient algorithms and matching lower bounds. We also analyze the class of simple and popular forward algorithms, and make recommendations on where to look for algorithms for online isotonic regression on partial orders.
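For intuition about the underlying fitting problem, here is a minimal sketch of offline isotonic regression on a linear order via the standard pool adjacent violators algorithm (PAVA). This is only an illustration of the batch problem under squared loss, not the online algorithms studied in this work:

```python
def pava(y):
    """Fit the nondecreasing sequence minimizing squared error to y.

    Maintains a stack of blocks [mean, count]; whenever a new point
    violates monotonicity, adjacent blocks are pooled (averaged).
    """
    blocks = []  # each entry is [mean, count]
    for v in y:
        blocks.append([float(v), 1])
        # Pool while the last two block means violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            c = c1 + c2
            blocks.append([(m1 * c1 + m2 * c2) / c, c])
    # Expand blocks back to a full-length fitted sequence.
    fit = []
    for m, c in blocks:
        fit.extend([m] * c)
    return fit
```

For example, `pava([1, 3, 2])` pools the violating pair (3, 2) into their average, yielding the monotone fit `[1.0, 2.5, 2.5]`.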