The goal of Optimal Recovery is to uncover procedures that are worst-case optimal for the task of learning, or recovering, unknown functions from a priori information (the model) and a posteriori information (the observations). Optimal recovery procedures can sometimes be chosen as linear maps, at least when the observations are accurate. When the observations are corrupted, however, the problem becomes more delicate, especially when the focus is on the practical construction of (near) optimal procedures. This talk showcases positive results in three situations: deterministic observation errors bounded in $\ell_2$; deterministic observation errors bounded in $\ell_1$; and stochastic log-concave observation errors.
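To fix ideas, a schematic formulation of the setting described above, in notation chosen here purely for illustration (the model set $\mathcal{K}$, observation map $\Lambda$, error level $\varepsilon$, and norm are not specified in the abstract): the worst-case error of a recovery map $\Delta$, when observations $y = \Lambda f + e$ are corrupted by an error $e$ bounded in $\ell_2$, reads
\[
  \mathrm{err}(\Delta)
  \;=\;
  \sup_{\substack{f \in \mathcal{K} \\ \|e\|_2 \le \varepsilon}}
  \big\| f - \Delta(\Lambda f + e) \big\|,
\]
and an optimal recovery procedure is a map $\Delta$ minimizing this quantity; the question raised above is when such a (near) minimizer can be taken linear and constructed in practice, with the $\ell_1$-bounded and stochastic log-concave error models obtained by replacing the constraint on $e$ accordingly.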