Meshed Up: Learnt Error Correction in 3D Reconstructions

Abstract – Dense reconstructions often contain errors which prior work has minimised by using high-quality sensors and by regularising the output. Nevertheless, errors persist. This paper proposes a machine-learning technique to identify errors in three-dimensional (3D) meshes. Beyond simply identifying errors, our method quantifies both the magnitude and the direction of depth-estimate errors when viewing the scene. This enables us to improve the reconstruction accuracy. We train a suitably deep network architecture on two 3D meshes: a high-quality laser reconstruction and a lower-quality stereo-image reconstruction. The network predicts the error in the lower-quality reconstruction with respect to the high-quality one, having seen only the former at its input. We evaluate our approach by correcting two-dimensional (2D) inverse-depth images extracted from the 3D model, and show that our method improves the quality of these depth reconstructions by up to a relative 10% RMSE.
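To make the evaluation concrete, the sketch below shows how a predicted per-pixel error map could be used to correct an inverse-depth image and how the relative RMSE improvement is then measured against a reference. This is a minimal NumPy illustration, not the paper's implementation: the function names, the toy error model, and the assumption that the network's prediction recovers half of the true error are all illustrative assumptions.

```python
import numpy as np

def apply_depth_correction(inv_depth, predicted_error):
    """Subtract the predicted per-pixel error from the inverse-depth
    image to obtain a corrected estimate (illustrative convention)."""
    return inv_depth - predicted_error

def relative_rmse_improvement(raw, corrected, reference):
    """Relative reduction in RMSE of the corrected inverse-depth image
    versus the raw one, both measured against a reference image."""
    rmse_raw = np.sqrt(np.mean((raw - reference) ** 2))
    rmse_corrected = np.sqrt(np.mean((corrected - reference) ** 2))
    return 1.0 - rmse_corrected / rmse_raw

# Toy example with synthetic data (stands in for real reconstructions).
rng = np.random.default_rng(0)
reference = rng.uniform(0.1, 1.0, size=(4, 4))  # "laser" ground truth
noise = rng.normal(0.0, 0.05, size=(4, 4))      # stereo estimation error
raw = reference + noise                         # low-quality estimate
predicted_error = 0.5 * noise                   # imperfect prediction (assumed)
corrected = apply_depth_correction(raw, predicted_error)
print(relative_rmse_improvement(raw, corrected, reference))  # → 0.5
```

In this toy setup the prediction removes exactly half of the error at every pixel, so the RMSE halves and the relative improvement is 0.5; in practice the improvement depends on how well the network's error estimate matches the true residual.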

Further Info – For more experimental details please read our paper:

  • [PDF] M. Tanner, S. Saftescu, A. Bewley, and P. Newman, “Meshed Up: Learnt Error Correction in 3D Reconstructions,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, p. 3201–3206.
    @inproceedings{tanner2018meshed,
      title={Meshed Up: Learnt Error Correction in 3D Reconstructions},
      author={Tanner, Michael and Saftescu, Stefan and Bewley, Alex and Newman, Paul},
      booktitle={2018 IEEE International Conference on Robotics and Automation (ICRA)},
      year={2018},
      pages={3201--3206},
      url={},
      pdf={}
    }

For a quick overview, take a look at our video: