Successful approaches to autonomous vehicle localisation and navigation typically involve 3D LIDAR scanners and a static, curated 3D map, both of which are expensive to acquire and maintain. We propose an experience-based approach to matching a local 3D swathe, built using a push-broom 2D LIDAR, against multiple prior 3D maps, each collected during normal driving under different conditions. Local swathes are converted to a combined 2D height and reflectance representation, and we exploit the GPU rendering pipeline to densely sample the localisation cost function, providing robustness and a wide basin of convergence. Prior maps are incrementally built into an experience-based framework from multiple traversals of the same environment, capturing changes in environment structure and appearance over time. The LIDAR localisation solutions from each prior map are fused with vehicle odometry in a probabilistic framework to provide a single pose solution suitable for automated driving. Using this framework, we demonstrate real-time centimetre-level localisation using LIDAR data collected in a dynamic city environment over a period of a year.
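The fusion of per-map localisation solutions with odometry can be illustrated with a minimal sketch. The paper does not specify the exact estimator, so the following assumes independent Gaussian pose estimates (one per prior map, plus an odometry prediction) combined by inverse-covariance weighting, the standard information-form fusion step; the function name and interfaces are hypothetical.

```python
import numpy as np

def fuse_pose_estimates(means, covariances):
    """Fuse independent Gaussian pose estimates into a single pose.

    Each (mean, covariance) pair might come from LIDAR localisation
    against one prior map, or from the vehicle odometry prediction.
    Fusion is by inverse-covariance (information) weighting, so
    low-uncertainty solutions dominate the combined estimate.
    """
    # Accumulate information matrices and information vectors.
    info = sum(np.linalg.inv(C) for C in covariances)
    info_mean = sum(np.linalg.inv(C) @ m for m, C in zip(means, covariances))
    # Convert back from information form to mean/covariance form.
    fused_cov = np.linalg.inv(info)
    fused_mean = fused_cov @ info_mean
    return fused_mean, fused_cov

# Example: two 2D (x, y) estimates with equal uncertainty average out,
# and the fused covariance shrinks relative to either input.
m, C = fuse_pose_estimates(
    [np.array([0.0, 0.0]), np.array([2.0, 2.0])],
    [np.eye(2), np.eye(2)],
)
```

In practice such a fusion step would run inside a recursive filter with the odometry as the motion model, and disagreeing per-map solutions would be gated before fusion; this sketch shows only the measurement-combination step.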