Abstract—Periodic intrinsic and extrinsic (re-)calibrations are essential for modern perception and navigation systems deployed on autonomous robots. To date, intrinsic calibration models for LiDARs have been based on hypothesized physical mechanisms for how a spinning LiDAR functions, resulting in anywhere from three to ten parameters to be estimated from data. Instead, we propose to abstract away from the physics of a LiDAR type (spinning vs. solid-state, for example) and focus on the spatial geometry of the point cloud generated by the sensor. This leads to a unifying view of calibration. On experimental data, we show that our geometric model outperforms physics-based models for a spinning LiDAR. In simulation, we show how this perspective can be applied to a solid-state LiDAR. We conclude the paper by reporting on an open-source automatic system for target-based extrinsic calibration from a LiDAR to a camera.