JOIG 2025 Vol.13(2):164-173
doi: 10.18178/joig.13.2.164-173

Deep Learning-Based Pose Regression for Satellites: Handling Orientation Ambiguities in LiDAR Data

Margherita Piccinin * and Ulrich Hillenbrand
Institute of Robotics and Mechatronics, German Aerospace Center (DLR), 82234 Weßling, Germany
Email: margherita.piccinin@dlr.de (M.P.); ulrich.hillenbrand@dlr.de (U.H.)
*Corresponding author

Manuscript received June 7, 2024; revised June 15, 2024; accepted July 24, 2024; published March 26, 2025.

Abstract—In orbital spaceflight today, there is high demand for servicing satellites, assembling space structures, and clearing orbits of harmful debris. Orbital robotics is a critical technology for accomplishing these tasks. On-board autonomy of a servicing spacecraft requires imaging or 3D sensors, LiDAR in the case considered here, and intelligent processing of their data to estimate the relative pose between servicer and target satellite. In this study, we investigate a parametrization for Deep Learning (DL)-based pose regression that can be superior to the standard parameters. In particular, we show that higher prediction accuracy can be achieved by adapting the parametrization to symmetries or, more generally, pose ambiguities of the target object. This result is established in extensive experiments on both synthetic and real LiDAR data for several DL-based methods. Moreover, our own lightweight network is both more accurate and faster than classical methods, even on a standard Central Processing Unit (CPU), and more accurate than the other recent DL-based methods we compare against. Our synthetically trained regressor also achieves excellent sim2real transfer.
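The abstract does not detail how the parametrization is adapted to pose ambiguities; that is covered in the full paper. Purely as an illustrative sketch, and not the authors' parametrization, the snippet below shows one common way to make an orientation error symmetry-aware: the error between the predicted and ground-truth rotations is taken as the minimum geodesic distance over a hypothetical discrete symmetry group of the target, so that symmetry-equivalent orientations are not penalized.

```python
# Illustrative sketch only -- not the parametrization proposed in the paper.
# Assumes a target with a hypothetical 4-fold rotational symmetry about its z-axis.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Assumed symmetry group: body rotations of 0, 90, 180, 270 degrees about z.
SYMMETRY_GROUP = [R.from_euler("z", angle, degrees=True) for angle in (0, 90, 180, 270)]

def symmetry_aware_angle_error(r_pred: R, r_true: R) -> float:
    """Geodesic rotation error (radians), minimized over symmetry-equivalent poses."""
    errors = []
    for s in SYMMETRY_GROUP:
        # Residual rotation between the prediction and each symmetry-equivalent truth.
        delta = r_pred.inv() * (r_true * s)
        errors.append(delta.magnitude())  # rotation angle of the residual
    return min(errors)

# Example: a prediction 90 degrees off about z counts as correct under 4-fold symmetry.
r_true = R.from_euler("xyz", [10, 20, 30], degrees=True)
r_pred = r_true * R.from_euler("z", 90, degrees=True)
print(symmetry_aware_angle_error(r_pred, r_true))  # ~0.0
```

In a regression setting, the same idea can be folded into the training loss or into the choice of orientation targets, which is the general direction the abstract alludes to; the concrete parametrization used by the authors is described in the paper itself.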

Keywords—pose estimation, deep learning, LiDAR data, satellite, orbital robotics

Cite: Margherita Piccinin and Ulrich Hillenbrand, "Deep Learning-Based Pose Regression for Satellites: Handling Orientation Ambiguities in LiDAR Data," Journal of Image and Graphics, Vol. 13, No. 2, pp. 164-173, 2025.

Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC-BY-4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.