Manuscript received July 26, 2023; revised August 25, 2023; accepted September 21, 2023; published January 4, 2024.
Abstract—Accurate lane position prediction is crucial for safe vehicle maneuvering in autonomous driving. Monocular cameras, aided by advances in AI, have proven effective for this task. However, predictions in 2D image space ignore lane height, producing poor results in uphill and downhill scenarios and in turn affecting downstream decisions, such as those made in the planning and control module. Previous 3D lane detection approaches relied solely on applying Inverse Perspective Mapping (IPM) to the encoded camera feature map, which may not be ordered according to the perspective principle, leading to sub-optimal predictions. To address these issues, we present the LS-3DLane network, inspired by the Lift-Splat-Shoot architecture, which predicts lane position in 3D space using a data-driven approach. The network also employs a Parallelism loss that exploits prior knowledge of lane geometry to improve performance; this loss can be used when training any 3D lane position prediction network to boost its performance. Our results show that LS-3DLane outperforms previous approaches such as Gen-LaneNet and 3D-LaneNet, with F-score improvements reaching 5.5% and 10%, respectively, in certain cases, while performing comparably on the X/Z error metrics. The Parallelism loss was shown to boost the F-score of every model under test (LS-3DLane, Gen-LaneNet, and 3D-LaneNet) by up to 2.8% in certain cases and to have a positive impact on nearly all other KPIs.

Keywords—monocular camera, 3D lane detection, Lift-Splat-Shoot, anchor, geometric structure

Cite: Mohammed Hassoubah and Ganesh Sistu, "Data Driven 3D-Lane Detection Using Parallelism Loss Function," Journal of Image and Graphics, Vol. 12, No. 1, pp. 16-22, 2024.

Copyright © 2024 by the authors.
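To illustrate the geometric prior behind a Parallelism loss, the sketch below shows one plausible formulation, not necessarily the paper's exact definition: if predicted lanes are sampled at shared longitudinal anchors, parallel lanes keep a constant lateral/vertical offset, so the variance of the per-anchor offset between any pair of lanes can serve as a penalty. The array layout `(n_lanes, n_anchors, 2)` and the function name are assumptions for this example.

```python
import numpy as np

def parallelism_loss(lanes: np.ndarray) -> float:
    """Hypothetical parallelism penalty for predicted 3D lanes.

    lanes: array of shape (n_lanes, n_anchors, 2) holding (x, z)
    coordinates sampled at shared longitudinal y-anchors (an assumed
    layout, not necessarily the paper's). Parallel lanes keep a
    constant offset, so the per-anchor offset between any two lanes
    has zero variance; non-parallel lanes are penalized.
    """
    n = lanes.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            offset = lanes[i] - lanes[j]       # (n_anchors, 2) per-anchor gap
            total += offset.var(axis=0).sum()  # zero when lanes are parallel
            pairs += 1
    return total / max(pairs, 1)               # average over lane pairs
```

For two perfectly parallel lanes the offset is constant at every anchor and the loss is zero; a diverging lane yields a strictly positive penalty, which is the behavior such a regularizer needs during training.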
This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.