Manuscript received December 3, 2022; revised March 15, 2023; accepted May 1, 2023.
Abstract—Visually impaired people can use smartphone navigation applications to reach their destination. However, those applications do not provide a means of detecting moving objects. This paper presents an Android application that uses the smartphone's camera to provide real-time object detection. Images captured by the camera are processed digitally, and the model then predicts objects in the processed image using a Convolutional Neural Network (CNN) stored on the mobile device. The model returns a bounding box for each detected object, and these bounding boxes are used to calculate the distance from the object to the camera. The model used is SSD MobileNet V1, pre-trained on the Common Objects in Context (COCO) dataset. System testing is divided into object distance testing and accuracy testing. Results show that the margin of error in the distance calculation is below 5% for distances under 8 meters. The mean average precision is 0.9393, while the mean average recall is 0.4479. These results indicate that the system can recognize moving objects through the model embedded in a smartphone.

Keywords—Convolutional Neural Network (CNN), deep learning, object distance, object detection, support system for blind people

Cite: David H. Hareva, Sebastian A., Aditya R. Mitra, Irene A. Lazarusli, and Calandra A. Haryani, "Mobile Surveillance Siren Against Moving Object as a Support System for Blind People," Journal of Image and Graphics, Vol. 11, No. 2, pp. 170-177, June 2023.

Copyright © 2023 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.
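The abstract outlines an on-device pipeline in which a bundled SSD MobileNet V1 model returns bounding boxes that are then converted into distance estimates. As a rough illustration only, the Kotlin sketch below shows how such a model could be run on Android with the TensorFlow Lite Task Library and how a distance could be approximated from a bounding box under a simple pinhole-camera assumption. The model file name, thresholds, calibration constants, and the pinhole formula are illustrative assumptions, not the paper's actual implementation.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.detector.Detection
import org.tensorflow.lite.task.vision.detector.ObjectDetector
import org.tensorflow.lite.task.vision.detector.ObjectDetector.ObjectDetectorOptions

// Hypothetical asset name; the paper only states that SSD MobileNet V1 (COCO) is stored on the device.
private const val MODEL_ASSET = "ssd_mobilenet_v1.tflite"

// Illustrative calibration constants for a pinhole-camera distance estimate
// (not taken from the paper): an assumed real-world object height and a camera
// focal length expressed in pixels.
private const val ASSUMED_OBJECT_HEIGHT_M = 1.7f   // e.g., an average pedestrian
private const val FOCAL_LENGTH_PX = 1000f

/** Runs the bundled detector on a single camera frame and returns the detections. */
fun detectObjects(context: Context, frame: Bitmap): List<Detection> {
    val options = ObjectDetectorOptions.builder()
        .setMaxResults(5)          // keep only the most confident detections
        .setScoreThreshold(0.5f)   // illustrative confidence cut-off
        .build()
    val detector = ObjectDetector.createFromFileAndOptions(context, MODEL_ASSET, options)
    return detector.detect(TensorImage.fromBitmap(frame))
}

/**
 * Approximates the distance (in meters) to a detected object from the height of its
 * bounding box using the pinhole model: distance = realHeight * focalLength / pixelHeight.
 * This is a stand-in for the paper's distance calculation, which the abstract does not detail.
 */
fun estimateDistanceMeters(detection: Detection): Float {
    val boxHeightPx = detection.boundingBox.height()
    return ASSUMED_OBJECT_HEIGHT_M * FOCAL_LENGTH_PX / boxHeightPx
}
```

In practice, the focal length would be obtained from camera calibration or the Camera2 characteristics of the specific device, and the assumed object height would depend on the detected class; both values here are placeholders.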