Manuscript received May 17, 2024; revised June 24, 2024; accepted July 25, 2024; published December 16, 2024.
Abstract—Potholes pose a serious threat to road safety. This study applies YOLOv8 (You Only Look Once version 8), a state-of-the-art object detection algorithm, to detect potholes in road images. To guard against both overfitting and underfitting, the study adopts a set of image augmentation operations and tunes hyperparameters such as weight decay and learning rate. To build an effective pothole detection model, road images were collected and pothole locations were annotated with bounding boxes using Microsoft's Visual Object Tagging Tool (VoTT). The annotated data were used to train models that were fine-tuned for generalization and deployment, with training and validation errors monitored throughout. The dataset comprised 2000 images annotated with VoTT; 80% were used for training and the remaining 20% for validation and testing. For YOLOv8 training, exposure bounding boxes were used: each sample was copied and randomly perturbed, increasing the number of training samples to 9000. Training was accelerated on Google Colab using 500 compute units with High-RAM specifications. A series of experiments was performed to evaluate the contribution of individual techniques and to tune key hyperparameters such as weight decay, learning rate, and batch size. Experimentation yielded an optimal weight decay of 0.009, a learning rate of 0.001, and a batch size of 32. With these settings, the model achieved a training loss of 0.06 and a validation loss of 0.04, indicating that it neither overfits nor underfits and demonstrating the effectiveness of the proposed method for pothole detection.

Keywords—YOLOv8, exposure bounding box, Microsoft Visual Object Tagging Tool (VoTT)

Cite: Ken Gorro, Elmo Ranolo, Lawrence Roble, and Rue Nicole Santillan, "Road Pothole Detection Using YOLOv8 with Image Augmentation," Journal of Image and Graphics, Vol. 12, No. 4, pp. 417-426, 2024.

Copyright © 2024 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution, and reproduction in any medium, provided that the article is properly cited, the use is non-commercial, and no modifications or adaptations are made.
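For illustration only, the following minimal sketches outline how the steps summarized in the abstract might be realized; they are not the authors' code. The first sketch shows one plausible reading of the "exposure" augmentation (each annotated image copied with random exposure perturbations while its bounding-box labels are reused); the gamma range and number of copies are assumptions, as the abstract does not specify them.

```python
# Sketch of an exposure-style augmentation, assuming gamma perturbation;
# the exact perturbation, gamma range, and copy count are assumptions.
import random
import cv2
import numpy as np

def exposure_copies(image: np.ndarray, n_copies: int = 4, gamma_range=(0.5, 1.8)):
    """Return randomly exposure-perturbed copies of an image (labels unchanged)."""
    copies = []
    for _ in range(n_copies):
        gamma = random.uniform(*gamma_range)
        # Build a gamma lookup table and apply it to the uint8 image.
        table = np.array([(i / 255.0) ** gamma * 255 for i in range(256)]).astype("uint8")
        copies.append(cv2.LUT(image, table))
    return copies
```

The second sketch shows how the reported hyperparameters (learning rate 0.001, weight decay 0.009, batch size 32) could be passed to the Ultralytics YOLOv8 training API; the dataset file `potholes.yaml`, the epoch count, and the image size are hypothetical, as they are not stated in the abstract.

```python
# Sketch of YOLOv8 training with the reported hyperparameters (Ultralytics API).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # pretrained YOLOv8 weights (variant assumed)
model.train(
    data="potholes.yaml",         # hypothetical dataset config exported from VoTT annotations
    epochs=100,                   # assumed; epoch count not stated in the abstract
    batch=32,                     # batch size reported in the abstract
    lr0=0.001,                    # initial learning rate reported in the abstract
    weight_decay=0.009,           # weight decay reported in the abstract
    imgsz=640,                    # assumed default image size
)
metrics = model.val()             # evaluate on the held-out validation split
```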