Arabian Journal for Science and Engineering, pp. 1-24, 2025 (SCI-Expanded, Scopus)
Detecting camouflaged objects is highly challenging because their texture, pattern, and color characteristics closely match the background. Existing binary segmentation solutions struggle to detect camouflaged objects, since such objects have weak boundaries and background-like patterns. The purpose of camouflaged object detection (COD) is to detect objects that very closely resemble their background. In this study, an original camouflaged butterfly dataset called ERVA 1.0 is created, consisting of images of 10 butterfly species downloaded from search engines. Additionally, the raw training data is enlarged with data augmentation techniques. For COD, this study presents a two-stage solution: segmentation followed by object recognition.
For segmentation, the texture features of all test images in the ERVA 1.0 dataset are extracted with the Gabor filter. These features are then clustered with the K-means algorithm, and the original image is partitioned into regions according to texture.
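A minimal sketch of this texture-based segmentation step is shown below, assuming OpenCV and scikit-learn; the filter-bank parameters and the number of clusters are illustrative assumptions, not values reported in the study.

```python
# Hypothetical sketch: Gabor texture features followed by K-means region clustering.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def gabor_kmeans_segment(image_path, n_clusters=3):
    """Cluster pixels into texture regions using a small Gabor filter bank."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

    # Gabor responses at several orientations form the per-pixel feature vector.
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):  # 4 orientations (assumed)
        # ksize, sigma, theta, lambda, gamma, psi
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))

    features = np.stack(responses, axis=-1).reshape(-1, len(responses))

    # K-means groups pixels with similar texture responses into regions.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(gray.shape)

# region_map = gabor_kmeans_segment("butterfly.jpg")
```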
The local binary pattern (LBP) algorithm and a Euclidean distance calculation are used to determine which of these regions belongs to the butterfly object.
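The region-selection step could be sketched as follows, assuming scikit-image; the reference butterfly texture histogram and the uniform-LBP settings are assumptions made for illustration.

```python
# Hypothetical sketch: pick the region whose LBP histogram is closest
# (Euclidean distance) to a reference butterfly texture histogram.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, mask, points=8, radius=1):
    """Normalized uniform-LBP histogram over the masked pixels."""
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp[mask], bins=points + 2, range=(0, points + 2))
    return hist / max(hist.sum(), 1)

def select_butterfly_region(gray, region_map, reference_hist):
    """Return the cluster label with the smallest distance to the reference histogram."""
    distances = {}
    for label in np.unique(region_map):
        hist = lbp_histogram(gray, region_map == label)
        distances[label] = np.linalg.norm(hist - reference_hist)
    return min(distances, key=distances.get)
```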
Following the application of morphological operations to the identified butterfly region, pretrained deep learning models are employed to predict the species of the butterfly.
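A sketch of the mask refinement is given below, assuming OpenCV; the choice of opening plus closing and the kernel size are assumptions, since the study does not specify which morphological operations were applied.

```python
# Hypothetical sketch: clean up the selected region mask with morphological
# opening and closing before cropping the butterfly for classification.
import cv2
import numpy as np

def refine_mask(region_map, butterfly_label, kernel_size=7):
    mask = (region_map == butterfly_label).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask
```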
Segmentation success rates are 87.89% with the structural similarity (SSIM) metric and 83.64% with the Dice similarity coefficient. Pretrained deep learning models are then used to classify the species of the butterfly object obtained after segmentation.
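The two segmentation metrics can be computed as in the sketch below, assuming scikit-image for SSIM; the exact evaluation protocol (mask normalization, averaging over images) is an assumption.

```python
# Hypothetical sketch: SSIM and Dice between a predicted binary mask and ground truth.
import numpy as np
from skimage.metrics import structural_similarity

def dice_coefficient(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def ssim_score(pred, gt):
    return structural_similarity(pred.astype(np.float64), gt.astype(np.float64),
                                 data_range=1.0)
```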
Experiment 1 is conducted with unaugmented training data and Experiment 2 with training data enlarged by data augmentation techniques. The highest success rate for Experiment 1 is 92.29% with the InceptionResNetV2 model, and the highest success rate for Experiment 2 is 94.81% with the DenseNet121 model.
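A minimal sketch of the classification stage is shown below, assuming a Keras/TensorFlow setup with an ImageNet-pretrained DenseNet121 backbone; the head layers, augmentation settings, and directory layout are illustrative assumptions rather than the study's exact configuration.

```python
# Hypothetical sketch: DenseNet121 pretrained on ImageNet, fine-tuned
# to classify the 10 segmented butterfly species (Experiment 2 style data).
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 10

base = DenseNet121(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep ImageNet features frozen initially

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Simple augmentation pipeline for the augmented-data experiment (settings assumed).
train_gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                               horizontal_flip=True, zoom_range=0.2)
# train_flow = train_gen.flow_from_directory("erva10/train", target_size=(224, 224))
# model.fit(train_flow, epochs=20)
```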