Multi-frame Fusion Methods Based on Cepstral Coefficients for Drone Classification



Kumbasar N., Kılıç R., Oral E. A., Özbek İ. Y.

Erzincan Üniversitesi Fen Bilimleri Enstitüsü Dergisi, vol. 18, no. 3, pp. 892-916, 2025 (TRDizin)

Abstract

The increasing popularity of drones in recent years has resulted in privacy and security vulnerabilities. Today, drones can be easily purchased and used, leading to concerns about intrusion into private areas. Detecting the presence of drones and identifying their operating mode is therefore of great importance. Various detection techniques, including video, sound, thermal imaging, and Radio Frequency (RF) signals, are employed for drone detection and classification. In this study, RF signals are used to classify drones with the DroneRF dataset, a publicly available open-source dataset. It consists of four main classes: AR, Bebop, and Phantom drones, plus a background class for the case of no drone present. The dataset is further divided into ten sub-classes representing the different operating modes of each drone. Operating mode classification is crucial for security reasons, since each mode represents a drone's specific activity. To achieve high performance in drone classification, we propose a multi-frame majority voting method based on cepstral coefficients. Drone signals are divided into multiple frames (2, 4, and 8), and each frame is analyzed using Mel Frequency Cepstral Coefficient (MFCC) and Linear Frequency Cepstral Coefficient (LFCC) features. Each frame is classified by a Support Vector Machine (SVM), and majority voting is applied to the per-frame predictions. Results show 100% accuracy for drone classification (4-Class) and 99.11% accuracy for identifying operating modes (10-Class). The proposed method outperforms existing methods for drone classification on the DroneRF dataset.
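The following is a minimal sketch of the multi-frame majority-voting idea described in the abstract, assuming each recording is available as a 1-D NumPy array and using librosa's MFCC extraction as a stand-in for the paper's exact MFCC/LFCC feature pipeline. Function names such as `extract_features`, the mean-pooling of MFCCs per frame, and the RBF kernel choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import Counter
from sklearn.svm import SVC
import librosa

def extract_features(frame, sr, n_mfcc=13):
    # Mean MFCC vector for one frame; the paper also uses LFCC features.
    mfcc = librosa.feature.mfcc(y=frame.astype(float), sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def frame_features(signal, sr, n_frames=4):
    # Split one recording into equal-length frames (2, 4, or 8 in the paper)
    # and extract one feature vector per frame.
    return np.stack([extract_features(f, sr) for f in np.array_split(signal, n_frames)])

def train(signals, labels, sr, n_frames=4):
    # Every frame of every labelled recording becomes one SVM training sample.
    X = np.vstack([frame_features(s, sr, n_frames) for s in signals])
    y = np.repeat(labels, n_frames)
    clf = SVC(kernel="rbf")  # kernel choice is an assumption
    return clf.fit(X, y)

def predict_with_voting(clf, signal, sr, n_frames=4):
    # Classify each frame independently, then take the majority vote
    # across frames as the final label for the recording.
    frame_preds = clf.predict(frame_features(signal, sr, n_frames))
    return Counter(frame_preds).most_common(1)[0][0]
```

In this sketch, increasing `n_frames` gives the voter more independent predictions per recording, which is the mechanism the abstract credits for the performance gain.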