Abstract
Forward-looking ground-penetrating radar (FLGPR) has recently been investigated as a remote-sensing modality for buried target detection (e.g., landmines). In this context, raw FLGPR data are beamformed into images, and computerized algorithms are then applied to automatically detect buried subsurface targets. Most existing algorithms are supervised, meaning that they are trained to discriminate between labeled target and nontarget imagery, usually based on features extracted from the imagery. A large number of features have been proposed for this purpose; however, it remains unclear which are the most effective. The first goal of this paper is to provide a comprehensive comparison of detection performance using existing features on a large collection of FLGPR data. Fusion of the decisions resulting from processing each feature is also considered. The second goal of this paper is to investigate two modern feature learning approaches from the object recognition literature, the bag-of-visual-words and the Fisher vector, for FLGPR processing. The results indicate that these feature learning approaches yield the best-performing FLGPR algorithm. The results also show that fusing the existing features with the learned features provides no additional performance improvement.
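To make the bag-of-visual-words idea concrete, the sketch below encodes an image chip as a normalized histogram of visual-word assignments. This is a minimal illustration of the general technique named in the abstract, not the authors' implementation: the dense patch descriptors, codebook size, and use of scikit-learn's KMeans are all assumptions made for illustration.

```python
# Minimal bag-of-visual-words (BoV) sketch for FLGPR-style image chips.
# Patch size, stride, and codebook size are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans


def extract_patches(chip, patch=8, stride=4):
    """Densely sample flattened patch descriptors from a 2-D image chip."""
    rows, cols = chip.shape
    descs = [
        chip[r:r + patch, c:c + patch].ravel()
        for r in range(0, rows - patch + 1, stride)
        for c in range(0, cols - patch + 1, stride)
    ]
    return np.asarray(descs)


def learn_codebook(training_chips, n_words=64, seed=0):
    """Cluster descriptors from training chips into a visual-word codebook."""
    descs = np.vstack([extract_patches(c) for c in training_chips])
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(descs)


def bov_feature(chip, codebook):
    """Encode one chip as a normalized histogram of visual-word assignments."""
    words = codebook.predict(extract_patches(chip))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = [rng.standard_normal((32, 32)) for _ in range(20)]  # stand-in chips
    cb = learn_codebook(train)
    print(bov_feature(rng.standard_normal((32, 32)), cb).shape)  # (64,)
```

In a supervised detector of the kind the abstract describes, the resulting histogram would be the feature vector fed to a classifier; the Fisher vector variant replaces the hard histogram with gradient statistics of a Gaussian mixture fit to the same local descriptors.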
| Original language | English |
| --- | --- |
| Pages (from-to) | 547-558 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Geoscience and Remote Sensing |
| Volume | 56 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2018 |
Keywords
- Buried object detection
- Feature extraction
- Ground-penetrating radar
- Image classification
- Object detection
- Radar imaging