A large body of recent research has focused on developing supervised buried threat detection algorithms for ground penetrating radar (GPR) data. Such algorithms learn to identify landmines in GPR data automatically, using examples of threat and non-threat data. Training data typically consist of small 2-dimensional patches extracted from a larger image, or volume, of GPR data, and currently the most popular criterion for choosing training (or testing) patches is high GPR signal energy. In this work, we investigate translational variance in the patches, which occurs when relevant GPR signals (e.g., hyperbolic landmine signatures) are not consistently centered, or aligned, within the extracted patches. Specifically, we (i) provide evidence that translational variance is introduced into the data when popular energy-based patch extraction methods are employed, and (ii) estimate the classification performance lost by supervised algorithms due to this effect. We also present a simple method to help alleviate the translational variance problem. We hypothesize that reducing translational variance prior to supervised learning may facilitate the use, and success, of image features.
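The energy-based selection criterion described above can be illustrated with a minimal sketch. The code below is not the paper's method; it is a hypothetical stand-in that picks the patch window with the highest total squared amplitude in a synthetic image. When an off-center clutter return carries enough energy, the selected window is dragged away from the target center, producing exactly the kind of translational variance discussed here.

```python
import numpy as np

def extract_patch(image, patch_shape):
    """Return the patch whose window has the highest total energy.

    Hypothetical stand-in for energy-based patch selection: energy is
    the squared amplitude summed over a patch-sized sliding window,
    computed with a 2-D cumulative sum (integral image).
    """
    ph, pw = patch_shape
    energy = image ** 2
    cs = np.cumsum(np.cumsum(energy, axis=0), axis=1)
    cs = np.pad(cs, ((1, 0), (1, 0)))  # zero row/column for the integral image
    # win[i, j] = sum of energy[i:i+ph, j:j+pw]
    win = cs[ph:, pw:] - cs[:-ph, pw:] - cs[ph:, :-pw] + cs[:-ph, :-pw]
    r, c = np.unravel_index(np.argmax(win), win.shape)
    return image[r:r + ph, c:c + pw], (r, c)

rng = np.random.default_rng(0)
image = 0.05 * rng.standard_normal((64, 64))
# Synthetic "target" response centered at row 30, column 30.
image[28:33, 28:33] += 1.0
# Bright off-center clutter: its energy pulls the max-energy window
# away from the target center (translational variance).
image[25:27, 36:38] += 1.5

patch, (r, c) = extract_patch(image, (11, 11))
# An 11x11 patch perfectly centered on the target would start at (25, 25);
# the energy criterion instead favors a window that also covers the clutter.
print((r, c))
```

With only the target present, the selected window would be centered on it; the clutter shifts the window rightward, so the target is no longer centered in the extracted patch.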