
A Comparative Analysis of Dimension Reduction Algorithms on Hyperspectral Data

Kate Burgers, Yohannes Fessehatsion, Sheida Rahmani, Jia Yin Seo
Advisor: Todd Wittman
August 7, 2009

Abstract In the past there has been little study to determine an optimal dimension reduction algorithm for hyperspectral images. In this paper we investigate the performance of different dimension reduction algorithms, including PCA and Isomap, by comparing them on various hyperspectral tasks: runtime, classification, anomaly detection, target detection, and unmixing. We use both synthetic and real hyperspectral images throughout the experiment, and the results are analyzed both quantitatively and qualitatively.

1 Introduction

Hyperspectral sensors collect information as a set of images represented by different bands. Hyperspectral images are three-dimensional images with sometimes over 100 bands, whereas regular images have only three bands: red, green, and blue. Each pixel has a hyperspectral signature that represents different materials. Hyperspectral images can be used for geology, forestry and agriculture mapping, land cover analysis, and atmospheric analysis. Even though hyperspectral images can sometimes contain over 100 bands, relatively few bands can explain the vast majority of the information. For this reason, hyperspectral images are mapped into a lower dimension while preserving the main features of the original data, a process called dimension reduction. There is evidence that performing dimension reduction may affect the performance of image processing tasks, including target detection, anomaly detection, classification, and unmixing [1]. The dimension reduction codes are taken from van der Maaten's MATLAB dimension reduction toolbox, except for ICA and classification, which come from the ENVI 4.6.1 software [2, 3]. In this paper, we compare various algorithms that have been developed for dimension reduction. The paper investigates the following eight dimension reduction techniques: (1) PCA [4], (2) Kernel PCA [5], (3) Isomap [6], (4) Diffusion Maps [7], (5) Laplacian Eigenmaps [8], (6) ICA [9], (7) LMVU [10], and (8) LTSA [11]. The aims of the paper are (1) to evaluate the performance of each dimension reduction algorithm on the basic image processing tasks, and

(2) to determine the intrinsic dimensionality of hyperspectral images. The performance of each algorithm is evaluated qualitatively for classification and quantitatively for runtime, target detection, anomaly detection, and unmixing. This paper is organized as follows: Section 2 presents and discusses the eight dimension reduction techniques. Section 3 introduces the data used and the methods for producing synthetic images for target detection, anomaly detection, and unmixing. Section 4 discusses the approach to measuring runtime, anomaly and target detection, classification, and unmixing. Section 5 outlines the results of the different algorithms on these tasks. Section 6 discusses the conclusions of the experiment.

2 Dimension Reduction Algorithms

Eight different dimension reduction methods are compared in the experiment on runtime and on the tasks of classification, anomaly detection, target detection, and unmixing. These methods fall into two categories: linear and nonlinear. There are often differences between linear and nonlinear methods in both runtime and performance. The nonlinear methods, such as Isomap, Laplacian Eigenmaps, LMVU, and KPCA, are thought to give better results at the cost of slower runtime [1]. For our experiments, different subsets of algorithms were tested for different tasks. The methods were all run with their default parameters, except for KPCA, for which we chose the polynomial kernel.
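As a reference point for the linear case, the core of PCA fits in a few lines. The sketch below is a minimal numpy illustration on made-up data, not the toolbox code used in our experiments:

```python
import numpy as np

def pca_reduce(X, k):
    """Project pixel signatures (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                      # center each band
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # n_pixels x k embedding

# toy "image": 2500 pixel signatures with 162 bands, reduced to 3 bands
rng = np.random.default_rng(0)
X = rng.random((2500, 162))
Y = pca_reduce(X, 3)
print(Y.shape)  # (2500, 3)
```

The nonlinear methods replace this global eigendecomposition with neighborhood graphs or kernels, which is where their extra runtime comes from.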

3 Images

We used four different hyperspectral images to determine runtime and classification performance, because some algorithms give good results on some images while failing to give adequate results on others. These four hyperspectral images are Urban (162 bands) [12], Terrain (162 bands) [12], Smith Island (126 bands) [13], and San Diego Airport (224 bands) [3], shown in figure 1. Urban is an image of a Walmart in Copperas Cove, Texas. Terrain is an aerial image of roads in a desert region. Smith Island and San Diego Airport are aerial images of Smith Island, Maryland and an airport in San Diego. Ideally, we would use these four images to compare the success of each algorithm at correctly identifying each target, anomaly, and class, but we are unable to do so because we have no ground truth for these images. As such, we created different synthetic images for each task. From the hyperspectral images, we obtained a number of spectral signatures of identifiable materials and labeled them accordingly, keeping the same number of bands as the images from which they were acquired. The synthetic images for anomaly detection and target detection were 50 × 50 pixels, whereas the synthetic images for unmixing were 100 × 100 pixels downsampled to 50 × 50. Each pixel was assigned a known material, and Gaussian noise was introduced with mean zero and variance 0.00005. There are three synthetic images for anomaly detection and two for target detection, shown in figure 2. The spectral signatures were taken from Urban and San Diego Airport; the resulting images are labeled Fake Urban, Urban Blocks, and Fake San Diego,

(a) Urban

(b) Terrain

(c) Smith Island

(d) San Diego Airport

Figure 1: Images used in our experiment.

(a) Fake San Diego

(b) Fake Urban

(c) Urban Blocks

Figure 2: Images used for anomaly detection, target detection, and unmixing, with Gaussian noise added.

using the same number of bands as the parent image. Fake Urban and Fake San Diego were used for both anomaly detection and target detection; Urban Blocks was used solely for anomaly detection. For unmixing, Fake Urban and Fake San Diego were enlarged to 100 × 100 pixels, each pixel was blurred with its neighbors, and the image was downsampled.
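The construction of a synthetic cube can be sketched as follows. The two linear-ramp signatures below are hypothetical stand-ins for the real spectral signatures taken from Urban and San Diego Airport; the noise parameters match the ones stated above:

```python
import numpy as np

def make_synthetic(labels, signatures, noise_var=0.00005, seed=0):
    """Build an h x w x bands cube: each pixel gets its material's signature plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    cube = signatures[labels].astype(float)      # look up a signature per pixel
    return cube + rng.normal(0.0, np.sqrt(noise_var), cube.shape)

# 50 x 50 image with 162 bands (as in Fake Urban) and two hypothetical materials
labels = np.zeros((50, 50), dtype=int)
labels[20:22, 20:22] = 1                         # a small 2 x 2 block of the second material
signatures = np.vstack([np.linspace(0.0, 1.0, 162), np.linspace(1.0, 0.0, 162)])
cube = make_synthetic(labels, signatures)
print(cube.shape)  # (50, 50, 162)
```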

4 Experimental Design

4.1 Classification

Classification is the process of taking an image and breaking it into specified classes depending on differences in the hyperspectral signature of each material [14]. Pixels with similar hyperspectral signatures are generally grouped into the same class. An effective dimension reduction algorithm will retain enough information about each material that, when classification is run, the same material is grouped into the same class. Meiching Fong's paper mentioned that dimension reduction could possibly improve classification [1], and we set out to analyze and evaluate this idea experimentally. A set of dimension reduction methods and images was chosen for testing. The methods are KPCA, PCA, and ICA, each reduced to three, four, and five bands. The images are Urban, Terrain, San Diego Airport, and Smith Island. We performed classification with the K-Means classification method from ENVI. The default parameters for K-Means were retained, except for the number of classes, which depends on the image; the number of endmembers was used to select the number of classes in each image. The number of classes chosen for each image is shown in table 1. The classification results were evaluated qualitatively by comparing methods and the different dimensionalities within each method. The images were analyzed based on the accuracy of the size of each class, the ability to distinguish manmade from natural materials, and consistency in assigning the same materials to the same class.

Image               Classes
Urban               5
San Diego Airport   7
Terrain             4
Smith Island        4

Table 1: The number of classes in each image.
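We did not reimplement ENVI's K-Means, but the underlying loop is simple enough to sketch. The toy data below is hypothetical; a real run would use the dimension-reduced pixel signatures:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal K-Means: repeatedly assign pixels to the nearest centroid and re-center."""
    # deterministic initialization for reproducibility (real K-Means typically seeds randomly)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # distance from every pixel signature to every centroid
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# toy data: two well-separated "materials" in 5 reduced bands
X = np.vstack([np.zeros((50, 5)), np.ones((50, 5))])
labels = kmeans(X, 2)
print(labels[0] != labels[-1])  # True: the two materials land in different classes
```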

4.2 Anomaly Detection

In our experiment, we tested the performance of five different dimension reduction methods on the tasks of anomaly and target detection, and we also tested for the intrinsic dimensionality of each image on both tasks. The dimension reduction methods compared are Diffusion Map, ICA, KPCA, LTSA, and PCA. Dimension reduction was performed on the prepared synthetic hyperspectral data to reduce the dimensionality down to 3, 4, and 5 dimensions. We compare the methods by means of the True Positive Rate (TPR) and False Positive Rate (FPR) of the anomalies or targets detected. Anomaly detection refers to selecting pixels from a given hyperspectral image that are dissimilar to their surroundings [15]. Finding an anomaly is generally not an easy task: anomalies are small, so they can be mistaken for noise, and deciding what counts as an anomaly is fairly subjective. The tool used in our experiment is the RX Anomaly Detection in ENVI with a local mean source. On each of the three synthetic hyperspectral images, we run each of the five dimension reduction methods to reduce the number of dimensions down to 3, 4, and 5 bands. This gives a total of 45 result images to compare with the 3 original images, from which we determine both the optimal dimension reduction method and the intrinsic dimensionality.
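ENVI's RX detector uses a local mean; a minimal global-mean variant conveys the idea. The score is the Mahalanobis distance of each pixel from the background statistics, and the synthetic cube below is hypothetical:

```python
import numpy as np

def rx_scores(cube):
    """Global RX: Mahalanobis distance of every pixel from the scene mean."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(b))  # regularized
    D = X - mu
    return np.einsum('ij,jk,ik->i', D, cov_inv, D).reshape(h, w)

# 50 x 50 cube with 3 reduced bands and one implanted anomaly
rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, (50, 50, 3))
cube[25, 25] += 10.0
scores = rx_scores(cube)
print(np.unravel_index(scores.argmax(), scores.shape))
```

Thresholding the score map yields the binary anomaly mask; edges inflate the local statistics, which is why the RX detector tends to flag them, as we observe below.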

4.3 Target Detection

The purpose of target detection is to find the pixels in an image that have the same spectral signature as a pre-determined material [15]. The difficulty in finding the targets arises from three sources: 1) with noise added, the spectral signatures of the target pixels are not identical; 2) target patches come in various sizes, down to one pixel, which can be mistaken for noise; and 3) each target sits on a different background, some backgrounds being more similar to the target than others. To perform target detection, we first select a pixel as the target spectral signature. The tool used in our experiment is the SAM Target Finder from ENVI. The parameters are set to their defaults, with a SAM maximum angle of 0.1. We first run the five dimension reduction methods on the two synthetic images down to output dimensionalities of 3, 4, and 5, obtaining 30 results. Each of these results is fed through the SAM Target Finder, using the same target pixel for each image. The resulting images are tested against the original synthetic images to evaluate the performance of each dimension reduction method. We tested whether each of the dimension reduction methods was able to retain the anomalies and targets in the original image even after removing the majority of the bands. To compare the different methods, we use the metrics TPR and FPR. First, we create

the binary image representation of the original synthetic images, with one denoting the presence of an anomaly or target and zero denoting the background. Second, we take the dimension reduction result images and obtain the binary image representation of each one. We compare the binary image of the original data with each of the binary result images to obtain a TPR and FPR for each. Comparing the two images pixel by pixel, we classify each pixel as True Positive (TP), False Positive (FP), True Negative (TN), or False Negative (FN). The TPR and FPR are defined as follows:

TPR = TP / (TP + FN)

and

FPR = FP / (FP + TN).

Each rate ranges from zero to one. The optimal value is one for the TPR and zero for the FPR. The different methods can be evaluated using these rates.
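The pixel-wise comparison reduces to counting the four outcomes over a pair of binary masks. A small sketch with a hypothetical 5 × 5 mask:

```python
import numpy as np

def tpr_fpr(truth, pred):
    """TPR and FPR between two binary masks (1 = anomaly/target, 0 = background)."""
    truth, pred = truth.astype(bool), pred.astype(bool)
    tp = np.sum(truth & pred)
    fn = np.sum(truth & ~pred)
    fp = np.sum(~truth & pred)
    tn = np.sum(~truth & ~pred)
    return tp / (tp + fn), fp / (fp + tn)

truth = np.zeros((5, 5), dtype=int)
truth[1:3, 1:3] = 1                  # four true target pixels
pred = truth.copy()
pred[1, 1] = 0                       # one miss
pred[4, 4] = 1                       # one false alarm
tpr, fpr = tpr_fpr(truth, pred)
print(tpr, fpr)                      # TPR = 3/4, FPR = 1/21
```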

4.4 Unmixing

Hyperspectral images generally have very low spatial resolution, so each pixel tends to contain multiple materials mixed together. Unmixing is the process of determining which materials are in each pixel, and in what proportions [16]. In order to perform unmixing, we first had to extract endmembers, which are pixels in the image that are representative of a single material. For this task, we used SMACC Endmember Extraction in ENVI. Since the endmembers are not found in any particular order, we then had to match the extracted endmembers to known endmembers in our image before finally unmixing the images using Linear Spectral Unmixing in ENVI. In this portion of the experiment, we looked at four different algorithms, PCA, KPCA, ICA, and Isomap, and compared their results to each image before it was dimensionally reduced. We compare the algorithms in three ways: first by considering the number of endmembers found, second by determining how many of those endmembers matched a known endmember, and third by determining how closely the unmixed images match the ground truth. In choosing endmembers, there were several parameters to set. The first was the maximum number of endmembers. We did not want to limit the number of endmembers directly, so we chose an upper bound of 30. To keep from getting pixels that were too impure, we set a maximum error of 0.1 and also chose to have the algorithm coalesce redundant endmembers. Once a set of endmembers had been chosen, we matched them to our ground truth endmembers. We could then compare the ground truth map for each material to the map for each corresponding endmember, if one was found, using the Euclidean metric across the pixels. The root mean squared error E is given by

E = sqrt( (1/N) * Σ_i (G_i − R_i)² ),

where N is the number of pixels, G_i is the ith pixel of the ground truth image, and R_i is the ith pixel of the result image.
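The final two steps, estimating abundances and scoring them against ground truth, can be sketched with ordinary least squares. This is a simplification: ENVI's Linear Spectral Unmixing can also apply constraints on the abundances, which we omit here, and the two endmember signatures are hypothetical:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares abundance estimate for one pixel; endmembers are the columns."""
    a, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return a

def rmse(ground_truth, result):
    """Root mean squared error E between a ground truth map and a result map."""
    g, r = ground_truth.ravel().astype(float), result.ravel().astype(float)
    return np.sqrt(np.mean((g - r) ** 2))

# two hypothetical endmembers in 4 bands; the pixel is a 30/70 mixture of them
E = np.array([[1.0, 0.0],
              [0.8, 0.2],
              [0.1, 0.9],
              [0.0, 1.0]])
pixel = E @ np.array([0.3, 0.7])
print(unmix(pixel, E))  # close to [0.3, 0.7]
```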

5 Results

5.1 Runtime

The four hyperspectral images, Urban, Terrain, San Diego Airport, and Smith Island, are all of different sizes. When testing runtime, we wanted to control for the size of each image, so a 2500-pixel (50 × 50) sample of each image was taken. We tested the runtime of PCA, KPCA, LLE, Isomap, LMVU, Laplacian Eigenmaps, LTSA, and Diffusion Map. All eight dimension reduction algorithms were run three times, each time reducing to three, four, or five bands. The time was recorded in seconds and the average runtime over the three runs was computed. For each image, PCA strongly outperformed the other seven algorithms, reducing the 50 × 50 samples in less than one second. LLE, Laplacian Eigenmaps, and LTSA each took under five seconds to run; KPCA and Diffusion Map each took under a minute; and the total runtime for Isomap was just under seven minutes. LMVU was the slowest algorithm: its runtime ranged from 29 to 41 minutes per image. There is an unknown issue with LLE, which failed to reduce the dimensions on San Diego. Table 2 shows the average recorded runtime of each algorithm.

              PCA     KPCA  LLE    Isomap  LMVU    Laplacian  LTSA   DM
Urban         0.07 s  56 s  2.7 s  429 s   2073 s  1.7 s      4.9 s  42 s
Terrain       0.04 s  54 s  2.4 s  431 s   2101 s  1.6 s      4.7 s  42 s
Smith Island  0.04 s  49 s  2.5 s  420 s   1781 s  1.7 s      4.7 s  42 s
San Diego     0.07 s  75 s  --     428 s   2451 s  1.8 s      4.9 s  42 s

Table 2: The average runtime of all eight algorithms reducing the 50 × 50 samples to three, four, and five bands. Each time was recorded in seconds. There is no entry for LLE on San Diego because LLE fails to run on that image.

5.2 Classification

We found that for classification, different methods are more suitable for different images. As each image has its own characteristics, some methods are better at retaining those characteristics than others. Overall, the classification rate did improve after dimension reduction. Since ground truth was not available, the results were analyzed qualitatively.

One obvious difference was that after dimension reduction, certain materials were more distinct and were therefore correctly assigned to different categories. In our experiments we chose to reduce each image to 3, 4, and 5 bands. As shown in figure 7 in appendix A, after running classification the results were identical across the different dimensionality values. Results varied among the four images. As shown in figure 8 in appendix A, in the San Diego image some of the noticeable differences were with KPCA: some materials, for example grass and concrete, were classified as the same. With ICA, the planes and grass were classified as the same. PCA was more detailed and classified the airplanes and grass differently. The results were drastically different for the Terrain image, as shown in figure 9 in appendix A. Terrain has four classes, and with PCA there was too much detail, which took away from the smoothness of the grass and tree areas; it identified the shadows as a separate category. KPCA performs better on this image; it retains the smoothness of the grass area. ICA also gave good results, but the shapes of the trees were not preserved as well. For Smith Island, the best result was achieved with ICA, as shown in figure 10 in appendix A: its categories were more distinct and more consistent with the original hyperspectral image. In the Urban image, PCA performed best, as shown in figure 11 in appendix A. It captured all the detailed differences and was able to identify grass, trees, and concrete as different materials. With KPCA, grass, trees, and concrete were identified as the same, and ICA was also not accurate.

5.3 Anomaly Detection

The performance of each dimension reduction method after anomaly detection was evaluated against the ground truth synthetic image to obtain a TPR and FPR for each. Table 3 in appendix B shows the results for each dimension reduction method on the three images and three dimensionalities. Results obtained on the three images tend to vary, as the images differ in the materials, sizes, and locations of the anomalies. Fake Urban has all its anomalies in blocks of 2 × 2 pixels, laid on different backgrounds at various locations in the image. As seen in figure 3, this image also has a large number of edges, and we notice that the RX detector has a tendency to detect edges as anomalies, resulting in a fairly large number of false positives. Out of all the methods, only KPCA was able to achieve a TPR of one in all three dimensionalities; all the other methods except Diffusion Map achieved a TPR of one in at least one of the lower dimensionalities. The FPRs of the methods are very similar, except for a jump at dimensionality 4 for Diffusion Map and LTSA. There does not seem to be a correlation between the TPRs and FPRs of each method. We also observe a trend of increasing TPRs and constant FPRs as the dimensionality increases. Fake San Diego has anomalies of various sizes and shapes, again laid on different backgrounds at various locations in the image. Like Fake Urban, this image has a large number of edges, which results in the generation of false positives.

(a) Diusion Map

(b) ICA

(c) KPCA

(d) LTSA

(e) PCA

Figure 3: Here are the dimension reduction results with all five methods at dimensionality four. Diffusion Map and LTSA both have a fair number of false positives, with DM detecting the actual anomalies a little better; both methods detected edges as anomalies. ICA, KPCA, and PCA were able to detect all the correct anomalies, but with different numbers of false positives. Although KPCA was able to select all the anomalies in all three dimensionalities, it selects more false positives than ICA and PCA.

(a) 3 bands

(b) 4 bands

(c) 5 bands

Figure 4: Here are three anomaly detection results on Urban Blocks with Diffusion Map. Each pixel in the image has a value that indicates how close that pixel is to being an anomaly: the whiter the pixel, the closer it is. At dimensionality 3, the rightmost columns of anomalies are not detected. The two leftmost columns at dimensionality 4 are darker than those at dimensionality 5. This shows that in the case of Diffusion Map, dimensionality 5 produces the best (brightest) results.

In this image, KPCA is again the only method that generated a TPR of one in all three dimensionalities, with LTSA coming second with a TPR of one in two dimensionalities. Both PCA and ICA generated the lowest FPRs, with fairly similar results, but ICA has a higher TPR. Again, there is a positive correlation between TPR and dimensionality, and only at dimensionality five did we observe a TPR of one for all methods. The Urban Blocks image contains five different types of anomalies on the same background, each in various sizes. Unlike the other two images, this image does not have many edges, which helps the RX detector avoid generating any false positives. This image also contains a larger number of anomalies than the other two, and all anomalies are perfectly lined up. In this image, ICA and PCA give the highest TPR in all dimensionalities. All of the methods generated an FPR of zero, mainly due to the arrangement of the anomalies. Diffusion Map had the lowest average TPR in dimensionalities 3 and 4. As seen in figure 4, there is a positive correlation between TPR and dimensionality.

5.4 Target Detection

The results of target detection are tabulated in table 4 in appendix B, with all five methods and all three dimensionalities. Both images yield similar results, and the methods are very successful in finding all the targets. LTSA is the only method that did not have a TPR of one in all dimensionalities. Diffusion Map and LTSA are the only two methods that did not yield an FPR of zero in all dimensionalities. In figure 5, we can see that Diffusion Map picks out many false positives along with the true positives. Again in target detection, only dimensionality 5 gives perfect results for all methods and all images. Observing the unlikely result of nearly every TPR being one led us to conduct another sub-experiment on the effect of varying

(a) 3 bands

(b) 4 bands

(c) 5 bands

Figure 5: Here are three target detection results from Diffusion Map. Although dimensionalities 3 and 4 were both able to detect the targets, both also detected non-targets as targets, with dimensionality 4 performing the worst. Only dimensionality 5 obtained all the correct targets with no false positives.

this threshold, the SAM Maximum Angle. The results of this sub-experiment are shown in table 5 in appendix B. We found that we can vary this threshold greatly and still obtain consistent results. Most importantly, LTSA is the most robust to changes in the threshold, obtaining a TPR of one and an FPR of zero throughout. On the other hand, Diffusion Map is the most vulnerable to this change, and its TPR and FPR vary more as the threshold changes. Ultimately, we verified that a SAM Maximum Angle of 0.1 is the optimal parameter, with all methods achieving a TPR of one and an FPR of zero.
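The rule behind this sub-experiment measures the angle between each pixel's spectrum and the target spectrum and flags pixels below the maximum-angle threshold. A minimal sketch of the spectral angle mapper on hypothetical data (not ENVI's implementation):

```python
import numpy as np

def sam_detect(cube, target, max_angle=0.1):
    """Flag pixels whose spectral angle (radians) to the target is below max_angle."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    cos = (X @ target) / (np.linalg.norm(X, axis=1) * np.linalg.norm(target))
    angles = np.arccos(np.clip(cos, -1.0, 1.0))
    return (angles < max_angle).reshape(h, w)

# background pixels point in a different spectral direction than the target
rng = np.random.default_rng(1)
cube = np.abs(rng.normal(1.0, 0.2, (50, 50, 4)))
target = np.array([5.0, 1.0, 1.0, 1.0])
cube[10, 10] = target * 1.2      # same direction as the target, just brighter
mask = sam_detect(cube, target, max_angle=0.05)
print(int(mask.sum()))           # only the implanted target pixel is flagged
```

Because the angle ignores overall brightness, a brighter or darker copy of the target is still detected; widening `max_angle` is what gradually admits false positives.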

5.5 Unmixing

The results of unmixing can be evaluated by looking at the endmembers as well as the abundance maps. Since we used synthetic data, we know exactly how many endmembers are in each image, what the endmembers are, and in what abundance they appear in each pixel. We found that in general, when Isomap or PCA was run, slightly too few endmembers or the right number were found, but when KPCA or ICA was run, more endmembers were found than actually existed. Despite finding so many endmembers in these cases, not many of them actually matched a known endmember, whereas the endmembers found after Isomap or PCA tended to match known endmembers. The results of endmember selection are shown in table 6 in appendix C. In the resulting abundance maps, the correct pixels had the highest percentage of a given material, but many pixels that contained none of that material were given a non-zero value as well, as illustrated in figure 6. The algorithm also picked up the noise that was added to the synthetic image. To quantitatively compare the dimension reduction methods, we took the root mean squared error of the abundance map for each endmember separately. PCA tended to have the lowest error. While KPCA generally performed well, it was inconsistent. The results are summarized in table 8 in appendix C. We also performed unmixing on the original synthetic images to see whether dimension

(a) Ground Truth for Fake San Diego

(b) Fake San Diego unmixed after ICA is run to 3 bands

Figure 6: The abundance map of grass in Fake San Diego. Notice that unmixing tends to find some grass in places where there is not actually any grass.

reduction improved the results. We found that the results are generally similar whether or not we performed dimension reduction, with the exception of PCA, which improved the results. The results without dimension reduction are summarized in table 7 in appendix C. We also wanted to know whether the images have an intrinsic dimensionality. For unmixing, every algorithm except ICA performed best at dimensionality 5 most of the time. With ICA, there seems to be no strong correlation between dimensionality and accuracy.

6 Conclusion

We compared eight different dimension reduction methods on five different hyperspectral tasks. PCA is the fastest algorithm, while LMVU is the slowest. After PCA is run, urban images are classified more accurately, and after KPCA is run, rural images are classified more accurately. For anomaly detection, KPCA works best for images with many edges, but PCA and ICA perform comparably on images without many edges. Target detection worked perfectly on synthetic images when we ran KPCA, PCA, or ICA. PCA usually resulted in less unmixing error than the other algorithms. In conclusion, we found that PCA outperforms the other methods in every task we investigated.


A Classification Images

(a) PCA with 3 dimensions

(b) PCA with 4 dimensions

(c) PCA with 5 dimensions

Figure 7: Images with different dimensionality values. The images are very similar, regardless of dimensionality.


(a) Original San Diego Airport image
(b) Classified image without dimension reduction
(c) Dimensionally reduced image using PCA
(d) Dimensionally reduced image using KPCA
(e) Dimensionally reduced image using ICA

Figure 8: Different dimension reduction methods on the San Diego image.

(a) Original Terrain image
(b) Classified image without dimension reduction
(c) Dimensionally reduced image using PCA
(d) Dimensionally reduced image using KPCA
(e) Dimensionally reduced image using ICA

Figure 9: Different dimension reduction methods on the Terrain image.


(a) Original Smith Island image
(b) Classified image without dimension reduction
(c) Dimensionally reduced image using PCA
(d) Dimensionally reduced image using KPCA
(e) Dimensionally reduced image using ICA

Figure 10: Different dimension reduction methods on the Smith Island image.


(a) Original Urban image
(b) Classified image without dimension reduction
(c) Dimensionally reduced image using PCA
(d) Dimensionally reduced image using KPCA
(e) Dimensionally reduced image using ICA

Figure 11: Different dimension reduction methods on the Urban image.


B Anomaly and Target Detection Tables


Fake Urban      3-TPR   3-FPR   4-TPR   4-FPR   5-TPR   5-FPR
DM              0.1429  0.1429  0.1429  0.0854  1       0.0396
ICA             0.8571  0.0392  1       0.0392  1       0.0396
KPCA            1       0.0396  1       0.0396  1       0.0405
LTSA            1       0.0388  0.1429  0.0866  1       0.0396
PCA             0.8571  0.0396  1       0.0392  1       0.0396

Fake San Diego  3-TPR   3-FPR   4-TPR   4-FPR   5-TPR   5-FPR
DM              0.3077  0.0229  0.2308  0.0748  1       0.0314
ICA             0.0769  0.004   0.7692  0.0072  1       0.006
KPCA            1       0.0221  1       0.0225  1       0.0281
LTSA            0.0769  0.029   1       0.0209  1       0.031
PCA             0.0769  0.0044  0.4615  0.0072  1       0.006

Urban Blocks    3-TPR   3-FPR   4-TPR   4-FPR   5-TPR   5-FPR
DM              0.4188  0       0.6188  0       1       0
ICA             0.6063  0       0.9938  0       1       0
KPCA            0.4063  0       0.8063  0       1       0
LTSA            0.6     0       0.6375  0       1       0
PCA             0.6063  0       0.9938  0       1       0

Table 3: Anomaly detection results for all three images, five methods, and 3, 4, and 5 dimensions.


Fake Urban      3-TPR   3-FPR   4-TPR   4-FPR   5-TPR   5-FPR
DM              1       0.0028  1       0.0433  1       0
ICA             1       0       1       0       1       0
KPCA            1       0       1       0       1       0
LTSA            1       0.0057  0.3571  0       1       0
PCA             1       0       1       0       1       0

Fake SD         3-TPR   3-FPR   4-TPR   4-FPR   5-TPR   5-FPR
DM              1       0.123   1       0.123   1       0
ICA             1       0       1       0       1       0
KPCA            1       0       1       0       1       0
LTSA            1       0       1       0       1       0
PCA             1       0       1       0       1       0

Table 4: Target detection results for both images, five methods, and 3, 4, and 5 dimensions.

SAM Angle   DM TPR  DM FPR  ICA TPR  ICA FPR  KPCA TPR  KPCA FPR  LTSA TPR  LTSA FPR  PCA TPR  PCA FPR
0.005       0.25    0       0.0357   0        0.0357    0         1         0         0.0357   0
0.01        0.7143  0       0.1429   0        0.1429    0         1         0         0.3571   0
0.1         1       0       1        0        1         0         1         0         1        0
0.55        1       0.04    1        0        1         0         1         0         1        0
0.75        1       0.0433  1        0.0113   1         0         1         0         1        0.0433

Table 5: Target detection results for Fake Urban at dimensionality 5 with varying SAM Maximum Angles (0.005, 0.01, 0.1, 0.55, and 0.75).


C Unmixing Tables
(a) Results for Fake Urban.

Method      Found  Matched
ICA - 3     6      2
ICA - 4     7      1
ICA - 5     8      1
KPCA - 3    13     5
KPCA - 4    14     5
KPCA - 5    14     4
PCA - 3     5      5
PCA - 4     5      5
PCA - 5     6      6
Isomap - 3  4      4
Isomap - 4  5      5
Isomap - 5  6      6

(b) Results for Fake San Diego.

Method      Found  Matched
ICA - 3     5      4
ICA - 4     28     6
ICA - 5     14     7
KPCA - 3    12     4
KPCA - 4    16     4
KPCA - 5    17     4
PCA - 3     5      5
PCA - 4     5      5
PCA - 5     5      5
Isomap - 3  3      3
Isomap - 4  4      4
Isomap - 5  6      6

Table 6: The number of endmembers found after each algorithm was run on each image, as well as the number of those endmembers that closely matched a known endmember. Fake Urban had six materials and Fake San Diego had seven.

Urban                 San Diego
Car      9.0723       Grass    3.0577
Roofing  N/M          Dirt     N/M
Road     6.1146       Plane    15.7316
Dirt     10.3639      Street   38.7782
Tree     29.4156      Roofing  N/M
Grass    N/M          Cement   N/M
                      Car      N/M
Found    4            Found    3
Matched  4            Matched  3

Table 7: The results of unmixing on the synthetic data without running dimension reduction.

(a) Results for Fake Urban.

Method      Car      Roofing  Road     Dirt     Tree     Grass
ICA - 3     17.1628  16.0306  N/M      N/M      N/M      N/M
ICA - 4     25.5465  N/M      N/M      N/M      N/M      N/M
ICA - 5     37.1267  N/M      N/M      N/M      N/M      N/M
KPCA - 3    3.8302   7.785    10.4356  N/M      15.9174  30.1688
KPCA - 4    3.7915   6.6579   9.5683   N/M      11.2643  30.4059
KPCA - 5    3.4849   6.0378   4.7265   N/M      8.8447   N/M
PCA - 3     11.1771  N/M      6.9818   6.5991   14.6616  16.9262
PCA - 4     2.0607   N/M      3.7827   2.079    1.7435   2.0893
PCA - 5     0.1336   0.3075   0.2083   0.1672   0.4505   0.5688
Isomap - 3  14.0413  21.5723  18.2611  5.8537   N/M      N/M
Isomap - 4  10.5166  11.1113  14.3377  4.9535   11.407   N/M
Isomap - 5  9.2738   10.517   9.386    4.1337   12.0029  31.1289

(b) Results for Fake San Diego.

Method      Grass    Dirt     Plane    Street   Roofing  Cement   Car
ICA - 3     24.6383  80.884   20.4755  N/M      N/M      66.783   N/M
ICA - 4     31.8303  13.4311  4.5548   17.2976  17.4452  19.9295  N/M
ICA - 5     17.34    6.7828   2.9187   16.2582  7.9619   11.3591  2.5806
KPCA - 3    27.828   11.4607  N/M      6.4651   N/M      8.4147   N/M
KPCA - 4    24.6011  10.8406  N/M      7.82     N/M      6.0202   N/M
KPCA - 5    23.8715  12.4254  N/M      8.066    N/M      5.314    N/M
PCA - 3     2.0846   11.9723  14.9182  2.5456   N/M      9.6941   N/M
PCA - 4     0.8043   4.9288   15.6564  0.7424   N/M      2.5046   N/M
PCA - 5     0.7245   4.3842   15.4881  0.7383   N/M      1.9937   N/M
Isomap - 3  28.8731  39.2464  N/M      N/M      31.0046  N/M      N/M
Isomap - 4  31.3294  20.9429  34.111   N/M      41.9153  N/M      N/M
Isomap - 5  20.3963  11.844   8.5502   18.7444  22.8257  29.3472  N/M

Table 8: Root mean squared error in the unmixed image for each material. N/M indicates that no endmember was found that matched the material.


References
[1] Fong, M. (2007, August 31). Dimension Reduction on Hyperspectral Images.
[2] van der Maaten, L. (2008, November). Matlab Toolbox for Dimensionality Reduction v0.7. http://ticc.uvt.nl/~lvdrmaaten/Laurens_van_der_Maaten/Matlab_Toolbox_for_Dimensionality_Reduction.html
[3] RSI (2008). ENVI Version 4.6.1 Computer Software. Research Systems Inc.
[4] Jolliffe, I. T. (1986). Principal Component Analysis (2nd ed.). New York: Springer-Verlag.
[5] Schölkopf, B., Smola, A., & Müller, K. (1998). Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation, 10, 1299-1319.
[6] Tenenbaum, J. B., de Silva, V., & Langford, J. C. (2000). A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science, 290, 2319.
[7] Lafon, S., & Lee, A. B. (2006, September). Diffusion Maps and Coarse-Graining: A Unified Framework for Dimensionality Reduction, Graph Partitioning, and Data Set Parameterization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(9), 1393-1403.
[8] Belkin, M., & Niyogi, P. (2003, June). Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation, 15(6), 1373-1396.
[9] Hyvärinen, A., & Oja, E. (2000). Independent Component Analysis: Algorithms and Applications. Neural Networks, 13(4-5), 411-430.
[10] Venna, J. (2007, June 8). Dimensionality Reduction for Visual Exploration of Similarity Structures. Doctoral dissertation, Helsinki University of Technology.
[11] Zhang, Z., & Zha, H. (2002). Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent Space Alignment (Tech. Rep. No. CSE-02-019). Pennsylvania State University, Department of Computer Science and Engineering.
[12] US Army Topographic Engineering Center, HyperCube. http://www.tec.army.mil/Hypercube/
[13] Web site for the University of Virginia's Long Term Ecological Research Program [Online]. Available: http://www.vcrlter.virginia.edu
[14] Canty, M. J. (2007). Image Analysis, Classification, and Change Detection in Remote Sensing with Algorithms for ENVI/IDL. Florida: CRC Press.


[15] Chang, C.-I., & Ren, H. (2000, March). An Experiment-Based Quantitative and Comparative Analysis of Target Detection and Image Classification Algorithms for Hyperspectral Imagery. IEEE Transactions on Geoscience and Remote Sensing, 38(2), 1044-1063.
[16] Winter, M. E. (2000). N-FINDR: An Algorithm for Fast Autonomous Spectral Endmember Determination in Hyperspectral Data.

