
A Toolkit for Remote Sensing Enviroinformatics Clustering

Fazlul Shahriar, George Bonev

Advisors: Michael Grossberg, Irina Gladkova, Srikanth Gottipati

Issues:

• Remotely sensed data is typically vast.

• The size of the data requires advanced tools that can explore it semi-automatically.

• Clustering is one such tool.

Implementation:

• Many clustering algorithms have been proposed in the literature, but they are dispersed across multiple libraries in different languages. This makes it difficult to test the algorithms on the applications at hand.

• Our goal is to create a single, platform-independent library so that users can test these algorithms on remote sensing data.

• To accomplish this, we chose the Python programming language, which offers a MATLAB-like interface and at the same time handles large databases.

• Furthermore, Python allows easy integration with C/C++/R libraries.

Clustering Module:

k-means - A top-down clustering algorithm which attempts to find k representative centers for the data. The initial means are selected from the training data itself. (A minimal sketch appears after this group of entries.)

Fuzzy k-means - A top-down clustering algorithm which attempts to find k representative centers for the data. The initial means are selected from the training data itself. This algorithm uses a slightly different gradient search than the standard k-means algorithm, but generally yields the same final solution.

Expectation-Maximization - Estimates the means and covariances of the components in a Gaussian mixture model.

[Figure: Clustering obtained using the EM algorithm, where 5 clusters were specified and the run started from a random initial point; the algorithm usually gets stuck in a local minimum. Shown alongside the clustering obtained with a whitening preprocessing step followed by the EM algorithm.]

Competitive learning - Competitive learning clustering, where the nearest cluster center is updated according to the position of a randomly selected training pattern.

Leader-follower - Basic leader-follower clustering, which is similar to competitive learning but additionally generates a new cluster center whenever a new input pattern differs by more than a threshold distance \theta from the existing clusters. (A sketch of this scheme follows the DSLVQ entry below.)

ADDC (agglomerative clustering) - An on-line (single-pass) clustering algorithm which accepts a single sample at each step, updates the cluster centers, and generates new centers as needed. The algorithm is efficient in that it generates the cluster centers in a single pass over the data.
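To make the flavor of the module concrete, here is a minimal k-means sketch in plain NumPy. It is an illustration only, not the toolkit's actual API; the function name and the convergence test are our own choices.

    import numpy as np

    def kmeans(data, k, n_iter=100, seed=0):
        # Initial means are selected from the training data itself.
        rng = np.random.default_rng(seed)
        centers = data[rng.choice(len(data), size=k, replace=False)]
        for _ in range(n_iter):
            # Assign each point to its nearest center.
            dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each center to the mean of the points assigned to it.
            new_centers = np.array([
                data[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                for j in range(k)
            ])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return centers, labels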
DSLVQ (distinction sensitive linear vector quantization) - Performs learning vector quantization (i.e., represents a data set by a small number of cluster centers) using a distinction or classification criterion rather than the traditional sum-squared-error criterion.
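The on-line schemes above (competitive learning, leader-follower, ADDC) share a common skeleton: present one pattern at a time, move the winning center, and, in the leader-follower case, spawn a new center when the pattern is too far from every existing one. A minimal leader-follower sketch; the learning rate eta is an assumed parameter of the illustration:

    import numpy as np

    def leader_follower(data, theta, eta=0.1):
        # Start with the first pattern as the only cluster center.
        centers = [data[0].astype(float)]
        for x in data[1:]:
            dists = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(dists))
            if dists[j] > theta:
                # New center: the pattern differs by more than theta
                # from all existing clusters.
                centers.append(x.astype(float))
            else:
                # Move the winning center toward the pattern.
                centers[j] += eta * (x - centers[j])
        return np.array(centers)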

Minimum spanning tree (undirected) - Builds a minimum spanning tree for a data set based on
nearest neighbors.
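A sketch of how such a tree can be obtained with SciPy's sparse-graph routines; the dense pairwise-distance graph below stands in for whatever nearest-neighbor graph the toolkit actually builds:

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    def mst(data):
        # Dense symmetric matrix of pairwise Euclidean distances.
        dist = squareform(pdist(data))
        # Sparse matrix whose nonzero entries are the tree edges.
        tree = minimum_spanning_tree(dist)
        rows, cols = tree.nonzero()
        return list(zip(rows.tolist(), cols.tolist()))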

Connected components - Finds connected components for a data set based on nearest neighbors. Returns a list of the connected components of the given graph.
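Assuming the neighbor graph is defined by a distance threshold (one plausible reading of "based on nearest neighbors"), the components can be found with SciPy:

    import numpy as np
    from scipy.sparse.csgraph import connected_components
    from scipy.spatial.distance import pdist, squareform

    def components(data, threshold):
        dist = squareform(pdist(data))
        # Link every pair of distinct points closer than the threshold.
        adjacency = (dist < threshold) & (dist > 0)
        n, labels = connected_components(adjacency, directed=False)
        # Return the components as lists of point indices.
        return [np.flatnonzero(labels == c) for c in range(n)]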

Graph cut - Graph cut clustering is achieved by cutting edges of the graph to form a good set of connected components, such that the weights of within-component edges are small compared to the weights of across-component edges.

[Figure: Clustering obtained using the k-means algorithm, where 5 clusters were specified and the run started from a random initial point. Shown alongside the clustering obtained with a whitening preprocessing step followed by the k-means algorithm.]
Spectral clustering - Spectral clustering techniques make use of the spectrum of the similarity
matrix of the data to perform dimensionality reduction for clustering in fewer dimensions.
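Spectral methods are also the standard relaxation of the graph-cut objective above. A minimal sketch of the embedding step, assuming a precomputed similarity matrix S (e.g., from the Make graph utility below); an ordinary k-means run on the embedded rows then completes the clustering:

    import numpy as np

    def spectral_embedding(S, n_dims):
        # Symmetrically normalize the similarity matrix: D^(-1/2) S D^(-1/2).
        d_inv_sqrt = 1.0 / np.sqrt(S.sum(axis=1))
        M = d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :]
        # The leading eigenvectors give the low-dimensional coordinates.
        vals, vecs = np.linalg.eigh(M)
        return vecs[:, -n_dims:]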

HDR (hierarchical dimensionality reduction) - Clusters similar features so as to reduce the dimensionality of the data.

[Figure: Code fragment from the clustering toolkit.]

[Figure: IPython, a MATLAB-like interface for Python.]
SOHC (stepwise optimal hierarchical clustering) - Bottom-up clustering. The algorithm starts by assuming each training point is its own cluster and then iteratively merges the two clusters that change a clustering criterion the least, until the desired number of clusters, k, is reached.
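SciPy's agglomerative machinery provides one concrete instance of this rule: Ward linkage merges, at each step, the pair of clusters whose union increases the sum-of-squares criterion the least. A sketch under that assumption:

    from scipy.cluster.hierarchy import fcluster, linkage

    def sohc(data, k):
        # Each training point starts as its own cluster; merge bottom-up.
        Z = linkage(data, method='ward')
        # Cut the merge tree when exactly k clusters remain.
        return fcluster(Z, t=k, criterion='maxclust')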

Utility Functions:
Make graph (similarity matrix) - Given a set of data points A, the similarity matrix may be defined as a matrix S in which each element represents a measure of the similarity between points i and j in A.
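One common concrete choice is a Gaussian (RBF) similarity; the kernel width sigma below is a free parameter of the illustration, not something the toolkit prescribes:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    def make_graph(A, sigma=1.0):
        # S[i, j] = exp(-||A_i - A_j||^2 / (2 sigma^2))
        sq = squareform(pdist(A, 'sqeuclidean'))
        return np.exp(-sq / (2.0 * sigma ** 2))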

Normalization (standard deviation based) - Normalizes a group of observations on a per-feature basis. This is done by dividing each feature by its standard deviation across all observations.

UniqueRand - Generates a unique set of random points drawn from N(0,1).

[Figure: Modes obtained during the mean shift algorithm; red dots represent the local peaks of the density estimate of the data. Shown alongside the clustering obtained using a combination of the mean shift and connected components algorithms.]
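The standard-deviation normalization above follows the same convention as SciPy's scipy.cluster.vq.whiten; a one-line NumPy equivalent:

    import numpy as np

    def normalize(obs):
        # Divide each feature (column) by its standard deviation
        # across all observations (rows).
        return obs / obs.std(axis=0)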

Training - A nearest neighbor classifier is used to classify a test data set with the clustering obtained from the training data set.

UniqueVector - Computes the unique set of feature vectors from a given set of feature vectors.

[Figure: A 3-d cloud of points which could easily be clustered using a parametric method like the Expectation-Maximization (EM) algorithm, contrasted with a 2-d cloud of points where clustering using normal-distribution-based methods could fail, while methods like spectral and geometric clustering algorithms could do a better job.]
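A sketch of the Training utility's classification step using a k-d tree; the function name is ours:

    import numpy as np
    from scipy.spatial import cKDTree

    def classify(train_points, train_labels, test_points):
        # train_labels: NumPy array of cluster labels for the training points.
        # Label each test point with the cluster label of its
        # nearest neighbor among the training points.
        tree = cKDTree(train_points)
        _, nearest = tree.query(test_points, k=1)
        return train_labels[nearest]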
Whitening transform - Performed on a d-dimensional data set: first subtracts the sample mean from each point, and then multiplies the data set by the inverse of the square root of the covariance matrix.
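A sketch via the eigendecomposition of the sample covariance (assumes the covariance matrix is nonsingular):

    import numpy as np

    def whiten_transform(data):
        # Subtract the sample mean from each point.
        centered = data - data.mean(axis=0)
        # Multiply by the inverse square root of the covariance matrix.
        cov = np.cov(centered, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)
        inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
        return centered @ inv_sqrt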

Validation Module:

Cross validation - A statistical method for validating a predictive model. Subsets of the data are held out for use as validation sets; a model is fit to the remaining data (the training set) and used to predict for the validation set. Averaging the quality of the predictions across the validation sets yields an overall measure of prediction accuracy.
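A minimal k-fold sketch; fit and predict stand for whatever model-fitting and prediction callables the user supplies:

    import numpy as np

    def cross_val_accuracy(data, labels, fit, predict, n_folds=5, seed=0):
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(data)), n_folds)
        scores = []
        for held_out in folds:
            train = np.setdiff1d(np.arange(len(data)), held_out)
            model = fit(data[train], labels[train])
            # Score the model on the held-out validation set.
            scores.append(np.mean(predict(model, data[held_out]) == labels[held_out]))
        # Average prediction quality across the validation sets.
        return float(np.mean(scores))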

Bootstrap - A statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter.
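A sketch of the resampling loop for a standard-error estimate; statistic is any user-supplied callable:

    import numpy as np

    def bootstrap_std_error(sample, statistic, n_resamples=1000, seed=0):
        rng = np.random.default_rng(seed)
        n = len(sample)
        # Recompute the statistic on samples drawn with replacement.
        estimates = np.array([
            statistic(sample[rng.integers(0, n, size=n)])
            for _ in range(n_resamples)
        ])
        return estimates.std(ddof=1)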

Jackknife - Estimates the bias and standard error of a statistic computed from a random sample of observations. The basic idea behind the jackknife estimator lies in systematically recomputing the statistic estimate, leaving out one observation at a time from the sample set.
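A sketch using the standard leave-one-out formulas, for a 1-d sample:

    import numpy as np

    def jackknife(sample, statistic):
        n = len(sample)
        full = statistic(sample)
        # Recompute the statistic leaving out one observation at a time.
        loo = np.array([statistic(np.delete(sample, i)) for i in range(n)])
        bias = (n - 1) * (loo.mean() - full)
        std_err = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
        return bias, std_err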
[Figure: Physics based cluster labeling, 1: clustering obtained on a 2-d cloud of points by running the mean shift procedure with various initializations.]

[Figure: Physics based cluster labeling, 2: trajectories of the mean shift procedures drawn over the density estimate computed over the same data set; the peaks retained for final classification are marked with red dots.]

[Figure: Unsupervised nonparametric classification: MODIS cloud classification over the eastern part of the United States; colors are the same as those used in the scatter plot above.]

BIC - In parametric methods there might be various candidate models, each with a different number of parameters, to represent a data set. The Bayesian information criterion is a useful statistical criterion for model selection for parametric methods.
AIC - A tool for nonparametric model selection. Given a data set, several competing models may be ranked according to their AIC, with the one having the lowest AIC being the best.
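Both criteria reduce to simple penalized log-likelihood formulas, so ranking candidate models is a one-liner once each model's log-likelihood is known:

    import numpy as np

    def bic(log_likelihood, n_params, n_obs):
        # BIC = k * ln(n) - 2 * ln(L); lower is better.
        return n_params * np.log(n_obs) - 2.0 * log_likelihood

    def aic(log_likelihood, n_params):
        # AIC = 2 * k - 2 * ln(L); lower is better.
        return 2.0 * n_params - 2.0 * log_likelihood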
This research has been funded by NOAA-CREST grant # NA06OAR4810162
