
Characteristics of Images

Pixels
A digital image comprises a two-dimensional array of individual picture elements, called pixels, arranged in columns and rows.
Each pixel has an intensity value and a location address in the two-dimensional image.

The information from a narrow wavelength range is gathered and stored in a channel, also sometimes referred to as a band.
The data from each channel is represented as one of the primary colours and, depending on the relative brightness (i.e. the digital value) of each pixel in each channel, the primary colours combine in different proportions to represent different colours.

Multispectral sensors detect light reflectance in more than one or two bands of the EM spectrum. These bands represent different data. When combined through the red, green and blue guns of a colour monitor, they form different colours.
These different data sets are referred to as spectral bands, bands, or channels.

Spatial resolution
Refers to the size of the smallest object that can be resolved on the ground.
In a digital image, the resolution is limited by the pixel size, i.e. the smallest resolvable object cannot be smaller than the pixel size.

Images where only large features are visible are said to have coarse or low resolution.
In fine or high resolution images, small objects can be detected.

Examples: PAN (5 m), Landsat TM (30 m)

Radiometric resolution
Determines how finely the sensor can distinguish between objects of similar reflectance. The higher the radiometric resolution, the better we can distinguish even subtle differences in reflectance.

Imagery data are represented by positive digital numbers which vary from 0 to (one less than) a selected power of 2. This range corresponds to the number of bits used for coding the numbers in binary format: n bits give 2^n grey levels (e.g. 1 bit gives 2^1 = 2 levels; 8 bits give 2^8 = 256 levels).
(Compare, for example, the same scene displayed with 2-bit and 8-bit data.)
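As a quick illustration of the relationship between bit depth and the available DN range, the short Python sketch below prints the range for several bit depths and requantizes a random stand-in 8-bit array to 2 bits (purely illustrative data, not from any particular sensor):

```python
import numpy as np

# Number of grey levels for a given bit depth is 2**n_bits,
# giving digital numbers (DNs) from 0 to 2**n_bits - 1.
for n_bits in (1, 2, 8, 11):
    print(f"{n_bits}-bit data: DN range 0 to {2**n_bits - 1}")

# Illustrative requantization of an 8-bit image to 2 bits (4 grey levels),
# which visibly coarsens the radiometric resolution.
rng = np.random.default_rng(0)
img_8bit = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in image
img_2bit = (img_8bit // 64).astype(np.uint8)                  # 256 levels -> 4 levels
print(img_2bit)
```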

Spectral Resolution
Spectral resolution means the span of the wavelength range over which a spectral channel operates, i.e. the spectral bandwidth over which the radiation is integrated.

Different classes of features and details in an image can often be distinguished by comparing their responses over distinct wavelength ranges.
The finer the spectral resolution, the narrower the wavelength range for a particular channel or band.

Advanced multi-spectral sensors, called hyperspectral sensors, detect hundreds of very narrow spectral bands throughout the visible, near-infrared, and mid-infrared portions of the electromagnetic spectrum.

Temporal Resolution
Temporal resolution refers to how frequently observations are repeated over an area, and is equal to the time interval between successive observations. It depends on the orbital parameters and the swath width of the sensor.

DIGITAL IMAGE PROCESSING


Digital image processing may involve numerous procedures, including formatting and correcting the data, digital enhancement to facilitate better visual interpretation, or even automated classification of targets and features entirely by computer. The main groups of operations are:
Preprocessing
Image Enhancement
Image Transformation
Image Classification and Analysis

Preprocessing

Preprocessing functions involve those operations that are normally required prior to the main data analysis and extraction of information. They are generally grouped as radiometric corrections (correcting the data for sensor irregularities) and geometric corrections (correcting for geometric distortions due to sensor-Earth geometry variations).

STRIPING
Detector "drift" was different for each of the six detectors, causing the same brightness to be represented differently by each detector.

DROPPED LINES
Dropped lines occur when there are system errors which result in missing or defective data along a scan line.
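One simple way to repair dropped lines, sketched below under the assumption that the band is a NumPy array of DNs and the defective scan lines have already been flagged, is to replace each bad line with the average of its neighbouring lines (function and argument names are illustrative):

```python
import numpy as np

def fill_dropped_lines(band, bad_rows):
    """Replace dropped scan lines with the average of the lines above and below.

    band     : 2-D array of digital numbers for one spectral band
    bad_rows : indices of scan lines flagged as dropped/defective
    Assumes the neighbouring lines are good; operational systems may use
    more elaborate interpolation.
    """
    fixed = band.astype(float).copy()
    for r in bad_rows:
        above = fixed[r - 1] if r > 0 else fixed[r + 1]
        below = fixed[r + 1] if r < band.shape[0] - 1 else fixed[r - 1]
        fixed[r] = (above + below) / 2.0   # simple average of adjacent lines
    return fixed
```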

Geometric registration of the imagery to a known ground coordinate system may be performed as image-to-map registration or image-to-image registration, using ground control points (GCPs).

Geometric corrections are intended to compensate for these distortions so that the geometric representation of the imagery will be as close as possible to the real world.

The RESAMPLING process calculates the new pixel values from the original digital pixel values in the uncorrected image; common methods are listed below, and a nearest-neighbour sketch follows the list.

Nearest neighbour
Bilinear interpolation

Cubic convolution
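A minimal nearest-neighbour resampling sketch; inverse_map is a hypothetical function (e.g. a polynomial fitted to the GCPs) that maps each corrected-image pixel back to fractional coordinates in the uncorrected image. Because the DN of the closest original pixel is simply copied, no new pixel values are created:

```python
import numpy as np

def nearest_neighbour_resample(src, inverse_map, out_shape):
    """Resample an image with nearest-neighbour assignment.

    src         : uncorrected image (2-D array of DNs)
    inverse_map : callable (row, col) -> fractional (src_row, src_col)
    out_shape   : shape of the geometrically corrected output image
    """
    out = np.zeros(out_shape, dtype=src.dtype)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            sr, sc = inverse_map(r, c)                      # e.g. GCP-derived polynomial
            sr = int(round(min(max(sr, 0), src.shape[0] - 1)))
            sc = int(round(min(max(sc, 0), src.shape[1] - 1)))
            out[r, c] = src[sr, sc]                         # copy the closest original DN
    return out
```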

Image enhancement
Enhancement improves the appearance of the imagery for better interpretability. The advantage of digital imagery is that it allows us to manipulate the digital pixel values in an image.
Examples of enhancement functions include contrast stretching to increase the tonal distinction between various features in a scene, and spatial filtering to enhance (or suppress) specific spatial patterns in an image.

Linear contrast stretch

Histogram-equalized stretch
This stretch assigns more display values (range) to the frequently occurring portions of the histogram. In this way, the detail in these areas will be better enhanced relative to those areas of the original histogram where values occur less frequently.
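Illustrative sketches of both stretches, assuming an 8-bit single-band NumPy array; the percentile cut-offs and function names are choices made for this example, not a prescribed procedure:

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Linear contrast stretch: map a chosen input DN range onto the full 0-255 display range."""
    lo, hi = np.percentile(band, (low_pct, high_pct))
    scale = 255.0 / max(hi - lo, 1e-6)                 # guard against a flat histogram
    return np.clip((band.astype(float) - lo) * scale, 0, 255).astype(np.uint8)

def histogram_equalize(band):
    """Histogram-equalized stretch: give more display range to frequently occurring DNs.

    Assumes 8-bit integer input so the DNs can index the look-up table directly.
    """
    hist, _ = np.histogram(band.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()                   # cumulative distribution of DNs
    return (cdf[band] * 255).astype(np.uint8)          # apply the look-up table per pixel
```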

Spatial filtering
Enhances the appearance of an image.
Spatial filters are designed to highlight or suppress specific features in an image based on their spatial frequency.
The filtering procedure involves moving a 'window' of a few pixels in dimension (e.g. 3x3, 5x5, etc.) over each pixel in the image, applying a mathematical calculation using the pixel values under that window, and replacing the central pixel with the new value.

Low-pass filter (smoothing)
To emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image.

High-pass filter (sharpening)
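A sketch of both filter types, assuming a single-band NumPy array; the low-pass filter averages a 3x3 window, and the high-pass result here is obtained by subtracting the smoothed image from the original (one of several ways to sharpen):

```python
import numpy as np

def mean_filter_3x3(band):
    """Low-pass (smoothing) filter: replace each pixel with the mean of its 3x3 neighbourhood."""
    padded = np.pad(band.astype(float), 1, mode="edge")
    out = np.zeros(band.shape, dtype=float)
    for r in range(band.shape[0]):
        for c in range(band.shape[1]):
            out[r, c] = padded[r:r + 3, c:c + 3].mean()
    return out

def high_pass_3x3(band):
    """High-pass (sharpening) result: original minus the smoothed image emphasizes edges."""
    return band.astype(float) - mean_filter_3x3(band)
```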

Image transformations
Arithmetic operations (i.e. subtraction, addition, multiplication, division) are
performed to combine and transform the original bands into "new" images which
better display or highlight certain features in the scene.
Principal components analysis
Reduces the dimensionality (i.e. the number of bands) in the data and compresses as much of the information in the original bands as possible into fewer bands.
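A compact PCA sketch under the assumption that the image is a NumPy array of shape (rows, cols, bands); it projects the bands onto uncorrelated components and keeps the strongest few:

```python
import numpy as np

def principal_components(image, n_keep=3):
    """Principal components analysis of a multi-band image.

    Projects the original bands onto uncorrelated components and keeps the
    first n_keep, compressing most of the variance into fewer bands.
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    pixels -= pixels.mean(axis=0)                      # centre each band
    cov = np.cov(pixels, rowvar=False)                 # band-to-band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    order = eigvals.argsort()[::-1][:n_keep]           # strongest components first
    pcs = pixels @ eigvecs[:, order]
    return pcs.reshape(image.shape[0], image.shape[1], n_keep)
```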
Spectral or band ratioing
Example ratio: Band 7 (near-infrared, 0.8 to 1.1 µm) / Band 5 (red, 0.6 to 0.7 µm).
This ratio is greater than 1.0 for vegetation and around 1.0 for soil and water. Thus the discrimination of vegetation from other surface cover types is significantly enhanced.
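A sketch of such a band ratio, assuming two co-registered NumPy arrays for the near-infrared and red bands (names and shapes are illustrative):

```python
import numpy as np

def band_ratio(nir, red):
    """Spectral ratio of a near-infrared band to a red band (e.g. band 7 / band 5).

    Values well above 1.0 typically indicate vegetation, while soil and water
    fall near 1.0, so the ratio enhances vegetation discrimination.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    # Avoid division by zero where the red band has no signal.
    return np.divide(nir, red, out=np.zeros_like(nir), where=red != 0)
```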

Image classification and analysis operations are used to digitally identify and classify pixels in the data. Classification is usually performed on multi-channel data sets (A), and this process assigns each pixel in an image to a particular class or theme (B) based on the statistical characteristics of the pixel brightness values.

Two generic approaches are used most often, namely supervised and unsupervised classification.

Supervised classification
Homogeneous, representative samples of the different surface cover types (information classes) of interest are identified; these samples are referred to as training areas.
The numerical information in all spectral bands for the pixels comprising these areas is used to "train" the computer to recognize spectrally similar areas for each class.
Once the computer has determined the signatures for each class, each pixel in the image is compared to these signatures and labeled as the class it most closely "resembles" digitally.
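A minimal sketch of one simple supervised classifier (minimum distance to the training-area mean signatures; maximum likelihood is another common choice). The inputs, a multi-band image array and a dictionary of class mean vectors derived from training areas, are assumptions for this example:

```python
import numpy as np

def minimum_distance_classify(image, training_signatures):
    """Assign each pixel to the class whose training-area mean signature is closest.

    image               : array of shape (rows, cols, n_bands)
    training_signatures : dict mapping class name -> mean DN vector of length n_bands
    Returns a label image (class indices) and the list of class names.
    """
    names = list(training_signatures)
    means = np.array([training_signatures[n] for n in names])     # (n_classes, n_bands)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)     # (n_pixels, n_bands)
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = dists.argmin(axis=1).reshape(image.shape[:2])
    return labels, names
```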

Unsupervised Classification
Spectral classes are grouped first, based solely on the numerical information
in the data, and are then matched by the analyst to information classes
Programs, called clustering algorithms, are used to determine the natural
(statistical) groupings or structures in the data.
In addition to specifying the desired number of classes, the analyst may also
specify parameters related to the separation distance among the clusters and
the variation within each cluster
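A minimal k-means clustering sketch (one common clustering algorithm; the function and parameter names are illustrative) that groups pixel spectra into a chosen number of spectral classes, which the analyst would then match to information classes:

```python
import numpy as np

def kmeans_classify(image, n_classes=5, n_iter=10, seed=0):
    """Simple k-means clustering of pixel spectra into spectral classes."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), n_classes, replace=False)]
    for _ in range(n_iter):
        # Assign every pixel to the nearest cluster centre.
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centre as the mean of its member pixels.
        for k in range(n_classes):
            members = pixels[labels == k]
            if len(members):                   # keep the old centre if a cluster empties
                centres[k] = members.mean(axis=0)
    return labels.reshape(image.shape[:2]), centres
```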

INTERPRETATION AND ANALYSIS


In order to take advantage of and make good use of remote sensing
data, we must be able to extract meaningful information from the
imagery.

Much interpretation and identification of targets in remote sensing imagery is performed manually or visually, i.e. by a human interpreter (the analog approach), as opposed to digital analysis by computer.

Manual interpretation is often limited to analyzing only a single
channel of data or a single image at a time due to the difficulty in
performing visual interpretation with multiple images

The computer environment is more amenable to handling complex images of several or many channels or from several dates. In this sense, digital analysis is useful for the simultaneous analysis of many spectral bands and can process large data sets much faster than a human interpreter.

KEY TO INTERPRETATION AND INFORMATION EXTRACTION.

Tone refers to the relative brightness or colour of objects in an image.

Shape refers to the general form, structure, or outline of individual objects. Straight-edged shapes typically represent urban or agricultural (field) targets, while natural features, such as forest edges, are generally more irregular in shape.

Size of objects in an image is a function of scale. If an interpreter had to distinguish zones of land use and had identified an area with a number of buildings in it, large buildings such as factories or warehouses would suggest commercial property, whereas small buildings would indicate residential use.

Pattern refers to the spatial arrangement of visibly discernible objects. Typically, an orderly repetition of similar tones and textures will produce a distinctive and ultimately recognizable pattern.

Texture refers to the arrangement and frequency of tonal variation in particular areas of an image. Rough textures would consist of a mottled tone where the grey levels change abruptly in a small area, whereas smooth textures would have very little tonal variation.

Shadow is also helpful in interpretation, as it may provide an idea of the profile and relative height of a target or targets, which may make identification easier.

Association takes into account the relationship between other recognizable objects or features in proximity to the target of interest.

Data Integration and Analysis

Data integration involves the combining or merging of data from multiple sources in an effort to extract better and/or more information; since the data are available in digital format, this merging can be carried out readily in the computer environment.
The data combined may be multitemporal, multiresolution (e.g. IRS-1C LISS-III + PAN), multisensor, or multi-data type (e.g. optical + radar, DEM/DTM + image draping) in nature.
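As a rough illustration of multiresolution merging (one of many data-fusion approaches), the sketch below applies a simple Brovey-style merge of multispectral bands with a co-registered higher-resolution PAN band; the array shapes and function name are assumptions for this example, not a prescribed method:

```python
import numpy as np

def brovey_merge(ms_bands, pan):
    """Simple Brovey-style merge of multispectral bands with a co-registered PAN band.

    ms_bands : array of shape (rows, cols, n_bands), resampled to the PAN pixel size
    pan      : array of shape (rows, cols) holding the higher-resolution panchromatic DNs
    Each multispectral band is scaled by the ratio of the PAN value to the sum of
    the multispectral bands, injecting PAN spatial detail into the colour bands.
    """
    ms = ms_bands.astype(float)
    total = ms.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0                 # avoid division by zero in dark pixels
    return ms * (pan.astype(float)[..., None] / total)
```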
