
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 11, NO. 11, NOVEMBER 2014, p. 1871

Effective Contrast-Based Dehazing for Robust Image Matching

Cosmin Ancuti, Member, IEEE, and Codruta O. Ancuti, Member, IEEE

Abstract—In this letter we present a novel strategy to enhance images degraded by the atmospheric phenomenon of haze. Our single-image technique does not require any geometrical information or user interaction, enhancing such images by restoring the contrast of the degraded images. The degradation of the finest details and gradients is constrained to a minimum level. Using a simple formulation derived from the lightness predictor, our contrast enhancement technique restores lost discontinuities only in regions that insufficiently represent the original chromatic contrast of the scene. The parameters of our simple formulation are optimized to preserve the original color spatial distribution and the local contrast. We demonstrate that our dehazing technique is suitable for the challenging problem of image matching based on local feature points. Moreover, we are the first to present an image matching evaluation performed for hazy images. Extensive experiments demonstrate the utility of the novel technique.

Index Terms—Dehazing, image matching, Scale Invariant Feature Transform (SIFT).

I. INTRODUCTION

GIVEN two or more images of the same scene, the process of image matching requires finding valid corresponding feature points in the images. These matches represent projections of the same scene location in the corresponding images. Since images are in general taken at different times, from different sensors/cameras and viewpoints, this task may be very challenging. Image matching plays a crucial role in many remote sensing applications such as change detection, cartography using imagery with reduced overlapping, and fusion of images taken with different sensors. In early remote sensing systems, this task required substantial human involvement: feature points of significant landmarks had to be selected manually. Nowadays, due to the significant progress of local feature point detectors and descriptors, the tasks of matching and registration can in most cases be done automatically.

Many local feature point operators have been introduced in the last decade. By extracting regions that are covariant to a class of transformations [1], recent local feature operators are robust to occlusions, being invariant to geometric (scale, rotation, affine) and photometric image transformations. A comprehensive survey of such local operators is included in the study of [2].
Manuscript received December 19, 2013; revised January 27, 2014; accepted March 11, 2014.
The authors are with the Department of Measurements and Optical Engineering, Politehnica University of Timisoara, 300006 Timisoara, Romania (e-mail: cosmin.ancuti@upt.ro; codruta.ancuti@upt.ro).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/LGRS.2014.2312314

However, besides geometric and photometric variations, outdoor and aerial images that need to be matched are often degraded by haze, a common atmospheric phenomenon. Obviously, remote sensing applications must deal with such images since in many cases the distance between the sensors and the surface of the Earth is significant. Haze is the atmospheric phenomenon that dims the clarity of an observed scene due to particles such as smoke, fog, and dust. A hazy scene is characterized by an important attenuation of the color that depends proportionally on the distance to the scene objects. As a result, the original contrast is degraded and the scene features gradually fade the farther they lie from the camera sensor. Moreover, due to scattering effects the color information is shifted.

Restoring such hazy images is a challenging task. The first dehazing approaches employed multiple images [3] or additional information such as a depth map [4] and specialized hardware [5]. Since in general such additional information is not available to the users, these strategies offer only a limited solution to the dehazing problem. More recently, several single-image techniques [6]–[11] have been introduced in the literature. Roughly, these techniques can be divided into two major classes: physically based and contrast-based techniques.

Physically based techniques [6], [9], [10] restore the hazy images based on the estimated transmission (depth) map. The strategy of Fattal [6] restores the airlight color by assuming that the image shading and scene transmission are locally uncorrelated. He et al. [9] estimate a rough transmission map based on the dark channel [12] that is refined in a final step by a computationally expensive alpha-matting strategy. The technique of Nishino et al. [10] employs a Bayesian probabilistic model that jointly estimates the scene albedo and depth from a single degraded image by fully leveraging their latent statistical structures.
On the other hand, contrast-based techniques [7], [8], [11] aim to enhance the hazy images without estimating the depth information. Tan's [7] technique maximizes the local contrast while constraining the image intensity to be less than the global atmospheric light value. The method of Tarel and Hautière [8] enhances the global contrast of hazy images assuming that the depth map must be smooth except along edges with large depth jumps. Ancuti and Ancuti [11] enhance the appearance of hazy images by a multiscale fusion-based technique that is guided by several measurements.

In this letter we introduce a novel technique that removes the haze effects of such degraded images. Our technique is a single-image method that aims to enhance such images by restoring the contrast of the degraded images. Different than

1545-598X © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


most of the existing techniques, our strategy does not require any geometrical information or user interaction. Our method is built on the basic observation that haze-free images are characterized by better contrast than hazy images. The presented strategy takes advantage of the original color information by maximizing the contrast of degraded regions. The degradation of the finest details and gradients is constrained to a minimum level. Using a simple formulation derived from the lightness predictor defined in [13], our contrast enhancement technique restores lost discontinuities only in regions that insufficiently represent the original chromatic contrast of the scene. The parameters of our simple formulation are optimized to preserve the original color spatial distribution and the local contrast. Extensive experiments demonstrate the utility of the novel technique.

As a second contribution of this letter, we demonstrate that our dehazing technique is suitable for the challenging problem of image matching based on local feature points. To the best of our knowledge this represents the first work that presents an image matching evaluation performed for hazy images.
II. CONTRAST-BASED ENHANCING OF HAZY SCENES

The process of contrast enhancement aims to increase the perceptibility of the objects in the image. Typically, the contrast of an image can be enhanced by general operators found in commercial tools (e.g., Adobe Photoshop) such as auto-contrast, gamma correction, linear mapping, and histogram stretching. As discussed previously by Fattal [6], these operators ignore the spatial relations between pixels. Since the haze effect is not constant across the scene, various locations in the image are spoilt differently. As a result, such classical contrast enhancing operators, since they perform the same operation for each image pixel, do not represent reliable solutions to the dehazing problem.
To solve this problem, we define a contrast technique with several parameters that are optimized for a given image. Our strategy is derived from the global mapping operator used previously for the decolorization problem [15]. We start from the global operator that enhances the contrast of the luminance L based on the hue H and saturation S (the mapping is processed in the HSL color space)

L_E = L (1 + λ S f(H))    (1)

where L_E is the enhanced luminance and f can be modeled as a trigonometric polynomial function. This global operator can also be related to the lightness predictor of Nayatani [13], used to model the influence of chromatic components on the perceptual lightness of an isolated color. In Nayatani [13] the function f is expressed as f(x) = Σ_i [a_i cos(ix) + b_i sin(ix)], but all the unknown parameters are fixed constants determined by fitting the function to an experimental data set of perceptual lightness of colors.

In contrast, we express the variation of the hue by the following equation, with parameters that are adaptively optimized for a given image

f(H) = cos(β H + θ)    (2)

where the parameter λ aims to temper the impact of the saturation, acting like a modulator that controls the amount of color contrast. In our extensive experiments we observed that λ needs to be set to smaller values for desaturated images, while for highly saturated images λ is assigned higher values. The parameter β represents the period, while the parameter θ represents the offset angle on the color wheel (0°–360°), being related to the color distribution and palette of the input image. Finally, our nonlinear contrast operator is obtained by replacing the previous expression in (1)

L_E = L (1 + λ S cos(β H + θ)).    (3)

As a result, the three parameters (λ, β, and θ) of the nonlinear mapping (3) depend on the input image, being optimized in our framework to enhance the local features of the degraded hazy image.

A. Optimization

As previously mentioned, in this letter we do not intend to fully recover the original colors of the scene or its albedo, but to restore the visibility of a hazy image by properly enhancing the contrast of the input degraded image. Moreover, besides visibility improvement, and different than existing dehazing methods, we aim for a method that increases the matching performance of local operators. Therefore, our strategy is designed to preserve most of the local features and details in the enhanced version. As emphasized by Lowe [14], local contrast preservation is crucial in the process of matching by feature points. In its final stage, the well-known Scale Invariant Feature Transform (SIFT) operator [14] filters out all the features with low local contrast. This operation improves considerably the process of matching by decreasing the ambiguity when comparing the values of the Euclidean distance between descriptor vectors.

We optimize the parameters (λ, β, and θ) of (3) by constraining the local contrast of the enhanced version by the chromatic contrast of the original image. As a result, to enhance the local features of our nonlinear contrast mapping, we minimize the following energy function E that represents the difference of image gradients between the original hazy color image and the enhanced version of the luminance L_E of (3)

E(x, y) = ‖∇L_E(x, y) − ∇D_{L*a*b*}(x, y)‖²    (4)

where (x, y) is a pixel location, ∇L_E represents the gradient of the enhanced luminance L_E (with ∇f = (∂f/∂x, ∂f/∂y)), and D_{L*a*b*} is the difference between color pixels of the original hazy image computed in the perceptual CIE L*a*b* [16] color space with the expression

D_{L*a*b*} = √((ΔL*)² + (Δa*)² + (Δb*)²)    (5)

where ΔL* (difference in color lightness), Δa* (lightness position between red/magenta and green), and Δb* (lightness position between yellow and blue) are the differences of the CIE L*a*b* coordinates, which weight the impact of the color information in the contrast expression.

Our problem is now reduced to finding the unknown parameters (λ, β, and θ) that best minimize the cost function E. We addressed it as a least squares problem with non-negativity constraints. Since we aim for a low-complexity recovery solution of the unknown vector, we solve the problem as an L1-regularized least squares. To prevent over-fitting we introduce a regularization term weighted by a parameter set to 0.1. Our L1-regularized least squares problem always converges to a solution and shows to be relatively fast, converging in approximately 20–30 iterations. The final dehazed version of the image is the result of substituting the original degraded luminance L with the enhanced version L_E and the new saturation S_E = S(1 + L/L_E) in the HSL color space.
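As an illustrative sketch (not the authors' released code), the nonlinear mapping of (3) and a crude optimization of its three parameters can be put together in a few lines of NumPy. The grid search below merely stands in for the L1-regularized non-negative least-squares solver described above, the chromatic difference map D is assumed precomputed from (5), and all function names and parameter ranges are our own choices.

```python
import numpy as np

def enhance_luminance(L, S, H, lam, beta, theta):
    """Nonlinear contrast mapping of (3): L_E = L * (1 + lam*S*cos(beta*H + theta)).
    L and S lie in [0, 1]; the hue H is expressed in radians here."""
    return L * (1.0 + lam * S * np.cos(beta * H + theta))

def grad(img):
    """Forward-difference gradient (dy, dx), zero at the last row/column."""
    gy = np.zeros_like(img)
    gx = np.zeros_like(img)
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    return gy, gx

def energy(params, L, S, H, D, mu=0.1):
    """Gradient-difference energy of (4) plus an L1 penalty weighted by mu."""
    lam, beta, theta = params
    gy_e, gx_e = grad(enhance_luminance(L, S, H, lam, beta, theta))
    gy_d, gx_d = grad(D)
    data = np.sum((gy_e - gy_d) ** 2 + (gx_e - gx_d) ** 2)
    return data + mu * (abs(lam) + abs(beta) + abs(theta))

def fit_params(L, S, H, D):
    """Coarse non-negative grid search over (lam, beta, theta)."""
    best, best_e = (0.0, 0.0, 0.0), np.inf
    for lam in np.linspace(0.0, 1.0, 11):
        for beta in np.linspace(0.0, 4.0, 9):
            for theta in np.linspace(0.0, 2.0 * np.pi, 9):
                e = energy((lam, beta, theta), L, S, H, D)
                if e < best_e:
                    best, best_e = (lam, beta, theta), e
    return best
```

A final recomposition would then substitute the fitted L_E (and the rescaled saturation) back into the HSL representation, as described above.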
III. MATCHING OF HAZY IMAGES

Local feature points (keypoints) are used for matching images due to their impressive robustness and invariance to different transformations. Matching methods based on keypoints have been shown to be more effective [2] than matching techniques based on extracting edges and contours. Typically, the framework of matching images based on local keypoints consists of three main steps. First, the local feature points (keypoints or interest points) are extracted from an image based on their neighborhood information. In general the keypoints are those image locations with important variation in their immediate neighborhoods. The second step is to compute descriptors (signatures) based on the neighboring regions of the keypoints. Different techniques, which describe the nearby regions of feature points, consider in general color, structure, and texture. Their main goal is to increase the distinctness of the extracted feature points, to improve efficiency, and to simplify the matching process. Finally, the signature vectors of the extracted keypoints are compared using some metric (e.g., Euclidean distance, earth mover's distance) or derived strategies that are based on such distances.

Recent remote sensing applications employ local feature points to solve problems such as automatic registration [17], urban-area and building detection [18], and registration of hyperspectral imagery [19]. In general these applications are built on the well-known SIFT [14] operator. Due to its impressive results reported in the comprehensive studies [2], [20], the choice of SIFT is not accidental. Basically, most of the recent local feature point operators represent improvements derived from SIFT, designed for specific cases and applications.

In this letter, we also use the well-known SIFT [14] operator. However, different than previous work, we analyze the problem of matching hazy images based on local feature points. For the sake of completeness the SIFT operator is briefly discussed in the following subsection.
A. Matching Based on the SIFT Operator

The feature points (keypoints) of the SIFT operator [14] are searched in the Difference of Gaussian (DoG) scale space. The DoG is built by subtracting images that have previously been convolved (blurred) with a Gaussian function whose standard deviation increases monotonically, and it represents a good approximation of the Laplacian. The keypoint candidates are filtered as local extrema in the DoG scale space. Such a location is selected only if its value is greater or smaller than all of its 26 neighbors [Fig. 1(a)]. As discussed previously, related to our contrast-based dehazing technique, the feature points with strong response to edges and with low contrast are rejected to reduce the ambiguity in the process of matching.

Fig. 1. SIFT operator. (a) Keypoint detection in DoG scale space. (b) Signature of the keypoint.
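The 26-neighbor extremum test can be sketched as follows; `dog` is assumed to be a small (scale, row, col) stack of DoG responses, and the brute-force loops are kept for clarity only, not efficiency.

```python
import numpy as np

def dog_extrema(dog):
    """Return (scale, row, col) voxels of a DoG stack that are strict
    extrema over their 26 neighbors in the 3x3x3 scale-space cube."""
    s, h, w = dog.shape
    keypoints = []
    for k in range(1, s - 1):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                cube = dog[k-1:k+2, i-1:i+2, j-1:j+2]
                v = dog[k, i, j]
                others = np.delete(cube.ravel(), 13)  # drop the center voxel
                if v > others.max() or v < others.min():
                    keypoints.append((k, i, j))
    return keypoints
```

A real SIFT implementation would follow this test with the sub-pixel refinement and the low-contrast/edge-response rejection discussed above.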
The SIFT descriptor or signature is calculated based on the image gradient information. A signature is assigned to each extracted keypoint and consists of a 128-dimension vector. It is computed from the gradient magnitudes and orientations in the circular neighborhood of the keypoint. The pyramid level of the image is selected by the computed characteristic scale of the respective feature point. For every keypoint, a 4 × 4 array of orientation histograms is built from the 4 × 4 subregions of a 16 × 16 region centered on the keypoint location. Each histogram has 8 bins, one for every 45° [Fig. 1(b)].
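A stripped-down version of this signature computation might look as follows; it ignores the Gaussian weighting, orientation normalization, and trilinear interpolation of the real SIFT descriptor, keeping only the 4 × 4 grid of 8-bin histograms over a 16 × 16 patch.

```python
import numpy as np

def sift_like_descriptor(patch):
    """128-D signature from a 16x16 patch: gradient orientations are
    accumulated (weighted by magnitude) into 8-bin histograms over a
    4x4 grid of 4x4 subregions, then the vector is L2-normalized."""
    assert patch.shape == (16, 16)
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)   # orientation in [0, 2*pi)
    bins = (ang / (np.pi / 4)).astype(int) % 8    # 8 bins of 45 degrees
    desc = np.zeros((4, 4, 8))
    for i in range(16):
        for j in range(16):
            desc[i // 4, j // 4, bins[i, j]] += mag[i, j]
    v = desc.ravel()                              # 4 * 4 * 8 = 128 values
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```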
Finally, the matching procedure is based on computing the Euclidean distance among the descriptor vectors that correspond to the extracted keypoints. Since extensive experiments revealed that the minimum distance criterion alone is not enough to identify good matches, Lowe [14] adopted a matching technique where the ratio between the distances of the first-best and the second-best matched feature points is evaluated. Only the keypoints whose ratio value passes a threshold are considered valid matches.
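Lowe's ratio criterion on Euclidean descriptor distances can be sketched as below; the 0.8 threshold is a commonly used value, not one fixed by this letter.

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Match descriptor rows of desc_a against desc_b with Lowe's ratio
    criterion: keep a match only when the best Euclidean distance is
    clearly smaller than the second-best (distance ratio below `ratio`)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:
            matches.append((i, int(order[0])))
    return matches
```

An ambiguous keypoint, whose two closest candidates are nearly equidistant, is discarded rather than matched.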
IV. RESULTS AND DISCUSSION

Our technique has been tested extensively on real hazy images. In Fig. 2 we compare with several state-of-the-art single-image dehazing techniques. As can be observed, our technique is able to produce visually pleasing results that are comparable with the results yielded by the more complex techniques [9], [10] that require estimation of the depth map. Moreover, in comparison with the technique of Tarel and Hautière [8], which is also a contrast-based technique that processes the latent image without computing the transmission map, our technique is less prone to artifacts. Our unoptimized code (Matlab implementation) is able to process an 800 × 600 image in less than 4 seconds.

Fig. 3 displays one of the images used in the feature-based image matching shown in Fig. 4. As can be seen, the Adobe Photoshop Auto Contrast operation performs poorly. Compared with the technique of Tarel and Hautière [8], our contrast-based strategy is able to better preserve the fine transitions in the hazy regions without introducing unpleasing artifacts. These observations are emphasized also by the matching results shown in Fig. 4. Applying the standard SIFT operator on the original hazy images yields only 10 valid matches, while using images enhanced by Adobe Photoshop Auto Contrast yields 52

Fig. 2. Comparative results.

Fig. 4. Matching based on SIFT: original hazy images (10 good matches), Adobe Photoshop Auto Contrast (52 good matches), Tarel and Hautière [8] (71 good matches), our method (236 good matches).

Fig. 3. Hazy image and the enhanced results of the Adobe Photoshop Auto Contrast operation, Tarel and Hautière [8], and our contrast-based strategy.

good correspondences. The same feature-based operator yields 71 valid matches for the images enhanced by the technique of Tarel and Hautière [8]. On the other hand, the same matching procedure based on SIFT but applied on the images enhanced with our technique generates 236 valid matches.

To measure the performance of the different enhancing techniques for the matching procedure, we evaluate nine pairs of images that represent aerial images and hazy images captured during daylight. In our evaluation we apply the standard SIFT operator on the original hazy images and on images enhanced using Photoshop Auto Contrast, the technique of Tarel and Hautière [8], and our contrast-based method. Besides having a complexity similar to ours, the technique of Tarel and Hautière [8] is the only one of the recent state-of-the-art dehazing techniques whose code has been made publicly available.
In the evaluation we investigate the influence of the overlap error on the performance of the SIFT operator applied on the original images as well as on the images enhanced using the different enhancing techniques. This evaluation, used previously in the comprehensive study of Mikolajczyk and Schmid [1], compares the recall for various overlap errors. We slightly redefine recall as the number of correct matches divided by the maximum number of correct matches obtained by the most efficient method. The overlap error represents how well the neighboring regions of the matched features correspond under a transformation. In our case the tested pairs of images are related by a homography that is computed as presented in [1]. The overlap error is defined by the ratio of the intersection and union of the feature neighborhood regions.
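For axis-aligned regions, the intersection-over-union form of the overlap error reduces to a few lines. The letter's actual regions are affine feature neighborhoods, so the boxes below are a simplification for illustration.

```python
def overlap_error(box_a, box_b):
    """Overlap error = 1 - IoU of two axis-aligned regions (x0, y0, x1, y1)."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return 1.0 - inter / union if union > 0 else 1.0

# Half of each unit-area box overlaps: IoU = 1/3, error = 2/3.
print(overlap_error((0, 0, 2, 2), (1, 0, 3, 2)))
```

A match would then be counted as valid when this error falls below the chosen bound (50% in Figs. 4 and 6).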
Fig. 5 plots the relation between the recall value and the overlap error for the matching procedure that uses the SIFT operator on the results of the different enhancing techniques. In Figs. 4 and 6 the valid matches are those that have an overlap error of less than 50%. As can be seen, the matching procedure applied on our enhanced results yields significant improvements in terms of correct matches compared with Adobe Photoshop Auto Contrast and also with the recent technique of Tarel and Hautière [8].

Fig. 5. Matching evaluation based on SIFT. Relation between overlap error and recall (number of correct matches divided by the maximum number of correct matches obtained by the most efficient method).

Fig. 6. Matching the original images using SIFT [14] yields 37 valid matches, while the same matching procedure applied on the enhanced images using our technique yields 179 valid matches.

V. CONCLUSION

In this letter we presented a new single-image dehazing technique. Our strategy to enhance hazy images optimizes the contrast by constraining the degradation of the finest details and gradients to a minimum. The degraded discontinuities are enhanced mostly in the regions that lost the original contrast of the scene. Our technique is simple and does not require estimation of the transmission map. We show that our technique is suitable for the task of matching images using local feature points, outperforming the compared techniques. In future work, we will extend our work to the problem of video dehazing.

REFERENCES

[1] K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1615–1630, Oct. 2005.
[2] T. Tuytelaars and K. Mikolajczyk, "Local invariant feature detectors: A survey," Found. Trends Comput. Graph. Vis., vol. 3, no. 3, pp. 177–280, Jan. 2008.
[3] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713–724, Jun. 2003.
[4] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, "Deep photo: Model-based photograph enhancement and viewing," ACM Trans. Graph., vol. 27, no. 5, p. 116, Dec. 2008.
[5] T. Treibitz and Y. Y. Schechner, "Polarization: Beneficial for visibility enhancement?" in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2009, pp. 525–532.
[6] R. Fattal, "Single image dehazing," ACM Trans. Graph., vol. 27, no. 3, p. 72, Aug. 2008.
[7] R. T. Tan, "Visibility in bad weather from a single image," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2008, pp. 1–8.
[8] J.-P. Tarel and N. Hautière, "Fast visibility restoration from a single color or gray level image," in Proc. IEEE Int. Conf. Comput. Vis., 2009, pp. 2201–2208.
[9] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, Dec. 2011.
[10] K. Nishino, L. Kratz, and S. Lombardi, "Bayesian defogging," Int. J. Comput. Vis., vol. 98, no. 3, pp. 263–270, Jul. 2012.
[11] C. O. Ancuti and C. Ancuti, "Single image dehazing by multiscale fusion," IEEE Trans. Image Process., vol. 22, no. 8, pp. 3271–3282, Aug. 2013.
[12] P. S. Chavez, "An improved dark-object subtraction technique for atmospheric scattering correction of multispectral data," Remote Sens. Environ., vol. 24, no. 3, pp. 459–479, Apr. 1988.
[13] Y. Nayatani, "Simple estimation methods for the Helmholtz–Kohlrausch effect," Color Res. Appl., vol. 22, no. 6, pp. 385–401, Dec. 1997.
[14] D. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, Nov. 2004.
[15] C. Ancuti, C. O. Ancuti, and P. Bekaert, "Enhancing by saliency-guided decolorization," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2011, pp. 257–264.
[16] G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed. Hoboken, NJ, USA: Wiley, 2000.
[17] A. Wong and D. A. Clausi, "ARRSI: Automatic registration of remote-sensing images," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 5, pp. 1483–1493, May 2007.
[18] B. Sirmacek and C. Unsalan, "Urban-area and building detection using SIFT keypoints and graph theory," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 4, pp. 1156–1166, Apr. 2009.
[19] L. P. Dorado-Muñoz, M. Vélez-Reyes, A. Mukherjee, and B. Roysam, "A vector SIFT detector for interest point detection in hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 11, pp. 4521–4533, Nov. 2012.
[20] P. Moreels and P. Perona, "Evaluation of features detectors and descriptors based on 3D objects," Int. J. Comput. Vis., vol. 73, no. 3, pp. 263–284, Jul. 2007.
