IJSRD - International Journal for Scientific Research & Development | Vol. 3, Issue 09, 2015 | ISSN (online): 2321-0613

The Survey Paper on Clothing Pattern and Color Recognition for
Visually Impaired People
Honey Mishra
Department of Electronics and Telecommunication Engineering
Chhatrapati Shivaji Institute of Technology, Durg, Chhattisgarh, India
Abstract— For visually impaired people, mental support is important in daily life and in participating in society. Selecting clothes with suitable patterns and colors is a challenging task for visually impaired people. We have developed a camera-based system that recognizes clothing patterns in four categories (plaid, striped, patternless, and irregular) and identifies 11 clothing colors. We used the CCNY Clothing Pattern dataset. The system integrates a camera, a microphone, a computer, and a Bluetooth earpiece for audio description of clothing patterns and colors. The output of the system is an audio signal: the clothing patterns and colors are described to blind users verbally, and the system can be controlled by speech input through the microphone. In this survey paper, I provide a comparison of the techniques that perform clothing pattern and color recognition for visually impaired people.
Key words: Clothing Pattern, Color Recognition

Fig. 1: Intraclass variations in clothing pattern images and traditional texture images. (a) Clothing pattern samples with large intraclass pattern and color variations. (b) Traditional texture samples with less intraclass pattern and intensity variations.

I. INTRODUCTION

According to the World Health Organization (WHO), based on the 2002 world population, there are more than 161 million visually impaired people in the world [1]. Choosing clothes with a suitable color and pattern is a challenging task for blind people in their daily life. Most blind or visually impaired people manage this task with help from family members, or by using plastic Braille labels or different types of stitching [2]. Blind people require physical support, for example infrastructure in public areas that allows them to move safely; human support, such as guiding and taking care of them; and, very importantly, mental support. They are always worried about their dress and how they look, even though they cannot recognize the color and pattern of their clothes. However, there have been few studies on this kind of support. Therefore, a basic system was proposed [3] that can recognize the colors and patterns of clothes. Camera-based clothing pattern recognition is a challenging task due to the many clothing pattern and color designs, as well as the corresponding large intraclass variations [4]. Existing texture analysis methods mainly focus on textures with large changes in viewpoint, orientation, and scaling, but with less intraclass pattern and intensity variation (see Fig. 1). We have observed that traditional texture analysis methods [5], [6], [7], [8], [9], [10], [11], [12], [13], [14] cannot achieve the same level of accuracy in the context of clothing pattern recognition.

In an extension to [15], the system can handle clothes with complex patterns and classify clothing patterns into four categories (plaid, striped, patternless, and irregular) to meet the basic requirements identified in a survey of ten blind participants. The system is also able to identify eleven colors: red, orange, yellow, green, cyan, blue, purple, pink, black, grey, and white.

II. OPTIMIZATION METHOD

A. Method 1
1) Expression of Color:
The Japan Color Research Institute proposed the Practical Color Co-ordinate System (PCCS) [15], in which colors are defined by a "tone" and a "color name". Brightness and saturation are described by the tone. Tones are grouped so that a blind person can learn their features through the system's voice response: all tones are integrated into seven tone groups, including pale (p and lt), dull (sf and d), grayish (ltg and g), and dark (dk and dkg). The eleven colors are red, green, yellow, blue, brown, purple, orange, pink, white, black, and gray [16]; the "color name" is expressed with a voice signal.
2) Recognition of Color:
Fig. 2: Flow chart of clothing color recognition
The specimen image captured by the camera is mapped to the L*a*b* color space, and the image is divided into at most four clusters by K-means. The three axes of the L*a*b* color space are each quantized into 25 levels. The number of image pixels falling in each voxel is counted, and the four voxels containing the most pixels are selected. For each selected voxel, the mean position of its pixels is calculated and set as an initial cluster center. All pixels of the image are then classified into four clusters.
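As an illustration of the clustering step described above, the following sketch seeds K-means with the four most populated voxels of a quantized L*a*b* histogram and then runs plain Lloyd iterations. It is a minimal reconstruction of the idea, not the authors' code; the function names, the per-axis quantization ranges, and the iteration count are assumptions.

```python
import numpy as np

def kmeans_init_from_voxels(lab_pixels, bins=25, k=4):
    """Seed K-means as described in Method 1: quantize each L*a*b* axis
    into `bins` levels, pick the k most populated voxels, and use the
    mean position of the pixels in each voxel as an initial center."""
    mins, maxs = lab_pixels.min(axis=0), lab_pixels.max(axis=0)
    idx = np.floor((lab_pixels - mins) / (maxs - mins + 1e-9) * bins).astype(int)
    idx = np.clip(idx, 0, bins - 1)
    # Flatten the 3-D voxel index and count pixels per voxel.
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    counts = np.bincount(flat, minlength=bins ** 3)
    top = np.argsort(counts)[-k:]  # k most populated voxels (assumed non-empty)
    return np.stack([lab_pixels[flat == v].mean(axis=0) for v in top])

def kmeans_cluster(lab_pixels, centers, iters=10):
    """Plain Lloyd iterations: assign every pixel to the nearest of the
    four centers, then move each center to the mean of its members."""
    for _ in range(iters):
        d = np.linalg.norm(lab_pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            members = lab_pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers
```

Seeding from the densest histogram voxels rather than random points makes the four clusters start near the dominant cloth colors, which is the point of the voxel-counting step in the text.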
3) Expression of Pattern:
The clothing patterns are divided into five categories: "vertical-stripe," "horizontal-stripe," "checker," "plain," and "others." The input image is passed through a low-pass filter to avoid detecting fine texture; hence, any unevenness on the surface of the cloth is disregarded, and the clothing image is converted into a two-dimensional pattern.
4) Recognition of Pattern:
The flow chart of pattern recognition is shown in Fig. 3. An image is converted to grayscale and then smoothed to suppress the influence of texture: the image is filtered in the spatial frequency domain using the Fast Fourier Transform, and a smoothed image is obtained by the Inverse Fast Fourier Transform. The Sobel operator is then applied to the smoothed image to detect vertical edges. The standard deviation of the distribution of horizontal luminance values is calculated while the vertical-edge image is rotated counterclockwise in 1° steps. Through this process, the pattern of the cloth is recognized.
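The smoothing and edge-detection steps above can be sketched as an FFT low-pass filter followed by a 3x3 Sobel vertical-edge operator. This is an illustrative reconstruction; the cutoff fraction and the border handling are assumptions, not the paper's exact settings.

```python
import numpy as np

def fft_lowpass(gray, keep=0.1):
    """Smooth in the frequency domain: FFT, zero all frequencies outside
    a centered disc, then inverse FFT back to the spatial domain."""
    F = np.fft.fftshift(np.fft.fft2(gray))
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= (keep * min(h, w)) ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def sobel_vertical(img):
    """Vertical-edge response using the 3x3 Sobel x-kernel (valid region only)."""
    k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out
```

Applying `sobel_vertical` to the low-passed image keeps only the coarse stripe and check boundaries, which is what the rotation-and-standard-deviation step then measures.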

Fig. 3: Flow Chart of Pattern Recognition

B. Model 2
The captured image of cloth has two kinds of features: local features and global features. Local features describe only the captured area of the target, while global features, including the directionality and statistical properties of clothing patterns, are more stable within the same category. The proposed system extracts both global and local features for clothing pattern recognition, i.e., the Radon Signature, a statistical descriptor (STA), and the scale invariant feature transform (SIFT).
1) Radon Signature:
Clothing images have large intraclass variations, which pose the major challenge of clothing pattern recognition for blind people. The plaid and striped clothing patterns are both anisotropic, whereas the patternless and irregular patterns are isotropic. To capture this, a novel descriptor, the Radon Signature, is used to characterize the directionality feature of the cloth patterns. The Radon Signature is based on the Radon transform [17], which is commonly used to detect the principal orientation of an image. The image is then rotated according to this dominant direction to achieve rotation invariance.

Fig. 4: Computation of RadonSig.
2) Statistics of Wavelet Subbands:
The discrete wavelet transform (DWT) decomposes an image into a low-frequency channel Dj(I) at a coarser scale and multiple high-frequency channels Wk,j(I); k = 1, 2, 3; j = 1, 2, ..., J, where J is the number of scaling levels. Therefore, at every scaling level j there are four wavelet subbands: one low-frequency channel Dj(I) and three high-frequency channels Wk,j(I). The high-frequency channels Wk,j(I); k = 1, 2, 3 encode the discontinuities of the image along the horizontal, vertical, and diagonal directions, respectively.

Fig. 5: The computation of STA on wavelet subbands.
3) Scale Invariant Feature Transform (SIFT) Bag of Words:
Local image features have a number of applications, such as image retrieval and the recognition of object, texture, and scene categories [18]. This has driven the development of several local image feature detectors and descriptors. Generally, detectors are used to find interest points by searching for local extrema in a scale space; descriptors are employed to compute representations of the interest points based on their associated support regions.

Fig. 6: Process of local image feature extraction
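The directionality idea behind Model 2's Radon Signature can be sketched by computing, for a set of angles, the variance of the image's projection at that angle: anisotropic patterns (striped, plaid) yield a peaked signature, while isotropic ones yield a flat one. The nearest-neighbor rotation helper and the angle step are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def rotate_nn(img, deg):
    """Nearest-neighbor rotation about the image center (helper, assumption)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    t = np.deg2rad(deg)
    yy, xx = np.mgrid[:h, :w]
    # Inverse mapping: sample each output pixel from the rotated source position.
    ys = np.round(cy + (yy - cy) * np.cos(t) - (xx - cx) * np.sin(t)).astype(int)
    xs = np.round(cx + (yy - cy) * np.sin(t) + (xx - cx) * np.cos(t)).astype(int)
    ok = (ys >= 0) & (ys < h) & (xs >= 0) & (xs < w)
    out = np.zeros_like(img)
    out[yy[ok], xx[ok]] = img[ys[ok], xs[ok]]
    return out

def radon_signature(img, angles=range(0, 180, 10)):
    """Variance of the projection (column sums) at each angle: a peaked
    signature indicates an anisotropic (striped/plaid) pattern."""
    return np.array([rotate_nn(img, a).sum(axis=0).var() for a in angles])
```

On a striped image the signature peaks at the stripe orientation, which is exactly the cue that separates the anisotropic (plaid, striped) from the isotropic (patternless, irregular) categories in the text above.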


C. Model 3
Feature detection is the key step in classifying the patterns. Each image has its own characteristics that distinguish it from the others [19][20]. These features can be extracted using the following algorithms:
 Statistical feature extraction (STA)
 Scale Invariant Feature Transform (SIFT)
 Recurrence Quantification Analysis (RQA)
1) Statistical Feature Extraction (STA):
Statistical feature extraction is performed using the wavelet transform, which decomposes the image into subbands. STA provides four features: variance, energy, uniformity, and entropy. Using these features, the images can be classified.
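The STA features named above can be sketched with a single level of a simplified 2-D Haar decomposition followed by the four statistics computed on a subband. The averaging-based Haar filters and the histogram bin count are simplifying assumptions of this sketch.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a simplified (averaging) 2-D Haar decomposition:
    returns the low-frequency subband plus horizontal, vertical, and
    diagonal detail subbands. The normalization is an assumption."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    low = (a + b + c + d) / 4
    horz = (a + b - c - d) / 4
    vert = (a - b + c - d) / 4
    diag = (a - b - c + d) / 4
    return low, horz, vert, diag

def sta_features(subband, bins=16):
    """The four STA statistics named in the text: variance, energy,
    uniformity, and entropy (the last two from a value histogram)."""
    p, _ = np.histogram(subband, bins=bins)
    p = p / p.sum()
    nz = p[p > 0]
    return {
        "variance": float(subband.var()),
        "energy": float((subband ** 2).sum()),
        "uniformity": float((p ** 2).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
    }
```

Computing `sta_features` on each of the four subbands and concatenating the results gives a fixed-length statistical descriptor for the classifier.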
2) Scale Invariant Feature Transform (SIFT):
SIFT is used for local feature extraction. To perform recognition, it is very important that the features extracted from the image can be identified even under changes in image scale, noise, and illumination; as the name indicates, SIFT is invariant to scale. The extracted features are points and patches in the image.
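A common way to turn local descriptors such as SIFT into a fixed-length image representation is a bag-of-words histogram: each descriptor is assigned to its nearest codebook word and the word counts are normalized. The sketch below shows only this quantization step; in practice the descriptors would come from a SIFT detector and the codebook from K-means over training descriptors, both assumed here.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Bag-of-words pooling: assign each local descriptor to its nearest
    codebook word and return the normalized word-frequency histogram."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting histogram has one bin per codebook word regardless of how many interest points the detector found, which is what lets images of different sizes feed one classifier.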
3) Recurrence Quantification Analysis (RQA):
Recurrence Quantification Analysis (RQA) also provides local feature extraction; it is mainly used to increase the accuracy of the SVM classifier [21][22]. RQA is based on the following measures:
 Recurrence plot – a graph that shows all the times at which a state of the dynamical system recurs.
 Recurrence rate – the percentage of recurrent points in the plot.
III. DISCUSSION
For Method 1, the color and pattern recognition rates for every tone of the PCCS are shown in Table 1.

Cloth       Color (%)   Pattern (%)
Stripe      70.5        90.4
Checker     69.7        97.2
Plain       95.6        88.3
Irregular   76.3        98.0
Regular     86.2        100.0
Total       78.4        94.4
Table 1: Recognition rates for color and pattern (Method 1)
For Method 2, the color and pattern recognition rates are shown in Table 2.

Table 2: Output for all different patterns and colors (Method 2)

For Method 3, the color and clothing pattern results are shown in Table 3.

Table 3: Output for all different patterns and colors (Method 3)

IV. CONCLUSION
The three methods above are used for clothing pattern and color recognition, and all of them help blind people recognize the pattern and color of their clothes. The images used to implement the systems are taken from the CCNY database. The output of each method is a voice signal, which is the main advantage of these methods. Methods 2 and 3 give the same result, with 98% accuracy, while Method 1 provides 94% accuracy.
REFERENCES
[1] Xiaodong Yang, Shuai Yuan, and YingLi Tian, "Assistive Clothing Pattern Recognition for Visually Impaired People," April 2014.
[2] Masao Miyake, Yoshitsugu Manabe, Yuki Uranishi, Masataka Imura, and Osamu Oshiro, "Voice Response System of Color and Pattern on Clothes for Visually Handicapped Person," Osaka, Japan, 3-7 July 2013.
[3] Jarin Joe Rini J and Thilagavathi B, "Recognizing clothes patterns and colours for blind people using neural network," 2015.
[4] S. Resnikoff, D. Pascolini, D. Etya'ale, I. Kocur, R. Pararajasegaram, and G. Pokharel, "Global data on visual impairment in the year 2002," Bulletin of the World Health Organization, vol. 82, pp. 844-851, 2004.
[5] "How Blind People Match Clothing?" [Online]. Available: http://www.associatedcontent.com/article/1788762/how_blind_people_match_clothing.html
[6] M. Miyake, Y. Manabe, T. Uranishi, S. Ikeda, and K. Chihara, "Presentation System of Color and Pattern on Clothes for Visually Impaired Person," Journal of The Color Science Association of Japan, Vol. 36, No. 1, pp. 3-14, 2012 (in Japanese).
[7] D. Gould, “The making of a pattern,” Vogue Patterns,
1996.
[8] L. Davis, S. Johns, and J. Aggarwal, "Texture analysis using generalized co-occurrence matrices," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-1, no. 3, pp. 251–259, Jul. 1979.
[9] R. Haralick, “Statistical and structural approaches to
texture,” Proc. IEEE, vol. 67, no. 5, pp. 786–804, May
1979.
[10] S. Lam, “Texture feature extraction using gray level
gradient based on co-occurrence matrices,” in Proc. Int.
Conf. Syst., Man Cybern., 1996, pp. 267–271.
[11] S. Lazebnik, C. Schmid, and J. Ponce, "A sparse texture representation using local affine regions," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 8, pp. 1265–1277, Aug. 2005.
[12] V. Manian, R. Vasquez, and P. Katiyar, "Texture classification using logical operators," IEEE Trans. Image Process., vol. 9, no. 10, pp. 1693–1703, Oct. 2000.
[13] T. Randen and J. Husoy, "Filtering for texture classification: A comparative study," IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 4, pp. 291–310, Apr. 1999.
[14] Y. Xu, H. Ji, and C. Fermuller, “Viewpoint invariant
texture description using fractal analysis,” Int. J.
Comput. Vis., vol. 83, no. 1, pp. 85–100, 2009.
[15] X. Yang, S. Yuan, and Y. Tian, "Recognizing clothes patterns for blind people by confidence margin based feature combination," in Proc. ACM Int. Conf. Multimedia, 2011, pp. 1097–1100.
[16] X. Yang and Y. Tian, "Texture representations using subspace embeddings," Pattern Recog. Lett., vol. 34, no. 10, pp. 1130–1137, 2013.
[17] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid, "Local features and kernels for classification of texture and object categories: A comprehensive study," Int. J. Comput. Vis., vol. 73, no. 2, pp. 213–238, 2007.
[18] Hitoshi Komatsubara, “Conceptual of PCCS in a
Perceived Color Appearance Space,” Studies of Color,
Vol. 53. No.1 pp.3-4, 2006. (in Japanese).
[19] Berlin, B. and Kay, P., "Basic Color Terms: Their Universality and Evolution," University of California Press, Berkeley, 1969.
[20] K. Khouzani and H. Zaden, "Radon transform orientation estimation for rotation invariant texture analysis," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 6, pp. 1004–1008, Jun. 2005.
[21] Jarbas Joaci de Mesquita Sa, Andre Ricardo Backes, and Paulo Cesar Cortez, "Texture analysis and classification using shortest paths in graphs," Pattern Recognition (Elsevier), pp. 1314–1319, 2013.
[22] Xiaodong Yang and YingLi Tian, "Texture representations using subspace embeddings," Pattern Recognition (Elsevier), pp. 1130–1137, 2013.
[23] Christopher Sentelle, Georgios Anagnostopoulos, and Michael Georgiopoulos, "Efficient revised simplex method for SVM training," IEEE Transactions on Neural Networks, vol. 22, no. 10, pp. 1650–1661, 2011.
[24] Sandro Cumani and Pietro Laface, "Analysis of large-scale SVM training algorithms for language and speaker recognition," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 5, pp. 1585–1596, 2012.
[27] Filmtools Inc. website, “Gretag Macbeth ColorChecker
Gray Scale Balance Card,” [Online]. Available:
http://www.filmtools.com/grcogrscbaca.html
[28] C. S. McCamy, H. Marcus, and J. G. Davidson, "A Color-Rendition Chart," Journal of Applied Photographic Engineering, Vol. 2, pp. 95–99, Summer 1976.
