
IJIRST International Journal for Innovative Research in Science & Technology| Volume 3 | Issue 04 | September 2016

ISSN (online): 2349-6010

Retinal Area Detector for Classifying Retinal Disorders from SLO Images

Shobha Rani Baben
PG Student
Department of Computer Science & Engineering
Federal Institute of Science & Technology

Paul P Mathai
Associate Professor
Department of Computer Science & Engineering
Federal Institute of Science & Technology

Abstract
The retina is a thin layer of tissue that lines the back of the eye on the inside, located near the optic nerve. Its purpose is to receive light that the lens has focused, convert the light into neural signals, and send these signals on to the brain for visual recognition. The retina processes a picture from the focused light, and the brain is left to decide what the picture is. Since the retina plays a vital role in vision, damage to it can cause serious problems, including permanent blindness. We therefore need to determine whether a retina is healthy or not for the early detection of retinal diseases. Scanning laser ophthalmoscopes (SLOs) can be used for this purpose. The latest screening technology provides the advantage of the SLO's wide field of view, which can image a large part of the retina for better diagnosis of retinal diseases. On the other hand, artefacts such as eyelashes and eyelids are imaged along with the retinal area during the imaging process, which poses a significant challenge: how to exclude these artefacts. In this paper we propose a novel approach to automatically extract the true retinal area from an SLO image based on image processing and machine learning approaches. To provide a convenient primitive image pattern and to reduce the complexity of the image processing tasks, we group pixels into regions, called superpixels, based on regional size and compactness. The framework then extracts and calculates image-based features, which include textural and structural information, and classifies between retinal area and artefacts. The extracted retinal area is further used to classify the retinal disorder using machine learning approaches. The experimental evaluation has shown good performance, with an overall accuracy of 92%.
Keywords: Feature Selection, Retinal Artefacts Extraction, Retinal Image Analysis, Scanning Laser Ophthalmoscope
(SLO), Retinal Disorders
_______________________________________________________________________________________________________

I. INTRODUCTION

EARLY detection and treatment of retinal eye diseases is critical to avoid preventable vision loss. Conventionally, retinal disease identification techniques are based on manual observations. Optometrists and ophthalmologists often rely on image operations such as change of contrast and zooming to interpret these images, and they diagnose based on their own experience and domain knowledge. These diagnostic techniques are time consuming. Automated analysis of retinal images has the potential to reduce the time clinicians need to look at the images, so that more patients can be screened and more consistent diagnoses given in a time-efficient manner [1]. The 2-D retinal scans obtained from imaging instruments (e.g., fundus camera, scanning laser ophthalmoscope (SLO)) may contain structures other than the retinal area, collectively regarded as artefacts. Exclusion of artefacts is an important preprocessing step before automated detection of retinal disease features. In a retinal scan, extraneous objects such as the eyelashes, eyelids, and dust on optical surfaces may appear bright and in focus; automatic segmentation of these artefacts from an imaged retina is therefore not a trivial task. The purpose is to develop a method that can exclude artefacts from retinal scans so as to improve automatic detection of disease features and thereby classify whether it is a healthy or unhealthy retina.
To the best of our knowledge, work related to differentiation between the true retinal area and the artefacts for retinal area detection in an SLO image is very rare. The SLO manufactured by Optos [2] produces images of the retina with a width of up to 200° (measured from the centre of the eye). This compares to the 45°–60° achievable in a single fundus photograph. Examples of retinal imaging using a fundus camera and an SLO are shown in Fig. 1. Due to the wide field of view (FOV) of SLO images, structures such as eyelashes and eyelids are also imaged along with the retina. Removing these structures will not only facilitate effective analysis of the retinal area, but also enable registration of multi-view images into a montage, resulting in a completely visible retina for disease diagnosis.
We have proposed a novel framework for the extraction of the retinal area in SLO images. The main steps for constructing our framework include:
1) Determination of features that can be used to distinguish between the retinal area and the artefacts;
2) Selection of features which are most relevant to the classification;
3) Construction of the classifier which can extract the retinal area from SLO images;
4) Determination of features that can be used to distinguish between the healthy and unhealthy retinal area;
5) Selection of features which are most relevant to the classification;
6) Construction of the classifier which can classify between the healthy and unhealthy retina.
For differentiating between the retinal area and the artefacts, we have determined different image-based features which reflect grayscale, textural, and structural information at multiple resolutions. Then, from this large feature set, we have selected the features which are relevant to the classification; the feature selection process improves the classifier performance in terms of computational time. Finally, we have constructed the classifier for discriminating between the retinal area and the artefacts. Our prototype has achieved an average classification accuracy of 92% on a dataset containing healthy as well as diseased retinal images.
The rest of this paper is organized as follows. Section II introduces previous work on feature determination and classification. Section III discusses our proposed method. Section IV provides experimental results. Section V summarizes and concludes the method with future work.

II. RELATED WORKS

Our literature survey begins with methods for detection and segmentation of eyelids and eyelashes applied to images of the front of the eye, which contain the pupil, eyelids, and eyelashes. In such an image, the eyelashes usually appear as lines or bunches of lines grouped together. Therefore, the first step in detecting them was the application of edge detection techniques such as Sobel, Prewitt, Canny, the Hough transform [3], and the wavelet transform [4]. The eyelashes on the iris were then removed by applying nonlinear filtering on the suspected eyelash areas [5]. Since eyelashes can occur either in separable form or as multiple eyelashes grouped together, a Gaussian filter and a Variance filter were applied in order to distinguish between both forms [6]. Experiments showed that separable eyelashes were most likely detected by applying the Gaussian filter, whereas Variance filters are preferable for multiple eyelash segmentation [7]. In that work, the eyelash candidates were first localized using active shape modeling, and then an eight-directional filter bank was applied to the possible eyelash candidates. A focus score was used by Kang and Park [8] in order to vary the size of the convolution kernels for eyelash detection; the size variation of the convolution kernels also differentiated between separable and multiple eyelashes. Feature extraction was done by Min and Park [9] based on intensity and local standard deviation in order to determine eyelashes; these were thresholded using Otsu's method, an automatic threshold selection method based on particular assumptions about the intensity distribution. The CASIA database [10], an online database of iris images, was used for the application of all these methods.

Fig. 1: Example of (a) a fundus image and (b) an SLO image annotated with true retinal area and ONH.

In an image obtained from an SLO, the eyelashes appear as either dark or bright regions compared to the retinal area, depending upon how the laser beam is focused as it passes the eyelashes. The eyelids appear as regions with greater reflectance response compared to the retinal area. Therefore, features which can differentiate between the true retinal area and the artefacts in SLO retinal scans need to be identified. From visual observation of Fig. 1(b), features reflecting the textural and structural differences are the natural choice. Such features have been calculated for different regions in fundus images, mostly for quality analysis.
The characterization of retinal images has been performed in terms of image features such as intensity, skewness, textural analysis, histogram analysis, sharpness, etc. [1], [11], [12]. Dias et al. [13] determined four different classifiers using four types of features, analyzed for the retinal area: colour, focus, contrast, and illumination. The outputs of these classifiers were concatenated for quality classification. For classification, classifiers such as partial least squares (PLS) [14] and support vector machines (SVMs) [15] were used; PLS selects the most relevant features required for classification. Apart from calculating image features for the whole image, grid analysis containing small patches of the image has also been proposed for reducing computational complexity [11]. For determining image quality, features of the regions of interest of anatomical structures such as the optic nerve head (ONH) and fovea have also been analyzed [16]; these features included the structural similarity index, area, visual descriptors, etc. Some of the above-mentioned techniques suggest the use of grid analysis, which can be a time-effective way to generate features for a particular region rather than for each pixel. However, grid analysis might not represent irregular regions in the image accurately. Therefore, we decided to use superpixels [17], which group pixels into different regions depending upon their regional size and compactness.
Our methodology is based on analyzing SLO image-based features calculated for small regions in the retinal image called superpixels. Determining a feature vector for each superpixel is computationally efficient compared to determining a feature vector for each pixel. The superpixels from the images in the training set are assigned the class of either retinal area or artefacts, depending upon which class the majority of pixels in the superpixel belong to. The classification is performed after ranking and selection of features in terms of their effectiveness in classification. The details of the methods are discussed in the following section.

III. METHODOLOGY

The block diagram of the retina detector framework is shown in Fig. 2. The framework is divided into three stages, namely the training stage, the testing and evaluation stage, and the deployment stage. The training stage is concerned with building the classification model based on training images and annotations reflecting the boundary around the retinal area. In the testing and evaluation stage, automatic annotations are performed on the test set of images and the classifier performance is evaluated against the manual annotations to determine accuracy. Finally, the deployment stage performs the automatic extraction of the retinal area.

Fig. 2: Block diagram of retina detector framework.

The subtasks for the training, testing, and deployment stages are briefly described as follows:
1) Manual Annotations: The boundary of the retinal area is annotated manually; the foreground and background regions are trained separately.
2) Image Preprocessing: Images are then preprocessed in order to bring the intensity values of each image into a particular range.
3) Generation of Superpixels: The training images, after preprocessing, are represented by small regions called superpixels. Generating a feature vector for each superpixel makes the process computationally efficient compared to generating a feature vector for each pixel.
4) Feature Generation: We generate image-based features which are used to distinguish between the retinal area and the artefacts. The image-based features reflect textural, grayscale, or regional information and are calculated for each superpixel of the images in the training set. In the testing stage, only those features selected by the feature selection process are generated.
5) Feature Selection: Due to the large number of features, the feature array needs to be reduced before classifier construction. This involves selecting the features most significant for classification.
6) Classifier Construction: In conjunction with the manual annotations, the selected features are then used to construct a binary classifier. The result of such a classifier is a superpixel labelled as either true retinal area or artefact.
7) Image Postprocessing: Image postprocessing is performed by morphological filtering so as to determine the retinal area boundary from the superpixels classified by the classification model.


Image Preprocessing
Images were normalized by applying a Gamma (γ) adjustment to bring the mean image intensity to a target value. γ was calculated using

γ = log(μ_target / 255) / log(μ_orig / 255)   (1)

where μ_orig is the mean intensity of the original image and μ_target is the mean intensity of the target image. For image visualization, μ_target is set to 80. Finally, the Gamma adjustment of the image is given as

I_norm = 255 · (I / 255)^γ   (2)
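As a quick illustration, the normalization of eqs. (1)–(2) can be written as the following Python sketch (the prototype described in Section IV was built in MATLAB); it assumes 8-bit grayscale input, which the text does not state explicitly.

```python
import numpy as np

def gamma_normalize(image, target_mean=80.0):
    # Gamma adjustment that maps the mean intensity toward target_mean,
    # assuming 8-bit intensities in [0, 255].
    mu_orig = image.astype(np.float64).mean()
    gamma = np.log(target_mean / 255.0) / np.log(mu_orig / 255.0)  # eq. (1)
    normalized = 255.0 * (image / 255.0) ** gamma                  # eq. (2)
    return normalized.astype(np.uint8)
```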
Generation of Superpixels
The superpixel algorithm groups pixels into different regions, which can be used to calculate image features while reducing the complexity of subsequent image processing tasks. Superpixels capture image redundancy and provide a convenient primitive image pattern. Concerning fundus retinal images, superpixel generation has been used for analyzing anatomical structures and for retinal hemorrhage detection. For retinal hemorrhage detection, the superpixels were generated using a watershed approach, but the number of superpixels generated in our case needs to be controlled; the disadvantage of the watershed approach is that it sometimes generates more superpixels of the artefacts than desired. The superpixel generation method used in our retina detector framework is simple linear iterative clustering (SLIC) [17], which was shown to be efficient not only in terms of computational time, but also in terms of region compactness and adherence. The algorithm is initialized by defining the number of superpixels to be generated; the value was set to 5000 as a compromise between computational stability and prediction accuracy.
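SLIC is available off the shelf in scikit-image; a minimal sketch follows. The file name and compactness value are illustrative assumptions, while the segment count of 5000 matches the text.

```python
from skimage import io
from skimage.segmentation import slic

image = io.imread('slo_scan.png')   # hypothetical input file
# n_segments=5000 as chosen in the text; compactness trades intensity
# similarity against spatial regularity (scikit-image's default is 10).
segments = slic(image, n_segments=5000, compactness=10, start_label=1)
# `segments` is an integer label map: pixels sharing a label form one superpixel.
```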
Feature Generation
After the generation of superpixels, the next step is to determine their features. We intend to differentiate between the retinal area and artefacts using textural, grayscale gradient, and regional features. Textural and gradient-based features are calculated from the red and green channels at different Gaussian blurring scales, also known as smoothing scales [18]. In SLO images, the blue channel is set to zero; therefore, no feature was calculated for the blue channel. The regional features are determined for the image irrespective of the colour channel. The details of these features are described as follows.
Textural Features
Texture can be analyzed using Haralick features [19] via gray level co-occurrence matrix (GLCM) analysis. The GLCM records how often a pixel of grayscale value i occurs adjacent to a pixel of value j. Four angles for observing pixel adjacency, i.e., θ = 0°, 45°, 90°, 135°, are used.
These directions are shown in Fig. 3(a). The GLCM also needs an offset value D, which defines pixel adjacency by a certain distance; in our case, the offset value is set to 1. Fig. 3(b) illustrates the process of creating the GLCM from an image I. The mean value over the four directions was taken for each Haralick feature, and the features were calculated from both red and green channels.

Fig. 3(a): GLCM directions and offset. (b): GLCM process using image I [20].
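As an illustration, the GLCM construction just described maps directly onto scikit-image. This sketch computes a handful of GLCM properties (scikit-image exposes only a subset of Haralick's original set [19]) and averages each over the four directions, as in the text:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch):
    # `patch` is a 2-D uint8 array, e.g. the bounding box of one superpixel
    # in the red or green channel. Offset D = 1, four directions.
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 0, 45, 90, 135 degrees
    glcm = graycomatrix(patch, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'correlation', 'energy', 'homogeneity']
    # Average each property over the four directions.
    return {p: graycoprops(glcm, p).mean() for p in props}
```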

Gradient Features
The reason for including gradient features is the illumination non-uniformity of the artefacts. In order to calculate these features, the response from a Gaussian filter bank [18] is computed. The Gaussian filter bank includes the Gaussian N(σ), its two first-order derivatives N_x(σ) and N_y(σ), and three second-order derivatives N_xx(σ), N_xy(σ), and N_yy(σ) in the horizontal (x) and vertical (y) directions. After convolving the image with the filter bank for a particular channel, the mean value of each filter response is taken over all pixels of each superpixel.
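A sketch of this filter bank using SciPy follows; gaussian_filter's per-axis `order` argument yields derivatives of the Gaussian, and the axis convention (rows = y, columns = x) is noted in the comments. The per-superpixel averaging shown is one plausible realization of the description above, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def filter_bank_responses(channel, sigma):
    # Responses of the bank N, Nx, Ny, Nxx, Nxy, Nyy at scale sigma.
    # order=(0, 1) differentiates once along the last axis (x = columns).
    channel = channel.astype(np.float64)
    return {
        'N':   gaussian_filter(channel, sigma, order=(0, 0)),
        'Nx':  gaussian_filter(channel, sigma, order=(0, 1)),
        'Ny':  gaussian_filter(channel, sigma, order=(1, 0)),
        'Nxx': gaussian_filter(channel, sigma, order=(0, 2)),
        'Nxy': gaussian_filter(channel, sigma, order=(1, 1)),
        'Nyy': gaussian_filter(channel, sigma, order=(2, 0)),
    }

def superpixel_gradient_features(channel, segments, sigma):
    # Mean of each filter response over the pixels of every superpixel,
    # where `segments` is the SLIC label map from the previous step.
    responses = filter_bank_responses(channel, sigma)
    return {label: [r[segments == label].mean() for r in responses.values()]
            for label in np.unique(segments)}
```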
Regional Features
The features used to define regional attributes were included because superpixels belonging to artefacts have irregular shapes compared to those belonging to the retinal area in an SLO image. The image features are calculated for each superpixel of the images present in the training set, and they form a matrix of the form

FM = [ A^tex_R   A^tex_G ]
     [ B^tex_R   B^tex_G ]   (3)
where A and B represent the class of true retinal area and the class of artefacts, the superscript tex denotes textural features, and the subscripts R and G denote the red and green channels, respectively. For determining features at different smoothing scales, both the red and green channels of the images are convolved with the Gaussian [18] at scales σ = 1, 2, 4, 8, 16. The textural features are calculated at the original scale as well as at the five smoothing scales, so as to accommodate their image response in the training set after blurring. In this way, the total number of columns in both channels of A^tex and B^tex will be 22, making it 44 altogether.
In total, there are 88 features in the feature matrix for each superpixel of the images present in the training set. Each column of the feature matrix calculated for a particular image is normalized using z-score normalization, which returns the scores of the column with zero mean and unit variance.
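For completeness, column-wise z-score normalization is a one-liner in NumPy; the epsilon guard below is an added safeguard for constant columns, not something the paper specifies.

```python
import numpy as np

def zscore_columns(feature_matrix, eps=1e-12):
    # Normalize each column to zero mean and unit variance.
    # feature_matrix has shape (n_superpixels, n_features).
    mu = feature_matrix.mean(axis=0)
    sigma = feature_matrix.std(axis=0)
    return (feature_matrix - mu) / (sigma + eps)
```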
Feature Selection
The main purposes of feature selection are reducing execution time, determining the features most relevant to the classification, and dimensionality reduction. For feature selection, we have chosen a filter and sequential forward selection (SFS) approach. Although we extract regional, gradient, and textural features, only the textural features, i.e., 44 features, are given as inputs to the classifier. Table 1 presents the percentage of each type of feature selected in each feature set. The table shows a clear dominance of textural features over gradient and regional features. Therefore, we need not run feature selection at test time but can feed the textural features directly into the feature matrix.
Table - 1
Percentage of Different Types of Features across Different Feature Sets
Feature Set Textural Features Gradient Features Regional Features
SFS Approach 90% 10% 0%
Filter Approach 72.73% 24.24% 3.03%
Filter and SFS Approach 100% 0% 0%
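As one way to realize the filter + SFS combination, scikit-learn provides SelectKBest and SequentialFeatureSelector. The placeholder data, the intermediate size k=60, and the cross-validation setting below are illustrative assumptions rather than values from the paper; only the target of 44 selected features matches the text.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 88))        # placeholder: 88 features per superpixel
y = rng.integers(0, 2, size=200)      # placeholder labels: retina vs. artefact

# Filter step: rank features individually by an ANOVA F-score.
X_filtered = SelectKBest(f_classif, k=60).fit_transform(X, y)

# SFS step: greedily grow the subset that most helps the classifier.
# (Computationally heavy; shown for structure only.)
sfs = SequentialFeatureSelector(
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=300),
    n_features_to_select=44, direction='forward', cv=3)
X_selected = sfs.fit_transform(X_filtered, y)
```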
Classifier Construction
The classifier is constructed in order to determine the different classes in a test image. In our case, it is a two-class problem: true retinal area versus artefacts. We have applied Artificial Neural Networks (ANNs). An ANN is a classification algorithm inspired by the human and animal brain. It is composed of many interconnected units called artificial neurons. The ANN takes training samples as input and determines the model that best fits the training samples using nonlinear regression. Consider Fig. 4, which shows the three basic blocks of an ANN, i.e., the input, the hidden layer (used for recoding or providing a representation of the input), and the output layer. More than one hidden layer can be used, but in our case there is only one hidden layer with ten neurons.
The output of each layer is a matrix of floating-point values, obtained by the sigmoid function

h_W(x) = 1 / (1 + exp(−(W^T x + b)))   (4)

where b is the bias value and W are the weights of the input x. These weights can be determined by the backpropagation algorithm, which tends to minimize the mean square error between the desired output t and the actual output y:

err = (1/2) (t − y)²   (5)

Fig. 4: ANN diagram.

The gradient of the error with respect to each weight w_i is

∂err/∂w_i = −(t − y) · y(1 − y) · x_i   (6)

and the weights are updated accordingly as

Δw_i = η (t − y) x_i   (7)

where η represents the step size. The weights were updated for 1000 iterations.
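To make the update rule concrete, here is a minimal sketch of eqs. (4)–(7) for a single sigmoid unit in Python. The paper's actual network has a ten-neuron hidden layer trained by backpropagation; this shows only the weight-update mechanics on one neuron.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))                 # eq. (4)

def train_sigmoid_unit(X, t, eta=0.1, iterations=1000):
    # X: (n_samples, n_features) z-scored features; t: targets in {0, 1}.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(iterations):                     # 1000 iterations, as in the text
        for x, target in zip(X, t):
            y = sigmoid(w @ x + b)
            delta = target - y                      # error term, cf. eqs. (6)-(7)
            w += eta * delta * x
            b += eta * delta
    return w, b
```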
Image Postprocessing
After the classification of the test image, the superpixels are refined using morphological operations so as to remove misclassified isolated superpixels. Morphological closing is performed to remove small gaps between superpixels; the disk structuring element can have a small radius, say 10. For better results, area opening is performed to remove the one or two remaining misclassified isolated superpixels. The retinal area is then detected and plotted.
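These two operations map directly onto scikit-image. In this sketch the minimum region size for the area-opening step is an illustrative assumption, while the disk radius of 10 comes from the text.

```python
from skimage.morphology import closing, disk, remove_small_objects

def refine_mask(retina_mask, disk_radius=10, min_region_px=500):
    # retina_mask: boolean image of superpixels classified as retina.
    # Close small gaps between superpixels, then drop tiny isolated
    # regions (an area-opening-style cleanup).
    closed = closing(retina_mask, disk(disk_radius))
    return remove_small_objects(closed, min_size=min_region_px)
```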
Classification of Healthy and Unhealthy Retina
The plotted retinal area alone is extracted by removing the artefacts, and the RGB image is converted to a gray-level image. Textural features are analyzed using Haralick features [19] via gray level co-occurrence matrix (GLCM) analysis, which determines how often a pixel of grayscale value i occurs adjacent to a pixel of value j. Four angles for observing pixel adjacency, i.e., θ = 0°, 45°, 90°, 135°, are used, and the offset value is again set to 1. The mean value in each direction is taken for each Haralick feature. In all, 44 features are extracted and saved for the testing and training stages.
The classifier is constructed in order to determine the different classes in a test image. In our case, it is a two-class problem: healthy retina versus unhealthy retina. We have applied Artificial Neural Networks (ANNs), trained with the scaled conjugate gradient (SCG) backpropagation algorithm. SCG backpropagation is a network training algorithm that updates weight and bias values according to the scaled conjugate gradient method; it can train any network as long as its weight, net input, and transfer functions have derivative functions. Backpropagation is used to calculate the derivatives of performance with respect to the weight and bias variables.
The conjugate gradient algorithms, in particular SCG, perform well over a wide variety of problems, particularly for networks with a large number of weights. The SCG algorithm is almost as fast as the LM algorithm on function approximation problems (faster for large networks) and almost as fast as resilient backpropagation on pattern recognition problems, and its performance does not degrade as quickly as that of resilient backpropagation when the error is reduced. The conjugate gradient algorithms have relatively modest memory requirements and are widely used in pattern recognition problems. The trained network determines healthy and unhealthy retinas based on the information obtained during the training stage: when an input is given, it is classified based on that information, and the output is shown as a pop-up message indicating a healthy or unhealthy retina.
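A sketch of this final stage using scikit-learn is shown below. scikit-learn's MLPClassifier does not offer a scaled-conjugate-gradient solver, so 'lbfgs' stands in for SCG here, and the 44-column GLCM feature arrays are generated as placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 44))    # placeholder: 44 GLCM features per image
y_train = rng.integers(0, 2, size=20)  # 0 = healthy, 1 = unhealthy
X_test = rng.normal(size=(1, 44))

# Ten hidden neurons and a logistic activation mirror the text; the
# solver is a stand-in, since sklearn has no SCG implementation.
clf = MLPClassifier(hidden_layer_sizes=(10,), activation='logistic',
                    solver='lbfgs', max_iter=1000)
clf.fit(X_train, y_train)
print('unhealthy retina' if clf.predict(X_test)[0] else 'healthy retina')
```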

IV. EXPERIMENTAL EVALUATION

The images for training and testing were obtained from Optos [2] and were acquired using their ultra-widefield SLO. The method was implemented as a MATLAB 2014 prototype and tested with SLO image data from Optos devices. The dataset is composed of healthy and diseased retinal images; most of the diseased retinal images are from Diabetic Retinopathy patients. The system was trained with 20 images and tested against 40 images. The advantage of using an ANN is its high computational efficiency in terms of testing time. Figs. 5 and 6 show the superpixel classification results and final output after postprocessing for examples of healthy and diseased retinal images.

Fig. 5: Superpixel classification result for an example SLO image. (a) Input image of a healthy retina; (b) superpixel classification result; (c) output after postprocessing; (d) output result showing a healthy retina.

Fig. 6: Superpixel classification result for an example SLO image. (a) Input image of an unhealthy retina; (b) superpixel classification result; (c) output after postprocessing; (d) output result showing an unhealthy retina.

Fig. 7: Performance graphs. (a) ANN using the LM algorithm; (b) ANN using the SCG algorithm.

Fig. 8: ROC curves of the ANN using the LM algorithm. (a) Training stage; (b) testing stage; (c) validation.


Fig. 9: ROC curves of the ANN using the SCG algorithm. (a) Training stage; (b) testing stage; (c) validation.

The ANN is able to achieve average accuracy close to that of other classifiers such as SVM and kNN, while saving significant computational time when processing millions of images for automatic annotation. Fig. 7(a) and (b) show the performance graphs of the ANN using the LM and SCG algorithms. In Fig. 7(a), performance is measured in terms of mean square error (MSE); the MSE obtained is 4.3808 × 10⁻², which is close to zero, and the graph shows the best validation performance at the 24th epoch. In Fig. 7(b), the MSE obtained is 3.8435 × 10⁻², again close to zero. The classification therefore performs well. The ROC curves of the ANN using LM and SCG are shown in Figs. 8 and 9; the graphs indicate the high performance of the ANN classifier.

V. DISCUSSION AND CONCLUSION

The proposed methodology is a novel framework for automatic detection of the true retinal area in SLO images. Superpixels are used to segment regions in a compact way and to reduce the computational cost. Textural features are extracted, and ANN classifiers have been built to extract the retinal area and classify retinal health. Compared with two other classifiers, the ANN was comparable in accuracy while saving computational time; in particular, its testing time was the lowest. The experimental evaluation shows that our proposed framework achieves high performance with a very small mean square error. The retina detection framework serves as the first step toward the processing of ultra-widefield SLO images. A complete retinal scan is possible if the retina is imaged from different eye-steered angles using an ultra-widefield SLO and the resulting images are then montaged; montaging is possible only if the artefacts are removed beforehand. The main purpose, removal of the artefacts, has been implemented, and the classification of healthy and unhealthy retina is then performed: after training and testing, a given input SLO image is classified as a healthy or unhealthy retina. Future work includes identifying the specific diseases in unhealthy retinal images.

REFERENCES
[1] M. S. Haleem, L. Han, J. van Hemert, and B. Li, "Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: A review," Comput. Med. Imag. Graph., vol. 37, pp. 581–596, 2013.
[2] Optos. (2014). [Online]. Available: www.optos.com
[3] R. C. Gonzalez and R. E. Woods, Eds., Digital Image Processing, 3rd ed. Englewood Cliffs, NJ, USA: Prentice-Hall, 2006.
[4] M. J. Aligholizadeh, S. Javadi, R. S. Nadooshan, and K. Kangarloo, "Eyelid and eyelash segmentation based on wavelet transform for iris recognition," in Proc. 4th Int. Congr. Image Signal Process., 2011, pp. 1231–1235.
[5] D. Zhang, D. Monro, and S. Rakshit, "Eyelash removal method for human iris recognition," in Proc. IEEE Int. Conf. Image Process., 2006, pp. 285–288.
[6] A. V. Mire and B. L. Dhote, "Iris recognition system with accurate eyelash segmentation and improved FAR, FRR using textural and topological features," Int. J. Comput. Appl., vol. 7, pp. 0975–8887, 2010.
[7] Y.-H. Li, M. Savvides, and T. Chen, "Investigating useful and distinguishing features around the eyelash region," in Proc. 37th IEEE Workshop Appl. Imag. Pattern Recog., 2008, pp. 1–6.
[8] B. J. Kang and K. R. Park, "A robust eyelash detection based on iris focus assessment," Pattern Recog. Lett., vol. 28, pp. 1630–1639, 2007.
[9] T. H. Min and R. H. Park, "Eyelid and eyelash detection method in the normalized iris image using the parabolic Hough model and Otsu's thresholding method," Pattern Recog. Lett., vol. 30, pp. 1138–1143, 2009.
[10] Iris database. (2005). [Online]. Available: http://www.cbsr.ia.ac.cn/ IrisDatabase.htm
[11] H. Davis, S. Russell, E. Barriga, M. Abramoff, and P. Soliz, "Vision-based, real-time retinal image quality assessment," in Proc. 22nd IEEE Int. Symp. Comput.-Based Med. Syst., 2009, pp. 1–6.
[12] H. Yu, C. Agurto, S. Barriga, S. C. Nemeth, P. Soliz, and G. Zamora, "Automated image quality evaluation of retinal fundus photographs in diabetic retinopathy screening," in Proc. IEEE Southwest Symp. Image Anal. Interpretation, 2012, pp. 125–128.
[13] J. A. M. P. Dias, C. M. Oliveira, and L. A. d. S. Cruz, "Retinal image quality assessment using generic image quality indicators," Inf. Fusion, vol. 13, pp. 1–18, 2012.
[14] M. Barker and W. Rayens, "Partial least squares for discrimination," J. Chemom., vol. 17, pp. 166–173, 2003.
[15] J. Paulus, J. Meier, R. Bock, J. Hornegger, and G. Michelson, "Automated quality assessment of retinal fundus photos," Int. J. Comput. Assisted Radiol. Surg., vol. 5, pp. 557–564, 2010.
[16] R. Pires, H. Jelinek, J. Wainer, and A. Rocha, "Retinal image quality analysis for automatic diabetic retinopathy detection," in Proc. 25th SIBGRAPI Conf. Graph., Patterns Images, 2012, pp. 229–236.


[17] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 11, pp. 2274–2282, Nov. 2012.
[18] M. Abràmoff, W. Alward, E. Greenlee, L. Shuba, C. Kim, J. Fingert, and Y. Kwon, "Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features," Invest. Ophthalmol. Vis. Sci., vol. 48, pp. 1665–1673, 2007.
[19] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Trans. Syst., Man, Cybern., vol. SMC-3, no. 6, pp. 610–621, Nov. 1973.
[20] R. Haralick and L. Shapiro, Eds., Computer and Robot Vision. Reading, MA, USA: Addison-Wesley, 1991.

