
For: B.E | B.Tech | M.E | M.Tech | MCA | BCA | Diploma | MS | M.Sc
IEEE REAL TIME PROJECTS & TRAINING GUIDE
SOFTWARE & EMBEDDED

www.makefinalyearproject.com

PROJECT TITLES FOR ACADEMIC YEAR 2018-2019


#19, MN Complex, 2nd Cross, Sampige Main Road, Malleswaram, Bangalore – 560003
Call Us: 9590544567 / 7019280372 www.makefinalyearproject.com
www.igeekstechnologies.com Landmark: Opposite Joyalukkas Gold Showroom, near Mantri Mall
IEEE 2018 IMAGE PROCESSING PROJECT TITLES & ABSTRACTS
IGTMA01 TITLE: Size-Scalable Content-Based Histopathological Image Retrieval From
Database That Consists of WSIs

Abstract—Content-based image retrieval (CBIR) has been widely researched for
histopathological images. It is challenging to retrieve content-similar regions from
histopathological whole slide images (WSIs) for regions of interest (ROIs) of different
sizes. In this paper, we propose a novel CBIR framework for a database that consists of
WSIs and size-scalable query ROIs. Each WSI in the database is encoded into a matrix
of binary codes. When retrieving, a group of region proposals of similar size to the
query ROI is first located in the database through an efficient table-lookup approach.
Then, these regions are ranked by a designed multi-binary-code-based similarity
measurement. Finally, the top relevant regions, their locations in the WSIs, and the
corresponding diagnostic information are returned to assist pathologists. The
effectiveness of the proposed framework is evaluated on a finely annotated WSI
database of epithelial breast tumors. The experimental results prove that the proposed
framework is effective for retrieval from a database that consists of WSIs. Specifically,
for query ROIs of 4096 × 4096 pixels, the retrieval precision of the top 20 returns
reaches 96% and the retrieval time is less than 1.5 s.
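
Since each WSI is encoded as a matrix of binary codes, the ranking step reduces to
Hamming-distance comparisons. A minimal NumPy sketch of that step follows; the
64-bit codes, database size, and single-code distance are illustrative assumptions (the
paper ranks region proposals with a multi-binary-code similarity measurement).

```python
import numpy as np

def hamming_distances(query_code, database_codes):
    """Hamming distance between one binary code and every row of a code matrix."""
    return np.count_nonzero(database_codes != query_code, axis=1)

rng = np.random.default_rng(0)
database_codes = rng.integers(0, 2, size=(10_000, 64), dtype=np.uint8)  # 10k regions
query_code = rng.integers(0, 2, size=64, dtype=np.uint8)                # encoded ROI

d = hamming_distances(query_code, database_codes)
top20 = np.argsort(d)[:20]     # indices of the 20 most similar regions
print(top20, d[top20])
```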

IGTMA02 TITLE: Optic Disk Detection in Fundus Image Based on Structured Learning

Abstract—Automated optic disk (OD) detection plays an important role in developing
computer-aided systems for eye diseases. In this paper, we propose an algorithm for
OD detection based on structured learning. A classifier model is trained using
structured learning and then used to obtain the edge map of the OD. Thresholding
is performed on the edge map to obtain a binary image of the OD. Finally, the
circular Hough transform is carried out to approximate the boundary of the OD by
a circle. The proposed algorithm has been evaluated on three public datasets and
obtained promising results. The results (an area overlap and Dice coefficient of
0.8605 and 0.9181, respectively, an accuracy of 0.9777, and true positive and false
positive fractions of 0.9183 and 0.0102) show that the proposed method is very
competitive with state-of-the-art methods and is a reliable tool for segmentation
of the OD.
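
The last two stages of the pipeline are easy to prototype. The sketch below uses
OpenCV, with a Canny edge map standing in for the structured-learning edge detector
and the circular Hough transform approximating the OD boundary; the file name and
parameter values are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)   # hypothetical fundus image
edges = cv2.Canny(img, 50, 150)                        # stand-in for the learned edge map
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY)  # binary image of OD edges

# Approximate the OD boundary by a circle with the Hough transform
circles = cv2.HoughCircles(binary, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=100, param2=20, minRadius=30, maxRadius=120)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)      # strongest circle
    print(f"optic disk centre ({x}, {y}), radius {r}")
```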

IGTMA03 TITLE: Color Balance and Fusion for Underwater Image Enhancement

Abstract—We introduce an effective technique to enhance images captured
underwater, which are degraded by scattering and absorption in the medium. Our
method is a single-image approach that does not require specialized hardware or
knowledge about the underwater conditions or scene structure. It builds on the
blending of two images that are directly derived from a color-compensated and
white-balanced version of the original degraded image. The two images to be fused,
as well as their associated weight maps, are defined to promote the transfer of edges
and color contrast to the output image. To avoid sharp weight-map transitions
creating artifacts in the low-frequency components of the reconstructed image, we
also adopt a multiscale fusion strategy. Our extensive qualitative and quantitative
evaluation reveals that our enhanced images and videos are characterized by better
exposedness of the dark regions, improved global contrast, and sharper edges. Our
validation also shows that our algorithm is reasonably independent of the camera
settings and improves the accuracy of several image processing applications, such
as image segmentation and keypoint matching.
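
The color-compensation and white-balancing step that both fusion inputs are derived
from can be sketched compactly. The snippet below implements a red-channel
compensation followed by gray-world white balance; the compensation strength and
file name are assumptions, and the multiscale fusion itself is omitted.

```python
import cv2
import numpy as np

def compensate_red(img):
    """Boost the attenuated red channel using the green channel's information."""
    b, g, r = cv2.split(img.astype(np.float32) / 255.0)
    alpha = 1.0  # compensation strength (assumed value)
    r = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    return cv2.merge([b, g, r])

def gray_world(img):
    """Scale each channel so its mean matches the global mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0.0, 1.0)

img = cv2.imread("underwater.png")                     # hypothetical input
balanced = gray_world(compensate_red(img))
cv2.imwrite("balanced.png", (balanced * 255).astype(np.uint8))
```
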
IGTMA04 TITLE: SDL: Saliency-Based Dictionary Learning Framework for Image
Similarity

Abstract—In image classification, obtaining adequate data to learn a robust
classifier has often proven to be difficult in several scenarios. Classification of
histological tissue images for health care analysis is a notable application in this
context due to the necessity of surgery, biopsy or autopsy. To adequately exploit
limited training data in classification, we propose a saliency-guided dictionary
learning method and subsequently an image similarity technique for
histopathological image classification. Salient object detection from images aids in the
identification of discriminative image features. We leverage the saliency values
for the local image regions to learn a dictionary and respective sparse codes for
an image, such that the more salient features are reconstructed with smaller
error. The dictionary learned from an image gives a compact representation of
the image itself and is capable of representing images with similar content, with
comparable sparse codes. We employ this idea to design a similarity measure
between a pair of images, where local image features of one image are encoded
with the dictionary learned from the other and vice versa. To effectively utilize
the learned dictionary, we take into account the contribution of each dictionary
atom in the sparse codes to generate a global image representation for image
comparison. The efficacy of the proposed method was evaluated using three tissue
data sets that consist of mammalian kidney, lung and spleen tissue, breast cancer,
and colon cancer tissue images. From the experiments, we observe that our
methods outperform the state of the art with an increase of 14.2% in the average
classification accuracy over all data sets.

IGTMA05 TITLE: Remote Sensing Image Classification With Large-Scale Gaussian
Processes

Abstract—Current remote sensing image classification problems have to deal with an
unprecedented amount of heterogeneous and complex data sources. Upcoming
missions will soon provide large data streams that will make land cover/use
classification difficult. Machine-learning classifiers can help here, and many methods
are currently available. A popular kernel classifier is the Gaussian process classifier
(GPC), since it approaches the classification problem with a solid probabilistic
treatment, yielding confidence intervals for the predictions as well as results very
competitive with state-of-the-art neural networks and support vector machines.
However, its computational cost is prohibitive for large-scale applications and
constitutes the main obstacle precluding wide adoption. This paper tackles this
problem by introducing two novel efficient methodologies for GP classification. We
first include the standard random Fourier features approximation into GPC, which
largely decreases its computational cost and permits large-scale remote sensing image
classification. In addition, we propose a model that avoids randomly sampling the
Fourier frequencies and instead learns the optimal ones within a variational Bayes
approach. The performance of the proposed methods is illustrated in complex
problems of cloud detection from multispectral imagery and infrared sounding data.
Excellent empirical results support the proposal in both computational cost and
accuracy.

IGTMA06 TITLE: Classification of Medical Images in the Biomedical Literature by Jointly
Using Deep and Handcrafted Visual Features

Abstract—The classification of medical images and illustrations from the biomedical
literature is important for automated literature review, retrieval, and mining. Although
deep learning is effective for large-scale image classification, it may not be the optimal
choice for this task as there is only a small training dataset. We propose a combined
deep and handcrafted visual feature (CDHVF) based algorithm that uses features
learned by three fine-tuned and pretrained deep convolutional neural networks
(DCNNs) and two handcrafted descriptors in a joint approach. We evaluated the
CDHVF algorithm on the ImageCLEF 2016 Subfigure Classification dataset and it
achieved an accuracy of 85.47%, which is higher than the best performance of other
purely visual approaches listed in the challenge leaderboard. Our results indicate that
handcrafted features complement the image representation learned by DCNNs on
small training datasets and improve accuracy in certain medical image classification
problems.

IGTMA07 TITLE: The Simple Image Processing Scheme for Document Retrieval Using
Date of Issue as Query

Abstract—Storing paper-based documents converted into digital image format is one
of the effective solutions to preserve the content of a document. Searching the
document images stored in a repository by using time as the query is one of the
important tasks of document retrieval. This paper presents a simple image processing
scheme to retrieve documents that contain the desired date of issue, printed in Thai
alphabets, from a repository. The procedure of this retrieval scheme consists of four
stages: image acquisition, pre-processing, zone identification, and pattern recognition.
The proposed machine vision scheme gave excellent results over 365 dummy dates
of issue of documents in the tested repository. It is a simple and uncomplicated
approach for the document image retrieval of Thai government letters, printed in
Thai alphabets, using the date of issue as the query.

IGTMA08 TITLE: Blind Quality Assessment Based on Pseudo-Reference Image

Abstract—Traditional full-reference image quality assessment (IQA) metrics
generally predict the quality of the distorted image by measuring its deviation from a
perfect quality image called reference image. When the reference image is not fully
available, the reduced-reference and no-reference IQA metrics may still be able to
derive some characteristics of the perfect quality images, and then measure the
distorted image’s deviation from these characteristics. In this paper, contrary to the
conventional IQA metrics, we utilize a new “reference” called pseudo-reference image
(PRI) and a PRI-based blind IQA (BIQA) framework. Different from a traditional
reference image, which is assumed to have a perfect quality, PRI is generated from the
distorted image and is assumed to suffer from the severest distortion for a given
application. Based on the PRI-based BIQA framework, we develop distortion-specific
metrics to estimate blockiness, sharpness, and noisiness. The PRI-based metrics
calculate the similarity between the distorted image’s and the PRI’s structures. An
image suffering from severer distortion has a higher degree of similarity with the
corresponding PRI. Through a two-stage quality regression after a distortion
identification framework, we then integrate the PRI-based distortion-specific metrics
into a general-purpose BIQA method named blind PRI-based (BPRI) metric. The BPRI
metric is opinion-unaware (OU) and almost training-free except for the distortion
identification process. Comparative studies on five large IQA databases show that the
proposed BPRI model is comparable to state-of-the-art opinion-aware and OU
BIQA models. Furthermore, BPRI not only performs well on natural scene images but
is also applicable to screen content images. The MATLAB source code of BPRI and
other PRI-based distortion-specific metrics will be made publicly available.
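
The PRI idea for the sharpness metric can be illustrated in a few lines: blur the
distorted image to create a "worst-case" PRI, then measure how similar the two
images' gradient fields are, so a blurrier input scores closer to its PRI. The blur
kernel and similarity form below are assumptions, not the exact BPRI formulation.

```python
import cv2
import numpy as np

def gradient_magnitude(img):
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    return np.sqrt(gx * gx + gy * gy)

def pri_sharpness_similarity(img):
    pri = cv2.GaussianBlur(img, (15, 15), 5)     # pseudo-reference: severest blur
    g1, g2 = gradient_magnitude(img), gradient_magnitude(pri)
    c = 1e-3                                     # stabilising constant
    return np.mean((2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c))  # high = blurry

img = cv2.imread("photo.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
print("similarity to blur PRI:",
      pri_sharpness_similarity(img.astype(np.float32) / 255.0))
```
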
IGTMA09 TITLE: Predicting CT Image From MRI Data Through Feature Matching With
Learned Nonlinear Local Descriptors

Abstract—Attenuation correction for positron-emission tomography (PET)/magnetic
resonance (MR) hybrid imaging systems and dose planning for MR-based radiation
therapy remain challenging due to insufficient high-energy photon attenuation
information. We present a novel approach that uses the learned nonlinear local
descriptors and feature matching to predict pseudo computed tomography (pCT)
images from T1-weighted and T2-weighted magnetic resonance imaging (MRI) data.
The nonlinear local descriptors are obtained by projecting the linear descriptors into
the nonlinear high-dimensional space using an explicit feature map and low-rank
approximation with supervised manifold regularization. The nearest neighbors of each
local descriptor in the input MR images are searched in a constrained spatial range of
the MR images among the training dataset. Then the pCT patches are estimated
through nearest-neighbor regression. The proposed method for pCT prediction is
quantitatively analyzed on a dataset consisting of paired brain MRI and CT images
from 13 subjects. Our method generates pCT images with a mean absolute error (MAE)
of 75.25 ± 18.05 Hounsfield units, a peak signal-to-noise ratio of 30.87 ± 1.15 dB, a
relative MAE of 1.56 ± 0.5% in PET attenuation correction, and a dose relative
structure volume difference of 0.055 ± 0.107% in D98%, as compared with true CT.
The experimental results also show that our method outperforms four state-of-the-art
methods.

IGTMA10 TITLE: Learning a Deep Single Image Contrast Enhancer from Multi-Exposure
Images

Abstract—Due to poor lighting conditions and the limited dynamic range of digital
imaging devices, recorded images are often under-/over-exposed and have low
contrast. Most previous single image contrast enhancement (SICE) methods adjust
the tone curve to correct the contrast of an input image. Those methods, however, often
fail in revealing image details because of the limited information in a single image. On
the other hand, the SICE task can be better accomplished if we can learn extra
information from appropriately collected training data. In this paper, we propose to use
the convolutional neural network (CNN) to train a SICE enhancer. One key issue is
how to construct a training data set of low-contrast and high-contrast image pairs for
end-to-end CNN learning. To this end, we build a large-scale multi-exposure image
data set, which contains 589 elaborately selected high-resolution multi-exposure
sequences with 4,413 images. Thirteen representative multi-exposure image fusion and
stack-based high dynamic range imaging algorithms are employed to generate the
contrast enhanced images for each sequence, and subjective experiments are conducted
to screen the best quality one as the reference image of each scene. With the
constructed data set, a CNN can be easily trained as the SICE enhancer to improve the
contrast of an under-/over-exposed image. Experimental results demonstrate the
advantages of our method over existing SICE methods by a significant margin.
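
End-to-end CNN learning on low-/high-contrast pairs can be sketched with a few layers
of PyTorch. The architecture, residual formulation, and L1 loss below are assumptions
for illustration; the paper's enhancer is more elaborate and is trained on the
constructed multi-exposure data set.

```python
import torch
import torch.nn as nn

class SICENet(nn.Module):
    """Tiny residual CNN mapping a low-contrast image to an enhanced one."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.body(x), 0.0, 1.0)   # predict a correction

model = SICENet()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

low = torch.rand(4, 3, 64, 64)   # stand-in low-contrast batch
ref = torch.rand(4, 3, 64, 64)   # stand-in reference batch

optim.zero_grad()
loss = nn.functional.l1_loss(model(low), ref)
loss.backward()
optim.step()
print("L1 loss:", loss.item())
```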

IGTMA11 TITLE: REPRESENTATIVE PIXELS COMPRESSION ALGORITHM USING
GRAPH SIGNAL PROCESSING FOR COLORIZATION-BASED IMAGE CODING

This paper deals with the colorization-based image coding algorithm. In this algorithm,
a color image is compressed by encoding its luminance image with a standard coding
method such as JPEG and by storing several color pixels called representative
pixels (RPs). In the decoding phase, a color image is restored from the luminance
image and the color information of the RPs using an image colorization technique.
While previous studies have achieved high coding performance, the compression
of the RPs themselves has not been considered, because the positions of the RPs are
inhomogeneous. In order to improve the image coding performance, this paper
proposes an RP compression algorithm using the graph Fourier transform. Numerical
results show that the proposed algorithm achieves better performance than JPEG2000
coding.
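
The graph Fourier transform over scattered RPs can be sketched directly: build a
distance-weighted graph on the RP positions, take the Laplacian's eigenvectors as the
transform basis, and transform the color signal. The Gaussian affinity and its
bandwidth are assumptions about the graph construction.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 255, size=(50, 2))      # inhomogeneous RP positions
signal = np.sin(pos[:, 0] / 40.0)            # smooth stand-in chroma signal at the RPs

d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 40.0 ** 2))            # Gaussian affinity between RPs
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                    # combinatorial graph Laplacian

eigvals, U = np.linalg.eigh(L)               # GFT basis: Laplacian eigenvectors
coeffs = U.T @ signal                        # graph Fourier coefficients

# Smooth graph signals compact their energy into low-frequency coefficients,
# which is what makes the coefficients easier to compress than raw RP values.
energy = np.cumsum(coeffs ** 2) / np.sum(coeffs ** 2)
print("energy in the first 10 coefficients:", energy[9])
```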

IGTMA12 TITLE: Improve Image De-blurring

Image blurring is one of the most important concerns that degrade the quality of an
image, and it can occur for many different reasons. Image de-blurring is therefore a
large research area, and many image processing methods have been developed to
address the blurring issue. In this paper, a new filter is suggested that can be merged
with other de-blurring methods to improve their results. The proposed filter is based
on combining a Markov basis with the Laplace filter, slightly modified to make it
appropriate for color images. Enhancing image edge content is one use of the
proposed filter. Moreover, merging the proposed filter with other de-blurring
algorithms yields high-quality outcomes that improve the performance of several
de-blurring procedures, and applying a median filter further improves the results for
both color and gray images. The proposed filter is compared with other filters and
gives promising results.
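
The merge step can be illustrated with a plain Laplacian in place of the paper's
Markov-basis construction: sharpen the output of any de-blurring method, then apply
a median filter to suppress amplified noise. File names and the strength parameter
are assumptions.

```python
import cv2
import numpy as np

def laplacian_sharpen(img, strength=0.5):
    """Sharpen by subtracting a scaled Laplacian, channel by channel."""
    lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
    return np.clip(img.astype(np.float32) - strength * lap, 0, 255).astype(np.uint8)

deblurred = cv2.imread("deblurred.png")   # hypothetical de-blurring output
sharpened = laplacian_sharpen(deblurred)
cleaned = cv2.medianBlur(sharpened, 3)    # median filter suppresses amplified noise
cv2.imwrite("enhanced.png", cleaned)
```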

IGTMA13 TITLE: Underwater Image Color Correction using Exposure-Bracketing
Imaging

Abstract—Absorption and scattering of light in an underwater scene saliently attenuate
red spectrum components. They cause heavy color distortions in the captured
underwater images. In this letter, we propose a method for color-correcting underwater
images, utilizing a framework of gray information estimation for color constancy. The
key novelty of our method is to utilize exposure bracketing imaging: a technique to
capture multiple images with different exposure times for color correction. The long-
exposure image is useful for sufficiently acquiring red spectrum information of
underwater scenes. In contrast, pixel values in the green and blue channels in the short-
exposure image are suitable because they are unlikely to attenuate more than the red
ones. By selecting appropriate images (i.e., least over- and under-exposed images) for
each color channel from those taken with exposure-bracketing imaging, we fuse an
image that includes sufficient spectral information of underwater scenes. The fused
image allows us to extract reliable gray information of scenes; thus, effective color
corrections can be achieved. We perform color correction by linear regression of gray
information estimated from the fused image. Experiments using real underwater
images demonstrate the effectiveness of our method.
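
The per-channel selection can be sketched as follows: take the red channel from the
long exposure (where it is better measured) and green/blue from the short exposure,
then apply gray-world gains as a simple linear correction. This is a two-image
simplification of the paper's selection of the least over- and under-exposed image
per channel; file names are assumptions.

```python
import cv2
import numpy as np

long_exp = cv2.imread("underwater_long.png").astype(np.float32) / 255.0   # hypothetical
short_exp = cv2.imread("underwater_short.png").astype(np.float32) / 255.0

fused = short_exp.copy()
fused[:, :, 2] = long_exp[:, :, 2]   # red channel (BGR order) from the long exposure

means = fused.reshape(-1, 3).mean(axis=0)
corrected = np.clip(fused * (means.mean() / means), 0.0, 1.0)  # gray-world gains
cv2.imwrite("color_corrected.png", (corrected * 255).astype(np.uint8))
```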

IGTMA14 TITLE: CALCIUM REMOVAL FROM CARDIAC CT IMAGES USING DEEP
CONVOLUTIONAL NEURAL NETWORK

Coronary calcium causes beam hardening and blooming artifacts on cardiac computed
tomography angiography (CTA) images, which lead to overestimation of lumen
stenosis and reduction of diagnostic specificity. To properly remove coronary
calcification and restore the arterial lumen precisely, we propose a machine learning-based
method with a multi-step inpainting process. We developed a new network
configuration, Dense-Unet, to achieve optimal performance with low computational
cost. Results after the calcium removal process were validated by comparing with gold-
standard X-ray angiography. Our results demonstrated that removing coronary
calcification from images with the proposed approach was feasible, and may
potentially improve the diagnostic accuracy of CTA.

IGTMA15 TITLE: SELF-LEARNING TO DETECT AND SEGMENT CYSTS IN LUNG
CT IMAGES WITHOUT MANUAL ANNOTATION

Image segmentation is a fundamental problem in medical image analysis. In recent
years, deep neural networks have achieved impressive performance on many medical image
segmentation tasks by supervised learning on large manually annotated data. However,
expert annotations on big medical datasets are tedious, expensive or sometimes
unavailable. Weakly supervised learning could reduce the annotation effort but still
requires a certain amount of expertise. Recently, deep learning has shown a potential to
produce more accurate predictions than the original erroneous labels. Inspired by this,
we introduce a very weakly supervised learning method, for cystic lesion detection and
segmentation in lung CT images, without any manual annotation. Our method works
in a self-learning manner, where segmentation generated in previous steps (first by
unsupervised segmentation then by neural networks) is used as ground truth for the
next level of network learning. Experiments on a cystic lung lesion dataset show that
the network can perform better than the initial unsupervised annotation and can
progressively improve itself through self-learning.

IGTMA16 TITLE: Quality Assessment of Thai Rice Kernels Using Low Cost Digital Image
Processing System

Abstract—This paper presents a low-cost digital image processing system for quality
assessment of Thai rice kernels. Thailand is the top rice-exporting country in the
world market; according to the Rice Trader, the export volume in 2016 was 9,883,288
tons and the export value was 154,434 million baht, or 4,401 million dollars. Thai rice
quality is controlled by the Rice Department of the Ministry of Commerce of
Thailand in order to guarantee quality in the market, including prices based on the
grade of rice quality. Hence, quality assessment of Thai rice kernels is required.
Quality assessment or grading of Thai rice kernels is usually a manual operation, with
a person using equipment such as a micrometer to measure geometrical features such
as the length, width, and area of rice kernels. This method takes a long time and also
gives uncertain results due to eye fatigue, because rice kernels are very small.
Therefore, an image processing technique is applied to measure the size of Thai rice
kernels. The proposed system consists of a flatbed scanner and an image processing
algorithm that measures Thai rice kernels. The low-cost system for quality
assessment of Thai rice kernels can be delivered to the Thai rice industry; the certainty
of results and the speed of quality assessment can be significantly improved.
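
The geometric measurement itself is straightforward with OpenCV: binarize the scan,
find kernel contours, and read length and width from a rotated bounding box. The Otsu
threshold, area filter, and 600-dpi scale factor below are assumptions for a
particular scanner setup.

```python
import cv2

img = cv2.imread("rice_scan.png", cv2.IMREAD_GRAYSCALE)   # hypothetical flatbed scan
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

MM_PER_PX = 25.4 / 600   # assumed 600-dpi scan
for i, c in enumerate(contours):
    if cv2.contourArea(c) < 50:          # skip dust specks
        continue
    (_, _), (w, h), _ = cv2.minAreaRect(c)
    length, width = max(w, h) * MM_PER_PX, min(w, h) * MM_PER_PX
    print(f"kernel {i}: length {length:.2f} mm, width {width:.2f} mm")
```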

IGTMA17 TITLE: A Simple Measure for Acuity in Medical Images

Abstract—An automatic and objective assessment of image quality is important in an
era where large-scale processing of imaging data from multi-center studies becomes
commonplace. Based on a comprehensive statistical image model that includes noise
and blur, a measure for image acuity is derived here as the ratio of the maximal gradient
magnitude and the intensity difference at a boundary. Acuity may be affected by the
object under study, the image acquisition, reconstruction processes, and any post-
processing steps. The acuity measure presented here is post-hoc, intuitive to
understand, simple to compute, and easily integrates with other standard measures of
image quality. Three applications in medical imaging are included where our acuity
measure is useful in the objective and automatic assessment of image quality.
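
Because the measure is the ratio of the maximal gradient magnitude to the intensity
difference at a boundary, it can be computed from a 1-D profile across an edge in a
few lines; the synthetic profiles below are for illustration, and profile extraction
from a real image is omitted.

```python
import numpy as np

def acuity(profile):
    """Max gradient magnitude divided by the intensity step across the boundary."""
    grad = np.abs(np.gradient(profile))
    step = abs(profile[-1] - profile[0])
    return grad.max() / step

x = np.linspace(-3, 3, 61)
sharp = 10 + 90 * (x > 0)                      # ideal step edge
blurred = 10 + 90 / (1 + np.exp(-x / 0.8))     # smoothed edge

print("sharp edge acuity:", acuity(sharp))      # larger value
print("blurred edge acuity:", acuity(blurred))  # smaller value
```
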
IGTMA18 TITLE: Creation and Solution of Image Processing Based CAPTCHA Test

Abstract—With the development of web-based technologies, it has been observed that
the number of software programs called "bots" on the Internet tends to increase. As a
result of this increase, websites have become unable to use their resources
efficiently. In this study, a CAPTCHA (Completely Automated Public Turing test to tell
Computers and Humans Apart) test, which is used to block bot software, was
created and then solved using image processing techniques. In this context, sample texts
were created and the letters were analyzed. Performance analyses were carried out with
experimental results. As a result of the tests performed, it has been observed that
tests using the template matching method can be solved at a satisfactory
level of accuracy. In addition, it has been found that the solving rates decrease
significantly when geometric distortions are added to the test texts.
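
The solving side of the study rests on template matching, which OpenCV provides
directly; the sketch below scans a CAPTCHA image with per-letter templates and keeps
matches above a correlation threshold. File names, the template set, and the
threshold are assumptions.

```python
import cv2

captcha = cv2.imread("captcha.png", cv2.IMREAD_GRAYSCALE)   # hypothetical test image
for letter in "ABC":                                        # hypothetical template set
    template = cv2.imread(f"template_{letter}.png", cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(captcha, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(result)                # best match and its position
    if score > 0.8:
        print(f"'{letter}' found at {loc} (score {score:.2f})")
```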

IGTMA19 TITLE: Segmentation of Live and Dead Cells in Tissue Scaffolds

Abstract—Image processing techniques are frequently used for extracting
quantitative information (cell area, cell size, cell counts, etc.) from different types
of microscopic images. Image analysis in the fields of cell biology and tissue
engineering is time consuming and requires personal expertise. In addition,
evaluation of the results may be subjective. Therefore, computer-based learning and
vision applications have developed rapidly in recent years. In this study,
images of viable preosteoblastic mouse MC3T3-E1 cells in tissue scaffolds, which
were captured during a bone tissue regeneration study, were analyzed using image
processing techniques. Tissue scaffolds were bioprinted from alginate and alginate-
hydroxyapatite polymers. Confocal Laser Scanning Microscope images of the tissue
scaffolds were processed in the study. Percentages of live and dead cell area
in the scaffolds were determined by using image processing techniques at two
different time points of the culture.
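
A minimal version of the live/dead area measurement assumes the common staining
convention of green for live cells and red for dead cells, thresholds each channel,
and reports area percentages; the file name and Otsu thresholds are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("scaffold_clsm.png")     # hypothetical confocal image (BGR)
b, g, r = cv2.split(img)

_, live = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # green = live
_, dead = cv2.threshold(r, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # red = dead

total = live.size
print("live cell area: %.1f%%" % (100.0 * np.count_nonzero(live) / total))
print("dead cell area: %.1f%%" % (100.0 * np.count_nonzero(dead) / total))
```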

IGTMA20 TITLE: Segmentation of the Main Structures in Hematoxylin and Eosin Images

Abstract—Pathologists conduct a biopsy on a tissue when a carcinoma case is
suspected for a patient. They stain the cells of that tissue using biochemical
materials that react with certain cell elements, put the stained cells onto a slide,
and examine them using an optical microscope. In our case, we focus on
H&E stained breast tissue samples. Pathologists keep track of a standard process to
determine the patient’s condition by focusing on the structures in H&E stained images
such as epithelium, lumen, and nuclei. They employ scoring methods with quantitative
and qualitative inferences in this decision process. Those inferences include mitotic
nuclei activity, the number of nuclei, lumen region distribution, epithelium area size,
and so on. Each factor has a score for the patient's carcinoma case. In this paper, a novel
image processing algorithm is developed to enable the pathologists to make decisions
easily by segmenting epithelium, lumen and nuclei structures. Actual microscopic
images could show some degenerated cell structures.
IGTMA21 TITLE: Retinal Disease Screening Through Local Binary Patterns

Abstract—This paper investigates discrimination capabilities in the texture of fundus
images to differentiate between pathological and healthy images. For this purpose, the
performance of local binary patterns (LBP) as a texture descriptor for retinal images
has been explored and compared with other descriptors such as LBP filtering and local
phase quantization. The goal is to distinguish between diabetic retinopathy (DR), age-
related macular degeneration (AMD), and normal fundus images analyzing the texture
of the retina background and avoiding a previous lesion segmentation stage. Five
experiments (separating DR from normal, AMD from normal, pathological from
normal, DR from AMD, and the three different classes) were designed and validated
with the proposed procedure obtaining promising results. For each experiment, several
classifiers were tested. An average sensitivity and specificity higher than 0.86 in all
cases, and of almost 1 and 0.99, respectively, for AMD detection, were achieved. These
results suggest that the method presented in this paper is a robust algorithm for
describing retina texture and can be useful in a diagnosis aid system for retinal disease
screening.
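
The descriptor-plus-classifier pipeline is compact to sketch: a uniform-LBP histogram
of the retina background is fed to an SVM. The random "fundus crops" and labels below
are toy stand-ins, and the LBP radius, number of points, and classifier choice are
assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1):
    """Normalised histogram of uniform LBP codes."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(20, 64, 64)).astype(np.uint8)  # toy crops
labels = rng.integers(0, 2, size=20)                               # toy DR/normal labels

X = np.array([lbp_histogram(im) for im in images])
clf = SVC().fit(X, labels)
print("toy training accuracy:", clf.score(X, labels))
```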

IGTMA22 TITLE: The Indian Spontaneous Expression Database for Emotion Recognition

Abstract—Automatic recognition of spontaneous facial expressions is a major
challenge in the field of affective computing. Head rotation, face pose, illumination
variation, occlusion etc. are the attributes that increase the complexity of recognition
of spontaneous expressions in practical applications. Effective recognition of
expressions depends significantly on the quality of the database used. Most well-
known facial expression databases consist of posed expressions. However, currently
there is a huge demand for spontaneous expression databases for the pragmatic
implementation of the facial expression recognition algorithms. In this paper, we
propose and establish a new facial expression database containing spontaneous
expressions of both male and female participants of Indian origin. The database
consists of 428 segmented video clips of the spontaneous facial expressions of 50
participants. In our experiment, emotions were induced among the participants by
using emotional videos and simultaneously their self-ratings were collected for each
experienced emotion. Facial expression clips were annotated carefully by four trained
decoders, which were further validated by the nature of stimuli used and self-report of
emotions. An extensive analysis was carried out on the database using several machine
learning algorithms and the results are provided for future reference. Such a
spontaneous database will help in the development and validation of algorithms for
recognition of spontaneous expressions.

IGTMA23 TITLE: Face Antispoofing Using Speeded-Up Robust Features and Fisher Vector
Encoding

Abstract—The vulnerabilities of face biometric authentication systems to spoofing
attacks have received significant attention in recent years. Some of the
proposed countermeasures have achieved impressive results when evaluated in
intra-database tests, i.e., when the system is trained and tested on the same database.
Unfortunately, most of these techniques fail to generalize well to unseen attacks, e.g.,
when the system is trained on one database and then evaluated on another. This is a
major concern in biometric anti-spoofing research that is mostly overlooked. In this
letter, we propose a novel solution based on describing the facial appearance by
applying Fisher vector encoding on speeded-up robust features extracted from
different color spaces. The evaluation of our countermeasure on three challenging
benchmark face-spoofing databases, namely the CASIA face anti-spoofing database,
the Replay-Attack database, and the MSU mobile face spoof database, showed
excellent and stable performance across all three datasets. Most importantly, in
inter-database tests, our proposed approach outperforms the state of the art and yields
very promising generalization capabilities, even when only limited training data are
used.

IGTMA24 TITLE: Facial Age Estimation With Age Difference

Abstract—Age estimation based on the human face remains a significant problem in
computer vision and pattern recognition. In order to estimate an accurate age or age
group of a facial image, most of the existing algorithms require a huge face data set
attached with age labels. This imposes a constraint on the utilization of the immensely
unlabeled or weakly labeled training data, e.g., the huge amount of human photos in
the social networks. These images may provide no age label, but it is easy to derive the
age difference for an image pair of the same person. To improve the age estimation
accuracy, we propose a novel learning scheme to take advantage of these weakly
labeled data through the deep convolutional neural networks. For each image pair,
Kullback–Leibler divergence is employed to embed the age difference information.
The entropy loss and the cross entropy loss are adaptively applied on each image to
make the distribution exhibit a single peak value. The combination of these
losses is designed to drive the neural network to understand the age gradually from
only the age difference information. We also contribute a data set including more than
100,000 face images attached with the dates on which they were taken. Each image is
labeled with both the timestamp and the person's identity. Experimental results on two
aging face databases
show the advantages of the proposed age difference learning system, and the state-of-
the-art performance is gained.
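
One way to read the KL-divergence embedding is sketched below in PyTorch: treat each
network output as a distribution over age bins and pull the older image's prediction
towards the younger one's distribution shifted by the known age gap. This is a
simplified, assumed reading for illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def age_difference_kl(logits_young, logits_old, age_gap_bins):
    """KL between the older prediction and the younger one shifted by the age gap."""
    p_young = F.softmax(logits_young, dim=-1)
    target = torch.roll(p_young, shifts=age_gap_bins, dims=-1)  # note: wraps at the ends
    log_p_old = F.log_softmax(logits_old, dim=-1)
    return F.kl_div(log_p_old, target, reduction="batchmean")

logits_young = torch.randn(4, 100, requires_grad=True)  # 4 pairs, 100 age bins
logits_old = torch.randn(4, 100, requires_grad=True)

loss = age_difference_kl(logits_young, logits_old, age_gap_bins=5)
loss.backward()    # gradients would flow into the CNN in the full system
print(loss.item())
```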

IGTMA25 TITLE: Defect Detection in SEM Images of Nanofibrous Materials

Abstract—Nanoproducts represent a growing sector, and nanofibrous
materials are widely requested in industrial, medical, and environmental applications.
Unfortunately, the production processes at the nanoscale are difficult to control and
nanoproducts often exhibit localized defects that impair their functional properties.
Therefore, defect detection is a particularly important feature in smart manufacturing
systems to raise alerts as soon as defects exceed a given tolerance level and to design
production processes that both optimize the physical properties and control the
defectiveness of the produced materials. Here, we present a novel solution to detect
defects in nanofibrous materials by analyzing scanning electron microscope images.
We employ an algorithm that learns, during a training phase, a model yielding sparse
representations of the structures that characterize correctly produced nanofibrous
materials. Defects are then detected by analyzing each patch of an input image and
extracting features that quantitatively assess whether the patch conforms or not to the
learned model. The proposed solution has been successfully validated on 45 images
acquired from samples produced by a prototype electrospinning machine. The low
computational times indicate that the proposed solution can be effectively adopted in
a monitoring system for industrial production.
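
The core mechanism, learning sparse representations of defect-free structure and then
flagging patches that reconstruct poorly, fits in a short scikit-learn sketch. The
random "patches", dictionary size, sparsity level, and percentile threshold are
assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
normal_patches = rng.normal(size=(500, 64))   # stand-in 8x8 defect-free SEM patches
test_patches = rng.normal(size=(10, 64))

dico = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
dico.fit(normal_patches)                      # training phase: learn normal structure

codes = dico.transform(test_patches)          # sparse codes of test patches
recon = codes @ dico.components_
errors = np.linalg.norm(test_patches - recon, axis=1)   # reconstruction error per patch
print("defect candidates:", np.where(errors > np.percentile(errors, 90))[0])
```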

IGTMA26 TITLE: Data-Dependent Label Distribution Learning for Age Estimation

Abstract—As an important and challenging problem in computer vision, face age
estimation is typically cast as a classification or regression problem over a set of face
estimation is typically cast as a classification or regression problem over a set of face
samples with respect to several ordinal age labels, which have intrinsically cross-age
correlations across adjacent age dimensions. As a result, such correlations usually lead
to the age label ambiguities of the face samples. Namely, each face sample is associated
with a latent label distribution that encodes the cross-age correlation information on
label ambiguities. Motivated by this observation, we propose a totally data-driven label
distribution learning approach to adaptively learn the latent label distributions. The
proposed approach is capable of effectively discovering the intrinsic age distribution
patterns for cross-age correlation analysis on the basis of the local context structures
of face samples. Without any prior assumptions on the forms of label distribution
learning, our approach is able to flexibly model the sample-specific context aware label
distribution properties by solving a multi-task problem, which jointly optimizes the
tasks of age-label distribution learning and age prediction for individuals.
Experimental results demonstrate the effectiveness of our approach.

IGTMA27 TITLE: Automatic Detection of Red Light Running Using Vehicular Cameras

Abstract—Running a red traffic light is a very common traffic violation. Nowadays,
vehicles running red lights are detected by sensors fixed on the streets. However,
only a very small percentage of all traffic lights are equipped with such sensors. For this
reason, this work proposes red-light-runner detection performed by a system
consisting of a camera and a computer embedded in the vehicle. An algorithm is also
proposed to process the recorded videos, and a prototype was implemented. The
prototype's goal is to monitor work vehicles without any intervention in driving, acting
only as an educational tool. Tests were performed with video recorded in the streets
of Belo Horizonte during the day and with a benchmark video, using the implemented
prototype. The results are compared based on execution time and accuracy. The
video processing took less than one tenth of the video duration and the accuracy was
about 95.8%.

IGTMA28 TITLE: A Hierarchical Approach for Rain or Snow Removing in a Single Color
Image

Abstract—In this paper, we propose an efficient algorithm to remove rain or snow from
a single color image. Our algorithm takes advantage of two popular techniques
employed in image processing, namely, image decomposition and dictionary learning.
At first, a combination of rain/snow detection and a guided filter is used to decompose
the input image into a complementary pair: 1) the low-frequency part that is free of
rain or snow almost completely and 2) the high-frequency part that contains not only
the rain/snow component but also some or even many details of the image. Then, we
focus on the extraction of image’s details from the high-frequency part. To this end,
we design a 3-layer hierarchical scheme. In the first layer, an over-complete dictionary
is trained and three classifications are carried out to classify the high-frequency part
into rain/snow and non-rain/snow components, in which some common characteristics
of rain/snow are utilized. In the second layer, another combination of rain/snow
detection and guided filtering is performed on the rain/snow component obtained in
the first layer. In the third layer, the sensitivity of variance across color channels is
computed to enhance the visual quality of the rain/snow-removed image. The effectiveness
of our algorithm is verified through both subjective (the visual quality) and objective
(through rendering rain/snow on some ground-truth images) approaches, which shows
a superiority over several state-of-the-art works.
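
The first-stage decomposition can be sketched with OpenCV's guided filter (available
in opencv-contrib-python): the filtered output approximates the rain/snow-free
low-frequency part, and the residual carries the rain/snow component plus fine
details, which the later dictionary layers would then separate. The radius and eps
values are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("rainy.png").astype(np.float32) / 255.0    # hypothetical input
low = cv2.ximgproc.guidedFilter(guide=img, src=img, radius=8, eps=1e-3)
high = img - low       # rain/snow component plus image details

cv2.imwrite("low_frequency.png", (np.clip(low, 0, 1) * 255).astype(np.uint8))
cv2.imwrite("high_frequency.png", (np.clip(high + 0.5, 0, 1) * 255).astype(np.uint8))
```
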
IGTMA29 TITLE: Unconstrained Facial Beauty Prediction Based on Multi-scale K-Means

Abstract—Facial beauty prediction belongs to an emerging field that studies the nature
and rules of human beauty perception. Compared with other facial analysis tasks, this
task has its own challenges in pattern recognition and biometric recognition. Existing
facial beauty prediction algorithms require burdensome landmark detection or
expensive optimization procedures. We establish a larger database and present a novel
method for predicting facial beauty, which is notably superior to previous work in the
following aspects: 1) a large-scale database with a more reasonable distribution has
been established and utilized in our experiments; 2) both female and male facial beauty
are analyzed under unconstrained conditions without landmarks; and 3) multi-scale
apparent features are learned to represent facial beauty, which are more expressive and
require less computational expenditure. Experimental results demonstrate the accuracy
and efficiency of the presented method.

IGTMA30 TITLE: Context-Aware Local Binary Feature Learning for Face Recognition

Abstract—In this paper, we propose a context-aware local binary feature learning (CA-
LBFL) method for face recognition. Unlike existing learning-based local face
descriptors such as discriminant face descriptor (DFD) and compact binary face
descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits
the contextual information of adjacent bits by constraining the number of shifts from
different binary bits, so that more robust information can be exploited for face
representation. Given a face image, we first extract pixel difference vectors (PDV) in
local patches, and learn a discriminative mapping in an unsupervised manner to project
each pixel difference vector into a context-aware binary vector. Then, we perform
clustering on the learned binary codes to construct a codebook, and extract a histogram
feature for each face image with the learned codebook as the final representation. In
order to exploit local information from different scales, we propose a context-aware
local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple
projection matrices for face representation. To make the proposed methods applicable
for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL)
method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality
gap of corresponding heterogeneous faces in the feature level, respectively. Extensive
experimental results on four widely used face datasets clearly show that our methods
outperform most state-of-the-art face descriptors.
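
The raw input to CA-LBFL, the pixel difference vector, is simple to extract: for each
interior pixel, the differences between its eight neighbours and itself. The NumPy
sketch below shows this step only; the learned binary mapping and codebook are
omitted.

```python
import numpy as np

def pixel_difference_vectors(gray):
    """Return an (N, 8) array of neighbour-minus-centre differences."""
    g = gray.astype(np.float32)
    h, w = g.shape
    c = g[1:-1, 1:-1]                                   # centre pixels
    neighbours = [g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
    return np.stack([n - c for n in neighbours], axis=-1).reshape(-1, 8)

patch = np.random.default_rng(0).integers(0, 256, size=(16, 16))
pdvs = pixel_difference_vectors(patch)
print(pdvs.shape)    # (196, 8): one 8-D PDV per interior pixel
```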

IGTMA31 TITLE: A Facial-Expression Monitoring System for Improved Healthcare in
Smart Cities

Abstract—Human facial expressions change with different states of health; therefore, a
facial-expression recognition system can be beneficial to a healthcare framework. In
this paper, a facial-expression recognition system is proposed to improve healthcare
services in a smart city. The proposed system applies a bandlet transform to a face
image to extract sub-bands. Then, a weighted, center-symmetric local binary pattern
(CS-LBP) is applied to each sub-band block by block. The CS-LBP histograms of the
blocks are concatenated to produce a feature vector of the face image. An optional
feature-selection technique selects the most dominant features, which are then fed into
two classifiers: a Gaussian mixture model and a support vector machine. The scores of
these classifiers are fused by weight to produce a confidence score, which is used to
make decisions about the facial expression's type. Several experiments are performed
using a large set of data to validate the proposed system. Experimental results show
that the proposed system can recognize facial expressions with 99.95% accuracy.
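
The block descriptor at the heart of the system, the centre-symmetric LBP, compares
the four opposite neighbour pairs of each pixel to form a 4-bit code and histograms
the codes per block. The sketch below shows plain CS-LBP on one block; the bandlet
sub-band extraction and the weighting are omitted, and the threshold is an
assumption.

```python
import numpy as np

def cs_lbp_histogram(gray, threshold=0.0):
    """16-bin histogram of centre-symmetric LBP codes for one block."""
    g = gray.astype(np.float32)
    h, w = g.shape
    n = {(dy, dx): g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
         for dy in (-1, 0, 1) for dx in (-1, 0, 1)}
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]     # opposite neighbour pairs
    code = np.zeros_like(n[(0, 0)], dtype=np.int32)
    for bit, (a, b) in enumerate(pairs):
        code += ((n[a] - n[b]) > threshold).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=16, range=(0, 16), density=True)
    return hist

block = np.random.default_rng(0).integers(0, 256, size=(32, 32))
print(cs_lbp_histogram(block))
```
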
Head Office: No.1 Rated company in Bangalore for all
software courses and Final Year Projects
IGEEKS Technologies
No:19, MN Complex, 2nd Cross,
Sampige Main Road, Malleswaram, Bangalore
Karnataka (560003) India. Above HOP Salon,
Opp. Joyalukkas, Malleswaram,
Landmark: Near Mantri Mall, Malleswaram,
Bangalore.
Email: nanduigeeks2010@gmail.com ,
nandu@igeekstechnologies.com
Office Phone:
9590544567 / 7019280372
Contact Person:
Mr. Nandu Y,
Director-Projects,
Mobile: 9590544567,7019280372
E-mail: nandu@igeekstechnologies.com
nanduigeeks2010@gmail.com

Partners Address:
RAJAJINAGAR:
#531, 63rd Cross, 12th Main, after Sevabhai Hospital,
5th Block, Rajajinagar, Bangalore-10.
Landmark: Near Bashyam Circle.

JAYANAGAR:
No 346/17, Manandi Court, 3rd Floor, 27th Cross,
Jayanagar 3rd Block East, Bangalore - 560011.
Landmark: Near BDA Complex.

More than 12 years' experience as an IEEE Final Year Project Center.
IGEEKS Technologies supports you in Java, IOT, Python, Bigdata
Hadoop, Machine Learning, Data Mining, Networking, Embedded,
VLSI, MATLAB, Power Electronics, and Power System Technologies.

For Titles and Abstracts visit our website www.makefinalyearproject.com
