
Confidence-Based Fusion of Multiple Feature Cues for Facial Expression Recognition

Spiros Ioannou (1), Manolis Wallace (2), Kostas Karpouzis (1), Amaryllis Raouzaiou (1) and Stefanos Kollias (1)

(1) National Technical University of Athens, 9, Iroon Polytechniou Str., 157 80 Zographou, Athens, Greece
(2) University of Indianapolis, Athens Campus, 9, Ipitou Str., 105 57 Syntagma, Athens, Greece
Abstract: Since facial expressions are a key modality in human communication, the automated analysis of facial images for the estimation of the displayed expression is essential in the design of intuitive and accessible human computer interaction systems. In most existing rule-based expression recognition approaches, analysis is semi-automatic or requires high quality video. In this paper we propose a feature extraction system which combines analysis from multiple channels based on their confidence, to result in better facial feature boundary detection. The facial features are then used for expression estimation. The proposed approach has been implemented as an extension to an existing expression analysis system in the framework of the IST ERMIS project.

Index Terms: Facial feature extraction, confidence, multiple cue fusion, human computer interaction
I. INTRODUCTION

In recent years there has been a growing interest in improving all aspects of the interaction between humans and computers, providing a realization of the term "affective computing" [15]. Humans interact with each other in a multimodal manner to convey general messages; emphasis on certain parts of a message is given via speech and display of emotions by visual, vocal, and other physiological means, even instinctively (e.g. sweating) [16].

Interpersonal communication is for the most part carried by the face. Despite common belief, social psychology research has shown that conversations are usually dominated by facial expressions, and not spoken words, indicating the speaker's predisposition towards the listener. Mehrabian indicated that the linguistic part of a message, that is the actual wording, contributes only seven percent to the effect of the message as a whole; the paralinguistic part, that is how the specific passage is vocalized, contributes thirty eight percent, while the facial expression of the speaker contributes fifty five percent to the effect of the spoken message [2]. This implies that facial expressions form the major modality in human communication, and need to be considered by HCI/MMI systems.

In most real-life applications nearly all video media have reduced vertical and horizontal color resolutions; moreover, the face occupies only a small percentage of the whole frame and illumination is far from perfect. When dealing with such input we have to accept that color quality and video resolution will be very poor. While it is feasible to detect the face and all facial features, it is very difficult to find the exact boundary of each one (eye, eyebrow, mouth) in order to estimate its deformation from the neutral-expression frame. Moreover, it is very difficult to fit a precise model to each feature or to employ tracking, since high-order frequency information is missing in such situations. A way to overcome this limitation is to combine the results of multiple feature extractors into a final result based on the evaluation of their performance on each frame; the fusion method is based on the observation that having multiple masks for each feature lowers the probability that all of them are invalid, since each of them produces different error patterns.
II. EXPRESSION REPRESENTATION

An automated emotion recognition through facial expression analysis system must deal mainly with two major research areas: automatic facial feature extraction and facial expression recognition. Thus, it needs to combine low-level image processing with the results of psychological studies about facial expression and emotion perception.

Most of the existing expression recognition systems can be classified in two major categories: the former includes techniques which examine the face in its entirety (holistic approaches) and take into account properties such as intensity [9] or optical flow distributions, and the latter includes methods which operate locally, either by analyzing the motion of local features, or by separately recognizing, measuring, and combining the various facial element properties (analytic approaches). A good overview of the current state of the art is presented in [4][10].

In this work we estimate facial expression through the estimation of the MPEG FAPs. FAPs are measured through detection of movement and deformation of local intransient facial features such as mouth, eyes and eyebrows in single frames. Feature deformations are estimated by comparing their states to some frame in which the person's expression is known to be neutral. Although FAPs [1] provide all the necessary elements for MPEG-4 compatible animation, we cannot use them directly for the analysis of expressions from video scenes, due to the absence of a clear quantitative definition framework. In order to measure FAPs in real image sequences, we have to define a mapping between them and the movement of specific FDP feature points (FPs), which correspond to salient points on the human face.
III. FEATURE EXTRACTION

An overview of the system is given in Figure 1. Precise facial feature extraction is performed, resulting in a set of masks, i.e. binary maps indicating the position and extent of each facial feature. The left-, right-, top- and bottom-most coordinates of the eye and mouth masks, the left, right and top coordinates of the eyebrow masks, as well as the nose coordinates, are used to define the considered feature points. For the nose and each of the eyebrows, a single mask is created. On the other hand, since the detection of eyes and mouth can be problematic in low-quality images, a variety of methods are used, each resulting in a different mask. In total, we have four masks for each eye and three for the mouth. These masks have to be calculated in near-real time; the methodologies applied in the extraction of these masks include:
- A feed-forward back propagation neural network trained to identify eye and non-eye facial areas. The network has thirteen inputs; for each pixel on the facial region the NN inputs are luminance Y, chrominance values Cr and Cb, and the ten most important DCT coefficients (with zigzag selection) of the neighboring 8x8 pixel area.
- A second neural network, with similar architecture to the first one, trained to identify mouth regions.
- Luminance-based masks, which identify eyelid and sclera regions.
- Edge-based masks.
- A region growing approach to detect regions of high texture based on standard deviation.
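To make the thirteen-dimensional NN input concrete, the sketch below assembles it for one pixel from its 8x8 luminance neighborhood. The orthonormal DCT and JPEG-style zigzag scan are standard, but the function names, the use of the block's center pixel, and taking the first ten zigzag coefficients as the "most important" ones are our assumptions, not details given in the text:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (rows index frequency).
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def zigzag_indices(n=8):
    # JPEG-style zigzag scan order over an n x n coefficient block.
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def nn_input_vector(y_block, cr, cb):
    """13 inputs per pixel: luminance Y, chrominance Cr and Cb, plus the
    first ten zigzag-ordered DCT coefficients of the 8x8 Y neighborhood."""
    c = dct_matrix(8)
    coeffs = c @ np.asarray(y_block, dtype=float) @ c.T
    zz = [coeffs[i, j] for i, j in zigzag_indices(8)[:10]]
    return np.array([y_block[4][4], cr, cb] + zz)
```

For a constant 8x8 block only the DC coefficient is non-zero, so the DCT part of the vector collapses to the DC value followed by zeros.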
Figure 1: System Overview. (Block diagram: neutral-frame operations with face detection, eye template extraction and mouth shape detection; face detection and face pose correction; face segmentation into feature-candidate areas; mouth boundary extraction (3 masks) and eye boundary extraction (4 masks) with validation and weight assignment; nose and eyebrow detection; confidence-based mask fusion and anthropometric evaluation producing the final eye and mouth masks; Feature Point (FP) generation; FAP estimation against the distances of the neutral face; distance vector construction; and the facial expression decision system with expression profiles, yielding the recognised expression / emotional state.)
Since, as we already mentioned, the detection of a mask using any of these applied methods can be problematic, all detected masks have to be validated against a set of criteria; of course, different criteria are applied to masks of different facial features. Each one of the criteria examines the masks in order to decide whether they have acceptable size and position for the feature they represent. This set of criteria consists of relative anthropometric measurements, such as the relation of the eye and eyebrow vertical positions, which when applied to the corresponding masks produce a value in the range [0,1] with zero denoting a totally invalid mask; in this manner, a validity confidence degree is generated for each one of the initial feature masks. A subset of the distances used to form the acceptance criteria of the eyes is shown in the following example:
d_1: Eye width
d_2: Distance of the eye's middle vertical coordinate and the eyebrow's middle vertical coordinate
d_3: Eyebrow width
d_4: D_bp, bipupil breadth

M^{c_1}_{eye_1} = 1 - |1 - d_1/(0.49 d_4)|,  and  M^{c_2}_{eye_1} = 1 - d_2/d_3

where M^{c_1}_{eye_1} and M^{c_2}_{eye_1} are the confidence degrees acquired through the application of each validation criterion on eye mask M_{eye_1}. The former of the two criteria is based on [7], where the mean ratio of eye width over bipupil breadth is reported as equal to 0.49. In almost all cases these validation criteria, as well as the other criteria utilized in mask validation, produce confidence values in the [0,1] range. In the rare cases that the estimated value exceeds the limits, it is set to the closest extreme value, zero for negative values and one for values exceeding one.
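As a rough illustration, the first anthropometric criterion can be sketched in a few lines; the function name, argument handling and explicit clipping are our own assumptions, not the paper's implementation:

```python
def eye_width_confidence(eye_width, bipupil_breadth, mean_ratio=0.49):
    """Validity confidence M^{c1} for an eye mask: equals 1 when the measured
    eye width d1 matches the reported mean ratio of eye width over bipupil
    breadth (0.49 [7]), and decreases linearly as the ratio deviates."""
    c = 1.0 - abs(1.0 - eye_width / (mean_ratio * bipupil_breadth))
    # Out-of-range values are clipped to the closest extreme, as in the text.
    return min(1.0, max(0.0, c))
```

For a bipupil breadth of 60 px, a 29.4 px wide eye mask scores 1.0, while a mask twice that wide scores 0.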
For the features for which more than one mask has been detected using different methodologies, the multiple masks then have to be fused together to produce a final mask. The choice of mask fusion, rather than simple selection of the mask with the greatest validity confidence, is based on the observation that the methodologies applied in the initial masks' generation produce different error patterns from each other, since they rely on different image information or exploit the same information in fundamentally different ways. Thus, combining information from independent sources has the property of alleviating a portion of the uncertainty present in the individual information components. In other words, the final masks that are acquired via mask fusion are accompanied by less uncertainty than each one of the initial masks.
The fusion algorithm is based on a Dynamic Committee Machine structure that combines the masks based on their validity confidence, producing a final mask together with the corresponding estimated confidence [18] for each facial feature. Each of those masks represents the best-effort result of the corresponding mask-extraction method used. The most common problems, especially encountered in low quality input images, are connection with other feature boundaries or mask dislocation due to noise. If y_comb is the combined machine output and t the desired output, it has been proven in committee machine (CM) theory that the combination error (y_comb - t)^2 from different machines f_i is guaranteed to be lower than the average error:

(y_comb - t)^2 = (1/M) Σ_i (y_i - t)^2 - (1/M) Σ_i (y_i - y_comb)^2

In a Static CM, the voting weight for a component is proportional to its error on a validation set. In DCMs (Figure 2), input is directly involved in the combining mechanism through a Gating Network (GN), which is used to modify those weights dynamically.

Figure 2: Dynamic Committee Machine Architecture
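The committee-machine identity above is easy to verify numerically for the simple case of an unweighted committee, where y_comb is the plain mean of the machine outputs (the identity holds exactly in that case):

```python
import numpy as np

# Numerical check of the CM identity for an unweighted committee:
# (y_comb - t)^2 = (1/M) sum_i (y_i - t)^2 - (1/M) sum_i (y_i - y_comb)^2
rng = np.random.default_rng(0)
y = rng.normal(loc=0.5, scale=0.2, size=5)  # outputs of M = 5 machines
t = 0.5                                     # desired output
y_comb = y.mean()                           # plain-mean committee output

lhs = (y_comb - t) ** 2
rhs = np.mean((y - t) ** 2) - np.mean((y - y_comb) ** 2)
# lhs equals rhs, and both are bounded above by the average individual error.
```

Since the subtracted variance term is non-negative, the combined error can never exceed the average error of the individual machines.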
In our case, the final masks for the left eye, right eye and mouth, M^{e_L}_f, M^{e_R}_f, M^m_f, are considered as the machine output, and the final confidence value M^{c,x}_f of each mask for feature x is considered as the confidence of each machine. Therefore, for feature x, each element m^x_f of the final mask M^x_f is calculated from the n masks as:

m^x_f = (1/n) Σ_{i=1}^{n} m^x_i M^{c,x}_{f_i} h_i g_i,

h_k = 1, if M^{c,x}_{f_k} ≥ t_vd · max_q(M^{c,x}_{f_q})
h_k = 0, if M^{c,x}_{f_k} < t_vd · max_q(M^{c,x}_{f_q})

where m^x_i is the element of mask M^x_i, M^{c,x}_{f_i} the final validation value of mask i, and h_i is used to prevent the masks with M^{c,x}_{f_k} < t_vd · max_q(M^{c,x}_{f_q}) from contributing to the final mask. A sufficient value for t_vd is 0.8. The role of the gating variable g_i is to favor the color-based feature extraction methods (M^e_1, M^m_1) in images of high color and resolution. In this stage, two variables are taken into account: image resolution and color quality; since non-synthetic training data for the latter is difficult to acquire, in our first implementation the gating output of variable g_i is not trained but is defined manually as follows:

g_i = 1,   if i = 1, D_bp ≥ 128, σ_cr < t_cr, σ_cb < t_cb
g_i = 1/n, if i ≠ 1, D_bp ≥ 128, σ_cr < t_cr, σ_cb < t_cb
g_i = 1,   otherwise

where D_bp is the bipupil width in pixels and σ_cr, σ_cb are the standard deviations of the Cr, Cb channels respectively inside the facial area. It has been found that σ_cr, σ_cb in the same image are less than 5·10^-3 for good color quality and much larger for poor quality images.
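A minimal sketch of the fusion rule, assuming the masks arrive as equally sized binary arrays and the gating weights g_i are supplied externally; the soft (non-thresholded) output and all names are our assumptions:

```python
import numpy as np

def fuse_masks(masks, confidences, gates, t_vd=0.8):
    """Confidence-weighted fusion of n candidate masks for one feature.
    masks:       n binary arrays of equal shape
    confidences: validity confidences M^{c,x} in [0, 1]
    gates:       gating weights g_i (computed elsewhere)"""
    conf = np.asarray(confidences, dtype=float)
    g = np.asarray(gates, dtype=float)
    # h_i drops masks whose confidence is below t_vd times the best confidence
    h = (conf >= t_vd * conf.max()).astype(float)
    n = len(masks)
    return sum(np.asarray(m, dtype=float) * c * hi * gi
               for m, c, hi, gi in zip(masks, conf, h, g)) / n

# Three candidate eye masks; the low-confidence third one is gated out by h.
m1 = np.array([[1, 1, 0], [1, 1, 0]])
m2 = np.array([[1, 1, 0], [1, 0, 0]])
m3 = np.array([[0, 0, 1], [0, 0, 1]])  # spurious detection
fused = fuse_masks([m1, m2, m3], [0.90, 0.85, 0.30], [1.0, 1.0, 1.0])
```

The result is a soft confidence map; a final thresholding step, which the text does not specify, would binarize it.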
Figure 3: Original frame (a) and the four detected eye masks (b)-(e), in frame 3528 of the "Alyssa" sequence [7]
Figure 4: Final mask for the eyes
Figure 5: All detected feature points from the final masks
IV. EXPRESSION ANALYSIS

The feature masks are used to extract the Feature Points (FPs) considered in the definition of the FAPs used in this work. Each FP inherits the confidence level of the final mask from which it derives; for example, the four FPs (top, bottom, left and right) of the left eye share the same confidence as the left eye final mask. Continuing, FAPs can be estimated via the comparison of the FPs of the examined frame to the FPs of a frame that is known to be neutral, i.e. a frame which is accepted by default as one displaying no facial deformations. For example, FAP F_37 (squeeze_l_eyebrow) is estimated as:

F_37 = FP^n_4.5 FP^n_3.11 - FP_4.5 FP_3.11

where FP^n_i, FP_i are the locations of feature point i on the neutral and the observed face, respectively, and FP_i FP_j denotes the measured distance between feature points i and j.
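As a sketch, the FAP computation above reduces to a distance difference between frames; the helper names below are ours, and coordinates are assumed to be (x, y) pixel positions of the MPEG-4 feature points:

```python
import math

def dist(p, q):
    # Euclidean distance between two feature points given as (x, y)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def f37(fp45_n, fp311_n, fp45, fp311):
    """F37 (squeeze_l_eyebrow): the FP4.5-FP3.11 distance on the neutral
    frame minus the same distance on the observed frame."""
    return dist(fp45_n, fp311_n) - dist(fp45, fp311)
```

With hypothetical coordinates where the eyebrow has moved 2 px toward the inner eye corner, f37((0, 0), (10, 0), (0, 0), (8, 0)) evaluates to 2.0.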
Figure 6: MPEG-4 Feature Points (FPs)
Obviously, the uncertainty in the detection of the feature points propagates in the estimation of the value of the FAP as well. Thus, the confidence in the value of the FAP, in the above example, is estimated as

F^c_37 = min(FP^c_4.5, FP^c_3.11)

On the other hand, some FAPs may be estimated in different ways. For example, FAP F_31 is estimated as:

F^1_31 = FP^n_3.1 FP^n_3.3 - FP_3.1 FP_3.3

or as

F^2_31 = FP^n_3.1 FP^n_9.1 - FP_3.1 FP_9.1
As argued above, considering both sources of information for the estimation of the value of the FAP alleviates some of the initial uncertainty in the output. Thus, for cases in which two distinct definitions exist for a FAP, the final value and confidence for the FAP are as follows:

F_i = (F^1_i + F^2_i) / 2

The amount of uncertainty contained in each one of the distinct initial FAP calculations can be estimated by

u^1_i = 1 - F^{1,c}_i

for the first FAP and similarly for the other. The uncertainty present after combining the two can be given by some t-norm operation on the two:

u_i = t(u^1_i, u^2_i)

The Yager t-norm with parameter w gives reasonable results for this operation:

u_i = 1 - min(1, ((1 - u^1_i)^w + (1 - u^2_i)^w)^(1/w))

The overall confidence value for the final estimation of the FAP is then acquired as

F^c_i = 1 - u_i
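A compact sketch of combining two distinct estimates of the same FAP, assuming the Yager parameter w = 2 (the text does not fix w); the function names are ours:

```python
def yager_tnorm(a, b, w=2.0):
    # Yager t-norm: 1 - min(1, ((1 - a)^w + (1 - b)^w)^(1/w))
    return 1.0 - min(1.0, ((1.0 - a) ** w + (1.0 - b) ** w) ** (1.0 / w))

def combine_fap(f1, f2, c1, c2, w=2.0):
    """Final FAP value is the mean of the two estimates; the final
    confidence is one minus the t-norm of the two uncertainties."""
    u = yager_tnorm(1.0 - c1, 1.0 - c2, w)  # combined uncertainty
    return (f1 + f2) / 2.0, 1.0 - u
```

Two fairly confident estimates reinforce each other: combining confidences 0.9 and 0.9 drives the joint uncertainty to zero under this t-norm, giving a combined confidence of 1.0.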
While evaluating the expression profiles, FAPs with greater uncertainty must influence the profile evaluation outcome less; thus each FAP must include a confidence value. This confidence value is computed from the corresponding FPs which participate in the estimation of each FAP. Finally, FAP measurements are transformed to antecedent values x_j for the fuzzy rules using the fuzzy numbers defined for each FAP, and confidence degrees x^c_j are inherited from the FAP:

x^c_j = F^c_i

where F_i is the FAP based on which antecedent x_j is defined. More information about the used expression profiles can be found in [3][8].
V. EXPERIMENTAL RESULTS

Facial feature extraction can be seen as a subcategory of image segmentation, i.e. image segmentation into facial features. Zhang [20] reviewed a number of simple discrepancy measures of which, if we consider image segmentation as a pixel classification process, only one is applicable here: the number of misclassified pixels on each facial feature. While manual feature extraction does not necessarily require expert annotation, it is clear that, especially in low-resolution images, manual labeling introduces an error. It is therefore desirable to obtain a number of manual interpretations in order to evaluate the inter-observer variability. A way to compensate for the latter is Williams' Index (WI) [6], which compares the agreement of an observer with the joint agreement of other observers. An extended version of WI which deals with multivariate data can be found in [19]. The modified Williams' Index divides the average number of agreements (inverse disagreements, D^{-1}_{j,j'}) between the computer (observer 0) and the n human observers (j) by the average number of agreements between human observers:

WI = ( (1/n) Σ_{j=1}^{n} D^{-1}_{0,j} ) / ( (2/(n(n-1))) Σ_j Σ_{j': j'>j} D^{-1}_{j,j'} )

and in our case we define the average disagreement between two observers j, j' as:

D_{j,j'} = (1/D_bp) Σ_x |M^x_j ⊕ M^x_{j'}|

where ⊕ denotes the pixel-wise xor operator, |M^x_j| denotes the cardinality of feature mask x constructed by observer j, and D_bp (bipupil width) is used as a normalization factor to compensate for camera zoom on video sequences.
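The index can be sketched as below, assuming each observer's annotation is a list of binary NumPy masks (one per facial feature). The sketch also assumes no pair of observers agrees perfectly, since a zero disagreement would make the inverse blow up and would need special handling:

```python
import numpy as np

def disagreement(masks_a, masks_b, d_bp):
    """D_{j,j'}: XOR pixel count summed over the feature masks of two
    observers, normalized by the bipupil width to compensate for zoom."""
    total = sum(np.count_nonzero(np.logical_xor(ma, mb))
                for ma, mb in zip(masks_a, masks_b))
    return total / d_bp

def williams_index(computer_masks, human_masks, d_bp):
    """Modified Williams Index: average agreement (inverse disagreement)
    of the computer (observer 0) with the n human observers, divided by
    the average agreement among the human observers themselves."""
    n = len(human_masks)
    num = sum(1.0 / disagreement(computer_masks, h, d_bp)
              for h in human_masks) / n
    den = sum(1.0 / disagreement(human_masks[j], human_masks[k], d_bp)
              for j in range(n) for k in range(j + 1, n)) * 2.0 / (n * (n - 1))
    return num / den
```

A value above 1 means the computer mask disagrees with the humans less than the humans disagree with each other, matching the interpretation given below.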
From a dataset of about 50000 frames, 250 frames were selected at random and were manually labeled by two observers. The distribution of WI is shown in Figure 7. At a value of 0, the computer mask is infinitely far from the observer mask. When the index is larger than 1, the computer-generated mask disagrees less with the observers than the observers disagree with each other. Table 1 summarizes the results. For the eyes and mouth, WI has been calculated for both the final mask and each of the intermediate masks. WI_x denotes WI for single mask x and WI_f is the WI for the final mask of each facial feature; the average WI_x is calculated over all test frames. Figure 7 illustrates the WI distribution on the test frames, calculated on each frame as the average WI of all the final feature masks.

Figure 7: Williams Index distribution (average on eyes and mouth)
Figure 8: Williams Index distribution (average on left and right eyebrows)
VI. CONCLUSIONS

Automatic recognition of FAPs is a difficult problem, and relatively little work has been reported [21]. Within the ERMIS [5] framework the majority of collected data have had the aforementioned quality problems; sometimes one has to compromise between quality and the use of intrusive equipment. In both the study of emotional cues and HCI, video quality has to be sacrificed. The procedure we have described can exploit anthropometric knowledge [7] to evaluate a set of extracted features based on different techniques in order to improve overall performance. Early tests on both low and high quality video from the ERMIS database have been very promising: the algorithm can perform fully unattended FAP extraction and self-recovers in cases of false detections. The system currently runs in MATLAB and the performance is in the order of a few seconds per frame.
TABLE 1
RESULT SUMMARY

Mask      | avg WI_x | avg WI_f | avg WI_f/WI_x | σ     | % of frames where WI_f ≥ WI_x | avg WI where WI_f < WI_x | avg WI where WI_f ≥ WI_x

Left Eye
NN(1)     | 0.6771   | 0.8388   | 1.287 | 0.103 | 74.2 | 0.697 | 0.885
1         | 0.7016   |          | 1.216 | 0.056 | 78.8 | 0.731 | 0.868
2         | 0.8219   |          | 1.029 | 0.027 | 82.4 | 0.770 | 0.887
4         | 0.7416   |          | 1.131 | 0.057 | 76.2 | 0.811 | 0.847
3         | 0.8708   |          | 0.979 | 0.026 | 44.3 | 0.812 | 0.867

Right Eye
NN(1)     | 0.8008   | 0.8756   | 1.093 | 0.020 | 75.2 | 0.672 | 0.946
1         | 0.7185   |          | 1.243 | 0.084 | 81.4 | 0.674 | 0.929
2         | 0.7740   |          | 1.140 | 0.021 | 58.2 | 0.836 | 0.883
3         | 0.6504   |          | 1.346 | 0.028 | 84.5 | 0.632 | 0.920
4         | 0.8939   |          | 0.982 | 0.02  | 48.4 | 0.778 | 0.996

Mouth
1         | 0.7632   | 0.7803   | 1.051 | 0.046 | 59.2 | 0.752 | 0.772
2         | 0.8231   |          | 0.963 | 0.038 | 44.8 | 0.721 | 0.852
3         | 0.5703   |          | 1.446 | 0.204 | 96.9 | 0.510 | 0.793

Eyebrows
left      |          | 1.0340   |
right     |          | 1.0139   |

WI_x denotes WI for single mask x and WI_f is the WI for the final mask of each facial feature.
(1) NN denotes the eye mask derived from the eye detection neural network output.
REFERENCES

[1] A. M. Tekalp, J. Ostermann, "Face and 2-D Mesh Animation in MPEG-4", Signal Processing: Image Communication, Vol. 15, pp. 387-421, 2000.
[2] A. Mehrabian, "Communication without Words", Psychology Today, Vol. 2, No. 4, pp. 53-56, 1968.
[3] A. Raouzaiou, N. Tsapatsoulis, K. Karpouzis and S. Kollias, "Parameterized facial expression synthesis based on MPEG-4", EURASIP Journal on Applied Signal Processing, Vol. 2002, No. 10, pp. 1021-1038, Hindawi Publishing Corporation, October 2002.
[4] B. Fasel et al., "Automatic Facial Expression Analysis: A Survey", Pattern Recognition, Vol. 36, pp. 259-275, 2003.
[5] ERMIS, Emotionally Rich Man-machine Intelligent System, IST-2000-29319 (http://www.image.ntua.gr/ermis).
[6] G. W. Williams, "Comparing the joint agreement of several raters with another rater", Biometrics, Vol. 32, pp. 619-627, 1976.
[7] J. W. Young, Head and face anthropometry of adult U.S. civilians, FAA Civil Aeromedical Institute, 1993.
[8] K. Karpouzis, A. Raouzaiou, A. Drosopoulos, S. Ioannou, T. Balomenos, N. Tsapatsoulis and S. Kollias, "Facial expression and gesture analysis for emotionally-rich man-machine interaction", N. Sarris,
[9] M. A. Turk and A. P. Pentland, "Face recognition using eigenfaces", in Proc. of Computer Vision and Pattern Recognition, pp. 586-591, IEEE, June 1991.
[10] M. H. Yang, D. Kriegman, N. Ahuja, "Detecting Faces in Images: A Survey", PAMI, Vol. 24, No. 1, pp. 34-58, 2002.
[11] P. Ekman, "Facial expression and Emotion", Am. Psychologist, Vol. 48, 1993.
[12] R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz, J. Taylor, "Emotion Recognition in Human-Computer Interaction", IEEE Signal Processing Magazine, pp. 32-80, 2001.
[13] R. Fransens, J. De Prins, "SVM-based Nonparametric Discriminant Analysis, An Application to Face Detection", Ninth IEEE International Conference on Computer Vision, Vol. 2, October 13-16, 2003.
[14] R. Plutchik, Emotion: A psychoevolutionary synthesis, Harper and Row, NY, USA, 1980.
[15] R. W. Picard, Affective Computing, MIT Press, Cambridge, MA.
[16] R. W. Picard, E. Vyzas, "Offline and Online Recognition of Emotion Expression from Physiological Data", Emotion-Based Agent Architectures Workshop Notes, Int'l Conf. Autonomous Agents, pp. 135-142, 1999.
[17] S. Ioannou, A. Raouzaiou, K. Karpouzis, M. Pertselakis, N. Tsapatsoulis, S. Kollias, "Adaptive Rule-Based Facial Expression Recognition", in G. Vouros, T. Panayiotopoulos (Eds.), Lecture Notes in Artificial Intelligence, Vol. 3025, Springer-Verlag, pp. 466-475, 2004.
[18] T. G. Dietterich, "Ensemble methods in machine learning", Proceedings of First International Conference on Multiple Classifier Systems, 2000.
[19] V. Chalana and Y. Kim, "A Methodology for Evaluation of Boundary Detection Algorithms on Medical Images", IEEE Transactions on Medical Imaging, Vol. 16, No. 5, October 1997.
[20] Y. J. Zhang, "A Survey on Evaluation Methods for Image Segmentation", Pattern Recognition, Vol. 29, No. 8, pp. 1334-1346, 1996.
[21] Y. Tian, T. Kanade and J. F. Cohn, "Recognizing Action Units for Facial Expression Analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 2, February 2001.
