
Multisensory Research (2016) DOI:10.1163/22134808-00002535
brill.com/msr

Domestic Dogs and Human Infants Look More at Happy and Angry Faces Than Sad Faces

Min Hooi Yong* and Ted Ruffman
Department of Psychology, University of Otago, Box 56, Dunedin 9054, New Zealand
* To whom correspondence should be addressed. E-mail: min.yong@otago.ac.nz
Received 19 October 2015; accepted 26 May 2016

Abstract
Dogs respond to human emotional expressions. However, it is unknown whether dogs can match
emotional faces to voices in an intermodal matching task or whether they show preferences for looking at certain emotional facial expressions over others, similar to human infants. We presented 52
domestic dogs and 24 seven-month-old human infants with two different human emotional facial expressions of the same gender simultaneously, while listening to a human voice expressing an emotion
that matched one of them. Consistent with most matching studies, neither dogs nor infants looked
longer at the matching emotional stimuli, yet dogs and humans demonstrated an identical pattern of
looking less at sad faces when paired with happy or angry faces (irrespective of the vocal stimulus),
with no preference for happy versus angry faces. Discussion focuses on why dogs and infants might
have an aversion to sad faces, or alternatively, heightened interest in angry and happy faces.
Keywords
Domestic dogs, infants, sadness, intermodal matching, preference looking

1. Introduction
Anecdotal reports of canine understanding of human emotions are a common
theme among dog owners (Howell et al., 2013). Although such reports are
notoriously difficult to interpret, they have formed the basis for several recent
studies. Dogs respond differently when a human expresses sadness (crying)
versus humming or talking (Custance and Mayer, 2012), and to happiness and
anger (Müller et al., 2015). In addition, like human adults, they also show
a unique combination of elevated cortisol and increased alertness and submissiveness when listening to infant crying versus infant babbling or white
noise (Yong and Ruffman, 2014). Moreover, there is some indication that dogs
show avoidance when a human expresses fear towards an object (Merola et al.,
2011; although see Yong and Ruffman, 2015a), or when a human expresses
disgust toward food as opposed to happiness (Buttelmann and Tomasello,
2013).
In the present study, we used an intermodal matching task [also known as
a Matching to Sample (MTS) task] to explore canine understanding of human
emotion. Researchers have often used MTS tasks in emotion recognition studies with non-human animals and preverbal human infants because they do not
require initial task training and require minimal habituation to the experimental setting (Izumi, 2013). In MTS tasks, two facial expressions are presented,
together with an auditory expression, with the interest in whether a participant looks more at the facial expression that matches the auditory expression
(Spelke, 1976). Using this paradigm, past studies have shown that rhesus monkeys match conspecific faces to their calls (Ghazanfar and Logothetis, 2003),
as well as conspecific faces and voices, and familiar human faces and voices
(Sliwa et al., 2011), as do capuchin monkeys (Evans et al., 2005).
In addition, dogs have been shown to match dog growls to images of dogs
(Faragó et al., 2010; Taylor et al., 2010) as well as life-sized replicas of dogs
(Taylor et al., 2011). They also show a preference for looking at a familiar conspecific or human face compared to an unfamiliar one (Somppi et al., 2013),
although in another study they also preferred to look at novel images and human faces compared to dog faces and inanimate everyday objects (e.g., chair,
spoon) (Racca et al., 2010). Two studies have found evidence that dogs were
able to match the gender of a live human when listening to vocal playback
(Ratcliffe et al., 2014; Yong and Ruffman, 2015b). Nevertheless, dogs are
not always successful in MTS tasks. For instance, dogs looked longer at a
stranger's face when listening to their owner's voice (Adachi et al., 2007),
suggesting that looking is sometimes driven by idiosyncratic features of the
stimulus rather than matching. Dogs have also shown a preference when
selecting faces in one study (Nagasawa et al., 2011). Nagasawa et
al. (2011) first trained nine dogs to discriminate between photographs of their
owner's smiling and blank neutral faces. Five dogs that successfully learned
to discriminate the photos were then tested with a new set of photos of the
owner and stranger while smiling or with neutral expressions. They found that
dogs preferred a smiling face compared to a neutral face, with the preference
limited to faces of the same gender as the owner. Nevertheless, although dogs
discriminated the faces, the sample was very small (five) and had been selected
on the basis of previous success and was therefore likely unrepresentative of
dogs in general.


In a study that used emotive stimuli, Racca et al. (2012) compared the
looking biases of 37 dogs and 25 four-year-old children when viewing different human and canine emotional expressions (threatening, friendly, neutral).
Dogs had a left-side gaze bias when viewing negative (threatening) and neutral human expressions, but no bias towards the friendly face. Like dogs, the
four-year-old children showed a left-side gaze bias to negative expressions,
although they differed in also showing a left-side bias to positive human expressions while showing no bias to neutral expressions. Müller et al. (2015)
found that dogs approached happy faces faster compared to angry faces, and
learned discriminations more quickly when rewarded with happy versus angry
faces. Albuquerque et al. (2016) examined emotion matching and found that
dogs could match visual and auditory expressions of anger and happiness in
both conspecific and human stimuli.
Albuquerque et al.'s (2016) findings seem to provide support for canine
matching of emotion faces and voices. However, since their study included
only 17 dogs, it is important to determine whether such findings can be replicated. Furthermore, Albuquerque et al. only used expressions of happiness
and anger, expressions that comprised two broad categories of positive versus
negative emotion. At a minimum, dogs might have broad conceptions of positive and negative emotion and thus might be able to match faces and voices
when exemplars of each category are contrasted, yet they might not be able
to match when different exemplars from within a category are included such
as two negative emotions (anger and sadness). Another possibility is that dogs
only possess an understanding of one category of emotion (positive: happiness) and are confused by the other (e.g., anger). If so, they might look to a
happy face when a happy voice was played because they understood these two
expressions went together. Such dogs might then understand that the angry
voice does not go with the happy face, and then by default, look to the angry
face because there is no other face to look at rather than through a genuine
understanding that the angry face goes with the angry voice. In essence, happiness is likely to be much more familiar to the dog and familiarity with an
expression does seem to play a role in deciding which object a dog will select
(Buttelmann and Tomasello, 2013; Merola et al., 2011). It would be helpful
to include other emotions in a matching task. While happiness is arguably the
only positive basic emotion (e.g., surprise can be positive or negative), there
are other negative basic emotions (e.g., sadness) that could be
included in addition to anger to examine whether dogs can match emotions
within a single category of negative emotions.
While MTS has been reliably used in animal studies, MTS tasks have also
been used with human infants to show that they can reliably discriminate emotional expressions between different persons from 3.5 to 7 months of age (De
Haan and Nelson, 1998; Leppänen and Nelson, 2006), and perhaps as early as


ten weeks of age (Haviland and Lelwica, 1987). At this age, preverbal infants
also follow another's gaze direction (D'Entremont and Muir, 1997) and recognize various affective expressions (Kestenbaum and Nelson, 1990; LaBarbera
et al., 1976; Montague and Walker-Andrews, 2002; Soken and Pick, 1992).
Furthermore, by 7 months of age, infants recognize upright versus inverted facial orientation (Kestenbaum and Nelson, 1990) and are sensitive to changes
in intonation or prosody that typically convey emotion (Walker-Andrews and
Grolnick, 1983; Walker-Andrews and Lennon, 1991). From an evolutionary
perspective, there are benefits (e.g., survival) for infants to recognize and respond to emotional expressions. Emotional expressions are evolutionary adaptations to social environments to create and maintain social relationships and
interactions (Darwin, 1872; Keltner and Kring, 1998).
Nevertheless, like dogs, human infants do not always show a successful
matching outcome in the MTS paradigm. Studies have shown that while infants sometimes can match affective facial displays that correspond to vocal
displays (Montague and Walker-Andrews, 2002; Soken and Pick, 1992, 1999;
Walker, 1982), they more frequently fail to do so. Table 1 summarizes the results from infant intermodal matching studies and shows that matching has
occurred in less than half (41%) of published studies. Yet, even this success
rate should be viewed with caution given the possibility of a publication bias
that results in an exaggeration of the success rate of infant matching abilities. Table 1 also demonstrates that infants sometimes display preferences for
certain facial emotions over others when faces are presented simultaneously.
Of particular interest are the summaries at the bottom of the table that show
the percentage of studies indicating a preference for one emotion over another
when preferences arose (i.e., ignoring studies in which there was no difference). These results indicate a trend for human infants to look more at happy
faces when paired with either sad or angry faces, and for no preference when
sad and angry faces are paired (see Table 1).
There have been various explanations of infants' patterns of looking at faces.
They have sometimes been described as demonstrating a gaze aversion to sad
and angry faces because of the negative valence of such emotions (D'Entremont and Muir, 1997, 1999; Haviland and Lelwica, 1987; Izard, 1991;
Termine and Izard, 1988), as indicating a preference to view familiar emotions
over unfamiliar ones (Soken and Pick, 1999), or as demonstrating empathy
in the case of looking away from sad faces (Hoffman, 1981; Termine and
Izard, 1988). Yet, it has also been argued that increased attention to angry
faces might reflect the unique function of anger, one that includes vigilance
and high arousal to a potentially stressful event (Izard, 1993). However, Hunnius et al. (2011) showed that four- and seven-month-olds fixate less on the
inner features of faces (eyes, nose, mouth) showing anger and fear compared
to happy ones.

Table 1.
Summary of findings for intermodal matching tasks. Each row gives the study, whether infants matched the face to the voice, the infants' age (months), and the emoter.

Using familiar persons as emoters
Montague and Walker-Andrews, 2002; yes; 5; mother
Montague and Walker-Andrews, 2002; no; 5; mother
Montague and Walker-Andrews, 2002; no; 5; father
Kahana-Kalman and Walker-Andrews, 2001; yes; 3.5; mother

Using unfamiliar persons as emoters
Montague and Walker-Andrews, 2002; no; 5; men and women
Soken and Pick, 1999; no; 7; woman
Soken and Pick, 1999; yes; 7; woman
Soken and Pick, 1992; no; 7; woman
Soken and Pick, 1992; yes; 7; point-light illumination
Walker, 1982; yes; 5 and 7; woman
Walker, 1982; no; 7; woman
Walker-Andrews, 1986; no; 5; woman
Walker-Andrews, 1986; yes; 7; woman
Schwartz et al., 1985; no; 5; women
Kahana-Kalman and Walker-Andrews, 2001; no; 3.5; women
Vaillant-Molina et al., 2013; yes; 5; infants aged 7.5 to 8.5 months
Vaillant-Molina et al., 2013; no; 3.5; infants aged 7.5 to 8.5 months

Summary: matching occurred in 7 of the 17 studies (41%). Where a preference for one affective expression was reported when two faces were paired, the preferred face was the happy one in every such case for the Happy–Sad pairing (happy in four studies, ND in five) and for the Happy–Angry pairing (happy in three studies, ND in seven); for the Sad–Angry pairing there was no overall preference (one study favoured the sad face, one the angry face). ND = No difference.

Infants' tendency to shift attention away from a parent's sad face is also thought to be a way of reducing negative feelings by averting gaze (Grossmann, 2010; Nesse, 1990). Sadness is often characterized by low physiological and physical activity, tiredness, decreased interest in the outer world,
low mood, rumination, decreased linguistic communication, and a withdrawal
from social settings (Schwartz et al., 1981; Shields, 1984; Sobin and Alpert,
1999). Studies in adults tell us that sadness is undesirable and thus to be
avoided because it is associated with the awareness of an irrevocable separation, the loss of an attachment figure or of a valued aspect of the self, as well
as the breaking of social bonds (Ellsworth and Smith, 1988; Keltner et al.,
1993; Rivers et al., 2006). Such arguments suggest that infants should tend to
look toward happy faces and away from sad faces, and that they might either
look toward or away from angry faces, as shown in the summaries in Table 1.
In sum, previous research has demonstrated that dogs and seven-month-old
human infants sometimes match emotion faces and sounds in MTS tasks, although equally, there is evidence that both often fail to do so.
1.1. Aims and Hypotheses
In the present study, we examined whether dogs can match emotional faces to
voices in an MTS task or whether they show preferences for looking at certain faces over others, similar to human infants and as shown in Table 1. We
compared dogs to seven-month-old human infants for several reasons. First,
there is previous evidence that seven-month-olds can sometimes match emotional faces and voices, and our interest was in whether any looking trends for
dogs were species-specific or more general and also present in human infants.
Human infants, similar to dogs, have relatively little experience in learning
to recognize human emotions. Second, canine insights are likely to be fairly
rudimentary, with suggestions that dogs might have the cognitive abilities of
young children (Coren, 2006; Hare and Tomasello, 2005). Our aim was to
establish whether the findings we obtained for dogs with a particular set of
stimuli would be similar to those for infants using the same stimuli. Given that
just over half of studies (59%) using the MTS paradigm with human infants
have failed to elicit matching, we also aimed to examine participant preference for looking at one emotional face over another. Third, our study has the
potential to indicate a primitive form of empathy if dogs (and infants) show an
aversion to looking at sad faces (Hoffman, 1981; Termine and Izard, 1988), in
line with recent findings that dogs experience a rise in cortisol when listening
to human infant crying (Yong and Ruffman, 2014). Fourth, our study provides
the first (so far as we are aware) comparison of canine and human infant responding to the same set of emotional faces, and therefore, an interesting test
of whether responding is species-specific or species-general. Fifth, there might
be wariness of angry emotional expressions because of their threat value, and


avoidance of sad expressions because of the reduced reward value of interactions with sad individuals, and characteristic responding to such expressions
could have been selected for in human and canine evolution. Emotion matching paradigms have also been viewed as a good choice for examining potential
abilities selected through evolution (Preston and De Waal, 2002; Prguda and
Neumann, 2014). We argue that the capacity for learning about face-voice
contingencies could have been selected for in evolution (or canine breeding)
because it facilitates infant and canine social functioning in the world.
We used three expressions frequently used in previous research: human
expressions of anger, sadness and happiness. Thus, we included two negative emotions, anger and sadness, to more comprehensively test canine
matching ability than previous matching studies (Albuquerque et al., 2016).
Participants were simultaneously presented with pairs of facial expressions
depicting either happiness and sadness, happiness and anger, or sadness and
anger, while listening to a human voice expressing an emotion that matched
one of the facial expressions. All stimuli were adult human males or females.
We examined whether dogs and human infants looked more at the face that
matched the voice, and if not, whether they had a preference for looking at
certain emotional faces over others irrespective of the auditory expression. If
the latter, our expectation based on previous research was that infants would
look toward happy faces when paired with sad or angry faces (see Table 1).
2. Material and Methods
2.1. Participants
Fifty-two pet dogs (32 females, M = 5.47 years, SD = 4.01) of different
breeds participated in this study (Appendix A). Dogs were recruited from advertisements placed in the university newsletter, local canine clubs, and flyers
distributed to dog owners from the local city council. Dog owners were given
a petrol voucher as compensation for participating in the study and dogs received sausage pieces as a reward upon completion.
Twenty-four human infants (10 females) participated in the task (M = 7.16
months, SD = 0.85) (Appendix B). Twenty infants were European New
Zealanders while four infants were mixed-parentage (Maori, Chinese). All infants were born healthy with no known visual or auditory impairments, and
were carried full-term with the exception of one born pre-term. The infant
participants were volunteers or were referrals from previous participants, and
each received a petrol voucher and a toy for their participation. Twenty-one
infants (87.5%) came from middle-class families in which at least one parent
had a university education and was working full-time.


2.2. Experimental Design


This experiment employed a within-subjects design, with each participant
matching happy, sad and angry emotion faces to emotion voices.
2.3. Stimuli
2.3.1. Face
We originally obtained 32 facial expressions depicting anger, happiness and
sadness that were either purchased from an image depository or sourced from
a public sharing website. The images were edited using Adobe Photoshop
CS4 to remove irrelevant background, and re-sized to a resolution of 1024 × 972 pixels
(96 dpi). The happy expressions included a wide smile with visible teeth and
a Duchenne smile (eyes crinkled), with hair away from the face. The angry
expressions had brows pointing downward toward the centre, a wrinkled forehead, visible teeth, and raised cheeks. The sad expressions included crying
with visible tears, redness on the cheeks, eyes and nose, the eyebrows raised
toward the centre, and lowered mouth corners with visible teeth. We specifically chose faces that were not distinctive, i.e., no face had visible scars, tattoos
or any other distinguishable marks, nor were they exceptionally good looking,
e.g., no beauty queens or celebrities. All faces were middle-aged, representative of our sample of dog owners. We also chose typical hair lengths, i.e., short
hair for men and long hair for women. We included Asian ethnicities, e.g.,
brown and yellow skin tones, to represent the NZ population. In the 2013 census, New Zealanders identified themselves as of European descent (74%, White/
Caucasian), Asian (11.8%), and Māori and Pacific peoples (22.3%) (Statistics
New Zealand, 2015). Each face was analysed for colour contrast (including
white balance). The faces (excluding hair and neck) occupied an average of
36% of the total screen display (M = 42.92 cm², SD = 16.23). A one-way
analysis of variance (ANOVA) showed that there was no difference in the
screen area covered by the three emotion faces, F(2, 9) = 0.11, p = 0.89.
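To illustrate how such a stimulus check can be scripted, the following is a minimal sketch (not the software used in this study) of a one-way ANOVA over face screen areas; the area values and their grouping by emotion are invented placeholders.

```python
# Sketch of the screen-area check: a one-way ANOVA over the area occupied by
# each emotion face. The values below are invented; the study reported
# F(2, 9) = 0.11, p = 0.89 for its 12 selected faces (four per emotion).
from scipy import stats

areas_cm2 = {
    "happy": [40.1, 55.3, 28.7, 47.2],
    "angry": [38.9, 60.2, 31.5, 44.8],
    "sad":   [41.7, 52.6, 30.2, 45.9],
}

f_stat, p_value = stats.f_oneway(areas_cm2["happy"], areas_cm2["angry"], areas_cm2["sad"])
print(f"F(2, 9) = {f_stat:.2f}, p = {p_value:.2f}")
```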
2.3.2. Voice
The recordings used in this experiment were made in stereo at 44 100 Hz with
a 32-bit float. The recording equipment included a Crown PZM-185 boundary microphone, a line mixer Phonic MU1002, and audio software (Audacity
version 1.3.13). The distance between the microphone and the speaker was
0.75 m and the noise level in the room when quiet was 30 decibels (dB).
Each speaker was recorded individually in a 5.0 m × 3.5 m room. They
were asked to convey an emotion using two short content-free sentences: "Hat
sundig pron you venzy. Fee gott laish jonkill gosterr" (Banse and Scherer,
1996, p. 619). The sentences were accompanied with matching scenarios for
each emotion. Each speaker made multiple recordings for the six emotions
(anger, disgust, fear, happiness, sadness, neutral) and was instructed not to


produce recognizable and overt cues such as "yuck" or "yippee!" in their speech. The recordings were edited using Audacity to remove background
noises, and shortened to five seconds. Each five-second recording was analysed with Praat version 5.3.45 (Boersma and Weenink, 2012) to determine the
mean fundamental frequency (F0: cross-correlation method, 125 ms time window, 50–1000 Hz frequency range), intensity, harmonics-to-noise ratio (HNR),
and formants (F1–5: Burg method, 25 ms time window, maximum frequency
5500 Hz, maximum five formants) for each emotion. We found no significant
differences for any parameter across the three emotions (all ps > 0.06).
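As an illustration of how these acoustic parameters can be extracted programmatically, the sketch below uses parselmouth, a Python interface to Praat; the file name is hypothetical and the analysis settings only approximate those reported above, so it should be read as an assumption-laden example rather than the authors' actual procedure.

```python
# Sketch: extracting mean F0, intensity, HNR and F1 from one recording with
# parselmouth (a Python wrapper around Praat). The file name is a placeholder
# and the settings only approximate those reported in the text (cc pitch with
# a 50-1000 Hz range; Burg formants, 25 ms window, 5500 Hz ceiling).
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("happy_female_01.wav")  # hypothetical file

# Pitch (cross-correlation): time step, floor, max candidates, very accurate,
# silence threshold, voicing threshold, octave cost, octave-jump cost,
# voiced/unvoiced cost, ceiling.
pitch = call(snd, "To Pitch (cc)", 0.0, 50, 15, False,
             0.03, 0.45, 0.01, 0.35, 0.14, 1000)
mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")

intensity = call(snd, "To Intensity", 100, 0.0, True)
mean_intensity = call(intensity, "Get mean", 0, 0, "energy")

harmonicity = call(snd, "To Harmonicity (cc)", 0.01, 75, 0.1, 1.0)
mean_hnr = call(harmonicity, "Get mean", 0, 0)

formants = call(snd, "To Formant (burg)", 0.0, 5, 5500, 0.025, 50)
mean_f1 = call(formants, "Get mean", 1, 0, 0, "hertz")

print(mean_f0, mean_intensity, mean_hnr, mean_f1)
```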
2.3.3. Stimuli Validation
We chose to validate our faces and vocal recordings with a sample of young
adult humans for ecological validity. All facial expressions and voice recordings were presented individually to 16 university students (seven females,
M = 22 years, SD = 2.83). Participants were asked to identify the emotion
from three possible choices: angry, happy, and sad. Using a 90% agreement
criterion, participants identified 28 (out of 32) faces, and 29 (out of 32) vocal
recordings. To be truly confident of the target emotional expression, we chose
12 faces and six voices that obtained 100% agreement. The 12 faces consisted of
six females, six males, and were made up of eight Caucasian, three Asian, and
one Latin American individual. The three male and three female voices were
made up of four Caucasian, one Pacific Islander, and one Asian individual.
The selections for faces and voices were made to match the gender, hair and
skin colour.
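A minimal sketch of the agreement-based selection step is shown below; the rater responses are invented, and the unanimity criterion mirrors the 100% selection rule described above.

```python
# Sketch of the agreement-based stimulus selection: each stimulus is kept only
# when every rater labels it with the intended emotion (100% agreement).
# The rater responses below are invented for illustration.
responses = {
    # stimulus id: (intended emotion, labels given by the 16 raters)
    "face_01": ("happy", ["happy"] * 16),
    "face_02": ("sad",   ["sad"] * 14 + ["angry"] * 2),
    "voice_01": ("angry", ["angry"] * 16),
}

def agreement(intended, labels):
    return labels.count(intended) / len(labels)

selected = [sid for sid, (intended, labels) in responses.items()
            if agreement(intended, labels) == 1.0]
print(selected)  # ['face_01', 'voice_01'] -- only unanimous stimuli survive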
Thus, across the six trials per subject, there were four faces per emotion, three male–male pairs and three female–female pairs, with each matching emotion appearing twice: once on the left screen and once on the right screen (see Table 2 for an example of combinations). The three emotion combinations were Happy–Sad, Sad–Angry, and Happy–Angry, with each combination appearing twice.
Table 2.
An example of trial combinations for all six trials

Pair; Left image; Right image; Sound; Emotion
1; happy male*; sad male; happy male; happy
2; sad female; angry female*; angry female; angry
3; angry male*; happy male; angry male; angry
4; sad male*; angry male; sad male; sad
5; angry female; happy female*; happy female; happy
6; happy female; sad female*; sad female; sad

* Correct (matching) response.
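The trial structure in Table 2 can be expressed compactly in code. The sketch below generates the six trial types, with each emotion pairing shown twice and the matching (voiced) face appearing once on each side; the stimulus identities and data structure are illustrative assumptions, not the study's actual software.

```python
# Sketch of the six trial combinations per participant (cf. Table 2): each of
# the three emotion pairings appears twice, and the matching (voiced) face
# appears once on the left and once on the right screen. Stimulus identities
# are placeholders; in the study, gender was matched within each pair.
import random

pairings = [("happy", "sad"), ("happy", "angry"), ("sad", "angry")]

trials = []
for emo_a, emo_b in pairings:
    trials.append({"left": emo_a, "right": emo_b, "voice": emo_a})  # match on left
    trials.append({"left": emo_a, "right": emo_b, "voice": emo_b})  # match on right

random.shuffle(trials)  # trial order was randomised
for trial in trials:
    print(trial)
```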


2.4. Testing Setup


The setup consisted of a chair, three black (2.0 m × 2.0 m) felt-covered walls,
two 48 cm computer monitor screens (refresh rate of 60 Hz), one computer
to run the computer program, two audio speakers and two video cameras. The
black walls were placed to the front, left and right of the participant. Both
computer monitors were placed on the front black wall at a 1.0 m height from
the floor and the distance between the two monitors was 0.3 m. That same wall
also contained a pinhole between the two computer monitors for inconspicuous and close-up video recording for the primary video camera. A secondary
video camera was placed on top and at the middle of the front black wall for
wide-angle recording. Each audio speaker was placed behind each computer
monitor and hidden from the participant. The loudness of each vocal recording was measured using a Digitech sound level meter, QM 1588, 1.8 m from
the chair, and the average loudness was 65 dB. The facial and vocal expressions were presented using in-house customized software for this
experiment.
2.5. Procedure
Dog owners were blindfolded, and held onto their dog's collar lightly. Dogs
sat between their owner's knees at a 1.8 m distance, equidistant from the two
computer monitors (see Appendix C). Shorter dogs were placed on their owner's knees at a 1.5 m distance equidistant from the two monitors. Similar to
dog owners, the infant's parent was also blindfolded. Parents held their infants
on their lap at a 1.5 m distance from the two monitors. Infants and shorter dogs
were placed closer to the monitors due to the limited flexibility in positioning
the video camera. No reinforcement was given to the participant and none
were trained on any other task with these stimuli. The experimenter remained
out of sight throughout the session.
In total, six trials were used for each participant with two trials for each
emotion. The six trial types (the correct face being the one that matched the voice) consisted of a
happy voice paired with either happy and sad faces or happy and angry faces,
an angry voice paired with either angry and happy faces or angry and sad
faces, and a sad voice paired with either sad and angry faces or sad and happy
faces. The matching face position (either left or right monitor) was counterbalanced and the order of the emotion trials was randomised.
Each trial was made up of two phases. In the first phase, participants listened to a vocal recording expressing one emotion (e.g., angry male voice)
while looking at two blank monitors with a black background. In the second
phase, the same vocal recording was again played but each monitor was then
accompanied by a face (different individuals, same gender) expressing two
different emotions, with one of the faces matching the voice (see Table 2).


Each trial started with a clicking tone for one second. The clicking tone
attracted participants' attention. One voice recording was then played for five
seconds (e.g., angry male voice) with the computer monitors remaining blank
(showing a black screen). The voice recording was then repeated for another
five seconds (angry male voice) and two different faces were shown simultaneously on each computer monitor (angry male, happy male). Then a different
tone was played for one second and the computer monitors went blank to indicate the end of that trial. This format was repeated for the remaining five trials.
To maintain interest, for the human infants, the tones at the beginning and end
of each trial were substituted with animal sounds (e.g., bird chirps, cow moos)
and the blank monitors with still cartoon images (e.g., Winnie the Pooh).
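A schematic of this two-phase trial timeline, written with hypothetical presentation functions (play_sound, show_image and blank_screens are placeholders; the study used its own in-house software), is given below.

```python
# Schematic timeline of one trial. play_sound, show_image and blank_screens
# are hypothetical callables standing in for the in-house presentation software.
import time

def run_trial(voice_clip, left_face, right_face,
              play_sound, show_image, blank_screens):
    play_sound("click.wav")           # 1 s attention tone
    time.sleep(1.0)

    blank_screens()                   # phase 1: voice over two blank monitors
    play_sound(voice_clip)            # 5 s emotional voice recording
    time.sleep(5.0)

    show_image("left", left_face)     # phase 2: the same voice repeated while
    show_image("right", right_face)   # two different emotion faces are shown
    play_sound(voice_clip)
    time.sleep(5.0)

    play_sound("end_tone.wav")        # 1 s tone, then monitors return to blank
    time.sleep(1.0)
    blank_screens()
```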
2.6. Measured Variables
All behavioural coding was conducted by two coders. The primary coder was
blind to the conditions and to the hypothesis, and the secondary coder coded
33% of the videos for inter-rater reliability. Both coders were able to hear
the auditory stimuli because they included tones that identified coding start
times. However, they were unable to view the emotion stimuli presented on
the monitors and were therefore unaware of what constituted correct matching.
The primary coder measured canine and infant time spent looking at the left
and right screens, time spent looking away from the screens, and time spent
looking at the owner or parent. The participant's eyes were clearly visible
on the primary camera view to indicate looking direction, and the secondary
camera confirmed the participant's head turning. The inter-rater correlations
between the two coders were good; looking duration at the left screen: rs =
0.91, looking duration at the right screen: rs = 0.94, looking away: rs = 0.86,
and looking at parent/owner: rs = 0.95. The primary coder's ratings were used
in the analyses.
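As an illustration, inter-rater agreement of this kind can be computed as a Spearman correlation between the two coders' scores; the duration values below are invented.

```python
# Sketch of the inter-rater reliability check: a Spearman correlation between
# the two coders' looking-duration scores. The values below are invented.
from scipy import stats

coder1_left = [1.2, 0.0, 3.4, 2.1, 0.8, 4.6]  # seconds looking at left screen
coder2_left = [1.1, 0.2, 3.6, 2.0, 0.9, 4.4]

rho, p = stats.spearmanr(coder1_left, coder2_left)
print(f"looking at left screen: rs = {rho:.2f}")  # cf. rs = 0.91 in the text
```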
3. Results
Shapiro–Wilk tests for normality (all ps < 0.05) and histograms suggested
that the data for dogs and infants were not normally distributed, and for this
reason, non-parametric analyses were used.
3.1. Looking Duration at Matching Concordant Emotions
Two dogs did not look at any of the six pairs of faces over the six trials and
were excluded from analysis, leaving a final sample of 50 dogs for analysis. No
infants were excluded on this or any other basis. We examined the time spent
looking at the matching (concordant) image (e.g., looking time at angry male
face when listening to the angry male voice) and non-matching (discordant)
image (e.g., looking time at the happy male face when listening to the angry male voice) using the Wilcoxon Signed-Ranks test. Table 3 shows dogs' and infants' looking durations at the concordant and discordant face when listening to a particular vocal expression.


Table 3.
Mean looking duration to a concordant or discordant face when listening to an emotional voice (seconds), M (SD), with the p value for each concordant-discordant comparison.

Happy voice. Dogs (n = 50): concordant 1.40 (1.68), discordant 1.30 (1.66), p = 0.74. Infants (n = 24): concordant 4.23 (1.77), discordant 2.93 (1.52), p = 0.01.
Anger voice. Dogs: concordant 1.56 (1.78), discordant 1.00 (1.42), p = 0.10. Infants: concordant 4.13 (2.38), discordant 3.06 (1.90), p = 0.10.
Sad voice. Dogs: concordant 0.88 (1.29), discordant 1.96 (2.42), p = 0.02. Infants: concordant 2.40 (1.22), discordant 4.62 (2.04), p < 0.01.
Dogs looked longer at the discordant face when listening to a sad voice,
Z = 2.37, p = 0.02, r = 0.34 (Wilcoxon Signed-Ranks test). When dogs were
presented with a happy or angry voice, there was no difference in their looking
at concordant or discordant faces, Z = 0.33, p = 0.74, r = 0.05 and Z = 1.67,
p = 0.10, r = 0.24 respectively.
Similar to dogs, infants looked longer at the discordant face when listening
to a sad voice, Z = 3.77, p < 0.01, r = 0.77 (Wilcoxon Signed-Ranks test)
and there was no significant difference in infant looking at the concordant or
discordant face when listening to an angry voice, Z = 1.63, p = 0.10, r =
0.33. When infants were listening to a happy voice, they looked longer at the
concordant happy face, Z = 2.63, p = 0.01, r = 0.54 (see Table 3).
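For readers wishing to reproduce this type of analysis, the sketch below runs a Wilcoxon signed-ranks test on hypothetical per-subject looking times and derives the effect size as r = Z / sqrt(N), the convention used for the r values reported here; the data and the Z-from-p approximation are illustrative assumptions rather than the authors' code.

```python
# Sketch of a concordant-vs-discordant comparison: Wilcoxon signed-ranks test
# on per-subject looking times (invented here), with the effect size taken as
# r = Z / sqrt(N). Z is recovered from the two-sided p-value, so its sign must
# be read off the direction of the medians.
import numpy as np
from scipy import stats

concordant = np.array([0.5, 1.2, 0.0, 2.3, 0.8, 1.9, 0.4, 1.1])
discordant = np.array([1.4, 2.0, 0.6, 2.1, 1.7, 2.5, 1.2, 1.0])

res = stats.wilcoxon(concordant, discordant)
z = stats.norm.isf(res.pvalue / 2)        # |Z| from the two-sided p-value
r = z / np.sqrt(len(concordant))          # effect size as reported in the text
print(f"|Z| = {z:.2f}, p = {res.pvalue:.3f}, r = {r:.2f}")
```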
In general, there was no evidence of reliable matching for either dogs or
infants, a result consistent with the majority of infant matching studies summarized in Table 1. Nevertheless, canine and infant looking were not random.
Instead, both tended to look toward angry and happy faces, and away from
sad faces. For this reason, we examined total looking time at sad faces, happy
faces, and angry faces, irrespective of the accompanying voice (see Fig. 1).
Dogs and infants showed the same pattern of looking across the three pairings
of emotion faces, both ps < 0.01 (Friedman's two-way analysis of variance by
ranks). Even after correcting for multiple comparisons using Holm's correction, dogs looked less at the sad face compared to both the happy and angry
faces, Z = 2.66, p < 0.01, r = 0.38 and Z = 4.16, p < 0.01, r = 0.59, respectively (Wilcoxon Signed-Ranks Test), with similar findings for infants,
Z = 4.14, p < 0.01, r = 0.85 and Z = 3.66, p < 0.01, r = 0.75, respectively.
There was no preference in looking when the happy and angry faces were paired for dogs, Z = 1.70, p = 0.09, r = 0.24, or infants, Z = 0.89, p = 0.38, r = 0.18. Thus, dogs and infants demonstrated the same pattern of looking, tending to look at happy faces when paired with sad faces, and angry faces when paired with sad faces.

Figure 1. Box and whisker plot displaying canine and infant median interest for an emotion face (regardless of the accompanying voice). p < 0.01.
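A minimal sketch of the preference analysis reported above (a Friedman test over looking times at the three emotion faces, followed by pairwise Wilcoxon tests with Holm's correction) is given below; the per-subject totals are invented.

```python
# Sketch of the preference analysis: a Friedman test over total looking times
# at the three emotion faces, then pairwise Wilcoxon tests corrected with
# Holm's procedure. The per-subject totals below are invented.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

happy = np.array([3.1, 2.4, 4.0, 1.8, 2.9, 3.6])
angry = np.array([3.4, 2.1, 4.2, 2.0, 3.3, 3.1])
sad = np.array([1.2, 0.9, 2.1, 0.7, 1.5, 1.8])

chi2, p = stats.friedmanchisquare(happy, angry, sad)
print(f"Friedman chi-square = {chi2:.2f}, p = {p:.3f}")

pairs = [("sad vs happy", sad, happy),
         ("sad vs angry", sad, angry),
         ("happy vs angry", happy, angry)]
raw_p = [stats.wilcoxon(a, b).pvalue for _, a, b in pairs]
reject, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for (label, _, _), p_adj, sig in zip(pairs, p_holm, reject):
    print(label, f"Holm-adjusted p = {p_adj:.3f}", "significant" if sig else "ns")
```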
The other two variables, time spent looking away from the screen and time spent looking at the owner or parent, were not related to canine or infant looking across the three pairs of emotion faces, all ps > 0.07 (Wilcoxon Signed-Ranks Test).
3.2. Emotion Intensity
One possibility is that emotion preferences stemmed from the intensity of the
expressions. For this reason, we presented the 16 faces used in the emotion
task to 18 university students (eight females, M = 28.8 years, SD = 2.96)
to determine whether one emotion face was more intense compared to the
others. Participants were asked to rate the intensity on a five-point Likert scale,
ranging from 1 (not intense) to 5 (very intense). The intensity ratings were
not normally distributed, and were therefore analysed using non-parametric
analyses (Fig. 2). There was no difference in intensity ratings between happy
and angry faces, Z = 1.66, p = 0.10, r = 0.39, happy and sad faces, Z =
1.71, p = 0.09, r = 0.40, or angry and sad faces, Z = 0.05, p = 0.96, r = 0.01 (Wilcoxon Signed-Ranks Test). A power analysis for each comparison revealed that there was sufficient power to achieve significance, all gs > 0.70 using α = 0.05 (Cohen, 1988). In general, the non-significant trends were for happy faces to be rated as less intense (yet dogs and humans looked more at happy than sad faces), and for angry and sad faces to be rated as more intense (yet dogs and humans looked more at angry than sad faces). Thus, ratings of intensity were not consistently related to looking preferences.

Figure 2. Box and whisker plot displaying intensity ratings for angry, happy and sad faces.
4. Discussion
In the present study, our aims were fourfold: to shed light on canine matching
of voices to faces, to examine viewing preferences for emotional faces, to examine looking at emotional faces as a potential rudimentary form of empathy,
and to provide a cross-species comparison of responding to emotional faces.
We found little evidence of matching overall, consistent with the majority of
infant studies listed in Table 1 (59% non-matching). Instead, looking seemed
to be guided by preferences to look at certain types of emotion faces over
others.
Both dogs and infants tended to look away from sad faces when paired with
happy or angry faces (see Fig. 1). In contrast, when happy and angry faces
were paired together, there was no preference for either dogs or infants to
look at either. Overall, dogs and infants looked significantly less at sad faces
relative to both happy and angry faces. Such tendencies can be seen either as a
preference for looking at happy and angry faces, or as an aversion to looking
at sad faces. Interestingly, past studies of human infants have demonstrated a
strong preference for happy faces (see Table 1). That is, just like dogs, human
infants tend to look at happy rather than sad faces. In contrast, in previous
research infants tended to look more at happy expressions when paired with
angry expressions, whereas in our study dogs and infants looked equally at the


angry and happy faces, although our findings are consistent with Müller et al.
(2015). Also in contrast, in previous research infants tended to look equally
to angry and sad faces, although only two studies provided this comparison
(Schwartz et al., 1985; Soken and Pick, 1999), whereas in our study dogs and
infants looked more at angry than sad faces.
Our findings are also discrepant with Albuquerque et al. (2016) who found
that dogs could match human expressions of anger and happiness. We pointed
out that Albuquerque et al. compared just happiness and anger whereas we
included a comparison of two negative emotions (anger and sadness) as well
as happiness and sadness. We did not find that dogs could match human expressions for any of the three comparisons, leaving it unclear as to how easily
dogs can match human emotional expressions. Certainly, our failure to match
was not due to a lack of statistical power because Albuquerque et al. tested
just 17 dogs whereas we tested 52.
It is possible that the differences between our results and previous results
were due to differences in stimuli. For instance, if our angry faces were more
intense than in previous research, this might have increased looking and might
have led to our findings of equal looking at angry and happy faces, and more
looking at angry than sad faces (because of the perceived threat value of angry
faces). In general, angry faces do tend to capture attention (Fox et al., 2000;
Ruffman et al., 2009). Yet, given canine and infant interest in happy facial
expressions, when they viewed happy and angry facial expressions together,
both expressions would tend to attract attention equally, which might explain
the lack of a difference in looking at either emotion. One might argue that
each face shown was distinctive or unique in some other manner other than
the emotion expressed. However, as described in the Stimuli section above, we
selected stimuli that corresponded to a typical representation of the NZ population. We also have minimised the characteristic differences by specifically
choosing middle-aged Caucasian and Asian people without visible markings,
that is, faces that these dogs would likely be familiar with in their everyday life.
What is perhaps more striking than the differences between the present
study's results and those of previous research is the similarity between canine and infant looking preferences in our study, which, as stated above, can be
viewed either as a preference to look at happy and angry faces or as an aversion
to sad faces. One of the possible reasons for the lack of looking at sad faces
in both dogs and infants is that they try to reduce stressful visual information
from sad faces (Grossmann, 2010; Nesse, 1990). Shifting attention away from
a parent's sad face has been considered a way for infants to reduce negative
feelings generated by the sad face, and therefore, indicative of a rudimentary
form of empathy (Hoffman, 1981; Termine and Izard, 1988). This result is
also consistent with previous studies. For instance, when infants view a sad
expression on their mother's face, they play less, have greater gaze aversion,


less smiling, and increased grimacing (D'Entremont and Muir, 1997, 1999),
and when they witness an adult frowning or crying, infants become more
agitated and distressed (D'Entremont and Muir, 1999; Kahana-Kalman and
Walker-Andrews, 2001), experiences that would likely lead to gaze aversion.
In addition, dogs also have an aversion to auditory expressions of sadness; like
adult humans, they experience an increase in cortisol (a stress hormone linked
to empathy) as well as behavioural signs of stress when listening to human infant crying but not other sounds (Yong and Ruffman, 2014). This finding that
dogs show aversion when listening to sad voices suggests that their tendency
to look away from sad faces in the present study might also be best interpreted
as aversion and a rudimentary form of empathy (i.e., a spontaneous response to
sad stimuli not requiring explicit insight that an individual is experiencing sadness). Such findings point to the possibility that sensitivity to certain emotional
expressions has been selected for in the evolution of human infants and/or the
breeding of dogs, and that the preferential looking behaviour is aligned with
an evolutionary response towards emotional expressions.
Our results are consistent with the majority of previous infant studies that
failed to find emotion matching. One possible limitation of the present study is
that participants were unfamiliar with the emoter. It could be argued that stimuli utilizing familiar individuals would more likely lead to matching. However,
there is little difference in success rates when infants have matched emotions
with familiar and unfamiliar individuals in previous research (see Table 1) so
that this seems an unlikely possibility.
Another possible limitation is that dogs could not clearly see the images.
However, the faces shown on the computer screen were life-sized and similar to those used in previous research (Huber et al., 2013; Nagasawa et al.,
2011). Visual acuity in dogs is known to be worse than that of humans (Miller and
Murphy, 1995) and wolves (Peichl, 1992), with brachycephalic (short-nosed)
dog breeds having better visual acuity compared to dolichocephalic (long-nosed) breeds because the ganglion cells in their retinas are in a
more central location (McGreevy et al., 2004). The images in the present study
were either 1.5 m or 1.8 m away from dogs and at this distance should have
enabled dogs to distinguish facial features and expressions. For instance, this
distance has been successfully used in other studies with no known impairment
of canine performance (Huber et al., 2013; Nagasawa et al., 2011). Moreover,
dogs did respond differently to the different emotion faces, indicating that they
could see them.
Our findings demonstrate that dogs process emotional faces similarly to
human infants, detecting differences in facial emotions and showing viewing
preferences just like human infants. Our findings certainly do not mean that
dogs (or infants) have deeper insight into the emotional experience accompanying facial expressions. Instead, they suggest that dogs can differentiate facial


expressions on the basis of facial characteristics and, in combination with dogs' responses when listening to infant crying (Yong and Ruffman, 2014), likely find expressions of sadness aversive.
Acknowledgements
We thank the dog owners in Dunedin who participated in this research, and
the Department of Psychology, University of Otago for funding the research.
References
Adachi, I., Kuwahata, H. and Fujita, K. (2007). Dogs recall their owner's face upon hearing the
owner's voice, Anim. Cogn. 10, 17–21.
Albuquerque, N., Guo, K., Wilkinson, A., Savalli, C., Otta, E. and Mills, D. (2016). Dogs recognize dog and human emotions, Biol. Lett. 12, 20150883. DOI:10.1098/rsbl.2015.0883.
Banse, R. and Scherer, K. R. (1996). Acoustic profiles in vocal emotion expression, J. Pers. Soc.
Psychol. 70, 614–636.
Boersma, P. and Weenink, D. (2012). Praat: doing phonetics by computer. Retrieved from http://
www.fon.hum.uva.nl/praat/.
Buttelmann, D. and Tomasello, M. (2013). Can domestic dogs (Canis familiaris) use referential
emotional expressions to locate hidden food? Anim. Cogn. 16, 137–145.
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences, 2nd edn. L. Erlbaum
Associates, Hillsdale, NJ, USA.
Coren, S. (2006). The Intelligence of Dogs: a Guide to the Thoughts, Emotions, and Inner Lives
of Our Canine Companions. Simon and Schuster, New York, NY, USA.
Custance, D. and Mayer, J. (2012). Empathic-like responding by domestic dogs (Canis familiaris) to distress in humans: an exploratory study, Anim. Cogn. 15, 851–859.
Darwin, C. (1872 [1998]). The Expression of the Emotions in Man and Animals, 3rd edn. Oxford
University Press, New York, NY, USA.
De Haan, M. and Nelson, C. A. (1998). Discrimination and categorization of facial expressions
of emotion during infancy, in: Perceptual Development: Visual, Auditory, and Language
Perception in Infancy, A. M. Slater (Ed.), pp. 287–309. University College London Press,
London, UK.
D'Entremont, B. and Muir, D. W. (1997). Five-month-olds' attention and affective responses
to still-faced emotional expressions, Infant Behav. Dev. 20, 563–568.
D'Entremont, B. and Muir, D. W. (1999). Infant responses to adult happy and sad vocal and
facial expressions during face-to-face interactions, Infant Behav. Dev. 22, 527–539.
Ellsworth, P. C. and Smith, C. A. (1988). From appraisal to emotion: differences among unpleasant feelings, Motiv. Emot. 12, 271–302.
Evans, T. A., Howell, S. and Westergaard, G. C. (2005). Auditory-visual cross-modal perception
of communicative stimuli in tufted capuchin monkeys (Cebus apella), J. Exp. Psychol. Anim.
Behav. Process. 31, 399–406.
Faragó, T., Pongrácz, P., Miklósi, Á., Huber, L., Virányi, Z. and Range, F. (2010). Dogs'
expectation about signalers' body size by virtue of their growls, PLoS ONE 5, e15175.
DOI:10.1371/journal.pone.0015175.


Fox, E., Lester, V., Russo, R., Bowles, R. J., Pichler, A. and Dutton, K. (2000). Facial expressions of emotion: are angry faces detected more efficiently? Cogn. Emot. 14, 61–92.
Ghazanfar, A. A. and Logothetis, N. K. (2003). Neuroperception: facial expressions linked to
monkey calls, Nature 423, 937–938.
Grossmann, T. (2010). The development of emotion perception in face and voice during infancy,
Restor. Neurol. Neurosci. 28, 219–236.
Hare, B. and Tomasello, M. (2005). Human-like social skills in dogs? Trends Cogn. Sci. 9,
439–444.
Haviland, J. M. and Lelwica, M. (1987). The induced affect response: 10-week-old infants'
responses to three emotion expressions, Dev. Psychol. 23, 97–104.
Hoffman, M. L. (1981). Is altruism part of human nature? J. Pers. Soc. Psychol. 40, 121–137.
Howell, T. J., Toukhsati, S., Conduit, R. and Bennett, P. (2013). The perceptions of dog intelligence and cognitive skills (PoDIaCS) survey, J. Vet. Behav. Clin. Appl. Res. 8, 418–424.
Huber, L., Racca, A., Scaf, B., Virányi, Z. and Range, F. (2013). Discrimination of familiar
human faces in dogs (Canis familiaris), Learn. Motiv. 44, 258–269.
Hunnius, S., de Wit, T. C. J., Vrins, S. and von Hofsten, C. (2011). Facing threat: infants'
and adults' visual scanning of faces with neutral, happy, sad, angry, and fearful emotional
expressions, Cogn. Emot. 25, 193–205.
Izard, C. E. (1991). The Psychology of Emotions. Springer, New York, NY, USA.
Izard, C. E. (1993). Organizational and motivational functions of discrete emotions, in: Handbook of Emotions, M. Lewis and J. M. Haviland (Eds), pp. 631641. Guilford Press, New
York, NY, USA.
Izumi, A. (2013). Cross-modal representation in humans and nonhuman animals: a comparative
perspective, in: Integrating Face and Voice in Person Perception, P. Belin, S. Campanella and
T. Ethofer (Eds), pp. 29–43. Springer, New York, NY, USA.
Kahana-Kalman, R. and Walker-Andrews, A. S. (2001). The role of person familiarity in young
infants' perception of emotional expressions, Child Dev. 72, 352–369.
Keltner, D. and Kring, A. M. (1998). Emotion, social function, and psychopathology, Rev. Gen.
Psychol. 2, 320–342.
Keltner, D., Ellsworth, P. C. and Edwards, K. (1993). Beyond simple pessimism: effects of
sadness and anger on social perception, J. Pers. Soc. Psychol. 64, 740–752.
Kestenbaum, R. and Nelson, C. A. (1990). The recognition and categorization of upright and
inverted emotional expressions by 7-month-old infants, Infant Behav. Dev. 13, 497–511.
LaBarbera, J. D., Izard, C. E., Vietze, P. and Parisi, S. A. (1976). Four- and six-month-old
infants' visual responses to joy, anger, and neutral expressions, Child Dev. 47, 535–538.
Leppänen, J. M. and Nelson, C. A. (2006). The development and neural bases of facial emotion
recognition, in: Advances in Child Development and Behavior, Vol. 34, R. V. Kail (Ed.),
pp. 207–246. Elsevier, Amsterdam, The Netherlands.
McGreevy, P., Grassi, T. D. and Harman, A. M. (2004). A strong correlation exists between
the distribution of retinal ganglion cells and nose length in the dog, Brain Behav. Evol. 63,
13–22.
Merola, I., Prato-Previde, E. and Marshall-Pescini, S. (2011). Social referencing in dog-owner
dyads? Anim. Cogn. 15, 175–185.
Miller, P. E. and Murphy, C. J. (1995). Vision in dogs, J. Am. Vet. Med. Assoc. 207, 1623–1634.


Montague, D. P. F. and Walker-Andrews, A. S. (2002). Mothers, fathers, and infants: the role of
person familiarity and parental involvement in infants' perception of emotion expressions,
Child Dev. 73, 1339–1352.
Müller, C. A., Schmitt, K., Barber, A. L. A. and Huber, L. (2015). Dogs can discriminate emotional expressions of human faces, Curr. Biol. 25, 601–605.
Nagasawa, M., Murai, K., Mogi, K. and Kikusui, T. (2011). Dogs can discriminate human
smiling faces from blank expressions, Anim. Cogn. 14, 525–533.
Nesse, R. M. (1990). Evolutionary explanations of emotions, Hum. Nat. 1, 261–289.
Peichl, L. (1992). Topography of ganglion cells in the dog and wolf retina, J. Comp. Neurol.
324, 603–620.
Preston, S. D. and De Waal, F. B. M. (2002). Empathy: its ultimate and proximate bases, Behav.
Brain Sci. 25, 1–20.
Prguda, E. and Neumann, D. L. (2014). Inter-human and animal-directed empathy: a test for
evolutionary biases in empathetic responding, Behav. Processes 108, 80–86.
Racca, A., Amadei, E., Ligout, S., Guo, K., Meints, K. and Mills, D. (2010). Discrimination
of human and dog faces and inversion responses in domestic dogs (Canis familiaris), Anim.
Cogn. 13, 525–533.
Racca, A., Guo, K., Meints, K. and Mills, D. S. (2012). Reading faces: differential lateral gaze
bias in processing canine and human facial expressions in dogs and 4-year-old children,
PLoS ONE 7, e36076. DOI:10.1371/journal.pone.0036076.
Ratcliffe, V., McComb, K. and Reby, D. (2014). Cross-modal discrimination of human gender
by domestic dogs, Anim. Behav. 91, 126–134.
Rivers, S. E., Brackett, M. A., Katulak, N. A. and Salovey, P. (2006). Regulating anger and
sadness: an exploration of discrete emotions in emotion regulation, J. Happiness Stud. 8,
393–427.
Ruffman, T., Ng, M. and Jenkin, T. (2009). Older adults respond quickly to angry faces despite
labeling difficulty, J. Gerontol. B. Psychol. Sci. Soc. Sci. 64B, 171–179.
Schwartz, G. E., Weinberger, D. A. and Singer, J. A. (1981). Cardiovascular differentiation of
happiness, sadness, anger, and fear following imagery and exercise, Psychosom. Med. 43,
343–364.
Schwartz, G. M., Izard, C. E. and Ansul, S. E. (1985). The 5-month-old's ability to discriminate
facial expressions of emotion, Infant Behav. Dev. 8, 65–77.
Shields, S. A. (1984). Reports of bodily change in anxiety, sadness, and anger, Motiv. Emot. 8,
1–21.
Sliwa, J., Duhamel, J.-R., Pascalis, O. and Wirth, S. (2011). Spontaneous voice–face identity
matching by rhesus monkeys for familiar conspecifics and humans, Proc. Natl Acad. Sci.
USA 108, 1735–1740.
Sobin, C. and Alpert, M. (1999). Emotion in speech: the acoustic attributes of fear, anger, sadness, and joy, J. Psycholinguist. Res. 28, 347–365.
Soken, N. H. and Pick, A. D. (1992). Intermodal perception of happy and angry expressive
behaviors by seven-month-old infants, Child Dev. 63, 787–795.
Soken, N. H. and Pick, A. D. (1999). Infants' perception of dynamic affective expressions: do
infants distinguish specific expressions? Child Dev. 70, 1275–1282.
Somppi, S., Törnqvist, H., Hänninen, L., Krause, C. M. and Vainio, O. (2013). How dogs scan
familiar and inverted faces: an eye movement study, Anim. Cogn. 17, 793–803.


Spelke, E. (1976). Infants' intermodal perception of events, Cognit. Psychol. 8, 553–560.


Statistics New Zealand (2015). 2013 Census: major ethnic groups in New Zealand, WWW
Document. Retrieved from http://www.stats.govt.nz/Census/2013-census/profile-and-summary-reports/infographic-culture-identity (accessed Jan. 25, 2016).
Taylor, A. M., Reby, D. and McComb, K. (2010). Size communication in domestic dog, Canis
familiaris, growls, Anim. Behav. 79, 205–210.
Taylor, A. M., Reby, D. and McComb, K. (2011). Cross modal perception of body size in domestic dogs (Canis familiaris), PLoS ONE 6, e17069. DOI:10.1371/journal.pone.0017069.
Termine, N. T. and Izard, C. E. (1988). Infants' responses to their mothers' expressions of joy
and sadness, Dev. Psychol. 24, 223–229.
Vaillant-Molina, M., Bahrick, L. E. and Flom, R. (2013). Young infants match facial and vocal
emotional expressions of other infants, Infancy 18, E97–E111.
Walker, A. S. (1982). Intermodal perception of expressive behaviors by human infants, J. Exp.
Child Psychol. 33, 514–535.
Walker-Andrews, A. S. (1986). Intermodal perception of expressive behaviors: relation of eye
and voice? Dev. Psychol. 22, 373–377.
Walker-Andrews, A. S. and Grolnick, W. (1983). Discrimination of vocal expressions by young
infants, Infant Behav. Dev. 6, 491–498.
Walker-Andrews, A. S. and Lennon, E. (1991). Infants' discrimination of vocal expressions:
contributions of auditory and visual information, Infant Behav. Dev. 14, 131–142.
Yong, M. H. and Ruffman, T. (2014). Emotional contagion: dogs and humans show a similar
physiological response to human infant crying, Behav. Processes 108, 155–165.
Yong, M. H. and Ruffman, T. (2015a). Is that fear? Domestic dogs' use of social referencing
signals from an unfamiliar person, Behav. Processes 110, 74–81.
Yong, M. H. and Ruffman, T. (2015b). Domestic dogs match human male voices to faces, but
not for females, Behaviour 152, 1585–1600.


Appendix A. Demographic data of participating dogs

No; Breed; Include; Sex; Neutered; Age (years)
1; Greyhound; yes; female; no; 1.17
2; German Shepherd Greyhound; yes; female; no; 1.17
3; Golden Retriever; yes; female; yes; 1.20
4; Lab/Collie; yes; female; yes; 1.64
5; Border Collie; yes; female; yes; 1.73
6; Golden Retriever; yes; female; yes; 1.78
7; Labrador/Greyhound/Collie/Bully; yes; female; yes; 1.85
8; Husky Golden Retriever cross; yes; female; yes; 2.08
9; Border Retriever cross; yes; female; no; 2.18
10; Labrador/Border Collie; yes; female; yes; 2.37
11; German Short-Haired Pointer; yes; female; yes; 2.48
12; Labrador/Poodle; yes; female; yes; 2.48
13; Staffy; yes; female; no; 2.55
14; Shihtzu/Lhasa Apso; yes; female; yes; 2.78
15; Miniature Poodle; yes; female; yes; 3.19
16; Labrador; yes; female; no; 3.32
17; French Mastiff; yes; female; yes; 3.71
18; Foxy Jack Russell cross; yes; female; yes; 3.90
19; Lab cross; yes; female; yes; 4.27
20; Cocker Spaniel/Labrador/Hungarian Visla; yes; female; yes; 5.35
21; Heading; yes; female; yes; 5.77
22; Collie/Husky/Heading; yes; female; no; 5.88
23; Schnauzer; yes; female; yes; 8.56
24; Shetland Sheepdog; yes; female; yes; 8.93
25; Labrador Retriever; yes; female; yes; 9.92
26; Border Collie; yes; female; yes; 11.79
27; Border Collie; yes; female; yes; 11.82
28; Golden Retriever; yes; female; yes; 12.84
29; Labrador Retriever; yes; female; yes; 12.95
30; Bichon/Poodle/Chihuahua/Terrier; yes; female; yes; 14.77
31; Terrier cross; yes; female; yes; 15.21
32; German Shepherd Greyhound; yes; male; no; 1.17
33; Great Dane cross; yes; male; yes; 1.42
34; Huntaway cross; yes; male; yes; 2.74
35; Belgian Shepherd; yes; male; no; 2.95
36; Standard Poodle; yes; male; yes; 3.93
37; Schnauzer, Staffy and Labrador; yes; male; no; 4.12
38; Spoodle; yes; male; yes; 4.79
39; Weimaraner; yes; male; yes; 5.17
40; German Short-Haired Pointer; yes; male; no; 5.18
41; Labrador/Staffordshire Terrier; yes; male; yes; 5.38
42; American Red-Nose Pitbull; yes; male; no; 5.56
43; Blue Heeler/Beardie; yes; male; yes; 6.08
44; Labrador/Huntaway; yes; male; yes; 6.72
45; Black Lab; yes; male; yes; 6.76
46; German Shepherd Greyhound; yes; male; no; 7.59
47; Border Collie; yes; male; yes; 8.14
48; Golden Retriever; yes; male; no; 8.14
49; English Setter; yes; male; yes; 9.16
50; Boxer; yes; male; yes; 10.17
51; Maltese/Cavalier King Charles Spaniel cross; no; female; yes; 2.92
52; Whippet; no; male; yes; 6.84

Appendix B. Demographic data of participating infants

No; Sex; Task; Age (months)
1; female; emotion; 7.00
2; female; emotion; 7.43
3; female; emotion; 7.20
4; female; emotion; 7.67
5; female; emotion; 7.67
6; female; emotion; 6.73
7; female; emotion; 6.83
8; female; emotion; 6.53
9; female; emotion; 7.30
10; female; emotion; 8.50
11; male; emotion; 7.13
12; male; emotion; 7.17
13; male; emotion; 5.90
14; male; emotion; 6.93
15; male; emotion; 7.77
16; male; emotion; 7.23
17; male; emotion; 8.20
18; male; emotion; 9.07
19; male; emotion; 7.07
20; male; emotion; 5.73
21; male; emotion; 7.30
22; male; emotion; 5.13
23; male; emotion; 7.77
24; male; emotion; 6.57


Appendix C. Experiment set-up
