
PSIHOLOGIJA, 2014, Vol. 47(4), 433–447
© 2014 by the Serbian Psychological Association
UDC 159.953.35.072; 159.937.5.072
DOI: 10.2298/PSI1404433T

The Effect of Texture on Face Identification and Configural Information Processing

Eva Alica Tzschaschel, Malte Persike, and Bozana Meinhardt-Injac
Johannes Gutenberg University, Department of Psychology, Mainz, Germany

Shape and texture are an integral part of face identity. In the present study, the importance
of face texture for face identification and for the detection of configural manipulations (i.e.,
changes in the spatial relations among facial features) was examined by comparing grayscale
face photographs (i.e., real faces) with line drawings of the same faces. Whereas real faces
provide information about both the texture and the shape of a face, line drawings lack texture
cues. A change-detection task and a forced-choice identification task were used with both
stimulus categories. In the change-detection task, participants had to decide whether the size
of the eyes of two sequentially presented faces had changed or not. After making this decision,
three faces were shown and participants had to identify the previously presented face among
them. Furthermore, context (full vs. cropped faces) and orientation (upright vs. inverted) were
manipulated. The results obtained in the change-detection task suggest that configural
information was used in processing real faces, whereas part-based and featural information
was used in processing line drawings. Additionally, real faces were identified more accurately
than line drawings, and their identification was less context sensitive but more orientation
sensitive than that of line drawings. Taken together, the results of the present study provide
new evidence stressing the importance of face texture for identity encoding and configural
face processing.
Key words: face perception, face identification, texture, shape

The human face is a complex composition of various features whose distinctiveness makes
every face unique, which allows us to recognize familiar people within milliseconds.
Processing faces is one of the most important, but also one of the most complex, tasks one has
to perform in daily life. Not only do we recognize people we already know, but we are also
able to remember a face after seeing it for a very short period of time (Chance, Goldstein, &
McBride, 1975). According to Ellis (1986), we can name about 700 known faces, and we are
able to recognize faces even if they are shown to us after a significant delay and were presented
to us only once (Chance, Goldstein, & McBride, 1975). Given these characteristics of faces,
one question naturally arises from a psychological point of view: how do the recognition and
processing of faces actually work, and which factors have the main influence on those
processes?
Corresponding author: meinharb@uni-mainz.de


In the literature, a distinction is made between three kinds of information that can be used in
face recognition: isolated features, first-order relations, and second-order or configural
relations (Diamond & Carey, 1986). Isolated features can be specified without reference to
other parts of the stimulus (e.g., hair, eyebrows, eyes). First-order relations refer to the relations
among these basic facial features that are shared by all faces and allow faces to be distinguished
from non-facial objects (e.g., the nose is located above the mouth and the eyes are located
above the nose). Second-order relations refer to the set of spatial relations that characterizes
the specific arrangement of the basic features of an individual face. The information given by
second-order (configural) relations helps in recognizing faces through the relative distances
between facial features (e.g., eyes and nose) (Nishimura & Maurer, 2008). The exact nature
and processing mechanisms underlying face perception are, however, not fully understood.
While some experimental evidence suggests parallel processing of featural and configural
information, with more weight on the latter (Bartlett & Searcy, 1993; Diamond & Carey, 1986;
Rhodes, 1988; Rhodes, Brake, & Atkinson, 1993; Searcy & Bartlett, 1996), proponents of the
holistic hypothesis propose that faces are perceived and remembered as unparsed perceptual
wholes (i.e., a facial Gestalt) whose parts belong together (Maurer, Le Grand, & Mondloch,
2002; Tanaka & Farah, 1993; Young, Hellawell, & Hay, 1987). Although there is no consensus
about terminology, and the distinction between configural and holistic processing remains
difficult, a few experimental paradigms are apt to measure these aspects of face processing.
Evidence of holistic face perception comes from three basic findings. One is the composite
effect, which shows that it takes longer to identify a person who appears in the upper or lower
part of a composite face when the two parts are aligned, compared to when the halves are
misaligned (e.g., Young et al., 1987). The second finding, known as the part-to-whole effect,
refers to the better identification of a facial feature (e.g., a nose) when it is seen in the context
of a face that has previously been viewed, compared to seeing the part in isolation (e.g., Tanaka
& Farah, 1993). The third finding is the context effect, which refers to the holistic integration
across the whole face between internal (eyes, eyebrows, nose, mouth) and external facial
features (hair, head and face outline). The contribution of external features to face
identification (Megreya & Bindemann, 2009; Nachson, Moscovitch, & Umilta, 1995; Veres-
Injac & Persike, 2009) and holistic face processing (Andrews & Thompson, 2010; Frowd et
al., 2012; Meinhardt-Injac, Meinhardt, & Schwaninger, 2009; Nachson et al., 1995; Sinha &
Poggio, 1996; Veres-Injac & Schwaninger, 2009) is well demonstrated. External features also
strongly influence the processing of internal facial features, suggesting that faces are seen as
a holistic percept. This claim is corroborated by neuroimaging data providing evidence that
external facial features are an integral part of holistic face representations in the fusiform face
area (Andrews, Davies-Thompson, Kingstone, & Young, 2010; Axelrod & Yovel, 2010, 2011).
Importantly, all three findings (i.e., composite effect, part-to-whole effect, and context effect)
have been reported only for faces, but not for common objects (e.g., Tanaka & Farah, 1993;
Robbins & McKone, 2007; Meinhardt-Injac, 2013),
suggesting that holistic processing mechanisms are face-specific. Holistic processing is
particularly strong when faces are seen upright, in their canonical orientation, but it is strongly
hampered in inverted orientation. However, the inversion effect (Yin, 1969) is related not just
to aspects of holistic face processing (i.e., the automatic integration of facial parts into a
holistic percept), but also to the processing of configural relations in faces (e.g., Meinhardt-
Injac, Persike, & Meinhardt, 2011; Maurer et al., 2002; Thompson, 1980) and objects of
expertise (Diamond & Carey, 1986; Bilalic, Langner, Ulrich, & Grodd, 2011).
The main question addressed in the present study is to what extent holistic face processing is
supported by shape and texture (also referred to by terms such as surface reflectance or
pigmentation). For accurate face identification, neither texture nor shape alone is sufficient
(Jiang, Blanz, & O'Toole, 2006; O'Toole, Vetter, & Blanz, 1999; Russell, Sinha, Nederhouser,
& Biederman, 2004; Russell, Biederman, Nederhouser, & Sinha, 2007). However, since shape
alone provides information about isolated features (e.g., the size of the eyes), as well as the
configural relations between them (e.g., inter-eye distances), it is widely accepted that face
shape per se might trigger holistic processing. Less clear, however, is to what extent holistic
face processing is supported by texture (Hole, George, & Dunsmore, 1999; Jiang, Dricot,
Blanz, Goebel, & Rossion, 2009; Leder, 1996, 1999; Meinhardt-Injac, Persike, & Meinhardt,
2013; Russell et al., 2007).
In order to assess the influence that face texture and shape have on recognition, Leder (1996,
1999) used line drawings portraying a face without its texture. Line drawings represent faces
and preserve the position, size, and borders of their features, but lack texture cues. As Bruce
(1988) already noted, photographs can be seen as realistic portrayals, so comparing line
drawings with photographs probes the contribution of texture to face processing. Previous
investigations (Davies, Ellis, & Shepherd, 1978; Bruce, Hanna, Dench, Healey, & Burton,
1992) have found that line drawings are always identified less accurately than photographs,
regardless of the viewpoint and of the familiarity of the presented person to the participants.
Leder (1996, 1999) investigated detection sensitivity and its dependence on texture by showing
line drawings instead of photographs and varying the size of the target's eyes. As a result,
participants' sensitivity to configural changes decreased with line drawings compared to
photographs. Based on these data, he concluded that the lack of texture does not cause an
absolute, but a relative decrease in performance and an attenuated weighting of configural
information.
In a recent study, the contribution of texture information to holistic face processing was
studied by Meinhardt-Injac et al. (2013) by comparing full-color photographs with line
drawings. Investigating summation, context, and orientation effects, they found that real-face,
but not line-drawing, stimuli were always processed holistically, even when the task required
only detection of local changes. With shape information (line drawings) alone, participants
performed very well in a change-detection task and their performance was not affected by
inversion. The authors concluded that participants were using local cues rather than holistic
representations. Performance for real-face stimuli was even better than for line drawings, but
only when the stimuli were presented in upright orientation and when all internal features were
manipulated in size (summation effect). In inverted orientation, performance in the change-
detection task did not differ between real faces and line drawings. Assuming that orientation
is a determinant of holistic face processing, the authors concluded that real faces were
processed holistically.
However, the faces were not equalized in color, which might have caused stronger holistic
effects for real faces and a higher identification rate. Russell et al. (2007) found an asymmetry
in the use of shape and texture cues depending on the presence of color. In grayscale pictures,
shape cues were used more efficiently than reflectance cues, whereas subjects performed better
using texture than shape information with color versions of the same images. Colored faces
were also better recognized than grayscale images and were less affected by inversion (Russell
et al., 2007; Yip & Sinha, 2002). Lee and Perrett (2000) showed the importance of texture and
shape by finding a recognition advantage for line-drawn and photographic shape caricatures
of faces compared to veridical faces. In order to re-evaluate the importance of texture cues for
holistic face processing, an adapted version of the paradigm by Meinhardt-Injac et al. (2013)
was employed, with some distinctions. First, grayscale instead of colored faces were used and
compared with line drawings of the same faces. Second, the strength of holistic and configural
information processing in both stimulus categories was measured as the magnitude of the
context and inversion effects.
Method
Participants
Eighteen students of the Department of Psychology of the Johannes Gutenberg University
Mainz participated in this experiment and were rewarded with course credits. The average age
of the test group was 22.6 years (range 20–25). Nine of the 18 participants were female. All
participants had normal or corrected-to-normal vision.

Stimuli and Apparatus


Frontal-view photographs of five male face models were used as templates for stimulus
construction. The photographs were taken in front of a white background under the same
lighting conditions. They were converted to 300 × 400 pixel grayscale images and equalized
in contrast. Line drawings were created by manually tracing the photographs using Ulead
PhotoImpact 10 and showed only the shape of the face, the hair, and all internal features (i.e.,
eyes, eyebrows, nose, mouth), without providing any information about texture or shadows
(see Meinhardt-Injac et al., 2013, for more details about stimulus construction). None of the
faces were presented with a beard, glasses, or jewelry.
The original grayscale photographs and line drawings were systematically edited by changing
the size of the eyes and eyebrows relative to the size of the whole face, in equally spaced steps
of 3.5% over a range from 82.5% to 114% of the original size (i.e., 100%). This resulted in ten
different versions of the same-identity face stimuli with respect to eye and eyebrow size:
82.5%, 86%, 89.5%, 93%, 96.5%, 100%, 103.5%, 107%, 110.5%, and 114%.
In the next step, the internal features of each version of the targets were cut out to create
cropped stimuli without context (i.e., without face and head outline). An overview of the
independent variables is given below; an example of the different presentation forms of one
face is given in Figure 1.
Stimulus. Fifty percent of the time, the stimulus was a grayscale photograph (providing
information about texture and shape) and 50% of the time a line drawing (providing only shape
information).
Orientation. Stimuli were presented in upright and in inverted orientation, each for half of
the trials. In inverted orientation, the stimuli were rotated by 180 degrees.
Context. Two different presentation modes were used: 1) cropped faces including only the
internal features (eyes, eyebrows, nose, and mouth) and 2) whole faces showing a natural-
looking face with hair and head and face outline.
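As an illustration only (this is not the authors' code; all names are hypothetical, and the actual
stimuli were edited manually in an image editor), the following Python sketch enumerates the
stimulus pool implied by the description above: ten eye/eyebrow scaling levels per identity,
crossed with stimulus type and context.

```python
# Illustrative sketch of the stimulus pool; names are hypothetical.
from itertools import product

SCALE_STEP = 3.5                                     # percent; equally spaced steps
scales = [82.5 + i * SCALE_STEP for i in range(10)]  # 82.5% ... 114% of the original size
assert scales[-1] == 114.0 and 100.0 in scales       # the original (100%) is one of the ten levels

stimulus_types = ("photograph", "line_drawing")      # texture + shape vs. shape only
contexts = ("full", "cropped")                       # with vs. without external features
identities = range(5)                                # five male face models

# 5 identities x 2 stimulus types x 2 contexts x 10 scales = 200 stimulus images
stimulus_pool = [
    {"identity": i, "type": t, "context": c, "eye_scale": s}
    for i, t, c, s in product(identities, stimulus_types, contexts, scales)
]
print(len(stimulus_pool))  # 200
```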
The experiment was executed with Millisecond's Inquisit 2.0 runtime units. Stimuli were
displayed on NEC SpectraView 2090 TFT displays at a resolution of 1280 × 1024 pixels and a
refresh rate of 60 Hz. The viewing distance from participant to display was approximately
70 centimeters. Stimulus patterns and masks subtended 300 × 400 pixels, which corresponds
to 12 × 15 cm on the screen, or 9.65° × 12° of visual angle at 70 cm viewing distance. Subjects
used a distance marker but no chin rest.
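For readers who want to reproduce the size conversion, the sketch below applies the standard
visual-angle formula to the reported physical dimensions; the exact result depends on the
assumed screen size and rounding, so small deviations from the reported 9.65° × 12° are
expected.

```python
# Illustrative computation only; the reported values may reflect slightly different rounding.
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Full visual angle subtended by an object of a given physical size at a given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

print(round(visual_angle_deg(12.0, 70.0), 2))   # width:  ~9.8 degrees
print(round(visual_angle_deg(15.0, 70.0), 2))   # height: ~12.2 degrees
```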

Figure 1. Example of stimuli for five of the ten different sizes. Rows a) and c) show targets
whose eyes have been scaled for full faces, and rows b) and d) for cropped faces.


Experimental Design and Procedure


Percent-correct data were measured as the dependent variable in a three-factor repeated-
measures experimental design with stimulus (2; real faces vs. line drawings), orientation (2;
upright vs. inverted), and context (2; full face vs. cropped face) as independent variables. For
each experimental condition, 16 repetitions of same and 16 repetitions of different trials were
conducted, resulting in 256 trials per participant.
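A minimal sketch of the trial arithmetic, assuming the factor levels and repetition counts stated
above (variable names are illustrative):

```python
# Illustrative enumeration of the 2 x 2 x 2 within-subjects design.
from itertools import product

factors = {
    "stimulus": ("real_face", "line_drawing"),
    "orientation": ("upright", "inverted"),
    "context": ("full", "cropped"),
}
conditions = list(product(*factors.values()))       # 2 x 2 x 2 = 8 experimental conditions
repetitions = {"same": 16, "different": 16}         # trials per condition

n_trials = len(conditions) * sum(repetitions.values())
print(len(conditions), n_trials)                    # 8 conditions, 256 trials per participant
```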
In each trial, the subjects had to perform a change-detection task and a forced-choice
identification task; an example of one trial is given in Figure 2. In the change-detection task,
participants were asked to judge whether the size of the eyes and eyebrows in two sequentially
presented examples of a face had changed or not. Different trials were realized by randomly
pairing two instances from the pool of 10 differently scaled versions of the same face. The two
presented faces differed by 17.5% of the original size, since this difference is detected most
accurately (see also Meinhardt-Injac et al., 2013). Note that different trials always resulted
from the difference between the scalings of the two presented faces, thereby avoiding absolute
cues. An example is given in Figure 3. The presentation positions of the two face images were
shifted by 20 pixels away from the center in random directions in order to preclude pixel
matching. If the faces differed, subjects were instructed to answer "yes" (i.e., there is a change),
and "no" otherwise. Responses were given by pressing corresponding buttons on the keyboard.
The assignment of answers (yes/no) to the buttons was counterbalanced across participants. In
half of the trials, the faces differed in the size of their eyes and eyebrows, but were always
versions of the same original, i.e., they were identical with respect to all other characteristics
(e.g., hair, nose, mouth; see Figure 1).
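The following sketch (hypothetical code, not taken from the experiment script) illustrates the
pairing constraint and position jitter described above: on different trials the two scalings are
always exactly 17.5% (five 3.5% steps) apart, and each image is offset 20 pixels from the
center in a random direction.

```python
# Illustrative sketch of trial sampling; not the original Inquisit script.
import math
import random

SCALES = [82.5 + i * 3.5 for i in range(10)]    # the ten scaled versions (82.5% ... 114%)
DIFF = 17.5                                      # fixed size difference on "different" trials

def sample_pair(different: bool):
    """Pick the eye/eyebrow scalings of the two sequentially presented faces."""
    if not different:
        s = random.choice(SCALES)
        return s, s                              # "same" trial: identical scaling
    smaller = random.choice(SCALES[:5])          # 82.5% ... 96.5% each have a partner 17.5% larger
    pair = (smaller, smaller + DIFF)
    return pair if random.random() < 0.5 else pair[::-1]   # randomize presentation order

def jitter(radius_px: int = 20):
    """Random direction, fixed 20-pixel offset from the screen center (precludes pixel matching)."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    return round(radius_px * math.cos(angle)), round(radius_px * math.sin(angle))

print(sample_pair(different=True), jitter())
```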
The second task required identification of the just-presented face in a forced-choice
identification task. After the participants had responded to the change-detection task, a blank
screen was shown for 200 ms and afterwards they were asked to identify the just-presented
face (target face) among two distractor faces. All three faces (target and two distractors) were
presented simultaneously until the response, and were always original faces (100% of the
original size). Answers were given by pressing corresponding buttons on the keyboard in order
to identify the target: "y" for the left face, "c" for the right face, and "x" for the face in the
middle. As the target always figured among the three presented faces, this was a forced-choice
task, which means that the participants always had to make a decision and were not able to
answer "none of them". The position of the target was randomized across trials and
participants. The faces were presented in upright or inverted orientation, depending on the
orientation of the stimulus in the change-detection task (see Figure 2).
The experiment consisted of four blocks: real faces presented upright, real faces presented
inverted, line drawings presented upright, and line drawings presented inverted. The order of
the experimental blocks was randomized over participants, whereas context (cropped vs. full
faces) was randomized within each block. The temporal order of events in a trial sequence was
as follows: fixation (300 ms); first stimulus frame (350 ms); mask (350 ms); blank (200 ms);
second stimulus frame (350 ms); mask (350 ms); change-detection task (until response); blank
(200 ms); identification task (until response). An example of one trial is depicted in Figure 2.
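The trial timeline can be summarized as a simple event list; the sketch below is only one way
it might be specified (the original experiment was programmed in Inquisit, not Python).

```python
# Illustrative trial timeline; durations in milliseconds, None = on screen until response.
TRIAL_SEQUENCE = [
    ("fixation",               300),
    ("first stimulus frame",   350),
    ("mask",                   350),
    ("blank",                  200),
    ("second stimulus frame",  350),
    ("mask",                   350),
    ("change-detection task",  None),   # until response
    ("blank",                  200),
    ("identification task",    None),   # until response
]

fixed_ms = sum(d for _, d in TRIAL_SEQUENCE if d is not None)
print(f"Response-independent portion of a trial: {fixed_ms} ms")   # 2100 ms
```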

Figure 2. Example of one trial from the block with the stimulus condition "line drawings"
and the orientation "upright".


Figure 3. Some possible combinations of the stimuli. Trials with and without a change in the
size of eyes and eyebrows are realized by pairing different faces.

Results
The Change Detection Task
A three-factor repeated-measures ANOVA (see Figure 4) was calculated with stimulus (2; real
faces vs. line drawings), context (2; full face vs. cropped face), and orientation (2; upright vs.
inverted) as within-subjects factors. The main effect of context (cropped face vs. full face) was
statistically significant: F(1, 17) = 8.002, p < .01, partial η² = .32. Change detection was more
accurate with full faces than with cropped faces without external contour. Furthermore, there
were no significant main effects of stimulus and orientation, but there was a significant
two-way interaction of stimulus × orientation: F(1, 17) = 9.38, p < .005, partial η² = .35, and a
significant three-way interaction of stimulus × orientation × context: F(1, 17) = 8.86, p < .005,
partial η² = .34. No other interactions were significant. The data are shown in Figure 4.
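A minimal sketch of how such a 2 × 2 × 2 repeated-measures ANOVA on proportion-correct
data could be run in Python with statsmodels; the column names and the data file are
assumptions, and the text does not state which analysis software the authors actually used.

```python
# Illustrative analysis sketch; data file and column names are hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long format: one row per participant x condition cell
# (columns: subject, stimulus, orientation, context, prop_correct).
df = pd.read_csv("change_detection_accuracy.csv")   # hypothetical data file

anova = AnovaRM(
    data=df,
    depvar="prop_correct",
    subject="subject",
    within=["stimulus", "orientation", "context"],
).fit()
print(anova)   # F, df, and p values for all main effects and interactions
```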


Figure 4. Mean proportion correct in the change-detection task for real faces and line drawings.
The left box shows the results for real faces (black circles) and line drawings (gray squares) in
upright orientation, while the right box shows the results for inverted stimuli. The x-axis shows
the two levels of context, where full or cropped stimuli were presented.

Figure 5. Orientation and context effects in the change-detection task, calculated as differences
of proportion-correct data (see x-axis) for real faces (open symbols) and line drawings (filled
gray symbols). The boxes and whiskers indicate the mean with standard error (box) and
confidence interval (whiskers). An effect is significant if 0 is outside the confidence interval (CI).

In order to further analyze the three-way interaction [stimulus × orientation × context:
F(1, 17) = 8.86, p < .005, partial η² = .34] and to evaluate
inversion and context effects for real faces and line drawings, difference scores were calculated
at the level of individual measurements. The orientation effect was calculated as the accuracy
difference between upright and inverted presented stimuli (ΔP = P_UPR − P_INV), for both
stimulus and context levels. The context effect was calculated as the accuracy difference
between full and cropped faces (ΔP = P_FULL − P_CROPPED), again for both stimulus types
and orientations. The results are shown in Figure 5. Note that since Figure 5 shows difference
measures, an effect is significant if 0 is outside the confidence interval. For real faces, the
accuracy difference between upright and inverted presented stimuli was positive, which is due
to higher accuracy in the change-detection task in upright orientation. Additionally, accuracy
in change detection with real faces was higher for full than for cropped faces, but only in
upright orientation. For line drawings, however, changes in the size of eyes and eyebrows were
detected more accurately in inverted than in upright presented stimuli. Moreover, the context
seems to be particularly supportive in inverted orientation and has virtually no effect on change
detection in upright orientation. This is exactly the opposite of the pattern obtained with real
faces (see Figure 5).
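The difference-score analysis described above can be sketched as follows (illustrative code
with made-up data; a t-based confidence interval is one plausible choice, as the text does not
specify how the CIs in Figure 5 were computed):

```python
# Illustrative sketch of per-subject difference scores with a confidence interval.
import numpy as np
from scipy import stats

def effect_ci(cond_a: np.ndarray, cond_b: np.ndarray, confidence: float = 0.95):
    """Mean per-subject difference score (e.g., P_UPR - P_INV) with a t-based confidence interval."""
    diff = cond_a - cond_b                        # one difference per participant
    mean, sem = diff.mean(), stats.sem(diff)
    ci = stats.t.interval(confidence, len(diff) - 1, loc=mean, scale=sem)
    return mean, ci

# Example with made-up accuracies for 18 participants:
rng = np.random.default_rng(0)
p_upright = rng.uniform(0.70, 0.95, size=18)
p_inverted = rng.uniform(0.60, 0.90, size=18)

mean, (lo, hi) = effect_ci(p_upright, p_inverted)
print(f"orientation effect = {mean:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
print("significant" if not (lo <= 0.0 <= hi) else "not significant")
```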
Identification Task
A three-factor repeated-measures ANOVA was calculated with stimulus (2; real faces vs. line
drawings), context (2; full face vs. cropped face), and orientation (2; upright vs. inverted) as
within-subjects factors. The main effects of stimulus [F(1, 17) = 24.92, p < .001, partial
η² = .59], context [F(1, 17) = 57.17, p < .001, partial η² = .77], and orientation [F(1, 17) = 6.46,
p < .05, partial η² = .27] were all statistically significant. The interaction of stimulus × context
was also statistically significant [F(1, 17) = 20.85, p < .001, partial η² = .55]. No other two- or
three-way interactions reached statistical significance. Again, the orientation and context
effects were calculated as accuracy differences at the level of individual measurements. The
results are shown in Figure 7.
In sum, identification of the two facial stimulus categories was very successful (see Figure 6).
However, there were also marked differences between the accuracy data obtained with real
faces and with line drawings (main effect of stimulus). With real faces, accuracy reached
ceiling level at about 97% correct responses. Identification of line drawings was less accurate
and depended strongly on the presence of face context (i.e., external features). External facial
features were used as a cue for the identification of real faces, but they were especially
important for the identification of inverted real faces (see Figure 7). Orientation effects were
overall small and only significant for cropped real faces (see Figure 7).


Figure 6. Mean proportion correct in the identification task for real faces (black circles) and
line drawings (gray squares). The x-axis shows the context level (full vs. cropped); the data for
upright stimuli are on the left side and the data for inverted stimuli on the right side.

Figure 7. Orientation and context effects in the identification task. The effects are calculated
as differences of proportion-correct data (see x-axis) for real faces (open symbols) and line
drawings (filled gray symbols). The boxes and whiskers indicate the mean with standard error
(box) and confidence interval (whiskers). An effect is significant if 0 is outside the confidence
interval (CI).


Discussion
To what extent do texture cues trigger holistic face processing? In order to shed light on this
question, this study compared participants' performance for real faces, presented as grayscale
photographs, with their performance for line drawings, which serve as a representation of a
face without its texture. The strength of the context and inversion effects was used as the
measure of holistic and configural face processing. In the experiment, the participants had to
perform two different tasks: detect a change in size in the target's eye region (i.e., eyes and
eyebrows) and identify the previously shown target among distractor faces. It was crucial for
the experiment's success and validity to include both tasks in order to ensure that participants
did not focus only on the eyes as a separate feature, but on the face as a whole. Note that by
changing the size of the eyes and eyebrows, not only are configural aspects of a face altered
(i.e., the spatial relations between the eye region and other internal features), but also featural
aspects (i.e., the absolute size of eyes and eyebrows) (see also Riesenhuber & Wolff, 2009).
Context was varied by presenting either full faces (with external features) or just cropped
internal features without face contour, while faces were presented in upright and inverted
orientation. Based on the data obtained in the present study, we conclude that texture cues
trigger configural face processing and improve participants' performance in face
identification. The presence of external features is a supporting, but not a necessary, condition
for processing configural information in real faces. However, external features are an
important cue for face identification, especially when line drawings are used as stimuli. The
results are discussed in more detail below.
In the detection task, maximal performance was reached for real faces in full-face context and
upright orientation. Comparable performance for line drawings was reached in full-face
context, but in inverted orientation. Similarly, face context in the form of external features was
helpful for change detection in upright presented real faces and also in inverted presented line
drawings. Taken together, these findings indicate that configural (orientation effect) and
holistic (context effect) information is encoded when change detection in real faces is required.
Facial context supports the processing of configural information, but is not necessary to trigger
configural processes in real faces, as an inversion effect was also obtained for cropped faces
(see Figure 5). The opposite pattern of the data with line drawings suggests that the absolute
size of the eyes and eyebrows, rather than the altered configural relations between facial
features, was used to complete the task.
With respect to the detection task, the results obtained in the present study are in line with
previous findings, suggesting the importance of face texture for the encoding of configural
information in a face (Meinhardt-Injac et al., 2013; Leder, 1996; Russell et al., 2007). Namely,
in a complex experimental design, Meinhardt-Injac et al. (2013) found evidence that face shape
and texture may support different aspects of face processing. Whereas shape is important for
initial stages of face perception and feature binding, texture seems to be involved in processing
configural or second-order relations. Similar findings indicating that
texture has an important impact on configural and face-specific processing were reported in
two studies by Leder (1996, 1999). In all three studies, the comparison between line drawings
and real faces provided stable effects. Nevertheless, with this kind of stimuli, shape and texture
are not manipulated independently; rather, conclusions are drawn in a subtractive way, as the
difference between real faces (= texture + shape) and line drawings (= shape only). Since this
might be taken as a limitation of the study, it is important to note that similar findings were
also obtained when face shape and face texture were manipulated independently (Russell et
al., 2007). Using a forced-choice matching task, the authors found that inversion disrupted
performance about equally for faces differing only in texture or only in shape (Russell et al.,
2007). They concluded that the poorer recognition of inverted faces is not specific to face
shape, but is also caused by impairments in processing texture cues.
In the identification task, real faces (grayscale and equated in image similarity) were
recognized more accurately than line drawings, thereby supporting previous claims that
texture is an intrinsic property of face identity (Bruce et al., 1992; O'Toole et al., 1999; Russell,
Sinha, Biederman, & Nederhouser, 2006; Russell et al., 2007; Vuong, Peissig, Harrison, &
Tarr, 2005). Using faces with only texture information or only shape-from-shading
information, Troje and Bülthoff (1996) demonstrated that texture facilitates face learning and
generalization from learned views to new views. As adding shape-from-shading to line
drawings dramatically improves recognition of familiar faces (Davies et al., 1978; Bruce,
Hanna, Dench, Healey, & Burton, 1992), the study by Troje and Bülthoff (1996) provided
strong evidence supporting the importance of texture for face perception and recognition. The
particular importance of texture is unique to face recognition, since no similar effect has been
found in object recognition tasks (Nederhouser, Yue, Mangini, & Biederman, 2007).
More accurate identification of real faces (grayscale or colored) compared to line drawings
has been reported previously (Leder, 1996, 1999; Meinhardt-Injac et al., 2013). The relative
importance of color in face identification tasks has typically been demonstrated with familiar
faces (Lee & Perrett, 2000; Yip & Sinha, 2002). It seems to be of less relevance when
unfamiliar faces are used, as the results of this study and those of Meinhardt-Injac et al. (2013)
are similar, even though this study used grayscale instead of colored faces. Hence, face
identification in these two studies was probably influenced not by color cues, but by texture
itself. Alternatively, the overall high identification rates and the small number of faces used
might obscure the importance of color for the identification of unfamiliar faces. Successful
identification of line drawings supports this notion. Although less accurate compared to real
faces, the identification of line drawings was quite accurate. However, identification of line
drawings was more strongly affected by face context than identification of real faces. Face
context, or external facial features, is particularly important for the identification of unfamiliar
faces (e.g., Frowd, Bruce, McIntyre, & Hancock, 2007; Veres-Injac & Persike, 2009) and is
also less sensitive to orientation than internal features
alone (Meinhardt-Injac et al., 2009; Nachson & Shechory, 2002). The strong context effects,
in combination with the lack of inversion effects and a significant drop in accuracy when
cropped stimuli are used, indicate that subjects used part-based rather than holistic strategies
when identifying line drawings (Leder, 1996, 1999). For real faces, a context effect was
present, but to a lesser extent than for line drawings. Additionally, there was a significant effect
of inversion for cropped faces. Together, these two findings suggest that 1) external features
were less important for the identification of real faces than of line drawings, and 2) configural
information was processed only in real faces, but not in line drawings. The encoding of
configural information in real faces most likely accounts for the high accuracy levels with
cropped faces in upright orientation (see Figure 6).
Conclusions
Taking into consideration the influence of texture on face processing and recognition, this
study finds that texture has a significant impact on holistic face processing. The findings are
in line with previous studies, suggesting that both shape and texture cues are encoded when
processing faces (Meinhardt-Injac et al., 2013; Jiang et al., 2006; Leder, 1996; Russell et al.,
2007).
References
Andrews, T. J., Davies-Thompson, J., Kingstone, A., & Young, A. W. (2010). Internal and
external features of the face are represented holistically in face-selective regions of visual
cortex. Journal of Neuroscience, 30(9), 3544–3552.
Andrews, T. J., & Thompson, P. (2010). Face-to-face coalition. i-Perception, 1(1), 28–30.
doi:10.1068/i0402
Axelrod, V., & Yovel, G. (2010). External facial features modify the representation of internal
facial features in the fusiform face area. Neuroimage, 52(2), 720–725.
Axelrod, V., & Yovel, G. (2011). Nonpreferred stimuli modify the representation of faces in
the fusiform face area. Journal of Cognitive Neuroscience, 23(3), 746–756.
Bartlett, J. C., & Searcy, J. (1993). Inversion and configuration of faces. Cognitive Psychology,
25(3), 281–316.
Bilalic, M., Langner, R., Ulrich, R., & Grodd, W. (2011). Many faces of expertise: Fusiform
face area in chess experts and novices. Journal of Neuroscience, 31(28), 10206–10214.
Bruce, V. (1988). Recognising Faces. London: Lawrence Erlbaum.
Bruce, V., Hanna, E., Dench, N., Healey, P., & Burton, M. (1992). The importance of "mass"
in line drawings of faces. Applied Cognitive Psychology, 6(7), 619–628.
Chance, J., Goldstein, A. G., & McBride, L. (1975). Differential experience and recognition
memory for faces. Journal of Social Psychology, 97(2), 243–253.
Davies, G., Ellis, H., & Shepherd, J. (1978). Face recognition accuracy as a function of mode
of representation. Journal of Applied Psychology, 63(2), 180–187.
Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise.
Journal of Experimental Psychology: General, 115(2), 107–117.
Ellis, H. D. (1986). Processes underlying face recognition. In R. Bruyer (Ed.), The
neuropsychology of face perception and facial expression. Hillsdale, NJ: Lawrence Erlbaum.


Frowd, C., Bruce, V., McIntyre, A., & Hancock, P. J. (2007). The relative importance of
external and internal features of facial composites. British Journal of Psychology, 98(1), 61–77.
Frowd, C. D., Skelton, F., Atherton, C., Pitchford, M., Hepton, G., Holden, L., ... Hancock,
P. J. (2012). Recovering faces from memory: The distracting influence of external facial
features. Journal of Experimental Psychology: Applied, 18(2), 224–238.
Hole, G. J., George, P. A., & Dunsmore, V. (1999). Evidence for holistic processing of faces
viewed as photographic negatives. Perception, 28(3), 341–359.
Jiang, F., Blanz, V., & O'Toole, A. J. (2006). Probing the visual representation of faces with
adaptation: A view from the other side of the mean. Psychological Science, 17(6), 493–500.
Jiang, F., Dricot, L., Blanz, V., Goebel, R., & Rossion, B. (2009). Neural correlates of shape
and surface reflectance information in individual faces. Neuroscience, 163(4), 1078–1091.
Leder, H. (1996). Line drawings of faces reduce configural processing. Perception, 25(3),
355–366.
Leder, H. (1999). Matching person identity from facial line drawings. Perception, 28(9),
1171–1175.
Lee, K. J., & Perrett, D. I. (2000). Manipulation of colour and shape information and its
consequence upon recognition and best-likeness judgments. Perception, 29(11), 1291–1312.
Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing.
Trends in Cognitive Sciences, 6(6), 255–260.
Megreya, A. M., & Bindemann, M. (2009). Revisiting the processing of internal and external
features of unfamiliar faces: The headscarf effect. Perception, 38(12), 1831–1848.
Meinhardt-Injac, B. (2013). The context congruency effect is face specific. Acta Psychologica,
142, 265–272.
Meinhardt-Injac, B., Meinhardt, G., & Schwaninger, A. (2009). Does matching of internal and
external facial features depend on viewpoint? Acta Psychologica, 132, 267–278.
Meinhardt-Injac, B., Persike, M., & Meinhardt, G. (2011). The time course of face matching
for featural and relational image manipulations. Acta Psychologica, 137, 48–55.
Meinhardt-Injac, B., Persike, M., & Meinhardt, G. (2013). Holistic face processing is induced
by shape and texture. Perception, 42(7), 716–732.
Nachson, I., Moscovitch, M., & Umilta, C. (1995). The contribution of external and internal
features to the matching of unfamiliar faces. Psychological Research, 58(1), 31–37.
Nachson, I., & Shechory, M. (2002). Effect of inversion on the recognition of external and
internal facial features. Acta Psychologica, 109, 227–238.
Nederhouser, M., Yue, X., Mangini, M. C., & Biederman, I. (2007). The deleterious effect of
contrast reversal on recognition is unique to faces, not objects. Vision Research, 47(16),
2134–2142.
Nishimura, M., & Maurer, D. (2008). The effect of categorization on sensitivity to second-
order relations in novel objects. Perception, 37(4), 584–601.
O'Toole, A. J., Vetter, T., & Blanz, V. (1999). Three-dimensional shape and two-dimensional
surface reflectance contributions to face recognition: An application of three-dimensional
morphing. Vision Research, 39(18), 3145–3155.
Rhodes, G. (1988). Looking at faces: First-order and second-order features as determinants of
facial appearance. Perception, 17(1), 43–63.
Rhodes, G., Brake, S., & Atkinson, A. (1993). What's lost in inverted faces? Cognition, 47(1),
25–57.
Riesenhuber, M., & Wolff, B. S. (2009). Task effects, performance levels, features,
configurations, and holistic face processing: A reply to Rossion. Acta Psychologica, 132,
286–292.
Robbins, R., & McKone, E. (2007). No face-like processing for objects-of-expertise in three
behavioral tasks. Cognition, 103(1), 34–79.


Russell, R., Sinha, P., Nederhouser, M., & Biederman, I. (2004). The importance of
pigmentation for face recognition. Journal of Vision, 4(8), 418.
Russell, R., Sinha, P., Biederman, I., & Nederhouser, M. (2006). Is pigmentation important for
face recognition? Evidence from contrast negation. Perception, 35(6), 749–759.
Russell, R., Biederman, I., Nederhouser, M., & Sinha, P. (2007). The utility of surface
reflectance for the recognition of upright and inverted faces. Vision Research, 47(2), 157–165.
Searcy, J. H., & Bartlett, J. C. (1996). Inversion and processing of component and spatial
relational information in faces. Journal of Experimental Psychology: Human Perception
and Performance, 22(4), 904–915.
Sinha, P., & Poggio, T. (1996). I think I know that face. Nature, 384, 404.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly
Journal of Experimental Psychology, 46A(2), 225–245.
Thompson, P. (1980). Margaret Thatcher: A new illusion. Perception, 9, 483–484.
Troje, N. F., & Bülthoff, H. H. (1996). Face recognition under varying poses: The role of
texture and shape. Vision Research, 36(12), 1761–1771.
Veres-Injac, B., & Persike, M. (2009). Recognition of briefly presented familiar and unfamiliar
faces. Psihologija, 42(1), 47–66.
Veres-Injac, B., & Schwaninger, A. (2009). The time course of processing external and
internal features of unfamiliar faces. Psychological Research, 73(1), 43–53.
Vuong, Q. C., Peissig, J. J., Harrison, M. C., & Tarr, M. J. (2005). The role of surface
pigmentation for recognition revealed by contrast reversal in faces and Greebles. Vision
Research, 45(10), 1213–1223.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81,
141–145.
Yip, A., & Sinha, P. (2002). Contribution of color to face recognition. Perception, 31(8),
995–1003.
Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configurational information in face
perception. Perception, 16(6), 747–759.
