
https://en.wikipedia.org/wiki/Stroop_effect
Delay in reaction time, like the one I experienced when I did The Healing!

In psychology, the Stroop effect is a demonstration of interference in the
reaction time of a task.
When the name of a color (e.g., "blue", "green", or "red") is printed in a
color which is not denoted by the name (i.e., the word "red" printed in blue
ink instead of red ink), naming the color of the word takes longer and is
more prone to errors than when the color of the ink matches the name of
the color.
The effect is named after John Ridley Stroop, who first published the
effect in English in 1935.[1] The effect had previously been published in
Germany in 1929 by other authors.[2][3][4] The original paper has been
one of the most cited papers in the history of experimental psychology,
leading to more than 700 replications.[4] The effect has been used to create
a psychological test (Stroop test) that is widely used in clinical practice
and investigation.

Contents
• 1 Experiment
• 2 Experimental findings
• 3 Neuroanatomy
• 4 Theories
• 4.1 Processing speed
• 4.2 Selective attention
• 4.3 Automaticity
• 4.4 Parallel distributed processing
• 5 Cognitive development
• 6 Uses
• 6.1 Stroop test
• 7 Variations
• 7.1 Warped words
• 7.2 Emotional
• 7.3 Spatial
• 7.4 Numerical
• 7.5 Reverse
• 8 In popular culture
• 9 References
• 10 External links

Experiment
Figure 1 from Experiment 2 of the original description of the Stroop Effect
(1935). 1 is the time that it takes to name the color of the dots while 2 is
the time that it takes to say the color when there is a conflict with the
written word.[1]
The effect was named after John Ridley Stroop, who published the effect
in English in 1935 in an article in the Journal of Experimental Psychology
entitled "Studies of interference in serial verbal reactions" that includes
three different experiments.[1] However, the effect was first published in
1929 in Germany by Erich Rudolf Jaensch,[2] and its roots can be
followed back to works of James McKeen Cattell and Wilhelm Maximilian
Wundt in the nineteenth century.[3][4]
In his experiments, Stroop administered several variations of the same test
for which three different kinds of stimuli were created: names of colors
printed in black ink; names of colors printed in an ink different from the
color named; and squares of a given color.[1]
In the first experiment, words and conflict-words were used (see first
figure). The task required the participants to read the written color names
of the words independently of the color of the ink (for example, they
would have to read "purple" no matter what the color of the font). In
experiment 2, stimulus conflict-words and color patches were used, and
participants were required to say the ink-color of the letters independently
of the written word with the second kind of stimulus and also name the
color of the patches. If the word "purple" was written in red font, they
would have to say "red", rather than "purple". When the squares were
shown, the participant spoke the name of the color. Stroop, in the third
experiment, tested his participants at different stages of practice at the
tasks and stimuli used in the first and second experiments, examining
learning effects.[1]
Unlike researchers now using the test for psychological evaluation,[5]
Stroop used only the three basic scores, rather than more complex
derivative scoring procedures. Stroop noted that participants took
significantly longer to complete the color reading in the second task than
they had taken to name the colors of the squares in Experiment 2. This
delay had not appeared in the first experiment. Such interference was
explained by the automation of reading, where the mind automatically
determines the semantic meaning of the word (it reads the word "red" and
thinks of the color "red"), and then must intentionally check itself and
identify instead the color of the word (the ink is a color other than red), a
process that is not automated.[1]

Experimental findings
Stimuli in Stroop paradigms can be divided into 3 groups: neutral,
congruent and incongruent. Neutral stimuli are those stimuli in which only
the text (similarly to stimuli 1 of Stroop's experiment), or color (similarly
to stimuli 3 of Stroop's experiment) are displayed.[6] Congruent stimuli
are those in which the ink color and the word refer to the same color (for
example the word "pink" written in pink). Incongruent stimuli are those in
which ink color and word differ.[6] Three experimental findings are
recurrently found in Stroop experiments.[6] A first finding is semantic
interference, which states that naming the ink color of neutral stimuli (e.g.
when the ink color and word do not interfere with each other) is faster than
in incongruent conditions. It is called semantic interference since it is
usually accepted that the relationship in meaning between ink color and
word is at the root of the interference.[6] The second finding, semantic
facilitation, explains the finding that naming the ink of congruent stimuli is
faster (e.g. when the ink color and the word match) than when neutral
stimuli are present (e.g. stimulus 3; when only a coloured square is
shown). The third finding is that both semantic interference and facilitation
disappear when the task consists of reading the word instead of naming the
ink. This finding has sometimes been called Stroop asynchrony, and has been
explained by a reduced automatization when naming colors compared to
reading words.[6]
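As a concrete illustration of these three stimulus classes, the short Python sketch below (not from the source; the color set and function name are my own) generates one neutral, one congruent, and one incongruent trial.

```python
import random

# Illustrative sketch (not from the article): generating the three stimulus
# classes used in Stroop paradigms. Color set and function name are assumptions.
COLORS = ["red", "green", "blue", "yellow"]

def make_stimulus(kind):
    """Return a (text, ink_color) pair for 'neutral', 'congruent' or 'incongruent'."""
    ink = random.choice(COLORS)
    if kind == "neutral":
        return ("XXXX", ink)          # only the color carries information
    if kind == "congruent":
        return (ink, ink)             # word and ink color match
    if kind == "incongruent":
        word = random.choice([c for c in COLORS if c != ink])
        return (word, ink)            # word and ink color differ
    raise ValueError(f"unknown kind: {kind}")

for kind in ("neutral", "congruent", "incongruent"):
    print(kind, make_stimulus(kind))
```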
In the study of interference theory, the most commonly used procedure has
been similar to Stroop's second experiment, in which subjects were tested
on naming colors of incompatible words and of control patches. The first
experiment in Stroop's study (reading words in black versus incongruent
colors) has been discussed less. In both cases, the interference score is
expressed as the difference between the times needed to read each of the
two types of cards.[4] Instead of naming stimuli, subjects have also been
asked to sort stimuli into categories.[4] Different characteristics of the
stimulus such as ink colors or direction of words have also been
systematically varied.[4] None of these modifications eliminates the
effect of interference.[4]
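For illustration, the interference score described above can be computed as a simple time difference; the sketch below uses made-up example times.

```python
# Hedged sketch of the interference score described above: the difference
# between the time needed for the incongruent (conflict) card and the time
# needed for the control (neutral) card. Values are illustrative only.
def interference_score(incongruent_time_s, control_time_s):
    """Stroop interference expressed as a difference in completion times (seconds)."""
    return incongruent_time_s - control_time_s

# Example: the conflict card took 110.3 s, the control card 63.3 s.
print(interference_score(110.3, 63.3))   # 47.0 s of interference
```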

Neuroanatomy
Brain imaging techniques including magnetic resonance imaging (MRI),
functional magnetic resonance imaging (fMRI), and positron emission
tomography (PET) have shown that there are two main areas in the brain
that are involved in the processing of the Stroop task.[7][8] They are the
anterior cingulate cortex, and the dorsolateral prefrontal cortex.[9] More
specifically, while both are activated when resolving conflicts and catching
errors, the dorsolateral prefrontal cortex assists in memory and other
executive functions, while the anterior cingulate cortex is used to select an
appropriate response and allocate attentional resources.[10]
The posterior dorsolateral prefrontal cortex creates the appropriate rules
for the brain to accomplish the current goal.[10] For the Stroop effect, this
involves activating the areas of the brain involved in color perception, but
not those involved in word encoding.[11] It counteracts biases and
irrelevant information, for instance, the fact that the semantic perception of
the word is more striking than the color in which it is printed. Next, the
mid-dorsolateral prefrontal cortex selects the representation that will fulfil
the goal. The relevant information must be separated from irrelevant
information in the task; thus, the focus is placed on the ink color and not
the word.[10] Furthermore, research has suggested that left dorsolateral
prefrontal cortex activation during a Stroop task is related to an
individual's expectation regarding the conflicting nature of the upcoming
trial, and not so much on the conflict itself. Conversely, the right
dorsolateral prefrontal cortex aims to reduce the attentional conflict and is
activated after the conflict is over.[9]
Moreover, the posterior dorsal anterior cingulate cortex is responsible for
what decision is made (i.e. whether you will say the incorrect answer
[written word] or the correct answer [ink color]).[9] Following the
response, the anterior dorsal anterior cingulate cortex is involved in
response evaluation—deciding whether the answer is correct or incorrect.
Activity in this region increases when the probability of an error is higher.
[12]

Theories
There are several theories used to explain the Stroop effect, commonly
known as ‘race models’. These are based on the underlying notion
that both relevant and irrelevant information are processed in parallel, but
"race" to enter the single central processor during response selection.[13]
They are:

Processing speed
This theory suggests there is a lag in the brain's ability to recognize the
color of the word since the brain reads words faster than it recognizes
colors.[14] This is based on the idea that word processing is significantly
faster than color processing. In a condition where there is a conflict
regarding words and colors (e.g., Stroop test), if the task is to report the
color, the word information arrives at the decision-making stage before the
color information, which creates processing confusion. Conversely, if the
task is to report the word, because color information lags after word
information, a decision can be made ahead of the conflicting information.
[15]
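A tiny worked illustration of this race account, using assumed (not measured) arrival times:

```python
# Tiny worked illustration (my own, assumed numbers) of the processing-speed
# account: word information reaches the decision stage before color information.
WORD_ARRIVAL_MS = 350    # assumption: reading is the faster process
COLOR_ARRIVAL_MS = 500   # assumption: color recognition is slower

head_start = COLOR_ARRIVAL_MS - WORD_ARRIVAL_MS
print(f"In a color-naming trial the (irrelevant) word wins the race by {head_start} ms,")
print("so it must be suppressed before the ink color can be reported;")
print("in a word-reading trial the relevant word arrives first, so there is no delay.")
```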

Selective attention
The Selective Attention Theory suggests that color recognition, as opposed
to reading a word, requires more attention. The brain needs to use more
attention to recognize a color than to encode a word, so it takes a little
longer.[16] The responses lend much to the interference noted in the
Stroop task. This may be a result of either an allocation of attention to the
responses or to a greater inhibition of distractors that are not appropriate
responses.

Automaticity
This theory is the most common theory of the Stroop effect.[17][not in
citation given] It suggests that since recognizing colors is not an
"automatic process" there is hesitancy to respond; whereas, the brain
automatically understands the meaning of words as a result of habitual
reading. This idea is based on the premise that automatic reading does not
need controlled attention, but still uses enough attentional resources to
reduce the amount of attention accessible for color information processing.
[18] Stirling (1979) introduced the concept of response automaticity. He
demonstrated that changing the responses from colored words to letters
that were not part of the colored words increased reaction time while
reducing Stroop interference.[19]

Parallel distributed processing


This theory suggests that as the brain analyzes information, different and
specific pathways are developed for different tasks.[20] Some pathways,
such as reading, are stronger than others, therefore, it is the strength of the
pathway and not the speed of the pathway that is important.[17] In
addition, automaticity is a function of the strength of each pathway, hence,
when two pathways are activated simultaneously in the Stroop effect,
interference occurs between the stronger (word reading) path and the
weaker (color naming) path, more specifically when the pathway that leads
to the response is the weaker pathway.[21]
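As a rough illustration of the strength idea (my own toy simplification, not the actual parallel distributed processing model from the literature), the sketch below makes the conflict cost grow with the strength ratio of the two pathways, producing large interference for color naming and little for word reading.

```python
# Toy numerical illustration (not the cited PDP model): interference depends on
# pathway strength, so the strong word-reading pathway disturbs the weak
# color-naming pathway far more than the reverse. All constants are arbitrary.
BASE_TIME = 0.5          # seconds: baseline response time
WORD_STRENGTH = 1.0      # reading: strong, highly practiced pathway
COLOR_STRENGTH = 0.4     # color naming: weaker pathway

def response_time(task_strength, other_strength, congruent):
    """Crude model: conflict cost scales with the strength ratio of the pathways."""
    if congruent:
        return BASE_TIME - 0.1 * other_strength                # small facilitation
    return BASE_TIME + 0.3 * (other_strength / task_strength)  # interference

print("name color, congruent:  ", response_time(COLOR_STRENGTH, WORD_STRENGTH, True))
print("name color, incongruent:", response_time(COLOR_STRENGTH, WORD_STRENGTH, False))
print("read word,  incongruent:", response_time(WORD_STRENGTH, COLOR_STRENGTH, False))
```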

Cognitive development
In the neo-Piagetian theories of cognitive development, several variations
of the Stroop task have been used to study the relations between speed of
processing and executive functions with working memory and cognitive
development in various domains. This research shows that reaction time to
Stroop tasks decreases systematically from early childhood through early
adulthood. These changes suggest that speed of processing increases with
age and that cognitive control becomes increasingly efficient. Moreover,
this research strongly suggests that changes in these processes with age are
very closely associated with development in working memory and various
aspects of thought.[22][23] The Stroop task also demonstrates the ability to
control behavior. If asked to state the color of the ink rather than the word,
the participant must overcome the initial and stronger stimuli to read the
word. These inhibitions show the ability for the brain to regulate behavior.
[24]

Uses
The Stroop effect has been widely used in psychology. Among the most
important uses is the creation of validated psychological tests based on the
Stroop effect, which permit the measurement of a person's selective attention
capacity and skills, as well as their processing speed.[25] It is also used in
conjunction with other neuropsychological assessments to examine a
person's executive processing abilities,[17] and can help in the diagnosis
and characterization of different psychiatric and neurological disorders.
Researchers also use the Stroop effect during brain imaging studies to
investigate regions of the brain that are involved in planning, decision-
making, and managing real-world interference (e.g., texting and driving).
[26]
Stroop test
The Stroop effect has been used to investigate a person's psychological
capacities; since its discovery during the twentieth century, it has become a
popular neuropsychological test.[27]
There are different test variants commonly used in clinical settings, with
differences between them in the number of subtasks, type and number of
stimuli, times for the task, or scoring procedures.[27][28] All versions
have at least two subtasks. In the first trial, the written color
name differs from the ink color it is printed in, and the participant must say
the written word. In the second trial, the participant must name the ink
color instead. However, there can be up to four different subtasks, adding
in some cases stimuli consisting of groups of letters "X" or dots printed in
a given color with the participant having to say the color of the ink; or
names of colors printed in black ink that have to be read.[27] The number
of stimuli varies from fewer than twenty items to more than 150, being
closely related to the scoring system used. While in some test variants the
score is the number of items from a subtask read in a given time, in others
it is the time that it took to complete each of the trials.[27] The number of
errors and various derived scores are also taken into account in
some versions.[27]
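A rough sketch of the two scoring conventions mentioned above; the class and field names are my own, and real clinical versions rely on standardized norms.

```python
from dataclasses import dataclass

# Rough sketch (names are assumptions) of the two scoring conventions above.
@dataclass
class SubtaskResult:
    items_completed: int     # items read or named
    elapsed_seconds: float   # time spent on the subtask
    errors: int = 0          # some versions also record errors

def score_items_in_fixed_time(r: SubtaskResult) -> int:
    """Variant A: score is the number of items completed in the allotted time."""
    return r.items_completed

def score_time_to_complete(r: SubtaskResult) -> float:
    """Variant B: score is the time taken to finish the whole subtask."""
    return r.elapsed_seconds

word_card = SubtaskResult(items_completed=98, elapsed_seconds=45.0, errors=1)
color_card = SubtaskResult(items_completed=55, elapsed_seconds=45.0, errors=4)
print(score_items_in_fixed_time(word_card), score_items_in_fixed_time(color_card))
```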
This test is considered to measure selective attention, cognitive flexibility
and processing speed, and it is used as a tool in the evaluation of executive
functions.[27][28] An increased interference effect is found in disorders
such as brain damage, dementias and other neurodegenerative diseases,
attention-deficit hyperactivity disorder, or a variety of mental disorders
such as schizophrenia, addictions, and depression.[27][29][30]

Variations
The Stroop test has additionally been modified to include other sensory
modalities and variables,[31] to study the effect of bilingualism,[32] or to
investigate the effect of emotions on interference.[33][34][35]
Warped words – words that are deformed, distorted, curved, falsified.
The warped words Stroop effect produces findings similar to those of the
original Stroop effect. Much like the Stroop task, the color named by the
printed word differs from the ink color; however, the words are printed in
such a way that they are more difficult to read (typically
curved).[36] The idea here is that the way the words are printed slows
down both the brain's reaction and processing time, making it harder to
complete the task.

Emotional
The emotional Stroop effect serves as an information processing approach
to emotions. In an emotional Stroop task, an individual is given negative
emotional words like "grief," "violence," and "pain" mixed in with more
neutral words like "clock," "door," and "shoe".[36] Just like in the original
Stroop task, the words are colored and the individual is supposed to name
the color. Research has revealed that depressed individuals are slower to
name the color of a negative word than the color of a
neutral word.[37] While both the emotional Stroop and the classic Stroop
involve the need to suppress irrelevant or distracting information, there are
differences between the two. The emotional Stroop effect emphasizes the
conflict between the emotional relevance to the individual and the word;
whereas, the classic Stroop effect examines the conflict between the
incongruent color and word.[36]

Spatial
The spatial Stroop effect demonstrates interference between the stimulus
location with the location information in the stimuli.[38] In one version of
the spatial Stroop task, an up or down-pointing arrow appears randomly
above or below a central point. Despite being asked to discriminate the
direction of the arrow while ignoring its location, individuals typically
make faster and more accurate responses to congruent stimuli (i.e., a
down-pointing arrow located below the fixation sign) than to incongruent
ones (i.e., an up-pointing arrow located below the fixation sign).[38] A
similar effect, the Simon effect, uses non-spatial stimuli.[13]

Numerical
The Numerical Stroop effect demonstrates the close relationship between
numerical values and physical sizes. Digits symbolize numerical values
but they also have physical sizes. A digit can be presented as physically big
or small (e.g., a large 5 vs. a small 5), irrespective of its numerical value.
Comparing digits in incongruent trials (e.g., a physically large 3 next to a
physically small 5) is slower than comparing digits in congruent trials (e.g.,
a physically large 5 next to a physically small 3), and the difference in
reaction time is termed the numerical
Stroop effect. The effect of irrelevant numerical values on physical
comparisons (similar to the effect of irrelevant color words on responding
to colors) suggests that numerical values are processed automatically (i.e.,
even when they are irrelevant to the task).[39]
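For illustration, a trial in such an experiment can be labeled congruent or incongruent by comparing the numerical and physical orderings; the sketch below uses assumed font sizes.

```python
# Illustrative sketch of numerical Stroop trials: each digit has a numerical
# value and a physical (font) size. A trial is congruent when the numerically
# larger digit is also physically larger. Sizes and labels are assumptions.
def classify_pair(left, right):
    """left/right are (numerical_value, font_size_pt) pairs."""
    (v1, s1), (v2, s2) = left, right
    return "congruent" if (v1 > v2) == (s1 > s2) else "incongruent"

print(classify_pair((5, 24), (3, 12)))   # congruent:   the 5 is larger in both senses
print(classify_pair((3, 24), (5, 12)))   # incongruent: big 3 next to a small 5
```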

Reverse
Another variant of the classic Stroop effect is the reverse Stroop effect. It
occurs during a pointing task. In a reverse Stroop task, individuals are
shown a page with a black square with an incongruent colored word in the
middle — for instance, the word "red" written in the color green — with
four smaller colored squares in the corners.[40] One square would be
colored green, one square would be red, and the two remaining squares
would be other colors. Studies show that if the individual is asked to point
to the square matching the written color (in this case, red), their response
is delayed.[40] Thus, incongruently colored words significantly
interfere with pointing to the appropriate square. However, some research
has shown there is very little interference from incongruent color words
when the objective is to match the color of the word.[17]
In popular culture
The Brain Age: Train Your Brain in Minutes a Day! software program,
produced by Ryūta Kawashima for the Nintendo DS portable video game
system, contains an automated Stroop Test administrator module,
translated into game form.[41]
MythBusters used the Stroop effect test to see if males and females are
cognitively impaired by having an attractive person of the opposite sex in
the room. The "myth", i.e., the hypothesis, was "busted" (disproved).[42]
A Nova episode used the Stroop Effect to illustrate the subtle changes of
the mental flexibility of Mount Everest climbers in relation to altitude.[43]

References

…………………….

https://en.wikipedia.org/wiki/Simon_effect
Spatial interference, similar to the above.
Simon effect
In psychology, the Simon effect is the finding that reaction times are usually
faster, and reactions are usually more accurate, when the stimulus occurs in
the same relative location as the response, even if the stimulus location is
irrelevant to the task. It is named for J. R. Simon who first published the
effect in the late 1960s. Simon's original explanation for the effect was that
there is an innate tendency to respond toward the source of stimulation.

According to the simple models of information processing that
existed at the time, there are three stages of
processing: stimulus identification, response selection,
and response execution or the motor stage. The Simon
Effect is generally thought to involve interference
which occurs in the response-selection stage. This is
similar to, yet distinct from, the interference that
produces the better-known Stroop effect.
Original experiment
In Simon's original study, two lights (the stimulus) were placed on a
rotating circular panel. This device would be rotated at varying degrees
(away from the horizontal plane). Simon wished to see if an alteration of
the spatial relationship, relative to the response keys, affected
performance. Age was also examined as a possible factor in reaction time. As
predicted, the reaction time of the groups increased based on the relative
position of the light stimulus (age turned out not to be a factor). The
reaction time increased by as much as 30% (Simon & Wolf, 1963).
However, what is usually seen as the first genuine demonstration of the
effect that became known as the Simon effect is by Simon & Rudell
(1967). Here, they had participants respond to the words "left" and "right"
that were randomly presented to the left or right ear. Although the auditory
location was completely irrelevant to the task, participants showed marked
increases in reaction latency if the location of the stimulus was not the
same as the required response (if, for example, they were to react left to a
word that was presented in the right ear).

Method
A typical demonstration of the Simon effect involves placing a participant
in front of a computer monitor and a panel with two buttons on it, which
he or she may press. The participant is told that they should press the
button on the right when they see something red appear on the screen, and
the button on the left when they see something green. Participants are
usually told to ignore the location of the stimulus and base their response
on the task-relevant color.
Participants typically react faster to red lights that appear on the right hand
side of the screen by pressing the button on the right of their panel
(congruent trials). Reaction times are typically slower when the red
stimulus appears on the left hand side of the screen and the participant
must push the button on the right of their panel (incongruent trials). The
same, but vice versa, is true for the green stimuli.
This happens despite the fact that the position of the stimulus on the screen
relative to the physical position of the buttons on the panel is irrelevant to
the task and not correlated with which response is correct. The task, after
all, requires the subject to note only the colour of the object (i.e., red or
green) by pushing the corresponding button, and not its position on the
screen.
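The trial logic just described can be summarized in a few lines; in the sketch below the color-to-key mapping is an assumption, and only the color determines the correct response while the stimulus side determines congruency.

```python
# Minimal sketch of the Simon task logic described above: the correct response
# side depends only on the color, while the (task-irrelevant) stimulus side
# decides whether the trial is congruent. The color-to-key mapping is assumed.
RESPONSE_FOR_COLOR = {"red": "right", "green": "left"}

def classify_trial(color, stimulus_side):
    """Return (correct_response_side, congruency_label) for one trial."""
    response = RESPONSE_FOR_COLOR[color]
    congruency = "congruent" if response == stimulus_side else "incongruent"
    return response, congruency

print(classify_trial("red", "right"))   # ('right', 'congruent')   -> typically faster
print(classify_trial("red", "left"))    # ('right', 'incongruent') -> typically slower
```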

Explanation
According to Simon himself (1969), the location of the stimulus, although
irrelevant to the task, directly influences response-selection due to an
automatic tendency to 'react towards the source of the stimulation'.
Although other accounts have been suggested (cf. Hommel, 1993),
explanations for the Simon effect generally refer back to the interference
that occurs in the response-selection stage of decision making.
Neurologically there could be involvement of the dorsolateral prefrontal
cortex, as well as the anterior cingulate cortex, which is thought to be
responsible for conflict monitoring. The Simon Effect shows that location
information cannot be ignored and will affect decision making, even if the
participant knows that the information is irrelevant.
Logical argument for response selection:
The challenge in the Simon effect is said to occur during the response
selection stage of judgment. This is due to two factors which rule out the
stimulus identification stage and the execution stage. In the stimulus
identification stage the participant only needs to be cognitively aware that
a stimulus is present. An error would not occur at this stage unless he or
she were visually impaired or had some sort of stimulus deficit. As well,
an error or delay cannot occur during the execution stage because an action
has already been decided upon in the previous stage (the response
selection stage) and no further decision making takes place (i.e. you cannot
make a change to your response without going back to the second stage).

Practical implications
A knowledge of the Simon effect is useful in the design of man-machine
interfaces. Aircraft cockpits, for example, require a person to react quickly
to a situation. If a pilot is flying a plane and there is a problem with the left
engine, an aircraft with a good man-machine interface design (which most
have) would position the indicator light for the left engine to the left of the
indicator light for the right engine. This interface would display
information in a way that matches the types of responses that people
should make. If it were the other way around, the pilot might respond
incorrectly and adjust the wrong engine.

References
• Simon, J. R., & Wolf, J. D. (1963). Choice reaction times as a
function of angular stimulus-response correspondence and age.
Ergonomics, 6, 99–105.
• Simon, J. R., & Rudell, A. P. (1967). Auditory S-R compatibility: the
effect of an irrelevant cue on information processing. Journal of
Applied Psychology, 51, 300–304.
• Simon, J. R. (1969). Reactions towards the source of stimulation.
Journal of Experimental Psychology, 81, 174–176.
• Hommel, B. (1993). Inverting the Simon effect by intention:
Determinants of direction and extent of effects of irrelevant spatial
information. Psychological Research, 55, 270–279.
doi:10.1007/bf00419687.

…………………..
https://en.wikipedia.org/wiki/Language_processing_in_the_brain
Language processing refers to the way humans use words to
communicate ideas and feelings, and how such communications are
processed and understood. Language processing is considered to be a
uniquely human ability that is not produced with the same grammatical
understanding or systematicity even in humans' closest primate relatives.
[1]
Throughout the 20th century the dominant model[2] for language
processing in the brain was the Geschwind-Lichtheim-Wernicke model,
which is based primarily on the analysis of brain damaged patients.
However, due to improvements in intra-cortical electrophysiological
recordings of monkey and human brains, as well as non-invasive techniques
such as fMRI, PET, MEG and EEG, a dual auditory pathway[3][4] has
been revealed. In accordance with this model, there are two pathways that
connect the auditory cortex to the frontal lobe, each pathway accounting
for different linguistic roles. The auditory ventral stream connects the
auditory cortex with the middle temporal gyrus and temporal pole, which
in turn connects with the inferior frontal gyrus. This pathway is
responsible for sound recognition, and is accordingly known as the
auditory 'what' pathway. The auditory dorsal stream connects the
auditory cortex with the parietal lobe, which in turn connects with inferior
frontal gyrus. In both humans and non-human primates, the auditory dorsal
stream is responsible for sound localization, and is accordingly known as
the auditory 'where' pathway. In humans, this pathway (especially in the
left hemisphere) is also responsible for speech production, speech
repetition, lip-reading, and phonological working memory and long-term
memory. In accordance with the 'from where to what' model of language
evolution,[5][6] the reason the ADS is characterized by such a broad
range of functions is that each function indicates a different stage in
language evolution.

Contents
• 1 Neurological mechanism of language processing
• 2 History of neurolinguistics
• 3 Anatomy of the auditory ventral and dorsal streams
• 4 Auditory ventral stream
• 4.1 Sound Recognition
• 4.2 Sentence comprehension
• 4.3 Bilaterality
• 5 Auditory dorsal stream
• 5.1 Sound localization
• 5.2 Guidance of eye movements
• 5.3 Integration of locations with auditory objects
• 5.4 Integration of phonemes with lip-movements
• 5.5 Phonological long-term memory
• 5.6 Phonological working memory
• 5.7 Evolution of language
• 6 See also
• 7 References

Throughout the 20th century, our knowledge of language processing in the
brain was dominated by the Wernicke-Lichtheim-Geschwind model.[7][2]
[8] This model is primarily based on research conducted on brain-damaged
individuals who were reported to possess a variety of language related
disorders. In accordance with this model, words are perceived via a
specialized word reception center (Wernicke’s area) that is located in the
left temporoparietal junction. This region then projects to a word
production center (Broca’s area) that is located in the left inferior frontal
gyrus. Because almost all language input was thought to funnel via
Wernicke’s area and all language output to funnel via Broca’s area, it
became extremely difficult to identify the basic properties of each region.
This lack of clear definition for the contribution of Wernicke’s and Broca’s
regions to human language rendered it extremely difficult to identify their
homologues in other primates.[9] With the advent of the MRI and its
application for lesion mappings, however, it was shown that this model is
based on incorrect correlations between symptoms and lesions.[10][11]
[12][13][14][15][16] The refutation of such an influential and dominant
model opened the door to new models of language processing in the brain.
Anatomy of the auditory ventral and dorsal streams
In the last two decades, significant advances occurred in our understanding
of the neural processing of sounds in primates. Initially by recording of
neural activity in the auditory cortices of monkeys[17][18] and later
elaborated via histological staining[19][20][21] and fMRI scanning
studies,[22] 3 auditory fields were identified in the primary auditory
cortex, and 9 associative auditory fields were shown to surround them
(Figure 1 top left). Anatomical tracing and lesion studies further indicated
a separation between the anterior and posterior auditory fields, with the
anterior primary auditory fields (areas R-RT) projecting to the anterior
associative auditory fields (areas AL-RTL), and the posterior primary
auditory field (area A1) projecting to the posterior associative auditory
fields (areas CL-CM).[19][23][24][25] Recently, evidence accumulated
that indicates homology between the human and monkey auditory fields.
In humans, histological staining studies revealed two separate auditory
fields in the primary auditory region of Heschl’s gyrus,[26][27] and by
mapping the tonotopic organization of the human primary auditory fields
with high resolution fMRI and comparing it to the tonotopic organization
of the monkey primary auditory fields, homology was established between
the human anterior primary auditory field and monkey area R (denoted in
humans as area hR) and the human posterior primary auditory field and the
monkey area A1 (denoted in humans as area hA1).[28][29][30][31][32]
Intra-cortical recordings from the human auditory cortex further
demonstrated similar patterns of connectivity to the auditory cortex of the
monkey. Recording from the surface of the auditory cortex (supra-
temporal plane) reported that the anterior Heschl’s gyrus (area hR) projects
primarily to the middle-anterior superior temporal gyrus (mSTG-aSTG)
and the posterior Heschl’s gyrus (area hA1) projects primarily to the
posterior superior temporal gyrus (pSTG) and the planum temporale (area
PT; Figure 1 top right).[33][34] Consistent with connections from area hR
to the aSTG and hA1 to the pSTG is an fMRI study of a patient with
impaired sound recognition (auditory agnosia), who showed reduced
bilateral activation in areas hR and aSTG but spared
activation in the mSTG-pSTG.[35] This connectivity pattern is also
corroborated by a study that recorded activation from the lateral surface of
the auditory cortex and reported simultaneous non-overlapping
activation clusters in the pSTG and mSTG-aSTG while listening to
sounds.[36]
Downstream to the auditory cortex, anatomical tracing studies in monkeys
delineated projections from the anterior associative auditory fields (areas
AL-RTL) to ventral prefrontal and premotor cortices in the inferior frontal
gyrus (IFG)[37][38] and amygdala.[39] Cortical recording and functional
imaging studies in macaque monkeys further elaborated on this processing
stream by showing that acoustic information flows from the anterior
auditory cortex to the temporal pole (TP) and then to the IFG.[40][41][42]
[43][44][45] This pathway is commonly referred to as the auditory ventral
stream (AVS; Figure 1, bottom left-red arrows). In contrast to the anterior
auditory fields, tracing studies reported that the posterior auditory fields
(areas CL-CM) project primarily to dorsolateral prefrontal and premotor
cortices (although some projections do terminate in the IFG).[46][38]
Cortical recordings and anatomical tracing studies in monkeys further
provided evidence that this processing stream flows from the posterior
auditory fields to the frontal lobe via a relay station in the intra-parietal
sulcus (IPS).[47][48][49][50][51][52] This pathway is commonly referred
to as the auditory dorsal stream (ADS; Figure 1, bottom left-blue arrows).
Comparing the white matter pathways involved in communication in
humans and monkeys with diffusion tensor imaging techniques indicates
similar connections of the AVS and ADS in the two species (Monkey,
[51] Human[53][54][55][56][57][58]). In humans, the pSTG was shown to
project to the parietal lobe (sylvian parietal-temporal junction- inferior
parietal lobule; Spt-IPL), and from there to dorsolateral prefrontal and
premotor cortices (Figure 1, bottom right-blue arrows), and the aSTG was
shown to project to the anterior temporal lobe (middle temporal gyrus-
temporal pole; MTG-TP) and from there to the IFG (Figure 1 bottom right-
red arrows).

Auditory ventral stream


Sound Recognition
Accumulative converging evidence indicates that the AVS is involved in
recognizing auditory objects. At the level of the primary auditory cortex,
recordings from monkeys showed a higher percentage of neurons selective
for learned melodic sequences in area R than area A1,[59] and a study in
humans demonstrated more selectivity for heard syllables in the anterior
Heschl’s gyrus (area hR) than in the posterior Heschl’s gyrus (area hA1).[60] In
downstream associative auditory fields, studies from both monkeys and
humans reported that the border between the anterior and posterior
auditory fields (Figure 1-area PC in the monkey and mSTG in the human)
processes pitch attributes that are necessary for the recognition of auditory
objects.[17] The anterior auditory fields of monkeys were also shown to be
selective for conspecific vocalizations in intra-cortical
recordings[40][18][61] and functional imaging.[62][41][42] One
fMRI monkey study further demonstrated a role of the aSTG in the
recognition of individual voices.[41] The role of the human mSTG-aSTG
in sound recognition was demonstrated via functional imaging studies that
correlated activity in this region with isolation of auditory objects from
background noise,[63][64] and with the recognition of spoken words,[65]
[66][67][68][69][70][71] voices,[72] melodies,[73][74] environmental
sounds,[75][76][77] and non-speech communicative sounds.[78] A meta-
analysis of fMRI studies[79] further demonstrated functional dissociation
between the left mSTG and aSTG, with the former processing short speech
units (phonemes) and the latter processing longer units (e.g., words,
environmental sounds). A study that recorded neural activity directly from
the left pSTG and aSTG reported that the aSTG, but not pSTG, was more
active when the patient listened to speech in her native language than to an
unfamiliar foreign language.[80] Consistently, electro-stimulation to the
aSTG of this patient resulted in impaired speech perception[80] (see
also[81][82] for similar results). Intra-cortical recordings from the right
and left aSTG further demonstrated that speech is processed laterally to
music.[80] An fMRI study of a patient with impaired sound recognition
(auditory agnosia) due to brainstem damage also showed reduced
activation in areas hR and aSTG of both hemispheres when hearing spoken
words and environmental sounds.[35] Recordings from the anterior
auditory cortex of monkeys while maintaining learned sounds in working
memory,[45] and the debilitating effect of induced lesions to this region on
working memory recall,[83][84][85] further implicate the AVS in
maintaining the perceived auditory objects in working memory. In humans,
area mSTG-aSTG was also reported to be active during rehearsal of heard
syllables with MEG[86] and fMRI.[87] The latter study further
demonstrated that working memory in the AVS is for the acoustic
properties of spoken words and that it is independent of working memory
in the ADS, which mediates inner speech. Working memory studies in
monkeys also suggest that in monkeys, in contrast to humans, the AVS is
the dominant working memory store.[88]
In humans, downstream to the aSTG, the MTG and TP are thought to
constitute the semantic lexicon, which is a long-term memory repository of
audio-visual representations that are interconnected on the basis of
semantic relationships. (See also the reviews by[3][4] discussing this
topic). The primary evidence for this role of the MTG-TP is that patients
with damage to this region (e.g., patients with semantic dementia or herpes
simplex virus encephalitis) are reported[89][90] to have an impaired ability to
describe visual and auditory objects and a tendency to commit semantic
errors when naming objects (i.e., semantic paraphasia). Semantic
paraphasias were also expressed by aphasic patients with left MTG-TP
damage[13][91] and were shown to occur in non-aphasic patients after
electro-stimulation to this region[92][82] or the underlying white matter
pathway.[93] Two meta-analyses of the fMRI literature also reported that
the anterior MTG and TP were consistently active during semantic analysis
of speech and text;[65][94] and an intra-cortical recording study correlated
neural discharge in the MTG with the comprehension of intelligible
sentences.[95]

Sentence comprehension
In addition to extracting meaning from sounds, the MTG-TP region of the
AVS appears to have a role in sentence comprehension, possibly by
merging concepts together (e.g., merging the concepts 'blue' and 'shirt' to
create the concept of a 'blue shirt'). The role of the MTG in extracting
meaning from sentences has been demonstrated in functional imaging
studies reporting stronger activation in the anterior MTG when proper
sentences are contrasted with lists of words, sentences in a foreign or
nonsense language, scrambled sentences, sentences with semantic or
syntactic violations and sentence-like sequences of environmental sounds.
[96][97][98][99][100][101][102][103] One fMRI study[104] in which
participants were instructed to read a story further correlated activity in the
anterior MTG with the amount of semantic and syntactic content each
sentence contained. An EEG study[105] that contrasted cortical activity
while reading sentences with and without syntactic violations in healthy
participants and patients with MTG-TP damage, concluded that the MTG-
TP in both hemispheres participate in the automatic (rule based) stage of
syntactic analysis (ELAN component), and that the left MTG-TP is also
involved in a later controlled stage of syntax analysis (P600 component).
Patients with damage to the MTG-TP region have also been reported with
impaired sentence comprehension.[13][106][107] See review[108] for
more information on this topic.

Bilaterality
In contradiction to the Wernicke-Lichtheim-Geschwind model, which holds
that sound recognition occurs solely in the left hemisphere, studies
that examined the properties of the right or left hemisphere in isolation via
unilateral hemispheric anesthesia (i.e., the WADA procedure[109]) or
intra-cortical recordings from each hemisphere[95] provided evidence that
sound recognition is processed bilaterally. Moreover, a study that
instructed patients with disconnected hemispheres (i.e., split-brain
patients) to match spoken words to written words presented to the right or
left hemifields, reported vocabulary in the right hemisphere that almost
matches in size with the left hemisphere[110] (The right hemisphere
vocabulary was equivalent to the vocabulary of a healthy 11-year-old
child). This bilateral recognition of sounds is also consistent with the
finding that a unilateral lesion to the auditory cortex rarely results in a
deficit in auditory comprehension (i.e., auditory agnosia), whereas a second
lesion to the remaining hemisphere (which could occur years later) does.
[111][112] Finally, as mentioned earlier, an fMRI scan of an auditory
agnosia patient demonstrated bilateral reduced activation in the anterior
auditory cortices,[35] and electro-stimulation to these regions in
both hemispheres resulted in impaired speech recognition.[80]

Auditory dorsal stream


Sound localization
The most established role of the ADS is in audiospatial processing. This
is evidenced via studies that recorded neural activity from the auditory
cortex of monkeys, and correlated the strongest selectivity to changes in
sound location with the posterior auditory fields (areas CM-CL),
intermediate selectivity with primary area A1, and very weak selectivity
with the anterior auditory fields.[113][114][18][115][116] In humans,
behavioral studies of brain damaged patients[117][118] and EEG
recordings from healthy participants[119] demonstrated that sound
localization is processed independently of sound recognition, and thus is
likely independent of processing in the AVS. Consistently, a working
memory study[120] reported two independent working memory storage
spaces, one for acoustic properties and one for locations. Functional
imaging studies that contrasted sound discrimination and sound
localization[121][122][123][124][77][125] reported a correlation between sound
discrimination and activation in the mSTG-aSTG, and correlation between
sound localization and activation in the pSTG and PT, with some studies
further reporting of activation in the Spt-IPL region and frontal lobe.[126]
[76][127] Some fMRI studies also reported that the activation in the pSTG
and Spt-IPL regions increased when individuals perceived sounds in
motion.[128][129][130] EEG studies using source-localization also
identified the pSTG-Spt region of the ADS as the sound localization
processing center.[131][132] A combined fMRI and MEG study
corroborated the role of the ADS with audiospatial processing by
demonstrating that changes in sound location resulted in activation
spreading from Heschl’s gyrus posteriorly along the pSTG and terminating
in the IPL.[133] In another MEG study, the IPL and frontal lobe were
shown active during maintenance of sound locations in working memory.
[134]

Guidance of eye movements


In addition to localizing sounds, the ADS appears also to encode the sound
location in memory, and to use this information for guiding eye
movements. Evidence for the role of the ADS in encoding sounds into
working memory is provided via studies that trained monkeys in a delayed
matching-to-sample task, and reported activation in areas CM-CL[135]
and IPS[136][137] during the delay phase. Influence of this spatial
information on eye movements occurs via projections of the ADS into the
frontal eye field (FEF; a premotor area that is responsible for guiding eye
movements) located in the frontal lobe. This is demonstrated with
anatomical tracing studies that reported connections between areas CM-
CL-IPS and the FEF,[46][138] and electro-physiological recordings that
reported neural activity in both the IPS[136][137][139][138] and the
FEF[140][141] prior to conducting saccadic eye-movements toward
auditory targets.

Integration of locations with auditory objects


A surprising function of the ADS is in the discrimination and possible
identification of sounds, a function commonly ascribed to the anterior
STG and STS of the AVS. However, electrophysiological recordings from
the posterior auditory cortex (areas CM-CL),[142][115] and IPS of
monkeys,[143] as well as a PET monkey study,[144] reported neurons that
are selective to monkey vocalizations. One of these studies[115] also
reported neurons in areas CM-CL that are characterized by dual
selectivity for both a vocalization and a sound location. A monkey study
that recorded electrophysiological activity from neurons in the posterior
insula also reported neurons that discriminate monkey calls based on the
identity of the speaker.[145] Similarly, human fMRI studies that instructed
participants to discriminate voices reported an activation cluster in the
pSTG.[146][147][148] A study that recorded activity from the auditory
cortex of an epileptic patient further reported that the pSTG, but not aSTG,
was selective for the presence of a new speaker.[80] A study that scanned
fetuses in their third trimester of pregnancy with fMRI further reported
activation in area Spt when the hearing of voices was contrasted with pure
tones.[149] The researchers also reported that a sub-region of area Spt was
more selective to their mother’s voice than unfamiliar female voices. This
study thus suggests that the ADS is capable of identifying voices in
addition to discriminating them.
The manner in which sound recognition in the pSTG-PT-Spt regions of the
ADS differs from sound recognition in the anterior STG and STS of the
AVS[146][72][150][40][41] was shown via electro-stimulation of an
epileptic patient.[80] This study reported that electro-stimulation of the
aSTG resulted in changes in the perceived pitch of voices (including the
patient’s own voice), whereas electro-stimulation of the pSTG resulted in
reports that her voice was “drifting away.” This report indicates a role for
the pSTG in the integration of sound location with an individual voice.
Consistent with this role of the ADS is a study reporting that patients with
AVS damage but spared ADS (surgical removal of the anterior
STG/MTG) were no longer capable of isolating environmental sounds in
the contralesional space, whereas their ability of isolating and
discriminating human voices remained intact.[151] Also supporting a role for
the pSTG-PT-Spt region of the ADS in integrating auditory objects with sound
locations are studies that demonstrate a role for this region in the
isolation of specific sounds. For example, two functional imaging studies
correlated circumscribed pSTG-PT activation with the spreading of sounds
into an increasing number of locations.[152][153] Accordingly, an fMRI
study correlated the perception of acoustic cues that are necessary for
separating musical sounds (pitch chroma) with pSTG-PT activation.[125]

Integration of phonemes with lip-movements


Although sound perception is primarily ascribed to the AVS, the ADS
appears associated with several aspects of speech perception. For instance,
in a meta-analysis of fMRI studies[154] in which the auditory perception
of phonemes was contrasted with closely matching sounds, and the studies
were rated for the required level of attention, the authors concluded that
attention to phonemes correlates with strong activation in the pSTG-pSTS
region. An intra-cortical recording study in which participants were
instructed to identify syllables also correlated the hearing of each syllable
with its own activation pattern in the pSTG.[155] Consistent with the role
of the ADS in discriminating phonemes,[154] studies have ascribed the
integration of phonemes and their corresponding lip movements (i.e.,
visemes) to the pSTS of the ADS. For example, an fMRI study[156] has
correlated activation in the pSTS with the McGurk illusion (in which
hearing the syllable “ba” while seeing the viseme “ga” results in the
perception of the syllable “da”). Another study has found that using
magnetic stimulation to interfere with processing in this area further
disrupts the McGurk illusion.[157] The association of the pSTS with the
audio-visual integration of speech has also been demonstrated in a study
that presented participants with pictures of faces and spoken words of
varying quality. The study reported that the pSTS selects for the combined
increase of the clarity of faces and spoken words.[158] Corroborating
evidence has been provided by an fMRI study[159] that contrasted the
perception of audio-visual speech with audio-visual non-speech (pictures
and sounds of tools). This study reported the detection of speech-selective
compartments in the pSTS. In addition, an fMRI study[160] that contrasted
congruent audio-visual speech with incongruent speech (pictures of still
faces) reported pSTS activation. For a review presenting additional
converging evidence regarding the role of the pSTS and ADS in phoneme-
viseme integration, see [161].

Phonological long-term memory


A growing body of evidence indicates that humans, in addition to having a
long-term store for word meanings located in the MTG-TP of the AVS
(i.e., the semantic lexicon), also have a long-term store for the names of
objects located in the Spt-IPL region of the ADS (i.e., the phonological
lexicon). For example, a study[162][163] examining patients with damage
to the AVS (MTG damage) or damage to the ADS (IPL damage) reported
that MTG damage results in individuals incorrectly identifying objects
(e.g., calling a “goat” a “sheep,” an example of semantic paraphasia).
Conversely, IPL damage results in individuals correctly identifying the
object but incorrectly pronouncing its name (e.g., saying “gof” instead of
“goat,” an example of phonemic paraphasia). Semantic paraphasia errors
have also been reported in patients receiving intra-cortical electrical
stimulation of the AVS (MTG), and phonemic paraphasia errors have been
reported in patients whose ADS (pSTG, Spt, and IPL) received intra-
cortical electrical stimulation.[82][164][93] Further supporting the role of
the ADS in object naming is an MEG study that localized activity in the
IPL during the learning and during the recall of object names.[165] A study
that induced magnetic interference in participants’ IPL while they
answered questions about an object reported that the participants were
capable of answering questions regarding the object’s characteristics or
perceptual attributes but were impaired when asked whether the word
contained two or three syllables.[166] An MEG study has also correlated
recovery from anomia (a disorder characterized by an impaired ability to
name objects) with changes in IPL activation.[167] Further supporting the
role of the IPL in encoding the sounds of words are studies reporting that,
compared to monolinguals, bilinguals have greater cortical density in the
IPL but not the MTG.[168][169] Because evidence shows that, in
bilinguals, different phonological representations of the same word share
the same semantic representation,[170] this increase in density in the IPL
verifies the existence of the phonological lexicon: the semantic lexicon of
bilinguals is expected to be similar in size to the semantic lexicon of
monolinguals, whereas their phonological lexicon should be twice the size.
Consistent with this finding, cortical density in the IPL of monolinguals
also correlates with vocabulary size.[171][172] Notably, the functional
dissociation of the AVS and ADS in object-naming tasks is supported by
cumulative evidence from reading research showing that semantic errors
are correlated with MTG impairment and phonemic errors with IPL
impairment. Based on these associations, the semantic analysis of text has
been linked to the inferior-temporal gyrus and MTG, and the phonological
analysis of text has been linked to the pSTG-Spt-IPL.[173][174][175]

Phonological working memory


Working memory is often treated as the temporary activation of the
representations stored in long-term memory that are used for speech
(phonological representations). This sharing of resources between working
memory and speech is evidenced by the finding[176][177] that speaking
during rehearsal results in a significant reduction in the number of items
that can be recalled from working memory (articulatory suppression). The
involvement of the phonological lexicon in working memory is also
evidenced by the tendency of individuals to make more errors when
recalling words from a recently learned list of phonologically similar
words than from a list of phonologically dissimilar words (the
phonological similarity effect).[176] Studies have also found that speech
errors committed during reading are remarkably similar to speech errors
made during the recall of recently learned, phonologically similar words
from working memory.[178] Patients with IPL damage have also been
observed to exhibit both speech production errors and impaired working
memory.[179][180][181][182] Finally, the view that verbal working
memory is the result of temporarily activating phonological
representations in the ADS is compatible with recent models describing
working memory as the combination of maintaining representations in the
mechanism of attention in parallel to temporarily activating representations
in long-term memory.[177][183][184][185] It has been argued that the role
of the ADS in the rehearsal of lists of words is the reason this pathway is
active during sentence comprehension.[186] For a review of the role of the
ADS in working memory, see [187].

The 'from where to what' model of language evolution hypothesizes seven
stages of language evolution:
1. The origin of speech is the exchange of contact calls between mothers and
offspring, used to relocate each other in cases of separation.
2. Offspring of early Homo modified the contact calls with intonations in
order to emit two types of contact calls: contact calls that signal a low
level of distress and contact calls that signal a high level of distress.
3. The use of two types of contact calls enabled the first question-answer
conversation. In this scenario, the offspring emits a low-level distress call
to express a desire to interact with an object, and the mother responds with
a low-level distress call to enable the interaction or a high-level distress
call to prohibit it.
4. The use of intonations improved over time, and eventually individuals
acquired sufficient vocal control to invent new words for objects.
5. At first, offspring learned the calls from their parents by imitating their
lip movements.
6. As the learning of calls improved, babies learned new calls (i.e.,
phonemes) through lip imitation only during infancy. After that period, the
memory of phonemes lasted for a lifetime, and older children became capable
of learning new calls (through mimicry) without observing their parents'
lip movements.
7. Individuals became capable of rehearsing sequences of calls. This enabled
the learning of words with several syllables, which increased vocabulary
size. Further developments to the brain circuit responsible for rehearsing
polysyllabic words resulted in individuals capable of rehearsing lists of
words (phonological working memory), which served as the platform for
communication with sentences.

Evolution of language
It is presently unknown why so many functions are ascribed to the human
ADS. An attempt to unify these functions under a single framework was
conducted in the ‘From where to what’ model of language evolution.[5][6]
In accordance with this model, each function of the ADS indicates a
different intermediate phase in the evolution of language. The roles of
sound localization and integration of sound location with voices and
auditory objects is interpreted as evidence that the origin of speech is the
exchange of contact calls (calls used to report location in cases of
separation) between mothers and offspring. The role of the ADS in the
perception and production of intonations is interpreted as evidence that
speech began by modifying the contact calls with intonations, possibly for
distinguishing alarm contact calls from safe contact calls. The role of the
ADS in encoding the names of objects (phonological long-term memory)
is interpreted as evidence of gradual transition from modifying calls with
intonations to complete vocal control. The role of the ADS in the
integration of lip movements with phonemes and in speech repetition is
interpreted as evidence that spoken words were learned by infants
mimicking their parents’ vocalizations, initially by imitating their lip
movements. The role of the ADS in phonological working memory is
interpreted as evidence that the words learned through mimicry remained
active in the ADS even when not spoken. This resulted with individuals
capable of rehearsing a list of vocalizations, which enabled the production
of words with several syllables. Further developments in the ADS enabled
the rehearsal of lists of words, which provided the infra-structure for
communicating with sentences.
https://imotions.com/blog/the-stroop-effect/
To see and interact with the world, we first need to understand it. Visual
processing is one way we do this, and is composed of many parts. When
we see an object, we don’t just see its physical attributes, we also
comprehend the meaning behind them. We know that a chair needs legs
because the seat needs to be raised, we know that the wood comes from
trees, we know we could sit in it, and so on. There is information that we
process about the things we see without even being aware of that
processing.
So when John Ridley Stroop asked people to read words on a sheet of
paper in 1935, he knew that their automatic processing would come into
play, and could offer a breakthrough insight into brain function.
Research from as early as 1894 had shown that associations of even
nonsense syllables would become embedded into a person’s
understanding, and could interfere with how they processed and recalled
these syllables, despite no real meaning being attached to them. It was
therefore clear, even in the beginnings of contemporary psychological
research, that associations are powerful and pervasive.

What is the Stroop Effect?


Stroop’s innovation was to show, clearly and definitively, that our
embedded knowledge about our environment impacts how we interact
with it. His research method is now one of the most famous and well-
known examples of a psychological test, and is elegant in its simplicity.
First, the participant reads a list of words for colors, but the words are
printed in a color different from the color they name. For example, the word
“orange” would appear as text, but printed in green. The participant’s
reading time of the words on the list is then recorded. Next, the participant
repeats the test with a new list of words, but this time names the colors
that the words are printed in. So, when the word “orange” is printed in
green, the participant should say “green” and move on to the next word.
Below is a brief example of the Stroop test, try it out!
First, time yourself while you read the following text, ignoring the colors
the words are printed in.
Image that I deleted
Now time yourself while you state the colors of the following words,
ignoring the actual text (as best as you can!).

Image that I deleted


In most cases, it takes longer to state the colors of the words, rather than to
read the text they are printed in, despite the incongruence being essentially
the same across both lists (i.e. both show words in the wrong color). It
appears we are more influenced by the physical text than by the text
color.

Why does this happen?


What this reveals is that the brain can’t help but read. As habitual readers,
we encounter and comprehend words on such a persistent basis that the
reading occurs almost effortlessly, whereas declaration of a color requires
more cognitive effort. When there is a conflict between these two sources
of information, our cognitive load is increased, and our brains have to
work harder to resolve the conflict. Performing these tasks
(preventing reading, processing word color, and resolving information
conflict) ultimately slows down our responses, and makes the task take
longer.
There are a few theories that differ slightly in how they define the
Stroop Effect, yet their differences mostly lie in which part they
emphasize. For example, one theory emphasizes the automaticity of
reading as the principal cause of Stroop interference, while another
emphasizes the mental prioritizing we perform when reading, as
compared to naming colors. While differences in theories may therefore
exist, all essentially converge on the central premise that reading is a
simpler and more automatic task than stating colors, and that a conflict
between the two will increase the time needed for processing.

What can we use it for?


Using this paradigm, we can assess an individual’s cognitive processing
speed, their attentional capacity, and their level of cognitive control
(otherwise known as their executive function). These skills and facets
underlie so many of the ways in which we interact with the world, suggesting
that this test offers a brief – yet incisive – view into human thought and
behavior.

The test is also used in a variety of ways that differ from the original, in an
effort to exploit the experimental setup to reveal more about a clinical
population, for example. Psychiatric and neurodevelopmental conditions such as
schizophrenia and autism have been examined with the Stroop test.
Furthermore, there are several variations and differing implementations of
the test available, allowing different aspects of cognition to be homed in on.
One of these variations is the “emotional Stroop test” in which participants
complete both the original Stroop, and a version which has both neutral
and emotionally charged words. The resulting text features words such as
“pain” or “joy” amongst everyday words. Research has shown that anxious
people are likely to experience more interference (i.e. more time spent
declaring word color) with emotionally charged words, suggesting that the
emotional content of those words captures more of their attention.
Experimental designs like this allow researchers to target and observe
cognitive processes that underlie explicit thought. The test reveals the
working of non-conscious brain function and reduces some of the biases
that can otherwise emerge in testing.
Other experimental setups utilize the lessons of the Stroop Effect – that
incongruent information will require more mental resources to resolve
correctly – with numbers, rather than words. Termed the “Numerical
Stroop Effect”, this experiment has shown that presenting numbers of
incongruent sizes next to each other will slow down reading and
comprehension. For an example, see the image below:

Examples of the different test types that are used in the Numerical
Stroop.

This experiment shows that, with all else being controlled for,
incongruence in numerical size will cause the greatest interference,
increasing the delay in comprehension. An interesting feature of the
Numerical Stroop is that the interference is found in both directions –
when numerical value and physical size are incongruent, a delay appears
both when reporting the size and when reporting the numbers. This
effect reveals that automatic processing is not limited to words,
suggesting that the brain looks for familiar patterns in a variety of presented
stimuli and struggles when those patterns are violated.
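To make that design concrete, here is a minimal Python sketch (our illustration, not taken from the research described above) of how congruent and incongruent Numerical Stroop trials could be generated. The digit range, font sizes, and trial count are assumptions, and a real experiment would render these pairs in a stimulus presentation tool.

import random

FONT_SMALL, FONT_LARGE = 24, 48  # hypothetical point sizes

def make_trial(congruent: bool):
    """Return a (digit, font_size) pair for the left and right positions."""
    a, b = random.sample(range(1, 10), 2)          # two distinct digits
    larger, smaller = max(a, b), min(a, b)
    if congruent:
        # numerically larger digit is also physically larger
        pair = [(larger, FONT_LARGE), (smaller, FONT_SMALL)]
    else:
        # incongruent: numerically larger digit shown physically smaller
        pair = [(larger, FONT_SMALL), (smaller, FONT_LARGE)]
    random.shuffle(pair)                           # randomize left/right placement
    return pair

trials = [make_trial(congruent=(i % 2 == 0)) for i in range(20)]
random.shuffle(trials)
for (d1, s1), (d2, s2) in trials:
    print(f"left: {d1} at {s1}pt   right: {d2} at {s2}pt")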

How can the Stroop test be used?


The Stroop test can be simply administered with a basic experimental
setup. At its most fundamental, all you need is an image of the Stroop test
words, a stopwatch, and someone to record the time and answers (and a
willing participant!). However, if you want to gain more insights from the
data, there are plenty of ways to take the test further. With iMotions you
can simply set up and present the Stroop test, while also expanding the
data collection possibilities. Using the survey function, the test can be
quickly and simply added. This can be done with either the built-in
iMotions survey tool, or with the Qualtrics survey tool, which allows even
more metrics to be taken into account.
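As a rough illustration of that most basic setup, the following Python sketch generates incongruent word-color stimuli and times typed answers in a terminal. It is a stand-in for the stopwatch-and-paper version rather than the iMotions workflow: the color list and trial count are arbitrary, and typing a response will inflate reaction times compared with a spoken answer.

import random
import time

# ANSI escape codes for terminal colors (supported by most modern terminals).
ANSI = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m",
        "yellow": "\033[33m", "magenta": "\033[35m"}
RESET = "\033[0m"
COLORS = list(ANSI)

def make_stimuli(n=10):
    """Pair each color word with a different, randomly chosen ink color."""
    stimuli = []
    for _ in range(n):
        word = random.choice(COLORS)
        ink = random.choice([c for c in COLORS if c != word])
        stimuli.append((word, ink))
    return stimuli

def run_block(stimuli):
    """Print each word in its ink color, ask for the ink color, time the answer."""
    results = []
    for word, ink in stimuli:
        print(ANSI[ink] + word.upper() + RESET)
        start = time.perf_counter()
        answer = input("Ink color? ").strip().lower()
        rt = time.perf_counter() - start
        results.append({"word": word, "ink": ink,
                        "correct": answer == ink, "rt": rt})
    return results

if __name__ == "__main__":
    data = run_block(make_stimuli())
    print(f"mean RT: {sum(r['rt'] for r in data) / len(data):.2f} s, "
          f"accuracy: {sum(r['correct'] for r in data) / len(data):.0%}")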
The ability to record from various synchronized biometric devices opens
up new avenues for research. For example, with an eye-tracking tool, you
can examine exactly how long each participant looks at each word, and
their precise speed of comprehension. Using areas of interest (AOIs) can
be of particular use as this allows you to analyze specific parts of the scene
in isolation, or compared to the data for the scene as a whole, or even with
other AOIs. It’s then possible to determine which words demanded the
most visual attention, allowing you to accurately dissect the data in fine
detail.
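To show what an AOI analysis boils down to, here is a small Python sketch under assumed inputs: gaze samples as (timestamp, x, y) tuples and AOIs as named rectangles. The data format, sampling rate, and AOI coordinates are hypothetical and not an iMotions export.

from typing import Dict, List, Optional, Tuple

# Assumed formats: gaze samples as (timestamp_s, x_px, y_px),
# AOIs as name -> (left, top, right, bottom) rectangles in pixels.
Sample = Tuple[float, float, float]
Rect = Tuple[float, float, float, float]

def dwell_times(samples: List[Sample], aois: Dict[str, Rect],
                sample_interval: float = 1 / 60) -> Dict[str, float]:
    """Sum the time (seconds) that gaze samples fall inside each AOI."""
    totals = {name: 0.0 for name in aois}
    for _, x, y in samples:
        for name, (left, top, right, bottom) in aois.items():
            if left <= x <= right and top <= y <= bottom:
                totals[name] += sample_interval
    return totals

def time_to_first_fixation(samples: List[Sample], aoi: Rect) -> Optional[float]:
    """Return the timestamp of the first sample inside the AOI, if any."""
    left, top, right, bottom = aoi
    for t, x, y in samples:
        if left <= x <= right and top <= y <= bottom:
            return t
    return None

# Toy usage: two word AOIs and one second of 60 Hz gaze samples.
aois = {"word_orange": (100, 100, 300, 160), "word_green": (100, 200, 300, 260)}
samples = [(i / 60, 150, 130 if i < 30 else 230) for i in range(60)]
print(dwell_times(samples, aois))
print(time_to_first_fixation(samples, aois["word_green"]))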
Below are a few examples of that idea in practice, each of which took only
minutes to set up and start.
First, we’ve added an image of a Stroop test to the survey function – one
version is essentially the same as the original, while another has neutral
words mixed with food-related words. This version of the Stroop test
would require that the participant verbally declare the color of each word –
audio recording could help in accurately measuring participant responses.
We have also included an example using a multiple choice paradigm that is
detailed below, and using the Qualtrics survey function below that.

The normal Stroop test inserted as a survey image into iMotions.

A modified Stroop test inserted as a survey image into iMotions.

After we’ve set up eye-tracking and added a participant list, we can add
AOIs to the words, so that we can view and analyze data for each. Below
is an image of how this looks:

The Stroop test in iMotions with AOIs placed over the color words.

After running through a few participants, we can start to visualize and
analyze their data, producing both detailed AOI data and heatmaps
showing overview data. Below are examples of what this data could look
like. Of course, more detailed data is available to export and analyze, if
desired.

Data displayed in iMotions showing the time-to-first-fixation (TTFF),
the time spent in seconds looking at the AOI (which is only shown to
one decimal point in the image above), and the ratio of participants
who viewed the AOI.

A heatmap showing the level of fixation across the words shown in the
Stroop test.

Alternatively, we can insert each word of the Stroop test within the survey
setup, and use the keyboard input function for the participant to respond
with each word's ink color. This would also allow us to investigate the error
rate in a more systematic manner. This is shown across the two images below.

The survey setup with an incongruent word-color stimulus. Several of
these surveys can be quickly arranged for multiple tests.

How the above survey appears to the participant. The participant is
required to choose one of the predefined colors before advancing to
the next question.

Within this paradigm, eye movements can also be measured, providing
information about the amount of time taken to process the information.
The approach may take longer for each participant, and remembering the
keyboard-color combinations may encumber their cognitive processing
(although this shouldn't present a problem if this approach is used with the
correct controls); however, it does allow a finer dissection of eye
movements for each word, and also informs us about the error rate from
incorrect answers.
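As a sketch of how such keyed responses could then be summarized, the snippet below computes the mean reaction time of correct trials and the error rate per condition. The trial log fields and values are assumptions for illustration, not an iMotions export format.

from statistics import mean

# Hypothetical trial log: one dict per keyboard response.
trials = [
    {"condition": "congruent",   "correct": True,  "rt_ms": 520},
    {"condition": "congruent",   "correct": True,  "rt_ms": 480},
    {"condition": "incongruent", "correct": False, "rt_ms": 910},
    {"condition": "incongruent", "correct": True,  "rt_ms": 760},
]

def summarize(trials, condition):
    """Mean RT of correct trials and error rate for one condition."""
    subset = [t for t in trials if t["condition"] == condition]
    correct = [t for t in subset if t["correct"]]
    return {
        "mean_rt_ms": mean(t["rt_ms"] for t in correct) if correct else None,
        "error_rate": 1 - len(correct) / len(subset) if subset else None,
    }

for cond in ("congruent", "incongruent"):
    print(cond, summarize(trials, cond))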

Using Qualtrics
Finally, we can see how this test is implemented in iMotions using the
Qualtrics survey function. This is easily implemented, and appears in a
similar way to the above surveys that are built by iMotions. One of the
advantages of using Qualtrics is that feedback to participant answers can
be immediately provided, should this be desired. The following image
shows how the stimulus presentation appears on screen.

Qualtrics implementation of the Stroop test.

The participant can then click on the corresponding color to answer the
question. If an incorrect answer is chosen, the response would be shown as
below.

Feedback for participant in Qualtrics.

The participant can then proceed to complete other questions, and their
answers will be recorded, allowing later analysis and visualization of the
results.
With all of the information collected and the data analyzed, we can now start
to discern which words showed the greatest amount of Stroop interference
(the latency produced when naming the color that the word is printed in).
Running several paradigms – with different colors, different words, and plain
blocks of color – provides more baseline information and controls for
experimental error. Ultimately this gives a good basis for the participant
data to be normalized and compared with more validity. We can now test
whether the words of interest differ and potentially start to draw conclusions
about the implicit thoughts of participants (with the example above, it could
be that participants who are hungrier would spend longer naming the colors of
the food-related words, suggesting those words are more salient to them).
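One way to express that normalization, sketched below under assumed per-participant condition means (the condition names and numbers are illustrative only), is to subtract each participant's color-block baseline from their word-naming times before comparing conditions.

# Assumed per-participant mean naming times (seconds) per condition;
# the values and condition names are placeholders, not real data.
participants = {
    "p01": {"color_blocks": 0.62, "neutral_words": 0.78, "food_words": 0.95},
    "p02": {"color_blocks": 0.58, "neutral_words": 0.74, "food_words": 0.80},
}

def interference_scores(means):
    """Subtract the color-block baseline from each word condition."""
    baseline = means["color_blocks"]
    return {cond: rt - baseline
            for cond, rt in means.items() if cond != "color_blocks"}

for pid, means in participants.items():
    scores = interference_scores(means)
    # A larger food-word score than neutral-word score would hint that the
    # food words were more salient for that participant.
    print(pid, {k: round(v, 2) for k, v in scores.items()})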

Conclusion
The Stroop test is a widely used, well-established methodology that
reveals various brain functions and implicit cognitive workings. The
original article has now been cited over 13,000 times and that number will
surely continue to rise well into the future. With iMotions, it’s easy to start
asking questions with the Stroop Task and to get to the answers quickly.
Contact us and hear how we can help with your research needs and
questions.
