A neurocognitive model of perceptual decision‐making on emotional signals


Hum Brain Mapp. 2020 Apr 15; 41(6): 1532–1556.

Abstract

Humans make various kinds of decisions about which emotions they perceive from others. Although it might seem like a split‐second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence‐based neurocognitive model of perceptual decision‐making on others' emotions. We conducted a series of meta‐analyses of neuroimaging data spanning 30 years on the explicit evaluations of others' emotional expressions. We find that emotion perception is an umbrella term for various perception paradigms, each with distinct neural structures that underlie task‐related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task‐related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for a successful evaluation of and decisions on other individuals' emotions.

Highlights

  • Emotion classification involves heterogeneous perception and decision‐making tasks

  • Decision‐making processes on emotions rarely covered by existing emotion theories

  • We propose an evidence‐based neuro‐cognitive model of decision‐making on emotions

  • Bilateral brain processes for nonverbal decisions, left brain processes for verbal decisions

  • Left amygdala involved in any kind of decision on emotions

Keywords: amygdala, decision‐making, emotion, fMRI, neural network, perception

1. INTRODUCTION

Perceiving and identifying the emotions signaled by others often seems to happen in a split second. However, this apparently rapid act is actually the outcome of multiple stages of decision‐making based on various levels of neural and cognitive processing. This decisional process might differ according to the specific requirements of certain contexts and situations. Some contexts, for example, require the perceiver to label the emotions they recognize in another person in order to communicate them to others involved, thereby transposing a sensory percept into a verbal category (i.e., verbal labeling). Other contexts might require more basic types of recognition below the level of verbalization, such as matching the emotions of two individuals (i.e., emotional matching) or deciding that one person shows an emotion different from others or from previous encounters (i.e., emotional discrimination). Finally, other contexts might only require rating the intensity of emotions regardless of the emotion perceived (i.e., emotional intensity rating). These different types of decisions on perceived emotions are assumed to involve different neurocognitive mechanisms (Figure 1).

Figure 1

(a) Paradigms of perceptual decision‐making on others' emotional expressions. On‐screen labeling (top left) consists of matching the perceived emotional expression to a verbal label that is simultaneously displayed on‐screen. In off‐screen emotion labeling (top middle), participants are asked to keep a mental trace of the possible verbal labels throughout the experiment and match the perceived expression to the correct label. The emotion matching task (top right) consists of a triad of facial expressions, in which participants must match the expression of the target face to the expression of one of two simultaneously presented faces. Emotion rating asks participants to rate the level of arousal (bottom left) or valence (bottom middle) of the emotional expression. The bottom right corner depicts a variant of emotion discrimination (“same or different” task), in which participants must determine whether two target stimuli portray identical or different emotional expressions. For ease of illustration, all stimuli depicted here consist of facial expressions of emotions. However, except for emotion matching, which consists exclusively of facial expressions, the stimuli included in our meta‐analyses consisted of facial expressions, vocal prosody, and body postures. (b) Summary of the findings on perceptual decision‐making on emotions, with a special focus on the unique regions revealed by the contrast analyses. (c) A neurocognitive model of decision‐making on emotions based on general principles of perceptual processing connecting sensory regions (visual, auditory), association areas (lexicon, dynamic), and limbic areas (emotion) with higher‐cognitive areas in the frontal cortex (mental state, verbal, nonverbal). Amy, amygdala; dMFC, dorsomedial frontal cortex; dTri, dorsal pars triangularis; FG, fusiform gyrus; IFJ, inferior frontal junction; IPS, intraparietal sulcus; mTri, mid pars triangularis; Oper, pars opercularis; Orb, pars orbitalis; pMTG, posterior middle temporal gyrus; Vis, visual cortex; vTri, ventral pars triangularis

During these various situational types of decision‐making on perceived emotions, the human neurocognitive system needs to extract sensory information from different sensory channels, such as facial and vocal expressions and body postures, integrate these data into a gestalt percept, and then interpret it (Belin, Fecteau, & Bedard, 2004; Bernstein & Yovel, 2015; de Gelder, De Borst, & Watson, 2015). The full process of extraction, integration, and interpretation of sensory information enables individuals to perform perceptual decisions about the most likely emotion expressed by other individuals (i.e., categorization of facial, vocal, or bodily features as expressing an emotional state, e.g., joy). This process of deliberation in which sensory information is used to decode and evaluate the external world is called perceptual decision‐making (Hauser & Salinas, 2014; Heekeren, Marrett, Bandettini, & Ungerleider, 2004; Mulder, van Maanen, & Forstmann, 2014; Schall, 2001). In contrast to other forms of decision‐making, perceptual choices emphasize the role of sensory information in reaching a decision and in directing reactive behavior (Summerfield & De Lange, 2014; Wiech et al., 2014).

There are currently no formal detailed neurobiological, neurocognitive, or psychological models of how humans perceive emotions in other individuals. The present work aims to fill this knowledge gap. Much has been said by biological, neurocognitive, and psychological theories concerning emotion elicitation and emotion expression, but few of these theories directly address the process of perceiving emotions in other individuals (e.g., Coppin & Sander, 2013; Faucher, 2013; Nesse, 2014). For example, affect program theories postulate that individuals from different cultures and even species are born with the same capabilities of expressing emotions through sets of motor responses, such as facial and vocal expressions and body postures (Ekman et al., 1987). In turn, this makes emotion perception and inference possible with a high level of certainty without the use of any other information (Ekman, 1992; Ekman et al., 1987; Panksepp, 2000; Scherer, Clark‐Polner, & Mortillaro, 2011). Strong appraisal theories, on the other hand, postulate that we are capable of inferring the emotions of others by reverse‐engineering the individual appraisal patterns associated with each perceived emotional expression (Scherer & Ellgring, 2007; Scherer, Mortillaro, & Mehu, 2013). For example, upon encountering someone's facial expression of wide eyes and open mouth (i.e., common expressions for both surprise‐inducing and fear‐inducing stimuli; Scherer & Ellgring, 2007), we can deduce that the individual appraises the stimulus as novel and unexpected but not as threatening because no fear‐related expressions follow, such as backing up and moving away. Nevertheless, the exact mechanism of inferring the emotional experiences of others remains vaguely formulated in both appraisal theories and affect program theories (Ekman & Cordaro, 2011; Scherer, Banse, & Wallbott, 2001; Scherer & Ellgring, 2007), though it has been proposed that the act of emotion inference is separate from the act of perception (Scherer & Ellgring, 2007; Scherer et al., 2013).

Perhaps the account that comes closest to a mechanistic explanation of emotion perception is constructivism, which argues that others' emotions can be accurately inferred from a combination of motor expression perception, context processing, and conceptual knowledge about the relationships between emotions, desires, and beliefs (Barrett, 2006; Barrett & Kensinger, 2010; Russell, 2005, 2009). The perception of motor expressions can inform the emotion inference process not because motor expressions reflect emotions per se (i.e., there is no one‐to‐one mapping) but because the perceiver has learned through experience to associate certain motor expressions with certain emotional experiences via a bootstrapping process (Barrett, Mesquita, & Gendron, 2011; Lindquist, 2013) or even a process of elimination (DiGirolamo & Russell, 2017; Nelson & Russell, 2016). For example, individuals may have come to learn that furrowed eyebrows and pouted lips are often associated with a limited set of mental states (e.g., anger and sudden euphoric joy). Here, the perception of context is crucial in determining which of the possible mental states from the set is the most likely felt emotion (Barrett & Kensinger, 2010; Barrett et al., 2011; Sommer, Dohnel, Meinhardt, & Hajak, 2008). Finally, constructivism theories argue that conceptual knowledge in the form of folk theories about emotions and mental states can help refine the inference process (Barrett, 2006; Lindquist, 2013; Ochsner et al., 2009; Zaki, 2013) and that language processes play a crucial role in the categorical perception of emotional expressions (Barrett, Lindquist, & Gendron, 2007; Lindquist, Barrett, Bliss‐Moreau, & Russell, 2006; Lindquist & Gendron, 2013). However, this evidence is not uncontroversial (Deonna & Scherer, 2010; Panksepp, 2007; Sauter, 2018; Sauter, LeGuen, & Haun, 2011).

In summary, the majority of psychological and neurocognitive theories only go as far as describing how an individual experiences and manifests an emotion, which represents only the input for the perceptual decision‐making process on others' emotions (Scherer et al., 2013). One possible reason for this gap in theory coverage is the overwhelming evidence that the emotional expressions of others are perceived in a categorical manner (Fugate, 2013; Jaywant & Pell, 2012), which is compatible with multiple theories of emotion (e.g., Scherer et al., 2011), hence providing little incentive for further scrutiny. The term categorical perception describes the subjective experience in which a perceived dimension jumps abruptly from one category to another at a certain point along a continuum, instead of changing gradually (Liberman, Harris, Hoffman, & Griffith, 1957). For example, faces or voices in a morphing sequence between two prototypical emotions are perceived as either one or the other but not as something in between (Cheal & Rutherford, 2015; Etcoff & Magee, 1992; Fujimura, Matsuda, Katahira, Okada, & Okanoya, 2012; Jaywant & Pell, 2012; Korolkova, 2014; Laukka, 2005). The robust phenomenon of categorical perception of emotions has been replicated with various response formats and analysis methods (Bimler & Kirkland, 2001; Campanella, Quinet, Bruyer, Crommelinck, & Guerit, 2002; Cheal & Rutherford, 2010, 2015; Dailey, Cottrell, Padgett, & Adolphs, 2002; Fujimura et al., 2012; Kotsoni, de Haan, & Johnson, 2001; Sauter et al., 2011). Categorical and continuous perception often co‐occur when processing the emotions of others: the former allows for a gestalt perception of a single emotion, while the latter enables us to perceive subtle variances within an emotional construct (Fujimura et al., 2012). However, categorical perception appears to dominate the way we process and attribute emotions in others (Fugate, 2013), and the reason for this could be to achieve cognitive efficiency by parsing information into meaningful, but limited, pieces (Goldstone & Hendrickson, 2010; Harnad, 1987; Schusterman, Reichmuth, & Kastak, 2000). Indeed, Etcoff and Magee (1992) argued that, if the perceiver were to quickly detect the sender's mental state, a blend of emotions would be difficult to interpret meaningfully. Instead, relying on the dominant emotion in the signal would be more likely to give an accurate prediction about the sender's mental state and, by proxy, about the environment.
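To make the behavioral signature concrete, below is a minimal sketch of how such a category boundary is typically quantified along a morph continuum: fit a sigmoid to identification responses and inspect its slope. The data points, boundary, and slope values here are hypothetical illustrations, not values from the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical identification data: proportion of "fear" responses along an
# 11-step morph continuum from prototypical happy (0%) to fear (100%).
morph = np.linspace(0, 100, 11)
p_fear = np.array([0.01, 0.02, 0.04, 0.07, 0.18,
                   0.62, 0.90, 0.95, 0.97, 0.99, 0.99])

def sigmoid(x, x0, k):
    """Psychometric function: x0 is the category boundary, k the slope."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

(x0, k), _ = curve_fit(sigmoid, morph, p_fear, p0=[50.0, 0.1])
print(f"category boundary at {x0:.1f}% morph, slope k = {k:.2f}")
# A steep slope (responses jumping from one category to the other near x0),
# rather than a linear trend, is the hallmark of categorical perception.
```

Under a purely continuous-perception account, by contrast, the identification curve would track the morph level roughly linearly instead of switching abruptly at the boundary.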

The highly debated question in the emotion perception field is not whether emotions are perceived categorically, but rather to what extent the phenomenon of categorical perception is more perceptual or more conceptual (Fugate, 2013). In other words, do we subjectively perceive others' emotional expressions as discrete entities because emotions per se are discrete categories, or is it because we constructively create the perceived emotion so effortlessly out of multiple sources of information (e.g., context, knowledge about the target) that our brains are "tricked" into seeing distinct categories, akin to optical illusions? As with many psychological phenomena, a middle ground has been proposed: there is a seesaw relationship between the innate tendency for categorical perception and the context in which the emotional expression occurs (Fugate, 2013; Hess & Hareli, 2017). As the emotional signal becomes noisier and more ambiguous (e.g., when the perceived agent tends to mask the emotion), the influence of context and language in deducing the emotional state increases. Conversely, the richer the emotional expression is in situational information (e.g., a boisterous laugh), the lower the need to rely on context to infer the respective emotional state (Hareli, Elkabetz, & Hess, 2019).

Lesion and neuroimaging studies are uniquely positioned to inform models of perceptual decision‐making because they reveal underlying brain structures and disclose the associated mental processes, a feat impossible for behavioral or physiological experiments (Aue, Lavelle, & Cacioppo, 2009). However, as with biological and psychological models of emotions, current neurocognitive models suffer from a similarly unbalanced focus on the input of perceptual decision‐making on emotions, namely the type of sensory information that is extracted by the visual and auditory cortices (Belin et al., 2004; Concina, Renna, Grosso, & Sacchetti, 2019; Frühholz, Trost, & Kotz, 2016; Haxby, Hoffman, & Gobbini, 2000; Rauschecker, 2017; Sedda & Scarpina, 2012) and how this information is integrated into a percept (Bernstein & Yovel, 2015; Brück, Kreifelts, & Wildgruber, 2011; Heekeren, Marrett, & Ungerleider, 2008; Schirmer & Adolphs, 2017; Schirmer & Kotz, 2006). So far, less focus has been placed on how higher cognitive functions (e.g., language processes, accessing semantic knowledge) contribute to the formation of a holistic percept and its interpretation within various contexts.

Perceptual decision‐making on emotions is naturally part of a general perception system, for which neurocognitive models do exist. However, simple extrapolation from these models to the realm of emotional expressions is not warranted, as the latter constitute a special class of signals while the former were built from generic stimuli (Sander, Grafman, & Zalla, 2003; Scherer, 2009; Van Kleef, 2010). Unlike perceiving colors and pure tones, for instance, evaluating emotional stimuli engages a broader range of cognitive processes and potentially higher degrees of perceptual and inferential freedom. Nevertheless, we expect that perceptual decision‐making on emotional expressions adheres to the same three principles of general perception. First, sensory information is extracted by visual and auditory primary and associative regions (Hauser & Salinas, 2014). Within these regions, there are further areas specialized in processing human faces (Bernstein & Yovel, 2015; Haxby et al., 2000), body postures (de Gelder et al., 2015; Peelen & Downing, 2005), and human voices (Belin et al., 2004; Ceravolo, Fruhholz, & Grandjean, 2016; Frühholz & Belin, 2018; Pernet et al., 2015). Second, this sensory information is passed along two anatomically segregated and functionally specialized processing streams, dubbed the ventral stream and the dorsal stream (Goodale & Milner, 1992; Goodale & Westwood, 2004; O'Reilly, 2010; Rauschecker, 2012, 2013). The ventral pathway, connecting primary sensory cortices with temporal and prefrontal regions, is functionally conceptualized as the "what" stream, responsible for stimulus recognition and identification, and the mapping of sensory information onto conceptual representations (Goodale & Milner, 1992; Grill‐Spector & Weiner, 2014; Hebart & Hesselmann, 2012; Kravitz, Saleem, Baker, Ungerleider, & Mishkin, 2013). The dorsal pathway, connecting primary sensory areas with parietal and prefrontal regions, constitutes the "where/how" stream, responsible for processing space and motion, including locating objects in space, understanding others' movements, and guiding our own actions toward objects (Arbib, 2017; Friederici, 2012; Goodale, Westwood, & Milner, 2004; Lega, Stephan, Zatorre, & Penhune, 2016; Murakami, Kell, Restle, Ugawa, & Ziemann, 2015). Finally, perception occurs when the incoming sensory information is made available to higher‐order brain regions and matched against a mental template. In the ventral stream, this mental template consists of semantic categorical representations (a prototype of a stimulus, e.g., what a face generally looks like) (Sedda & Scarpina, 2012; Summerfield & Koechlin, 2008; Summerfield et al., 2006; Takahashi, Ohki, & Kim, 2013). In the dorsal stream, the mental template consists of visuomotor and audiomotor sequences potentially stored in our procedural memory (e.g., how emotional expressions and emotional utterances evolve over time) (Goodale, Króliczak, & Westwood, 2005; Goodale & Milner, 1992; Rauschecker, 2011, 2012). Such templates allow us to perceive and discriminate other individuals' actions, including facial movements (Bernstein & Yovel, 2015) and speech (Rauschecker, 2012).

A later stage in the emotional perception process concerns emotional categorization or verbal labeling. By converting the set of sensory information into a percept that can be communicated, an individual is able to relate and describe the emotional status of another individual. Regarding this stage, a significant task‐dependent role is assigned to the frontal cortex in matching incoming sensory information to a mental template (Brück et al., 2011; Dricu & Frühholz, 2016; Frühholz & Grandjean, 2013b; Liakakis, Nickel, & Seitz, 2011; Sakagami & Pan, 2007; Schirmer & Kotz, 2006). While reviews and meta‐analyses exist on the multiple roles of the inferior frontal cortex (IFC) in perceiving a large class of stimuli (Dal Monte et al., 2014; Greenlee et al., 2007; Liakakis et al., 2011; Rahnev, Nee, Riddle, Larson, & D'Esposito, 2016), no such systematic reviews exist for perceiving emotional expressions, despite abundant empirical data (Dricu & Frühholz, 2016). Another consistent frontal brain structure recruited during the perceptual decisions on emotions is the dorsomedial frontal cortex (dmFC). This structure is predominantly involved in social cognition, such as forming impressions about others and inferring beliefs, desires, and intentions (Fruhholz, Trost, & Grandjean, 2016; Korb, Fruhholz, & Grandjean, 2015; J. P. Mitchell, Cloutier, Banaji, & Macrae, 2006; Schlaffke et al., 2015; Schurz, Radua, Aichhorn, Richlan, & Perner, 2014; Venkatraman, Rosati, Taren, & Huettel, 2009). The emotional expressions conveyed by others are inherently social stimuli and are processed differently from other classes of stimuli such as inanimate objects (Amodio & Frith, 2006). The involvement of the dmFC in this particular decisional process suggests that facial, bodily and vocal emotional expressions are not only proxies for mental states but also that perceivers spontaneously infer traits and mental states (i.e., beliefs, desire, intention) (Reisenzein, 2009), which are integrated in the emotional evaluation (Dricu & Frühholz, 2016; Uleman, Newman, & Moskowitz, 1996; Van Overwalle, 2009, 2011).

In addition to the frontal brain structures targeted by the ventral and dorsal processing stream, other brain structures that do not exclusively belong to either processing stream also contribute to perceptual decisions on emotions. One such structure is the amygdala, which works concomitantly with sensory cortices and higher‐order cortices to tag incoming sensory information with contextual relevance (Pannese, Grandjean, & Fruhholz, 2016; Phelps & LeDoux, 2005) and subsequently detect this relevance upon the next encounter with that stimulus or circumstance (Sander et al., 2003). For example, the procedure of fear conditioning instills an initially neutral stimulus with the capacity of inducing reactions and behaviors that are biologically relevant (e.g., freezing or fleeing) upon consistent association with an aversive unconditioned stimulus (Pape & Pare, 2010). Furthermore, the amygdala is likely to process diverse emotional expressions, such as facial expressions (Haxby, Hoffman, & Gobbini, 2002; O'Toole, Roark, & Abdi, 2002; Rossion, 2015; Sabatinelli et al., 2011) and vocal prosody features (Fruhholz, Klaas, Patel, & Grandjean, 2015; Frühholz et al., 2015, 2016; Pannese, Grandjean, & Frühholz, 2015, Pannese et al., 2016) as relevant social signals (Sander et al., 2003). The large variety of cortical and subcortical projections to and from the amygdala provide it with information about the properties of the stimulus as well as the ongoing goals and needs of the organism (J. L. Price, 2003). As such, the amygdala might serve as one of the interfaces between sensory cortices and higher‐order brain structures.

Altogether, perceptual decision‐making on emotions likely involves a large neural network of brain regions with complementary functional roles. The present meta‐analysis endeavored to systematically and meta‐analytically review the neuroimaging literature to consolidate our knowledge of this large neural network to date. Additionally, we also aimed to account for several shortcomings in the field of emotion perception. The first shortcoming concerns a lack of acknowledgment of the heterogeneity of perceptual tasks on emotional expressions. The second is the lack of attention to the differential involvement of distributed brain systems depending on the decisional requirements. There has been an implicit assumption that perception tasks do not differ qualitatively from one another (Elliott, Zahn, Deakin, & Anderson, 2011; Ong, Zaki, & Goodman, 2015; Schlegel, Boone, & Hall, 2017). Reviews and meta‐analyses frequently aggregate heterogeneous tasks of emotion perception, discarding any differences in task instructions (Fusar‐Poli, Placentino, Carletti, Landi, & Abbamonte, 2009; Müller, Höhner, & Eickhoff, 2018; Phan, Wager, Taylor, & Liberzon, 2002; Wager, Phan, Liberzon, & Taylor, 2003). Alternatively, some researchers use a specific paradigm of emotion perception and then extrapolate their findings across the entire phenomenon of emotion perception (e.g., Adolphs, Damasio, Tranel, Cooper, & Damasio, 2000).

Notably, in a recent meta‐analysis, Müller et al. (2018) examined the influence of task requirements (i.e., explicit evaluation of facial emotional expression vs. focus on a nonemotional face feature) on the recruitment of brain regions during human imaging studies. However, their focus was solely on the visual face‐processing network, and they did not differentiate between different types of explicit evaluation and decision tasks. To fill in these gaps, we review both the visual and auditory domains of others' emotional expressions as part of the decisional process on emotions. Furthermore, we specifically argue that the differential requirements of various explicit emotion perception tasks, including decisions on perceived emotions, should be assessed. While neurobiologists and neuroscientists have long used sophisticated batteries of tests that tap into various facets of emotion perception, in both healthy participants and patient samples (Boller & Grafman, 2000; Wilhelm, Hildebrandt, Manske, Schacht, & Sommer, 2014), there still seems to be a certain lack of acknowledgment concerning the heterogeneity of perceptual and decisional tasks on emotional expressions and their differential neural implications. We thus reviewed the neuroimaging literature of emotion perception spanning 30 years. Using the existing literature to inform us about the neurocognitive mechanisms behind perceptual decisions on others' emotions, we then built an evidence‐based neurocognitive model of emotion perception (Figure 1). Finally, we connected our neurocognitive model of emotion perception with biological, neuroscientific, and psychological theories of emotion and neurocognitive models of general perception.

2. MATERIALS AND METHODS

2.1. Selection of neuroimaging studies

Potentially eligible studies on perceptual decision‐making on emotions were identified by conducting a search on PubMed for studies published online between January 1st 1989 and July 1st 2019, using the following keyword combination: (fmri OR pet) AND (emotion* OR affective) AND (face* OR facial OR body OR posture* OR voice* OR vocal). Study inclusion was restricted to whole‐brain functional magnetic resonance imaging or PET studies written in English on perceptual decision‐making on emotions across a variety of tasks. A series of five inclusion and exclusion criteria were further applied at the level of participants, stimuli, task instructions, imaging data, and imaging contrasts reported. All potential studies were independently screened by each author and were selected if they had sufficient experimental and data quality. A flow chart of the study search, the inclusion and exclusion criteria, and the final selection of studies is shown in Figure 2.
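For illustration, the PubMed query could be reproduced programmatically. The sketch below uses Biopython's Entrez interface with the query and date window stated above; the e-mail address is a placeholder required by NCBI, and the hit count returned today would differ from the original 2019 search.

```python
from Bio import Entrez  # pip install biopython

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires an e-mail

query = ("(fmri OR pet) AND (emotion* OR affective) AND "
         "(face* OR facial OR body OR posture* OR voice* OR vocal)")

# Restrict hits to the publication-date window used in the meta-analysis.
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="1989/01/01", maxdate="2019/07/01",
                        retmax=10000)
record = Entrez.read(handle)
print(record["Count"], "candidate records")  # records then screened by hand
```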

Figure 2

PRISMA flow chart of the study search and study selection process

2.2. Participants selection

Only healthy adults with a median or average age between 18 and 59 years were included in the analyses. Studies with fewer than eight participants were excluded, as they would mostly introduce noise into the data (Eickhoff et al., 2009). Clinical and pharmaceutical studies were included if they reported separate, within‐group analyses for the controls or the placebo condition. Studies on imaging genetics were included if they randomly selected the sample from the general population and allowed the allele quotas to fall out naturally.

2.3. Experimental stimuli

The included studies used emotional expressions conveyed in the face, voice, or body postures according to either a basic emotions model (i.e., anger, fear, joy, sadness, disgust, surprise) or a valence‐arousal model (e.g., mildly/highly pleasant/unpleasant). Emotional expressions were either prototypical or morphed/filtered; unimodal (e.g., faces only, voices only) or multimodal (e.g., faces and voices together); and presented in either a static or dynamic form. Furthermore, the emotional stimuli must have been previously validated as pertaining to an emotional construct (e.g., Ekman faces) or must have been validated in a pilot study specific to the paradigm in question. For the sake of uniformity, one study using point‐light stimuli to depict emotional faces and body expressions was not included (Atkinson, Vuong, & Smithson, 2012). Mental states (e.g., the Reading the Mind in the Eyes task) and sexual or erotic stimuli were also excluded.

2.4. Task instructions

Because we were interested in the perceptual decision‐making on emotional expressions, we only included studies that used a paradigm of active deliberation over perceived emotional expressions (e.g., identify, categorize, and discriminate). We specifically excluded studies that prompted the participants to feel the emotion perceived or to react to it, as well as studies looking into learning (e.g., fear conditioning), memory for emotional stimuli (e.g., recall of happy vs. neutral faces) or the effects of emotion on cognition. Similarly, studies that required emotional expressions to be imagined, anticipated, or generated were excluded. Studies on backward masking of emotions (i.e., an emotional face presented at a near‐threshold detection rate, e.g., 67 ms) and binocular rivalry (e.g., emotional faces superimposed on houses) were included only if the participants were fully aware of the stimuli.

2.5. Imaging data

The brain activation data must have been reported in either the standard MNI or Talairach space. Studies failing to report the imaging space were not included. Furthermore, only studies on changes in regional activation (i.e., as revealed by task comparison or by image subtraction method) were included. Because the activation likelihood estimation (ALE) meta‐analysis has been validated with contrast‐based analyses, we excluded data on changes in functional or effective connectivity, and data reporting an interaction between stimulus and time, or task and time. Similarly, we excluded studies reporting contrast‐based deactivation, as it is conceptually recommended that activation and deactivation studies are investigated separately (Müller et al., 2018).

2.6. Imaging contrasts

We were particularly interested in an “emotion versus neutral” contrast within each type of perceptual decision task. Therefore, we primarily included studies reporting either a main effect of emotion (irrespective of the type of emotion, e.g., all emotions vs. neutral; irrespective of modality, e.g., emotional faces and voices vs. neutral faces and voices) or simple effects of emotion (e.g., discriminate happy vs. discriminate neutral, when other emotions were also reported). Studies were also included if they reported a main effect of task (e.g., discriminate emotional faces vs. discriminate geometrical shapes). We specifically excluded contrasts using resting‐state or fixation cross as a baseline for comparison or contrasts comparing various emotions against each other. Finally, we excluded imaging contrasts correlating with other attributes (e.g., anxiety, personality traits).

Following the keyword search, 3,278 candidate studies were identified. The selection of these articles took place in two stages. First, the titles and abstracts were screened for relevance. Second, the full text of relevant articles was assessed to determine whether the five inclusion and exclusion criteria were met (i.e., participant selection, experimental stimuli, task instructions, imaging data, and analysis contrasts). Following this procedure, 107 articles (111 experiments) fulfilled all inclusion and exclusion criteria.

2.7. Post hoc classification of paradigms

As with any meta‐analysis, it is imperative that the individual tasks within a target paradigm are as similar as possible, that is, that task heterogeneity is reduced as much as theoretically possible. Following the identification of the 107 eligible articles but before conducting the meta‐analyses, we performed a post hoc classification of the perceptual decisional tasks on emotions to identify homogeneous paradigms. In grouping the eligible studies, we generally ignored the authors' nomenclature, as it tended to be heterogeneous and inconsistent. For example, discriminating an emotional expression (i.e., comparing two stimuli against each other on a particular dimension) was sometimes referred to as "detection" (Buchanan et al., 2000), "judgment" (Critchley et al., 2000), or "identification" (Gur et al., 2007). Similarly, labeling emotional expressions was termed "discrimination" (Johnston, Mayes, Hughes, & Young, 2013; Kotz et al., 2003), "categorization" (Pichon, de Gelder, & Grèzes, 2009; van de Riet, Grèzes, & de Gelder, 2009), "recognition" (Derntl et al., 2012), "identification" (Kitada, Johnsrude, Kochiyama, & Lederman, 2010), "classification" (Szameitat et al., 2010), or even "comprehension" (Alba‐Ferrara, Hausmann, Mitchell, & Weis, 2011). Given this variability in nomenclature, we opted to classify the tasks based on the similarity of their instructions. We then gave the resulting classes of tasks new labels fitting the corresponding paradigms.

A starting point in our post hoc classification was the seminal study by Hariri et al. (2002), who found that a nonverbal emotion‐matching task strongly activates the bilateral amygdala. In this task, participants must match the facial expression (usually angry or fearful) of one of two faces to that of a simultaneously presented target expression. Since its introduction, this paradigm has been increasingly used in research on facial expressions in both healthy and clinical populations. Furthermore, Burklund, Craske, Taylor, and Lieberman (2015) argued that emotion matching and emotion labeling tap into distinct brain networks. Given the high possibility that amygdala activation could be driven mostly by the nonverbal matching task, which might not be characteristic of other forms of perceptual decisional tasks, we considered emotion matching and emotion labeling as distinct classes of decision‐making (Figure 3). We grouped studies on emotion labeling to include those paradigms where participants are asked to associate a perceived emotional expression with an appointed emotion label. Specifically, emotion labeling included forced‐choice studies with response buttons dedicated to each emotional label or emotional construct. Based on task instructions, emotion labeling could be further divided into off‐screen labeling and on‐screen labeling, with the latter presenting the target emotional expression along with simultaneous verbal descriptions of the possible choice labels (i.e., on‐screen), and the former requiring participants to perform without such visual aids (i.e., off‐screen). Such a distinction in task instructions might prompt participants to use different cognitive strategies to perceive the emotions (Figure 3).

Figure 3

Results of the individual neuroimaging meta‐analyses on the paradigm of labeling of emotional expressions: (a) across all labeling studies, (b) when the available choice labels are displayed on the screen, and (c) when they are not displayed on the screen. (d) Results of the contrast and conjunction analyses between off‐screen and on‐screen emotion labeling. Amy, amygdala; dTri, dorsal pars triangularis; Hipp, hippocampus; IFC, inferior frontal cortex; mTri, middle pars triangularis; Orb, pars orbitalis; pMTG, posterior middle temporal gyrus; STS, superior temporal sulcus; vTri, ventral pars triangularis

Additionally, we grouped experiments into emotion discrimination and emotion rating. Discrimination incorporates experiments in which participants compare a target stimulus against background noise (e.g., detect the presence or absence of a stimulus) or against other similar stimuli (e.g., are these stimuli the same or different?) and it draws inspiration from classical psychophysical studies (e.g., Phillips, Channon, Tunstall, Hedenstrom, & Lyons, 2008; Soliunas & Gurciniene, 2007; Van Hout, Hautus, & Lee, 2011; Figure 3). Emotion rating consists of studies in which participants must gauge the intensity or the valence of the emotional expression on a given scale and draws inspiration from neuropsychological studies with brain‐lesioned patients (e.g., Adolphs et al., 2000; Fruhholz & Staib, 2017; Figure 2).

2.8. Coordinate‐based meta‐analyses

Following the post hoc classification of studies into paradigms based on task instructions, we proceeded with separate meta‐analyses on each paradigm of perceptual decisions on emotions. We opted for the coordinate‐based ALE meta‐analysis, which identifies brain areas of convergent neural activity across different experiments, empirically determining whether this clustering is greater than expected by chance. The ALE algorithm, as implemented in the latest version of GingerALE 2.3.6 (http://brainmap.org/ale), captures the spatial uncertainty associated with reported coordinates, treating them as the centers for 3D Gaussian probability distributions (Turkeltaub, Eden, Jones, & Zeffiro, 2002) with widths based on empirical between‐subject and between‐template comparisons (Eickhoff et al., 2009). One modeled activation (MA) map is then created for each experiment by merging the probability distributions of all activation foci (Turkeltaub et al., 2012). If more than one focus from a single experiment is jointly influencing the MA map, then the maximum probability associated with any focus reported by the given experiment is used. Voxel‐wise ALE scores (union across these MA maps) then quantify the convergence across experiments at each location in the brain. As functional activations occur predominantly in gray matter areas, ALE scores are computed only for voxels with more than 10% probability of containing gray matter (Evans, Collins, & Milner, 1992). The resulting random‐effects inference focuses on the above‐chance convergence across studies rather than the clustering within a particular study (Eickhoff et al., 2009). To distinguish “true” from random convergence, ALE scores are compared to an empirical null distribution reflecting a random spatial association among all MA maps.
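As a rough numerical sketch of the two core steps described above, the toy code below builds per-experiment modeled activation (MA) maps and combines them into an ALE map. It is a simplification under stated assumptions: a small isotropic grid, a fixed Gaussian width, and unnormalized Gaussians, whereas GingerALE derives the width from each study's sample size and applies the empirical corrections of Eickhoff et al. (2009).

```python
import numpy as np

def ma_map(foci_mm, shape, voxel_size=2.0, sigma_mm=5.0):
    """Modeled activation (MA) map for one experiment: each focus becomes a
    3D Gaussian, and overlapping foci contribute their voxel-wise maximum
    rather than summing (cf. Turkeltaub et al., 2012)."""
    coords = np.indices(shape).reshape(3, -1).T * voxel_size  # voxel -> mm
    ma = np.zeros(coords.shape[0])
    for focus in foci_mm:
        d2 = ((coords - np.asarray(focus, float)) ** 2).sum(axis=1)
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma_mm ** 2)))
    return ma.reshape(shape)

def ale_map(ma_maps):
    """Voxel-wise union across experiments: ALE = 1 - prod_i(1 - MA_i)."""
    ale = np.ones_like(ma_maps[0])
    for ma in ma_maps:
        ale *= 1.0 - ma
    return 1.0 - ale

# Two toy "experiments" reporting nearby foci (coordinates in mm):
exp1 = ma_map([(20, 20, 20), (40, 24, 20)], shape=(32, 32, 32))
exp2 = ma_map([(22, 20, 18)], shape=(32, 32, 32))
ale = ale_map([exp1, exp2])  # high values = convergence across experiments
```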

A major focus of the present study was to determine the differences and similarities in brain structures between the different types of perceptual decisions on emotions. In this regard, we performed a series of conjunction and contrast analyses between the different types of decisions on emotions. We note that the number of studies assigned to the paradigms (see above) differed across paradigms (Table 1). However, GingerALE accounts for an unbalanced number of studies entering a given contrast by means of data simulation and permutation (Eickhoff et al., 2011). Thus, the brain activations resulting from these comparisons between paradigms are unlikely to be influenced by the differing study numbers.

Table 1

Summary of the emotion perception paradigms included in the meta‐analyses

Perceptual task | Articles | Experiments | Participants | Average no. of participants | Foci
Emotion labeling | 45 | 47 | 1,089 | 23 | 723
  Off‐screen | 20 | 20 | 486 | 25 | 282
  On‐screen | 25 | 27 | 603 | 22 | 441
Emotion matching | 34 | 35 | 853 | 24 | 528
Emotion discrimination | 18 | 19 | 290 | 15 | 183
Emotion rating | 10 | 10 | 218 | 22 | 173

The conjunction analysis is computed using the conservative minimum statistic inference (Nichols, Brett, Andersson, Wager, & Poline, 2005), which calculates a simple overlap between regions that were found statistically significant in the individual meta‐analyses. This implies that only those regions that are significant at a corrected level in both individual meta‐analyses are considered. Contrast analyses are performed by computing the voxel‐wise difference between two ensuing ALE maps (Eickhoff et al., 2011). Specifically, all experiments contributing to either the minuend or the subtrahend in these contrast analyses are pooled and randomly divided into two groups of the same size as the two original sets of experiments reflecting the contrasted ALE analyses. ALE scores for these two randomly assembled groups are calculated, and the difference between these ALE scores is recorded for each voxel in the brain. Repeating this process several thousand times yields an expected distribution of ALE‐score differences under the assumption of exchangeability. The "true" difference in ALE scores is tested against this null distribution, yielding a posterior probability that the true difference was not due to random noise in an exchangeable set of labels. The resulting probability values are then thresholded and inclusively masked by the respective main effects, that is, the significant effects of the ALE analysis for that condition. A correction for multiple comparisons is not applied to the contrast analyses because GingerALE restricts the search space to voxels that had survived the threshold in the main effect for the minuend (Eickhoff et al., 2011).
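A schematic implementation of these two analyses might look as follows, reusing the hypothetical ale_map sketch above. This is a bare-bones illustration of minimum-statistic conjunction and label-exchange permutation; GingerALE's actual simulation, masking, and thresholding details differ.

```python
import numpy as np

def conjunction(thresholded_a, thresholded_b):
    """Minimum-statistic conjunction: overlap of two maps that were each
    already thresholded at a corrected level (Nichols et al., 2005)."""
    return np.minimum(thresholded_a, thresholded_b)

def contrast_p_map(ma_a, ma_b, n_perm=10000, seed=0):
    """Voxel-wise p-values for the contrast A > B via label exchange.
    ma_a, ma_b: lists of per-experiment MA maps (see ale_map above)."""
    rng = np.random.default_rng(seed)
    observed = ale_map(ma_a) - ale_map(ma_b)   # the "true" ALE difference
    pooled, n_a = list(ma_a) + list(ma_b), len(ma_a)
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))     # shuffle experiment labels
        diff = (ale_map([pooled[i] for i in idx[:n_a]]) -
                ale_map([pooled[i] for i in idx[n_a:]]))
        exceed += diff >= observed             # how often does the null win?
    return (exceed + 1.0) / (n_perm + 1.0)     # small p = reliable A > B
```

Because the permutation pools and resplits all experiments at the original group sizes, the null distribution automatically reflects any imbalance in study numbers between the two paradigms, which is the point made above about unbalanced contrasts.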

The GingerALE algorithm uses two sets of statistical corrections (or thresholds). The first correction represents the p value that a voxel must surpass to be considered (i.e., the cluster‐forming threshold), while the second correction specifies the number of contiguous voxels that must simultaneously surpass the cluster‐forming threshold to be considered a significantly active cluster of voxels (i.e., cluster‐level family‐wise error corrected thresholding). The reasoning for this dual‐threshold correction is that voxels representing false alarms due to noise are more likely to be randomly distributed throughout the brain and thus much less likely to occur in contiguous groups of voxels than as single voxels (Eickhoff, Bzdok, Laird, Kurth, & Fox, 2012; Lieberman & Cunningham, 2009). We thresholded individual meta‐analyses with a cluster‐forming threshold at voxel level p < .001 and a cluster‐level family‐wise error corrected threshold of p < .01 (Eickhoff et al., 2012). To determine null distributions, we conducted 10,000 repetitions. Contrast analyses were based on the subtraction of single‐dataset meta‐analyses, and the results were thresholded at p < .05 (i.e., 5% probability that the differences observed between datasets are due to random noise); to assess the null distribution, we again opted for 10,000 repetitions. To ensure sufficient statistical power, we limited our analyses to datasets that contained at least 17 experiments (Eickhoff et al., 2016). All meta‐analysis results were localized and labeled using the Yale BioImage Suite's digital medical atlas (http://bioimagesuiteweb.github.io/webapp; Papademetris et al., 2006) and visualized using the MRIcron software (http://nitrc.org/projects/mricron) with the MNI brain template.
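The dual-threshold logic can be sketched as follows. The minimum cluster extent is shown here as a fixed placeholder parameter, whereas GingerALE estimates the extent that controls the cluster-level family-wise error from simulated null datasets.

```python
import numpy as np
from scipy import ndimage

def dual_threshold(p_map, voxel_p=0.001, min_extent=100):
    """Keep voxels that (a) pass the cluster-forming threshold and (b) lie in
    a contiguous cluster of at least `min_extent` voxels (placeholder value)."""
    supra = p_map < voxel_p                    # step 1: voxel-level cut
    labels, n_clusters = ndimage.label(supra)  # step 2: find contiguous blobs
    sizes = ndimage.sum(supra, labels, index=range(1, n_clusters + 1))
    keep = [i + 1 for i, size in enumerate(sizes) if size >= min_extent]
    return np.isin(labels, keep)               # mask of surviving clusters
```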

3. RESULTS

A summary of the post hoc classification of eligible studies can be seen in Table 1. Most of the studies used various tasks of emotion labeling, resulting in 47 experiments, 1,089 participants and 723 distinct activation coordinates in the brain (foci), followed by 35 experiments of emotion matching, 19 experiments of emotion discrimination, and 10 experiments of emotion rating (Table 1).

3.1. Individual meta‐analyses of decisional tasks

Neuroimaging meta‐analyses such as those calculated by the GingerALE software reveal the most consistently reported neural activity for a given paradigm, for example, emotion labeling. In other words, they reveal the brain structures most regularly engaged during a paradigm, above and beyond individual differences in experimental design, settings, or stimuli.

The paradigm of emotion labeling was characterized by extensive activation in the left IFC (pars triangularis and pars orbitalis), as well as the left amygdala, the right superior temporal sulcus, the left posterior middle temporal gyrus (MTG), and the right pars triangularis of the IFC (Table 2, Figure 3a). When the available choice options were displayed on the screen, emotion labeling was associated with a strongly lateralized recruitment of the left amygdala and several patches along the left IFC, namely the mid and dorsal pars triangularis and the pars opercularis (Table 2, Figure 3b). When the available choice options were not displayed on the screen for the duration of the trials, converging brain activity was found in the left mid and ventral pars triangularis, the left posterior MTG, the left amygdala extending into the hippocampus, and the right superior temporal sulcus (Table 2, Figure 3c).

Table 2

Brain regions with significant convergence of activity pertaining to each paradigm of emotion perception

Paradigm | Anatomical structure | x | y | z
Emotion labeling (all): Cluster 1 (k = 4,720)
L mid pars triangularis (IFC) −48 26 4
L dorsal pars triangularis (IFC) −46 34 4
L pars orbitalis (IFC) −40 22 −8
L pars orbitalis (IFC) −32 26 −4
L pars orbitalis (IFC) −46 36 −10
Cluster 2 (k = 2,632)
R superior temporal sulcus 52 −36 4
R superior temporal sulcus 58 −50 8
Cluster 3 (k = 2,280)
L amygdala −20 −4 −16
Cluster 4 (k = 920)
L posterior middle temporal gyrus −50 −58 12
L posterior middle temporal gyrus −56 −50 6
Cluster 5 (k = 608)
R pars triangularis (IFC) 54 30 −2
On‐screen labeling: Cluster 1 (k = 992)
L amygdala −20 −2 −16
Cluster 2 (k = 760)
L pars orbitalis (IFC) −38 22 −8
L pars orbitalis (IFC) −34 28 −2
Cluster 3 (k = 712)
L mid pars triangularis (IFC) −46 32 8
Cluster 4 (k = 488)
R ventral pars triangularis (IFC) 56 28 0
Off‐screen labeling: Cluster 1 (k = 4,320)
L mid pars triangularis (IFC) −48 26 4
L ventral pars triangularis (IFC) −50 16 −4
L mid pars triangularis (IFC) −46 38 0
Cluster 2 (k = 1,344)
L hippocampus −20 −12 −14
L amygdala −20 −8 −16
Cluster 3 (k = 1,016)
L posterior middle temporal gyrus −48 −56 14
Emotion matching: Cluster 1 (k = 3,312)
R amygdala 22 −4 −18
Cluster 2 (k = 3,216)
L amygdala −22 −6 −18
Cluster 3 (k = 3,128)
R inferior frontal junction 46 16 22
R inferior frontal sulcus 52 26 22
Cluster 4 (k = 2,696)
R visual association area 26 −96 −2
Cluster 5 (k = 2,496)
L inferior frontal junction −44 18 26
Cluster 6 (k = 1,848)
R fusiform gyrus 40 −52 −24
R fusiform gyrus 40 −64 −14
Cluster 7 (k = 1,360)
L visual association area −22 −96 −6
Cluster 8 (k = 1,256)
L fusiform gyrus −40 −54 −22
Cluster 9 (k = 1,152)
L + R dorsomedial frontal cortex −2 16 50
Cluster 10 (k = 864)
L thalamus −22 −30 −2
Cluster 11 (k = 696)
R intraparietal sulcus 34 −56 46
Cluster 12 (k = 656)
R superior temporal sulcus 58 −48 14
R superior temporal sulcus 52 −44 10
Cluster 13 (k = 536)
R thalamus 24 −30 0
Cluster 14 (k = 488)
L intraparietal sulcus −30 −56 44
Emotion discrimination: Cluster 1 (k = 1,264)
R pars opercularis (IFC) 42 16 20
R pars opercularis (IFC) 52 22 24
Cluster 2 (k = 864)
L amygdala −24 0 −18
L peri‐amygdala −32 6 −22
Emotion rating: Cluster 3 (k = 1,152)
L amygdala −26 2 −18
L hippocampus −32 −6 −20

The paradigm of nonverbal matching of emotional facial expressions was associated with bilateral activations in the inferior frontal junction (IFJ), the amygdala, the visual association cortex, the fusiform gyrus, the dmFC, the intraparietal sulcus, and the thalamus (Table 2, Figure 4a). At the chosen cluster‐forming threshold of p < .0001, no brain regions showed statistical convergence of neural activity for the paradigms of emotion discrimination and emotion rating. Due to the overall lower number of experiments pertaining to these two paradigms compared to emotion labeling and emotion matching, we opted to lower the cluster‐forming threshold to p < .001, which revealed that the left amygdala was associated with both emotion discrimination and emotion rating (Table 2, Figure 5). At this more lenient threshold, the right pars opercularis of the IFC was additionally activated for emotion discrimination (Table 2, Figure 5a).

Figure 4

(a) Results of the individual neuroimaging meta‐analysis on emotion matching. (b) Results of the contrast analysis between the meta‐analyses on emotion matching and emotion labeling. Amy, amygdala; dmFC, dorsomedial frontal cortex; FG, fusiform gyrus; IFJ, inferior frontal junction; IFS, inferior frontal sulcus; IPS, intraparietal sulcus; pMTG, posterior middle temporal gyrus; STS, superior temporal sulcus; Thal, thalamus; Tri, pars triangularis; Vis, the visual cortex

Figure 5

(a) Results of the individual neuroimaging meta‐analysis on emotion discrimination and (b) emotion rating. Amy, amygdala; Oper, pars opercularis of the inferior frontal cortex

3.2. Contrast and conjunction analyses

Unlike individual neuroimaging meta‐analyses, which find the brain structures most regularly engaged during a given paradigm, contrast analyses compare the regions of convergent neural activity between two paradigms, that is, meta‐analysis of paradigm A versus meta‐analysis of paradigm B. In other words, a contrast analysis reveals which brain regions, if any, are more consistently recruited by paradigm A compared to paradigm B. A conjunction analysis, on the other hand, reveals which regions, if any, are regularly recruited during both paradigms A and B.

Comparing “off‐screen” against “on‐screen” emotion labeling revealed that the left ventral pars triangularis extending, the right superior temporal sulcus, and the left posterior MTG were more consistently recruited during off‐screen labeling, that is, when available choice options are not displayed on the screen during the perception decision‐making process (Table 3, Figure 3d). Contrasting “on‐screen” emotion labeling with “off‐screen” labeling did not reveal any significant brain activation that is more consistently reported by the former compared to the latter. Last, the conjunction analysis showed that activations in the left amygdala and the left mid pars triangularis of the IFC were common for both onscreen and off‐screen labeling of emotions (Table 3, Figure 3d).

Table 3

Contrast and conjunction analyses of paradigms of perceptual decisions on emotions

Analysis | Anatomical structure | x | y | z
On‐screen labeling vs. off‐screen labeling: No brain regions with significant convergence of activity
Off‐screen labeling vs. on‐screen labeling: Cluster 1 (k = 696)
L ventral pars triangularis (IFC) −48 30 −8
L ventral pars triangularis (IFC) −50 22 −4
Cluster 2 (k = 240)
L posterior middle temporal gyrus −48 −52 16
L posterior middle temporal gyrus −46 −54 10
Off‐screen labeling and on‐screen labeling (conjunction): Cluster 1 (k = 240)
L amygdala −20 −8 −16
Cluster 2 (k = 144)
L mid pars triangularis (IFC) −48 28 6
Label emotions vs. Match emotions: Cluster 1 (k = 1,232)
L mid pars triangularis (IFC) −51 29 4
L mid pars triangularis (IFC) −48 32 2
L mid pars triangularis (IFC) −44 30 4
Cluster 2 (k = 312)
L posterior middle temporal gyrus −54 −58 12
Match emotions vs. Label emotions: Cluster 1 (k = 2,488)
R visual association areas 30 −94 −6
R visual association areas 26 −94 −8
Cluster 2 (k = 2,424)
R inferior frontal sulcus 54 32 20
R inferior frontal junction 46 11 24
Cluster 3 (k = 2,256)
R amygdala 20 −9 −18
R amygdala 20 −4 −22
Cluster 4 (k = 1,744)
L inferior frontal sulcus −47 20 31
Cluster 5 (k = 1,048)
L visual association areas −22 −91 −9
Cluster 6 (k = 1,040)
L amygdala −28 0 −22
L amygdala −24 −6 −18
Cluster 7 (k = 936)
R fusiform gyrus 44 −56 −24
R fusiform gyrus 38 −54 −22
Cluster 8 (k = 800)
L fusiform gyrus −42 −60 −22
L fusiform gyrus −40 −52 −26
Cluster 9 (k = 792)
L thalamus −18 −28 −2
L thalamus −22 −32 0
Cluster 10 (k = 696)
R intraparietal sulcus 34 −58 42
R intraparietal sulcus 30 −56 42
Cluster 11 (k = 536)
R thalamus 16 −32 −2
Cluster 12 (k = 208)
L intraparietal sulcus −32 −64 46
Label emotions and Match emotions (conjunction): Cluster 1 (k = 1,664)
L amygdala −22 −2 −16

The paradigm of emotion matching relies exclusively on facial expressions. Therefore, to ensure a fair comparison, we looked at the similarities and differences between this paradigm and a subset of emotion labeling, that is, labeling facial expressions. Compared to the nonverbal task of emotion matching, the verbal labeling of emotional facial expressions distinctly recruits the left pars triangularis, the right superior temporal sulcus, and the left posterior MTG (Table 3, Figure 4b). Comparing emotion matching against emotion labeling revealed an extended network of brain regions, comprising the bilateral visual association areas, the amygdalae, the IFJ, the dorsomedial frontal cortex, the fusiform gyri, the intraparietal sulci, and the thalami, that was uniquely recruited during emotion matching (Table 3, Figure 4b). The conjunction analysis revealed that emotion matching and emotion labeling of facial expressions similarly recruited the left amygdala (Table 3, Figure 4b).

Due to the insufficient number of experiments for emotion discrimination and emotion rating (Eickhoff et al., 2016), we were unable to formally run contrast and conjunction analyses between these paradigms and emotion labeling and emotion matching.

4. DISCUSSION

The present study addressed existing gaps in the emotion perception literature, namely the general lack of acknowledgment of the heterogeneity of perceptual tasks (Elliott et al., 2011; Ong et al., 2015) and of how the brain idiosyncratically computes the various perceptual decisions on emotions. We opted for an evidence‐based approach, drawing on the existing literature about the neurocognitive mechanisms behind our perceptual decisions on others' emotions.

In this regard, we reviewed the neuroimaging literature of emotion perception spanning 30 years. This resulted in three major observations. First, four main research paradigms are used to investigate perceptual decision‐making on emotions, referred to in this article as emotion labeling, matching, discrimination, and rating. Second, each paradigm of decision‐making on emotions tapped into different neural structures that reflect the putative cognitive demands of the decision‐making task at hand. Third, the left amygdala was responsive across all classes of decisional paradigms, regardless of task instructions, clarifying the degree of involvement of the amygdala in the explicit evaluation of emotions. In the following paragraphs, we discuss these findings and conclude by proposing a neurocognitive model of perceptual decision‐making on emotions that outlines the information flow in the brain needed for a proper understanding of other individuals' emotions.

4.1. Emotion matching

Nonverbally matching emotional expressions, such as matching a target emotional expression to various other perceived expressions, recruits both sensory processing regions and regions of higher cognition. The presence of sensory regions is understandable given the nature of the control task used at the level of individual experiments. The control task invariably involves matching geometrical shapes based on low‐level perceptual cues. In contrast, the stimuli in the matching task are almost exclusively emotional faces (i.e., no neutral faces), most often fearful and angry. The comparison between matching highly aroused emotional faces and matching geometrical shapes would thus reveal residual activation in sensory brain regions involved in processing the complex perceptual cues found in emotional expressions. These sensory processing regions included the bilateral visual association areas, fusiform gyrus, thalamus, and intraparietal sulci, which have all been implicated in processing human faces (Arcurio, Gold, & James, 2012; Frühholz, Fehr, & Herrmann, 2009; Haxby et al., 2000, 2002; Rossion & Retter, 2015; Yovel, 2016).

The emotion‐matching paradigm also strongly recruited the bilateral amygdala. Generally thought of as a module for the automatic detection of emotions (Frühholz & Grandjean, 2013a; Öhman, 2002; Pannese et al., 2015, 2016; Phelps & LeDoux, 2005; Vuilleumier, Armony, Driver, & Dolan, 2001), the amygdala is preferentially recruited by angry and fearful faces (Adams, Gordon, Baird, Ambady, & Kleck, 2003; Milesi et al., 2014; Phelps et al., 2001; Repeiski, Smith, Sansom, & Repetski, 1996), which are the predominant stimuli in the emotion‐matching paradigm. However, an alternative role for the amygdala is not as an automatic detector of emotions per se, but rather as a detector of relevant and salient stimuli, of which emotional expressions represent a subclass (Sander et al., 2003). For example, the amygdala responds to novel neutral faces (Schwartz, Wright, Shin, Kagan, & Rauch, 2003), to neutral faces of a different race (Hart et al., 2000), and to abstract figures with learned associations with food rewards (Gottfried, O'Doherty, & Dolan, 2003). Using an additional identity‐matching paradigm involving only neutral faces, Wright and Liu (2006) showed that the bilateral amygdala was responsive during both identity‐ and emotion‐matching tasks. Thus, it seems that the matching task itself triggers amygdala activity. The authors argued that the matching task adds relevance to the stimuli, including neutral faces. Viewed alone, neutral faces would be expected to have less inherent relevance than emotional faces but, during a matching task, they must acquire task‐related relevance. That is, out of the two possible faces to be matched, one becomes the "right" face while the other becomes the "wrong" face.

In addition to sensory processing regions, we found activation in the bilateral IFJ and the dmFC during the emotion‐matching paradigm. The IFJ is a structurally distinct region located at the junction of the inferior frontal sulcus (IFS) and the inferior precentral sulcus (Amunts et al., 2010). Lesion studies (Petrides, 1985, 2005), transcranial magnetic stimulation (Verbruggen, Aron, Stevens, & Chambers, 2010), and neuroimaging experiments (Clos, Amunts, Laird, Fox, & Eickhoff, 2013; Harding, Yücel, Harrison, Pantelis, & Breakspear, 2015; C. Kim, Cilles, Johnson, & Gold, 2012; Sundermann & Pfleiderer, 2012) have linked the IFJ to cognitive switching across a variety of domains, such as context switching (e.g., shifting between task rules or cognitive rules), perceptual switching (e.g., switching attention between perceptual features of a stimulus or between stimuli), and response switching (e.g., switching between different stimulus‐response mappings). The dmFC is a complex structure that spans several distinct regions (Öngür, Ferry, & Price, 2003) and plays different roles in social cognition (Amodio & Frith, 2006; Dricu & Frühholz, 2016; J. P. Mitchell, Macrae, & Banaji, 2005; Schurz et al., 2014). We found that the same portion of the dmFC is consistently reported in nonverbal tasks of appraising the mental states of both human and nonhuman agents based on observable cues (Döhnel et al., 2012; Gallagher et al., 2000; J. W. Kim et al., 2005; Schlaffke et al., 2015; Schurz et al., 2014; Völlm et al., 2006). In the domain of emotions, the dmFC is highly active when judging the appropriateness of specific facial emotions in a given context (J. W. Kim et al., 2005) or inferring whether someone is genuinely sad/happy or simply posing for the camera (McLellan, Wilcke, Johnston, Watts, & Miles, 2012). Furthermore, the dmFC is also recruited when participants must reason about the most likely scenario that caused an emotional facial expression (Prochnow, Brunheim, Steinhauser, & Seitz, 2014).

It thus appears that the nonverbal task of emotion matching invites participants to simultaneously infer the putative mental states associated with the perceived emotional expression. Beyond perceptually attending to each facial expression (as indexed by the sensory regions and intraparietal sulci), it is reasonable to expect that participants also mentally switch from appraising the mental state associated with the target emotional expression to appraising the mental states of each of the two potential choice options. Upon encountering emotional expressions of others, we may automatically infer their traits and mental states, and spontaneously integrate this information into our impressions about others (Uleman et al., 1996; Van Overwalle, 2009, 2011). Strong functional (Harding et al., 2015; Sundermann & Pfleiderer, 2012) and structural connectivity (Ford, McGregor, Case, Crosson, & White, 2010; Sallet et al., 2013) between the dmFC and the IFJ suggests that these higher cognitive brain structures work together to coordinate decoding emotions and inferring mental states in order to perform the matching task.

4.2. Emotion labeling

We found a strongly left‐lateralized pattern of activation during all tasks of emotion labeling (perceiving expressions from the face, voice, or body posture and associating them with a label pertaining to an emotion construct, such as joy or fear). This lateralization is very much in line with the relevant lesion (Bates et al., 2001; Riès, Dronkers, & Knight, 2016) and neuroimaging literature (Vigneau et al., 2006) on language production and comprehension. However, we did not find converging neural activation in sensory processing brain regions. Unlike in the emotion‐matching paradigm, the control task used in emotion‐labeling paradigms often matches the emotional expressions in perceptual complexity. In fact, 35 out of the 45 experiments used neutral faces, voices, or body postures as a baseline. Therefore, activations in brain regions associated with mental processes such as basic perception, attention, and working memory would have been canceled out.

When comparing results for emotion labeling tasks that either displayed or hid possible response labels alongside the stimuli (“on‐screen” vs. “off‐screen” labeling), we found some similarities and several distinctions. Specifically, the left mid pars triangularis of the IFC and the amygdala were similarly recruited during “on‐screen” and “off‐screen” emotion labeling. Putatively, what “on‐screen” and “off‐screen” labeling share in terms of cognitive processing is the successful retrieval of semantic knowledge about the world around us, including emotional constructs (Hamberger, Habeck, Pantazatos, Williams, & Hirsch, 2014; Miceli, Amitrano, Capasso, & Caramazza, 1996; Raymer et al., 1997). Not surprisingly, the left mid pars triangularis has been implicated in the domain‐general access and retrieval of information from semantic memory (Costafreda et al., 2006; Gennari, MacDonald, Postle, & Seidenberg, 2007; Nee et al., 2013; Riès et al., 2016; Snyder, Banich, & Munakata, 2011).

The right ventral and left mid pars triangularis of the IFC were active during on‐screen labeling of emotions (when the available choice options are displayed on the computer screen for the duration of the trial, simultaneously with the target emotional expression). Both of these regions are important in phonological coding (Adair, Schwartz, Williamson, Raymer, & Heilman, 1999; C. J. Price, 2010), a crucial cognitive process in single‐word reading during which letter‐to‐sound associations stored in long‐term memory are accessed and manipulated (Bokde, Tagamets, Friedman, & Horwitz, 2001; Palmer, 2000). Structural and functional differences in the bilateral pars triangularis can distinguish between normal readers and individuals with dyslexia (Eckert et al., 2003; Leonard, Eckert, Given, Virginia, & Eden, 2006; Norton et al., 2014; Partanen, Siege, & Giaschi, 2018; Robichon, Levrier, Farnarier, & Habib, 2000). More importantly, reading training in individuals with dyslexia increases feedback connectivity from the left pars triangularis to the right pars triangularis and sensory cortices, which predicts subsequent gains in reading speed (Frye, Wu, Liederman, & McGraw Fisher, 2010; Z. V. Woodhead et al., 2013). Together, this evidence points to the concerted effort of the left and right pars triangularis in the rapid and automatic reading of printed words, such as those displayed during the on‐screen labeling of emotions.

On‐screen labeling of emotions also recruited the left pars orbitalis of the IFC. This region is consistently involved in semantic judgments on a wide range of stimuli, including single printed words (e.g., "is this word concrete or abstract?"; Cutting et al., 2006; Devlin, Matthews, & Rushworth, 2003; Fisher, Cortes, Griego, & Tagamets, 2012; Mainy et al., 2008; Poldrack et al., 1999; Z. Woodhead et al., 2012; Yvert, Perrone‐Bertolotti, Baciu, & David, 2012) and pairs of printed words (e.g., "are the two words semantically related?"; Booth et al., 2006; Gough, Nobre, & Devlin, 2005; Kemmerer, Rudrauf, Manzel, & Tranel, 2012; Kotz, Cappa, von Cramon, & Friederici, 2002; Liu et al., 2013; Mechelli, Josephs, Lambon Ralph, McClelland, & Price, 2007). More relevant to the task of on‐screen labeling, activity in the left pars orbitalis increases when participants match pictures to auditory labels based on semantic similarity (Schmithorst, Holland, & Plante, 2007). This finding strongly suggests that, upon independently perceiving the emotional expression of the stimulus and covertly reading the displayed emotion labels (as indexed by the left and right pars triangularis), participants perform the on‐screen emotion labeling task by semantically matching the emotional expression to the appropriate label.

Off‐screen emotion labeling recruited the left mid and ventral pars triangularis, the left posterior MTG, and the right posterior superior temporal sulcus (pSTS). The left mid pars triangularis was also recruited during on‐screen labeling of emotions, and it likely reflects accessing semantic knowledge (Costafreda et al., 2006; Gennari et al., 2007; Nee et al., 2013; Riès et al., 2016; Snyder et al., 2011). The ventral portion of the left pars triangularis has been implicated in verbal working memory (Rottschy et al., 2012), suggesting that off‐screen labeling of emotions required maintaining the possible choice labels in working memory for the duration of the trials. The activation in the left posterior MTG coincides with a region long implicated in anomic aphasia (Foundas, Daniels, & Vasterling, 1998; Goodglass, 1980; Kay & Ellis, 1987; Raymer et al., 1997), a brain disorder in which the patient knows what an object is and how to use it, and can accurately select it from a group of objects, but cannot name it (Goodglass, 1980). In other words, anomic patients have intact semantic knowledge of the perceived object but no longer have access to its lexical‐phonological representation in the phonological output lexicon (Butterworth, 1992; Levelt, 1992; Raymer et al., 1997). In an independent line of research, neuroimaging studies have also consistently implicated the same region in object naming (C. J. Price, 2012; Watson, Cardillo, Ianni, & Chatterjee, 2013). The critical role of the left posterior MTG in naming objects might stem from its strategic location between the ventral processing stream—important for object recognition—and the auditory and visual association cortex (C. J. Price, 2012; Raymer et al., 1997).

The right pSTS has long been associated with recognizing and understanding purposeful actions and movements (Allison, Puce, & McCarthy, 2000) and with the basic understanding of intentions (Pelphrey & Morris, 2006). Extensive research has shown that this major sulcal landmark in the temporal lobe is sensitive to gaze orientation (Hooker et al., 2003) and to the movement of the human body (Kourtzi & Kanwisher, 2000; Saxe, Xiao, Kovacs, Perrett, & Kanwisher, 2004), hands (Gallagher & Frith, 2004; Holle, Obleser, Rueschemeyer, & Gunter, 2010), and face (Fruhholz, Godde, Lewicki, Herzmann, & Herrmann, 2011; Lee et al., 2010), through either direct perception (Allison et al., 2000) or implied movement (David & Senior, 2000). However, the pSTS is not sensitive to just any kind of movement: significantly more activation occurs when people watch others perform complex versus simple actions (Castelli, Happé, Frith, & Frith, 2000), physically possible compared to impossible movements (Stevens, Fonlupt, Shiffrar, & Decety, 2000), meaningful versus meaningless movements (Decety et al., 1997; Schultz, Imamizu, Kawato, & Frith, 2004), and when people observe transitions between meaningful actions within a larger goal‐directed activity, such as cleaning the kitchen (Zacks et al., 2001). Similarly, the pSTS is also involved in understanding and inferring actions from auditory cues alone, such as footsteps and action verbs (Bidet‐Caulet, Voisin, Bertrand, & Fonlupt, 2005; Hein & Knight, 2008). Overall, this evidence suggests that the right pSTS is involved in decoding and understanding meaningful social actions conveyed by gaze direction, body movement, and other types of goal‐directed meaningful motion, or implied by spoken words (see also Frith & Frith, 1999).

4.3. Emotion discrimination

When performing the meta‐analysis with a more lenient threshold, we observed activation in the left amygdala and the right pars opercularis during emotion discrimination, in which participants must compare a target stimulus against background noise (e.g., detect the presence or absence of a stimulus) or against other similar stimuli (e.g., are these stimuli identical or different?). The right pars opercularis has traditionally been implicated in Go/No‐Go and stop‐signal tasks, prompting hypotheses of its recruitment in cognitive and behavioral control and the inhibition of prepotent tendencies (Aron, Robbins, & Poldrack, 2004; Derrfuss, Brass, Neumann, & von Cramon, 2005; Levy & Wagner, 2011). However, a recent reinterpretation of the literature has extended the involvement of the right pars opercularis to monitoring salient features, such as identifying a target item or target dimension in a series of nontargets (Hampshire, Duncan, & Owen, 2007; Hampshire, Thompson, Duncan, & Owen, 2008, 2009; Shallice, Stuss, Alexander, Picton, & Derkzen, 2008), and detecting prelearned target objects (Hampshire et al., 2007, 2008; Linden et al., 1999). Thus, it seems that, as with the other paradigms of emotion perception, emotion discrimination is associated with brain regions involved in general cognitive functions that extend beyond basic perceptual processing.

4.4. The role of the amygdala in emotion perception

The left amygdala was recruited across all tasks of explicit emotion perception, regardless of task instructions. This is an interesting finding, since the degree of the amygdala's involvement in the explicit evaluation of emotions has been a contentious issue in affective neuroscience. On one hand, research suggests that the amygdala is recruited only in specific types of perception, such as implicit (e.g., Brück et al., 2011; LeDoux, 1998) or nonverbal (e.g., Hariri, Bookheimer, & Mazziotta, 2000) emotion perception. On the other hand, several lesion studies have pointed to a general role of the left amygdala in responding to emotional faces (Markowitsch, 1999), voices (Frühholz et al., 2015), and pictures (Gläscher & Adolphs, 2003), a role further supported by neuroimaging studies of emotion perception (Baas, Aleman, & Kahn, 2004; Sergerie, Chochol, & Armony, 2008) and emotion experience (Costanzo et al., 2015).

The left amygdala might be involved in the cognitive appraisal of emotional information, possibly via a global–local hemispheric bias (Baas et al., 2004; Cahill, 2003; Markowitsch, 1999), while the right amygdala might play an important role in the production of a general arousal level. Specifically, the right amygdala is more strongly engaged in fast and global, albeit shallow, processing of emotional content (Henke, Landis, & Markowitsch, 1993; Morris, Öhman, & Dolan, 1999), while the left amygdala is more strongly engaged in active, detailed processing (Krickl, Poser, & Markowitsch, 1987; Morris, Öhman, & Dolan, 1998). The preference for left over right amygdala activation during emotion labeling could thus be taken to indicate that these tasks require a substantial degree of local and detailed processing. For example, participants might focus on the specific visuospatial details or acoustic features that differentiate an angry expression from a neutral expression in order to perform successfully. Conversely, emotion matching might depend on both coarse and detailed processing of emotional content, thus recruiting both the right and left amygdala. Carefully designed experiments that systematically compare tasks requiring global versus local processing of emotional material are needed to test this interpretation.

4.5. A neurocognitive model of perceptual decision‐making on emotions

By synthesizing the findings of the meta‐analyses (Figure 1b), we propose a multistage neurocognitive model that outlines the general flow of information from sensory processing regions to frontal brain regions (Figure 1c). We argue for three functional principles of perceptual decisions on others' emotions. First, brain regions in the left and right hemisphere are differentially recruited depending on whether the paradigm is verbal or nonverbal in nature. The verbal task of emotion labeling broadly recruited multiple regions in the left hemisphere. These regions have long been implicated in language processing, including verbal working memory, access to semantic knowledge, reading, and naming (Goodglass, 1980; C. J. Price, 2010; Rottschy et al., 2012). The nonverbal task of emotion matching consistently recruited bilateral brain regions, resting on different cognitive mechanisms for optimal decision‐making. Instead of verbally processing the target emotion along with the two choice options in a manner similar to off‐screen emotion labeling, our findings suggest that we match emotional expressions primarily by mentalizing about the perceived individuals' mental states during their episode of emotional expression. Second, the resulting information is made available to the IFC, where a functional anterior‐to‐posterior gradient emerged in the left hemisphere. While emotional information during the nonverbal paradigm of emotion matching converges in the most posterior parts (IFJ and IFS), emotion labeling recruited the middle pars triangularis and the anterior pars orbitalis. Third, the left amygdala was the only brain structure consistently recruited by all perceptual decisions, regardless of task instructions. Unlike the right amygdala, which is more strongly engaged in fast, global but shallow processing of emotional content (Henke et al., 1993; Morris et al., 1999), the left amygdala is more engaged in focal and detailed emotional processing (Frühholz, Trost, & Grandjean, 2014; Krickl et al., 1987; Morris et al., 1998; Pannese et al., 2015).
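As a compact illustration (not part of the model itself), the proposed flow can be sketched as a directed graph whose node names follow the abbreviations of Figure 1c. The edge set below is our own simplified, assumed reading of the three principles, chosen purely for illustration; it is not an exhaustive or anatomically committed specification of the model.

```python
# Toy directed-graph sketch of the proposed information flow (cf. Figure 1c).
# Node names follow the figure's abbreviations; the edges are a simplified,
# assumed reading of the model, not an anatomical specification.
FLOW = {
    "Vis": ["FG", "Amy"],          # sensory input reaches face-sensitive and limbic areas
    "FG": ["pMTG", "IPS", "Amy"],  # association and attention-related parietal regions
    "pMTG": ["mTri", "vTri"],      # lexical-semantic route into the IFC (labeling)
    "Amy": ["dMFC", "Orb"],        # emotional relevance feeds mentalizing/semantic regions
    "dMFC": ["IFJ"],               # mental-state inference coordinates with switching (matching)
}

def downstream(region, graph=FLOW):
    """Return all regions reachable from `region` along the sketched pathways."""
    reached, stack = set(), [region]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in reached:
                reached.add(nxt)
                stack.append(nxt)
    return reached

print(sorted(downstream("Vis")))  # every region the visual route can feed
```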

We must add a note concerning possible limitations of our neurocognitive model. The model mostly hypothesizes feed‐forward mechanisms carrying information from sensory regions to higher‐level brain regions. Perceptual decision‐making, however, is unlikely to be a one‐way process, and it also includes top‐down mechanisms of prediction based on context and prior expectations (e.g., Summerfield & De Lange, 2014). While we acknowledge the importance of these top‐down predictive mechanisms, the empirical evidence for such functional processes in emotion perception is surprisingly scarce, and studies have only recently begun to investigate these mechanisms empirically (e.g., Bernstein & Yovel, 2015; Frühholz et al., 2016). We therefore refrain from explicitly incorporating such mechanisms into our current model, given the limited empirical evidence on the underlying functional and neural processes. Future studies should extend our current model by including top‐down processes of emotional predictions and expectations.

5. CONCLUSIONS, IMPLICATIONS, AND LIMITATIONS

When emotion perception consists of matching an expression against other expressions based on perceptual features (i.e., a novel facial expression against target facial expressions), we rely on a brain network that largely involves inferring the mental state underlying those expressions. Despite the artificial setting of such a paradigm, mentalization is present in many forms of emotion perception, ranging from passive observation to explicit appraisal (Dricu & Frühholz, 2016; R. L. Mitchell & Phillips, 2015). On the other hand, when prompted to perceive and name the emotion in others, we rely heavily on language processes for support. One might even argue that verbal perceptual decision‐making is synonymous with language processing: phonological coding and reading processes at large when the possible emotion labels are displayed on screen; maintenance of the emotion labels in verbal working memory throughout the experiment when they are not displayed on screen; and semantic matching between perceived expressions and emotion labels.

The present findings are partially compatible with two psychological theories of emotion. Strong appraisal theories suggest that we are capable of ascertaining the emotion of others by first inferring the appraisals associated with the perceived emotional expressions (Scherer et al., 2013; Scherer & Ellgring, 2007). Specifically, following sensory processing, we deduce the individual appraisals behind each pattern of facial muscle movements, body postures, or vocal prosody, and subsequently infer the emotion experienced by the other individual. In this view, we should observe brain regions involved in "mindreading" (i.e., the process of inferring others' thoughts, beliefs, and desires) across all perceptual decisions on emotions (e.g., Frith & Frith, 2006; Perner & Esken, 2015). We found support for the strong version of appraisal theory in our nonverbal paradigm of emotion perception, that is, emotion matching, but not during emotion labeling or emotion discrimination. Conversely, constructivism theories—particularly their strong versions—give language a crucial role in perceiving (and, to some extent, inferring) one's own and others' emotions (Barrett & Kensinger, 2010; Barrett et al., 2007; Lindquist & Gendron, 2013). In this view, brain regions pertaining to language processing should be present in all perceptual decisions on emotions. We found support for the constructivism theory of emotion in both the on‐screen and off‐screen paradigms of emotion labeling, but not for emotion matching or emotion discrimination. We should mention, however, that off‐screen labeling of emotions may still trigger a basic process of mental state inference, as suggested by the activation in the posterior superior temporal sulcus (Gergely & Csibra, 2003; Peelen, Atkinson, & Vuilleumier, 2010; Skerry & Saxe, 2014). Constructivism theories further claim that emotion perception is achieved via domain‐general cognitive processes that are not unique to emotional stimuli. We found support for this hypothesis in all the paradigms of emotion perception, including matching, labeling, and discrimination. For example, emotion matching is achieved via domain‐general processes of cognitive switching in the IFJ and eye gaze shifting in the intraparietal sulci (Clos et al., 2013; Corbetta, Patel, & Shulman, 2008). Emotion labeling is likely achieved via language processes that are not themselves unique to emotion, for example, phonological coding and the semantic retrieval of emotion concepts (Rottschy et al., 2012; Schmithorst et al., 2007). Finally, emotion discrimination activates the right pars opercularis in a similar way to discrimination between other classes of stimuli (Hampshire et al., 2007, 2008; Linden et al., 1999).

In conclusion, strong appraisal and constructivism theories of emotion each partially predict the results of our meta‐analyses. As such, our findings align more readily with recent attempts at combining multiple elements from traditionally divergent accounts of emotion perception (Moors, 2017; Nesse, 2014). Instead of singling out some causal mechanisms of emotion perception at the expense of others, it may be helpful to acknowledge that the route to emotion inference is not singular and that multiple mechanisms can exist that differ in functionality and optimality. Our results further reiterate this by showing that emotion perception paradigms are, in fact, quite heterogeneous. Depending on task instructions, perceiving agents engage different cognitive strategies that are called upon based on the situation. Clearly, more scientific progress can be achieved if predictions about emotion perception are tested on a larger range of paradigms, beyond those included in the present meta‐analyses, which can only reflect the trends in the scientific community. On a more practical level, we strongly recommend that emotion researchers acknowledge the heterogeneity of emotion perception tasks, at the very least the distinction between its two broad variants, that is, verbal and nonverbal perception. Neuropsychologists have long used sophisticated batteries of tests that tap into various facets of emotion perception with the purpose of assessing subtle neurological damage, under the assumption that different brain regions underlie different perceptual tasks (Boller & Grafman, 2000; Wilhelm et al., 2014). However, the fields of psychology and neuroscience have yet to acknowledge the functional heterogeneity of emotion perception tasks. Furthermore, when bridging results across paradigms, authors should be aware that some emotion perception tasks are not directly comparable. Instead, more circumscribed extrapolation and interpretation of results is warranted.

We must, of course, note the limitations of our study. As meta‐analyses are based on the available empirical data, their results may be affected by scientific trends and by a publication bias that disfavors null results. We tried to mitigate the publication bias in the literature by including neuroimaging results published in supplementary materials that were otherwise not reported in the main manuscripts. More importantly, as detailed elsewhere (Eickhoff & Bzdok, 2013; Rottschy et al., 2012), coordinate‐based meta‐analyses such as ours are less susceptible to publication bias than standard meta‐analytic approaches that examine effect sizes, because the assessment of spatial convergence across experiments is not affected by additionally including (observed but unpublished) null results. We are therefore confident that the validity of our results was not substantially undermined by such bias.
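To illustrate why spatial convergence is insensitive to null results, the sketch below computes a toy activation likelihood estimation (ALE) map as the probabilistic union of per‐experiment modeled activation (MA) maps. This is a deliberately simplified outline of the approach described by Eickhoff et al. (2009, 2012), not the actual GingerALE implementation; the grid size, kernel width, and peak scaling are assumed values chosen only for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ale_map(experiments, grid=(91, 109, 91), sigma_vox=2.0):
    """Toy ALE: model each experiment's foci as Gaussian probability blobs,
    then combine experiments with a probabilistic union, 1 - prod(1 - MA)."""
    ale = np.zeros(grid)
    for foci in experiments:                  # foci: list of (i, j, k) voxel indices
        ma = np.zeros(grid)
        for i, j, k in foci:
            ma[i, j, k] = 1.0
        ma = gaussian_filter(ma, sigma_vox)   # spatial uncertainty around each focus
        if ma.max() > 0:
            ma /= ma.max()                    # scale peaks to a nominal probability of 1
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)  # union across experiments
    return ale

# An experiment with no suprathreshold foci (a null result) contributes an
# all-zero MA map and leaves the ALE map unchanged -- convergence is driven
# only by where reported foci overlap.
studies = [[(45, 54, 45)], [(46, 54, 45)], []]  # third study: null result
print(ale_map(studies).max())
```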

Regarding scientific trends, our post hoc classification of emotion perception tasks resulted in four major paradigms, that is, labeling, matching, discrimination, and rating. However, many other types of emotion perception tasks exist that have not been studied so far but can be adapted to the fMRI environment (Boller & Grafman, 2000; Wilhelm et al., 2014). We thus invite future neuroimaging studies to test as many and as varied perception tasks as possible to reveal the neurocognitive processes behind them. Following our post hoc classification of studies, labeling emotions was by far the most used emotion perception paradigm, whereas emotion rating and discrimination were the least used paradigms. Unfortunately, the numbers of experiments for emotion rating (n = 10) and emotion discrimination (n = 19) were below the minimum generally recommended for neuroimaging meta‐analyses (Eickhoff et al., 2016). As such, the rate of false positives may be high, and these results should be interpreted with caution. Furthermore, we were unable to properly compare the different stimulus modalities. The number of experiments per paradigm, emotion construct (e.g., joy, anger, fear), and modality (i.e., faces, voices, body postures) did not reach the minimum recommended for drawing meaningful conclusions (Eickhoff et al., 2016). Instead, we looked at the commonalities and distinctions between paradigms of emotion perception, while trying to mitigate the noise introduced by the different perceptual modalities and the various emotion constructs by applying a more stringent statistical analysis than is typically employed in neuroimaging meta‐analyses, that is, a voxel‐level cluster‐forming threshold of p < .0001 and a cluster‐level threshold of p < .01 (Eickhoff et al., 2012). We therefore encourage future empirical studies to simultaneously compare several emotion perception tasks on a wide range of stimuli (i.e., faces, voices, and body postures) in a full‐factorial design to reveal differences and commonalities between tasks and classes of stimuli.
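The two‐level statistical threshold mentioned above can be illustrated schematically: voxels first survive the cluster‐forming threshold, are then grouped into contiguous clusters, and only clusters exceeding a critical size derived from a null distribution are retained. The sketch below assumes that `null_cluster_sizes` has already been obtained (in practice, from permutations); it illustrates the logic rather than reproducing the ALE software's exact procedure.

```python
import numpy as np
from scipy.ndimage import label

def cluster_threshold(p_map, null_cluster_sizes, voxel_p=1e-4, cluster_p=0.01):
    """Keep only contiguous clusters of voxels with p < voxel_p whose size
    exceeds the (1 - cluster_p) quantile of a null distribution of sizes."""
    mask = p_map < voxel_p                    # cluster-forming threshold
    labeled, n_clusters = label(mask)         # 3D connected components
    min_size = np.quantile(null_cluster_sizes, 1.0 - cluster_p)
    keep = np.zeros_like(mask)
    for c in range(1, n_clusters + 1):
        cluster = labeled == c
        if cluster.sum() >= min_size:
            keep |= cluster
    return keep                               # boolean map of surviving clusters

# Example with random data and an assumed (hypothetical) null distribution.
rng = np.random.default_rng(0)
p_map = rng.uniform(size=(20, 20, 20))
surviving = cluster_threshold(p_map, null_cluster_sizes=rng.integers(1, 5, 1000))
print(int(surviving.sum()), "voxels survive")
```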

ACKNOWLEDGMENTS

Support was provided by a grant of the Swiss National Science Foundation (SNSF PP00P1_157409/1 and PP00P1_183711/1) to S.F. We thank Marine Bobin and Caitlyn Trevor for helpful comments on the manuscript.

Notes

Dricu M, Frühholz S. A neurocognitive model of perceptual decision‐making on emotional signals. Hum Brain Mapp. 2020;41:1532–1556. 10.1002/hbm.24893

Funding information Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung, Grant/Award Numbers: PP00P1_157409/1, PP00P1_183711/1

DATA AVAILABILITY STATEMENT

The data of the present meta‐analysis are available from the corresponding author upon reasonable request.

REFERENCES

  • Adair, J. C. , Schwartz, R. L. , Williamson, D. , Raymer, A. M. , & Heilman, K. M. (1999). Articulatory processes and phonologic dyslexia. Neuropsychiatry, Neuropsychology, and Behavioral Neurology, 12(2), 121–127. [PubMed] [Google Scholar]
  • Adams, R. B. , Gordon, H. L. , Baird, A. A. , Ambady, N. , & Kleck, R. E. (2003). Effects of gaze on amygdala sensitivity to anger and fear faces. Science, 300(5625), 1536–1536. [PubMed] [Google Scholar]
  • Adolphs, R. , Damasio, H. , Tranel, D. , Cooper, G. , & Damasio, A. R. (2000). A role for somatosensory cortices in the visual recognition of emotion as revealed by three‐dimensional lesion mapping. The Journal of Neuroscience, 20(7), 2683–2690. [PMC free article] [PubMed] [Google Scholar]
  • Alba‐Ferrara, L. , Hausmann, M. , Mitchell, R. L. , & Weis, S. (2011). The neural correlates of emotional prosody comprehension: Disentangling simple from complex emotion. PLoS One, 6(12), e28701–e28701. [PMC free article] [PubMed] [Google Scholar]
  • Allison, T. , Puce, A. , & McCarthy, G. (2000). Social perception from visual cues: Role of the STS region. Trends in Cognitive Sciences, 4(7), 267–278. [PubMed] [Google Scholar]
  • Amodio, D. M. , & Frith, C. D. (2006). Meeting of minds: The medial frontal cortex and social cognition. Nature Reviews Neuroscience, 7(4), 268–277. [PubMed] [Google Scholar]
  • Amunts, K. , Lenzen, M. , Friederici, A. D. , Schleicher, A. , Morosan, P. , Palomero‐Gallagher, N. , & Zilles, K. (2010). Broca's region: Novel organizational principles and multiple receptor mapping. PLoS Biology, 8(9), e1000489. [PMC free article] [PubMed] [Google Scholar]
  • Arbib, M. A. (2017). Dorsal and ventral streams in the evolution of the language‐ready brain: Linking language to the world. Journal of Neurolinguistics, 43, 228–253. [Google Scholar]
  • Arcurio, L. R. , Gold, J. M. , & James, T. W. (2012). The response of face‐selective cortex with single face parts and part combinations. Neuropsychologia, 50(10), 2454–2459. [PMC free article] [PubMed] [Google Scholar]
  • Aron, A. R. , Robbins, T. W. , & Poldrack, R. A. (2004). Inhibition and the right inferior frontal cortex. Trends in Cognitive Sciences, 8(4), 170–177. [PubMed] [Google Scholar]
  • Atkinson, A. P. , Vuong, Q. C. , & Smithson, H. E. (2012). Modulation of the face‐and body‐selective visual regions by the motion and emotion of point‐light face and body stimuli. NeuroImage, 59(2), 1700–1712. [PubMed] [Google Scholar]
  • Aue, T. , Lavelle, L. A. , & Cacioppo, J. T. (2009). Great expectations: What can fMRI research tell us about psychological phenomena? International Journal of Psychophysiology, 73(1), 10–16. [PubMed] [Google Scholar]
  • Baas, D. , Aleman, A. , & Kahn, R. S. (2004). Lateralization of amygdala activation: A systematic review of functional neuroimaging studies. Brain Research Reviews, 45(2), 96–103. [PubMed] [Google Scholar]
  • Barrett, L. F. (2006). Solving the emotion paradox: Categorization and the experience of emotion. Personality and Social Psychology Review, 10(1), 20–46. [PubMed] [Google Scholar]
  • Barrett, L. F. , & Kensinger, E. A. (2010). Context is routinely encoded during emotion perception. Psychological Science, 21(4), 595–599. [PMC free article] [PubMed] [Google Scholar]
  • Barrett, L. F. , Lindquist, K. A. , & Gendron, M. (2007). Language as context for the perception of emotion. Trends in Cognitive Sciences, 11(8), 327–332. [PMC free article] [PubMed] [Google Scholar]
  • Barrett, L. F. , Mesquita, B. , & Gendron, M. (2011). Context in emotion perception. Current Directions in Psychological Science, 20(5), 286–290. [Google Scholar]
  • Bates, E. , Reilly, J. , Wulfeck, B. , Dronkers, N. , Opie, M. , Fenson, J. , … Herbst, K. (2001). Differential effects of unilateral lesions on language production in children and adults. Brain and Language, 79(2), 223–265. [PubMed] [Google Scholar]
  • Belin, P. , Fecteau, S. , & Bedard, C. (2004). Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8(3), 129–135. [PubMed] [Google Scholar]
  • Bernstein, M. , & Yovel, G. (2015). Two neural pathways of face processing: A critical evaluation of current models. Neuroscience & Biobehavioral Reviews, 55, 536–546. [PubMed] [Google Scholar]
  • Bidet‐Caulet, A. , Voisin, J. , Bertrand, O. , & Fonlupt, P. (2005). Listening to a walking human activates the temporal biological motion area. NeuroImage, 28(1), 132–139. [PubMed] [Google Scholar]
  • Bimler, D. , & Kirkland, J. (2001). Categorical perception of facial expressions of emotion: Evidence from multidimensional scaling. Cognition & Emotion, 15(5), 633–658. [Google Scholar]
  • Bokde, A. L. , Tagamets, M.‐A. , Friedman, R. B. , & Horwitz, B. (2001). Functional interactions of the inferior frontal cortex during the processing of words and word‐like stimuli. Neuron, 30(2), 609–617. [PubMed] [Google Scholar]
  • Boller, F. , & Grafman, J. (2000). Handbook of neuropsychology (Vol. 4). Oxford, United Kingdom: Gulf Professional Publishing. [Google Scholar]
  • Booth, J. R. , Lu, D. , Burman, D. D. , Chou, T.‐L. , Jin, Z. , Peng, D.‐L. , … Liu, L. (2006). Specialization of phonological and semantic processing in Chinese word reading. Brain Research, 1071(1), 197–207. [PMC free article] [PubMed] [Google Scholar]
  • Brück, C. , Kreifelts, B. , & Wildgruber, D. (2011). Emotional voices in context: A neurobiological model of multimodal affective information processing. Physics of Life Reviews, 8(4), 383–403. [PubMed] [Google Scholar]
  • Buchanan, T. W. , Lutz, K. , Mirzazade, S. , Specht, K. , Shah, N. J. , Zilles, K. , & Jäncke, L. (2000). Recognition of emotional prosody and verbal components of spoken language: An fMRI study. Cognitive Brain Research, 9(3), 227–238. [PubMed] [Google Scholar]
  • Burklund, L. J. , Craske, M. G. , Taylor, S. E. , & Lieberman, M. D. (2015). Altered emotion regulation capacity in social phobia as a function of comorbidity. Social Cognitive and Affective Neuroscience, 10(2), 199–208. [PMC free article] [PubMed] [Google Scholar]
  • Butterworth, B. (1992). Disorders of phonological encoding. Cognition, 42(1–3), 261–286. [PubMed] [Google Scholar]
  • Cahill, L. (2003). Sex‐related influences on the neurobiology of emotionally influenced memory. Annals of the New York Academy of Sciences, 985(1), 163–173. [PubMed] [Google Scholar]
  • Campanella, S. , Quinet, P. , Bruyer, R. , Crommelinck, M. , & Guerit, J.‐M. (2002). Categorical perception of happiness and fear facial expressions: An ERP study. Journal of Cognitive Neuroscience, 14(2), 210–227. [PubMed] [Google Scholar]
  • Castelli, F. , Happé, F. , Frith, U. , & Frith, C. (2000). Movement and mind: A functional imaging study of perception and interpretation of complex intentional movement patterns. NeuroImage, 12(3), 314–325. [PubMed] [Google Scholar]
  • Ceravolo, L. , Fruhholz, S. , & Grandjean, D. (2016). Proximal vocal threat recruits the right voice‐sensitive auditory cortex. Social Cognitive and Affective Neuroscience, 11(5), 793–802. 10.1093/scan/nsw004 [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Cheal, J. L. , & Rutherford, M. (2010). Mapping emotion category boundaries using a visual expectation paradigm. Perception, 39(11), 1514. [PubMed] [Google Scholar]
  • Cheal, J. L. , & Rutherford, M. (2015). Investigating the category boundaries of emotional facial expressions: Effects of individual participant and model and the stability over time. Personality and Individual Differences, 74, 146–152. [Google Scholar]
  • Clos, M. , Amunts, K. , Laird, A. R. , Fox, P. T. , & Eickhoff, S. B. (2013). Tackling the multifunctional nature of Broca's region meta‐analytically: Co‐activation‐based parcellation of area 44. NeuroImage, 83, 174–188. [PMC free article] [PubMed] [Google Scholar]
  • Concina, G. , Renna, A. , Grosso, A. , & Sacchetti, B. (2019). The auditory cortex and the emotional valence of sounds. Neuroscience & Biobehavioral Reviews, 98, 256–264. [PubMed] [Google Scholar]
  • Coppin, G. , & Sander, D. (2013). Contemporary theories and concepts in the psychology of emotions. Emotion‐Oriented Systems, 1, 1–31. [Google Scholar]
  • Corbetta, M. , Patel, G. , & Shulman, G. L. (2008). The reorienting system of the human brain: From environment to theory of mind. Neuron, 58(3), 306–324. [PMC free article] [PubMed] [Google Scholar]
  • Costafreda, S. G. , Fu, C. H. , Lee, L. , Everitt, B. , Brammer, M. J. , & David, A. S. (2006). A systematic review and quantitative appraisal of fMRI studies of verbal fluency: Role of the left inferior frontal gyrus. Human Brain Mapping, 27(10), 799–810. [PMC free article] [PubMed] [Google Scholar]
  • Costanzo, E. Y. , Villarreal, M. , Drucaroff, L. J. , Ortiz‐Villafañe, M. , Castro, M. N. , Goldschmidt, M. , … Brusco, L. I. (2015). Hemispheric specialization in affective responses, cerebral dominance for language, and handedness: Lateralization of emotion, language, and dexterity. Behavioural Brain Research, 288, 11–19. [PubMed] [Google Scholar]
  • Critchley, H. , Daly, E. , Phillips, M. , Brammer, M. , Bullmore, E. , Williams, S. , … Murphy, D. (2000). Explicit and implicit neural mechanisms for processing of social information from facial expressions: A functional magnetic resonance imaging study. Human Brain Mapping, 9(2), 93–105. [PMC free article] [PubMed] [Google Scholar]
  • Cutting, L. , Clements, A. , Courtney, S. , Rimrodt, S. , Schafer, J. , Bisesi, J. , … Pugh, K. (2006). Differential components of sentence comprehension: Beyond single word reading and memory. NeuroImage, 29(2), 429–438. [PubMed] [Google Scholar]
  • Dailey, M. N. , Cottrell, G. W. , Padgett, C. , & Adolphs, R. (2002). EMPATH: A neural network that categorizes facial expressions. Journal of Cognitive Neuroscience, 14(8), 1158–1173. [PubMed] [Google Scholar]
  • Dal Monte, O. , Schintu, S. , Pardini, M. , Berti, A. , Wassermann, E. M. , Grafman, J. , & Krueger, F. (2014). The left inferior frontal gyrus is crucial for reading the mind in the eyes: Brain lesion evidence. Cortex, 58, 9–17. [PubMed] [Google Scholar]
  • David, A. S. , & Senior, C. (2000). Implicit motion and the brain. Trends in Cognitive Sciences, 4(8), 293–294. [PubMed] [Google Scholar]
  • de Gelder, B. , De Borst, A. , & Watson, R. (2015). The perception of emotion in body expressions. Wiley Interdisciplinary Reviews: Cognitive Science, 6(2), 149–158. [PubMed] [Google Scholar]
  • Decety, J. , Grezes, J. , Costes, N. , Perani, D. , Jeannerod, M. , Procyk, E. , … Fazio, F. (1997). Brain activity during observation of actions. Influence of action content and subject's strategy. Brain, 120(10), 1763–1777. [PubMed] [Google Scholar]
  • Deonna, J. A. , & Scherer, K. R. (2010). The case of the disappearing intentional object: Constraints on a definition of emotion. Emotion Review, 2(1), 44–52. [Google Scholar]
  • Derntl, B. , Habel, U. , Robinson, S. , Windischberger, C. , Kryspin‐Exner, I. , Gur, R. C. , & Moser, E. (2012). Culture but not gender modulates amygdala activation during explicit emotion recognition. BMC Neuroscience, 13(1), 54. [PMC free article] [PubMed] [Google Scholar]
  • Derrfuss, J. , Brass, M. , Neumann, J. , & von Cramon, D. Y. (2005). Involvement of the inferior frontal junction in cognitive control: Meta‐analyses of switching and Stroop studies. Human Brain Mapping, 25(1), 22–34. [PMC free article] [PubMed] [Google Scholar]
  • Devlin, J. T. , Matthews, P. M. , & Rushworth, M. F. (2003). Semantic processing in the left inferior prefrontal cortex: A combined functional magnetic resonance imaging and transcranial magnetic stimulation study. Journal of Cognitive Neuroscience, 15(1), 71–84. [PubMed] [Google Scholar]
  • DiGirolamo, M. A. , & Russell, J. A. (2017). The emotion seen in a face can be a methodological artifact: The process of elimination hypothesis. Emotion, 17(3), 538. [PubMed] [Google Scholar]
  • Döhnel, K. , Schuwerk, T. , Meinhardt, J. , Sodian, B. , Hajak, G. , & Sommer, M. (2012). Functional activity of the right temporo‐parietal junction and of the medial prefrontal cortex associated with true and false belief reasoning. NeuroImage, 60(3), 1652–1661. [PubMed] [Google Scholar]
  • Dricu, M. , & Frühholz, S. (2016). Perceiving emotional expressions in others: Activation likelihood estimation meta‐analyses of explicit evaluation, passive perception and incidental perception of emotions. Neuroscience & Biobehavioral Reviews, 71, 810–828. [PubMed] [Google Scholar]
  • Eckert, M. A. , Leonard, C. M. , Richards, T. L. , Aylward, E. H. , Thomson, J. , & Berninger, V. W. (2003). Anatomical correlates of dyslexia: Frontal and cerebellar findings. Brain, 126(2), 482–494. [PubMed] [Google Scholar]
  • Eickhoff, S. B. , & Bzdok, D. (2013). Meta‐analyses in basic and clinical neuroscience: State of the art and perspective In S. Ulmer & O. Jansen (Eds): fMRI basics and clinical applications (pp. 77–87). Berlin, Heidelberg: Springer. [Google Scholar]
  • Eickhoff, S. B. , Bzdok, D. , Laird, A. R. , Kurth, F. , & Fox, P. T. (2012). Activation likelihood estimation meta‐analysis revisited. NeuroImage, 59(3), 2349–2361. [PMC free article] [PubMed] [Google Scholar]
  • Eickhoff, S. B. , Bzdok, D. , Laird, A. R. , Roski, C. , Caspers, S. , Zilles, K. , & Fox, P. T. (2011). Co‐activation patterns distinguish cortical modules, their connectivity and functional differentiation. NeuroImage, 57(3), 938–949. [PMC free article] [PubMed] [Google Scholar]
  • Eickhoff, S. B. , Laird, A. R. , Grefkes, C. , Wang, L. E. , Zilles, K. , & Fox, P. T. (2009). Coordinate‐based activation likelihood estimation meta‐analysis of neuroimaging data: A random‐effects approach based on empirical estimates of spatial uncertainty. Human Brain Mapping, 30(9), 2907–2926. [PMC free article] [PubMed] [Google Scholar]
  • Eickhoff, S. B. , Nichols, T. E. , Laird, A. R. , Hoffstaedter, F. , Amunts, K. , Fox, P. T. , … Eickhoff, C. R. (2016). Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation. NeuroImage, 137, 70–85. [PMC free article] [PubMed] [Google Scholar]
  • Ekman, P. (1992). An argument for basic emotions. Cognition & Emotion, 6(3–4), 169–200. [Google Scholar]
  • Ekman, P. , & Cordaro, D. (2011). What is meant by calling emotions basic. Emotion Review, 3(4), 364–370. [Google Scholar]
  • Ekman, P. , Friesen, W. V. , O'Sullivan, M. , Chan, A. , Diacoyanni‐Tarlatzis, I. , Heider, K. , … Ricci‐Bitti, P. E. (1987). Universals and cultural differences in the judgments of facial expressions of emotion. Journal of Personality and Social Psychology, 53(4), 712. [PubMed] [Google Scholar]
  • Elliott, R. , Zahn, R. , Deakin, J. W. , & Anderson, I. M. (2011). Affective cognition and its disruption in mood disorders. Neuropsychopharmacology, 36(1), 153. [PMC free article] [PubMed] [Google Scholar]
  • Etcoff, N. L. , & Magee, J. J. (1992). Categorical perception of facial expressions. Cognition, 44(3), 227–240. [PubMed] [Google Scholar]
  • Evans, A. , Collins, D. , & Milner, B. (1992). An MRI‐based stereotactic atlas from 250 young normal subjects. Paper presented at the Society for Neuroscience Abstracts, Los Angeles, CA.
  • Faucher, L. (2013). Comment: Constructionisms? Emotion Review, 5(4), 374–378. [Google Scholar]
  • Fisher, J. E. , Cortes, C. R. , Griego, J. A. , & Tagamets, M. A. (2012). Repetition of letter strings leads to activation of and connectivity with word‐related regions. NeuroImage, 59(3), 2839–2849. [PMC free article] [PubMed] [Google Scholar]
  • Ford, A. , McGregor, K. M. , Case, K. , Crosson, B. , & White, K. D. (2010). Structural connectivity of Broca's area and medial frontal cortex. NeuroImage, 52(4), 1230–1237. [PMC free article] [PubMed] [Google Scholar]
  • Foundas, A. L. , Daniels, S. K. , & Vasterling, J. J. (1998). Anomia: Case studies with lesion localization. Neurocase, 4(1), 35–43. [Google Scholar]
  • Friederici, A. D. (2012). The cortical language circuit: From auditory perception to sentence comprehension. Trends in Cognitive Sciences, 16(5), 262–268. [PubMed] [Google Scholar]
  • Frith, C. D. , & Frith, U. (1999). Interacting minds—A biological basis. Science, 286(5445), 1692–1695. [PubMed] [Google Scholar]
  • Frith, C. D. , & Frith, U. (2006). The neural basis of mentalizing. Neuron, 50(4), 531–534. [PubMed] [Google Scholar]
  • Frühholz, S. , & Belin, P. (2018). The science of voice perception In The Oxford handbook of voice perception (Vol. 1). Oxford, Oxford University Press. [Google Scholar]
  • Frühholz, S. , Fehr, T. , & Herrmann, M. (2009). Interference control during recognition of facial affect enhances the processing of expression specific properties—An event‐related fMRI study. Brain Research, 1269, 143–157. [PubMed] [Google Scholar]
  • Fruhholz, S. , Godde, B. , Lewicki, P. , Herzmann, C. , & Herrmann, M. (2011). Face recognition under ambiguous visual stimulation: fMRI correlates of "encoding styles". Human Brain Mapping, 32(10), 1750–1761. 10.1002/hbm.21144 [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Frühholz, S. , & Grandjean, D. (2013a). Amygdala subregions differentially respond and rapidly adapt to threatening voices. Cortex, 49(5), 1394–1403. [PubMed] [Google Scholar]
  • Frühholz, S. , & Grandjean, D. (2013b). Processing of emotional vocalizations in bilateral inferior frontal cortex. Neuroscience & Biobehavioral Reviews, 37(10), 2847–2855. [PubMed] [Google Scholar]
  • Frühholz, S. , Hofstetter, C. , Cristinzio, C. , Saj, A. , Seeck, M. , Vuilleumier, P. , & Grandjean, D. (2015). Asymmetrical effects of unilateral right or left amygdala damage on auditory cortical processing of vocal emotions. Proceedings of the National Academy of Sciences of the United States of America, 112(5), 1583–1588. [PMC free article] [PubMed] [Google Scholar]
  • Fruhholz, S. , Klaas, H. S. , Patel, S. , & Grandjean, D. (2015). Talking in fury: The cortico‐subcortical network underlying angry vocalizations. Cerebral Cortex, 25(9), 2752–2762. 10.1093/cercor/bhu074 [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Fruhholz, S. , & Staib, M. (2017). Neurocircuitry of impaired affective sound processing: A clinical disorders perspective. Neuroscience and Biobehavioral Reviews, 83, 516–524. 10.1016/j.neubiorev.2017.09.009 [PubMed] [CrossRef] [Google Scholar]
  • Frühholz, S. , Trost, W. , & Grandjean, D. (2014). The role of the medial temporal limbic system in processing emotions in voice and music. Progress in Neurobiology, 123, 1–17. [PubMed] [Google Scholar]
  • Fruhholz, S. , Trost, W. , & Grandjean, D. (2016). Whispering ‐ The hidden side of auditory communication. NeuroImage, 142, 602–612. 10.1016/j.neuroimage.2016.08.023 [PubMed] [CrossRef] [Google Scholar]
  • Frühholz, S. , Trost, W. , & Kotz, S. A. (2016). The sound of emotions—Towards a unifying neural network perspective of affective sound processing. Neuroscience & Biobehavioral Reviews, 68, 96–110. [PubMed] [Google Scholar]
  • Fruhholz, S. , van der Zwaag, W. , Saenz, M. , Belin, P. , Schobert, A. K. , Vuilleumier, P. , & Grandjean, D. (2016). Neural decoding of discriminative auditory object features depends on their socio‐affective valence. Social Cognitive and Affective Neuroscience, 11(10), 1638–1649. 10.1093/scan/nsw066 [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Frye, R. E. , Wu, M.‐H. , Liederman, J. , & McGraw Fisher, J. (2010). Greater pre‐stimulus effective connectivity from the left inferior frontal area to other areas is associated with better phonological decoding in dyslexic readers. Frontiers in Systems Neuroscience, 4, 156. [PMC free article] [PubMed] [Google Scholar]
  • Fugate, J. (2013). Categorical perception for emotional faces. Emotion Review, 5(1), 84–89. [PMC free article] [PubMed] [Google Scholar]
  • Fujimura, T. , Matsuda, Y.‐T. , Katahira, K. , Okada, M. , & Okanoya, K. (2012). Categorical and dimensional perceptions in decoding emotional facial expressions. Cognition & Emotion, 26(4), 587–601. [PMC free article] [PubMed] [Google Scholar]
  • Fusar‐Poli, P. , Placentino, A. , Carletti, F. , Landi, P. , & Abbamonte, M. (2009). Functional atlas of emotional faces processing: A voxel‐based meta‐analysis of 105 functional magnetic resonance imaging studies. Journal of Psychiatry & Neuroscience, 34(6), 418. [PMC free article] [PubMed] [Google Scholar]
  • Gallagher, H. L. , & Frith, C. D. (2004). Dissociable neural pathways for the perception and recognition of expressive and instrumental gestures. Neuropsychologia, 42(13), 1725–1736. [PubMed] [Google Scholar]
  • Gallagher, H. L. , Happé, F. , Brunswick, N. , Fletcher, P. C. , Frith, U. , & Frith, C. D. (2000). Reading the mind in cartoons and stories: An fMRI study of ‘theory of mind’ in verbal and nonverbal tasks. Neuropsychologia, 38(1), 11–21. [PubMed] [Google Scholar]
  • Gennari, S. P. , MacDonald, M. C. , Postle, B. R. , & Seidenberg, M. S. (2007). Context‐dependent interpretation of words: Evidence for interactive neural processes. NeuroImage, 35(3), 1278–1286. [PMC free article] [PubMed] [Google Scholar]
  • Gergely, G. , & Csibra, G. (2003). Teleological reasoning in infancy: The naıve theory of rational action. Trends in Cognitive Sciences, 7(7), 287–292. [PubMed] [Google Scholar]
  • Gläscher, J. , & Adolphs, R. (2003). Processing of the arousal of subliminal and supraliminal emotional stimuli by the human amygdala. The Journal of Neuroscience, 23(32), 10274–10282. [PMC free article] [PubMed] [Google Scholar]
  • Goldstone, R. L. , & Hendrickson, A. T. (2010). Categorical perception. Wiley Interdisciplinary Reviews: Cognitive Science, 1(1), 69–78. [PubMed] [Google Scholar]
  • Goodale, M. A. , Króliczak, G. , & Westwood, D. A. (2005). Dual routes to action: Contributions of the dorsal and ventral streams to adaptive behavior. Progress in Brain Research, 149, 269–283. [PubMed] [Google Scholar]
  • Goodale, M. A. , & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–25. [PubMed] [Google Scholar]
  • Goodale, M. A. , & Westwood, D. A. (2004). An evolving view of duplex vision: Separate but interacting cortical pathways for perception and action. Current Opinion in Neurobiology, 14(2), 203–211. [PubMed] [Google Scholar]
  • Goodale, M. A. , Westwood, D. A. , & Milner, A. D. (2004). Two distinct modes of control for object‐directed action. Progress in Brain Research, 144, 131–144. [PubMed] [Google Scholar]
  • Goodglass, H. (1980). Disorders of naming following brain injury: Observation of the effects of brain injury adds another dimension to our understanding of the relations between neurological and psychological factors in the naming process. American Scientist, 68(6), 647–655. [PubMed] [Google Scholar]
  • Gottfried, J. A. , O'Doherty, J. , & Dolan, R. J. (2003). Encoding predictive reward value in human amygdala and orbitofrontal cortex. Science, 301(5636), 1104–1107. [PubMed] [Google Scholar]
  • Gough, P. M. , Nobre, A. C. , & Devlin, J. T. (2005). Dissociating linguistic processes in the left inferior frontal cortex with transcranial magnetic stimulation. The Journal of Neuroscience, 25(35), 8010–8016. [PMC free article] [PubMed] [Google Scholar]
  • Greenlee, J. D. , Oya, H. , Kawasaki, H. , Volkov, I. O. , Severson, M. A. , Howard, M. A. , & Brugge, J. F. (2007). Functional connections within the human inferior frontal gyrus. Journal of Comparative Neurology, 503(4), 550–559. [PubMed] [Google Scholar]
  • Grill‐Spector, K. , & Weiner, K. S. (2014). The functional architecture of the ventral temporal cortex and its role in categorization. Nature Reviews Neuroscience, 15(8), 536. [PMC free article] [PubMed] [Google Scholar]
  • Gur, R. E. , Loughead, J. , Kohler, C. G. , Elliott, M. A. , Lesko, K. , Ruparel, K. , … Gur, R. C. (2007). Limbic activation associated with misidentification of fearful faces and flat affect in schizophrenia. Archives of General Psychiatry, 64(12), 1356–1366. [PubMed] [Google Scholar]
  • Hamberger, M. J. , Habeck, C. G. , Pantazatos, S. P. , Williams, A. C. , & Hirsch, J. (2014). Shared space, separate processes: Neural activation patterns for auditory description and visual object naming in healthy adults. Human Brain Mapping, 35(6), 2507–2520. [PMC free article] [PubMed] [Google Scholar]
  • Hampshire, A. , Duncan, J. , & Owen, A. M. (2007). Selective tuning of the blood oxygenation level‐dependent response during simple target detection dissociates human frontoparietal subregions. Journal of Neuroscience, 27(23), 6219–6223. [PMC free article] [PubMed] [Google Scholar]
  • Hampshire, A. , Thompson, R. , Duncan, J. , & Owen, A. M. (2008). The target selective neural response—Similarity, ambiguity, and learning effects. PLoS One, 3(6), e2520. [PMC free article] [PubMed] [Google Scholar]
  • Hampshire, A. , Thompson, R. , Duncan, J. , & Owen, A. M. (2009). Selective tuning of the right inferior frontal gyrus during target detection. Cognitive, Affective, & Behavioral Neuroscience, 9(1), 103–112. [PMC free article] [PubMed] [Google Scholar]
  • Harding, I. H. , Yücel, M. , Harrison, B. J. , Pantelis, C. , & Breakspear, M. (2015). Effective connectivity within the frontoparietal control network differentiates cognitive control and working memory. NeuroImage, 106, 144–153. [PubMed] [Google Scholar]
  • Hareli, S. , Elkabetz, S. , & Hess, U. (2019). Drawing inferences from emotion expressions: The role of situative informativeness and context. Emotion, 19(2), 200. [PubMed] [Google Scholar]
  • Hariri, A. R. , Bookheimer, S. Y. , & Mazziotta, J. C. (2000). Modulating emotional responses: Effects of a neocortical network on the limbic system. Neuroreport, 11(1), 43–48. [PubMed] [Google Scholar]
  • Hariri, A. R. , Mattay, V. S. , Tessitore, A. , Fera, F. , Smith, W. G. , & Weinberger, D. R. (2002). Dextroamphetamine modulates the response of the human amygdala. Neuropsychopharmacology, 27(6), 1036–1040. [PubMed] [Google Scholar]
  • Harnad, S. (1987). Psychophysical and cognitive aspects of categorical perception: A critical overview In Harnad, S. (ed.), Categorical perception: The groundwork of cognition (pp. 1–25). New York: Cambridge University Press. [Google Scholar]
  • Hart, A. J. , Whalen, P. J. , Shin, L. M. , McInerney, S. C. , Fischer, H. , & Rauch, S. L. (2000). Differential response in the human amygdala to racial outgroup vs ingroup face stimuli. Neuroreport, 11(11), 2351–2354. [PubMed] [Google Scholar]
  • Hauser, C. K. , & Salinas, E. (2014). Perceptual decision making. In: Jaeger D., Jung R. (eds) Encyclopedia of Computational Neuroscience. Springer, New York, NY.
  • Haxby, J. V. , Hoffman, E. A. , & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4(6), 223–233. [PubMed] [Google Scholar]
  • Haxby, J. V. , Hoffman, E. A. , & Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biological Psychiatry, 51(1), 59–67. [PubMed] [Google Scholar]
  • Hebart, M. N. , & Hesselmann, G. (2012). What visual information is processed in the human dorsal stream? Journal of Neuroscience, 32(24), 8107–8109. [PMC free article] [PubMed] [Google Scholar]
  • Heekeren, H. R. , Marrett, S. , Bandettini, P. A. , & Ungerleider, L. G. (2004). A general mechanism for perceptual decision‐making in the human brain. Nature, 431(7010), 859–862. [PubMed] [Google Scholar]
  • Heekeren, H. R. , Marrett, S. , & Ungerleider, L. G. (2008). The neural systems that mediate human perceptual decision making. Nature Reviews Neuroscience, 9(6), 467–479. [PubMed] [Google Scholar]
  • Hein, G. , & Knight, R. T. (2008). Superior temporal sulcus—It's my area: Or is it? Journal of Cognitive Neuroscience, 20(12), 2125–2136. [PubMed] [Google Scholar]
  • Henke, K. , Landis, T. , & Markowitsch, H. J. (1993). Subliminal perception of pictures in the right hemisphere. Consciousness and Cognition, 2(3), 225–236. [Google Scholar]
  • Hess, U. , & Hareli, S. (2017). The social signal value of emotions: The role of contextual factors in social inferences drawn from emotion displays In James R. A. & Fernandez‐Dols J. M. (Eds.), The science of facial expression (pp. 375–393). New York, NY: Oxford University Press. [Google Scholar]
  • Holle, H. , Obleser, J. , Rueschemeyer, S.‐A. , & Gunter, T. C. (2010). Integration of iconic gestures and speech in left superior temporal areas boosts speech comprehension under adverse listening conditions. NeuroImage, 49(1), 875–884. [PubMed] [Google Scholar]
  • Hooker, C. I. , Paller, K. A. , Gitelman, D. R. , Parrish, T. B. , Mesulam, M.‐M. , & Reber, P. J. (2003). Brain networks for analyzing eye gaze. Cognitive Brain Research, 17(2), 406–418. [PMC free article] [PubMed] [Google Scholar]
  • Jaywant, A. , & Pell, M. D. (2012). Categorical processing of negative emotions from speech prosody. Speech Communication, 54(1), 1–10. [Google Scholar]
  • Johnston, P. , Mayes, A. , Hughes, M. , & Young, A. W. (2013). Brain networks subserving the evaluation of static and dynamic facial expressions. Cortex, 49(9), 2462–2472. [PubMed] [Google Scholar]
  • Kay, J. , & Ellis, A. (1987). A cognitive neuropsychological case study of anomia. Brain, 110(3), 613–629. [PubMed] [Google Scholar]
  • Kemmerer, D. , Rudrauf, D. , Manzel, K. , & Tranel, D. (2012). Behavioral patterns and lesion sites associated with impaired processing of lexical and conceptual knowledge of actions. Cortex, 48(7), 826–848. [PMC free article] [PubMed] [Google Scholar]
  • Kim, C. , Cilles, S. E. , Johnson, N. F. , & Gold, B. T. (2012). Domain general and domain preferential brain regions associated with different types of task switching: A meta‐analysis. Human Brain Mapping, 33(1), 130–142. [PMC free article] [PubMed] [Google Scholar]
  • Kim, J. W. , Kim, J. J. , Jeong, B. S. , Ki, S. W. , Im, D.‐M. , Lee, S. J. , & Lee, H. S. (2005). Neural mechanism for judging the appropriateness of facial affect. Cognitive Brain Research, 25(3), 659–667. [PubMed] [Google Scholar]
  • Kitada, R. , Johnsrude, I. S. , Kochiyama, T. , & Lederman, S. J. (2010). Brain networks involved in haptic and visual identification of facial expressions of emotion: An fMRI study. NeuroImage, 49(2), 1677–1689. [PubMed] [Google Scholar]
  • Korb, S. , Frühholz, S. , & Grandjean, D. (2015). Reappraising the voices of wrath. Social Cognitive and Affective Neuroscience, 10(12), 1644–1660. 10.1093/scan/nsv051 [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Korolkova, O. A. (2014). Categorical perception of facial expressions is not a homogeneous effect. Paper presented at the 36th Annual Meeting of the Cognitive Science Society (Cognitive Science Meets Artificial Intelligence: Human and Artificial Agents in Interactive Contexts), Quebec City, Canada. [Google Scholar]
  • Kotsoni, E. , de Haan, M. , & Johnson, M. H. (2001). Categorical perception of facial expressions by 7‐month‐old infants. Perception, 30(9), 1115–1126. [PubMed] [Google Scholar]
  • Kotz, S. A. , Cappa, S. F. , von Cramon, D. Y. , & Friederici, A. D. (2002). Modulation of the lexical–semantic network by auditory semantic priming: An event‐related functional MRI study. NeuroImage, 17(4), 1761–1772. [PubMed] [Google Scholar]
  • Kotz, S. A. , Meyer, M. , Alter, K. , Besson, M. , von Cramon, D. Y. , & Friederici, A. D. (2003). On the lateralization of emotional prosody: An event‐related functional MR investigation. Brain and Language, 86(3), 366–376. [PubMed] [Google Scholar]
  • Kourtzi, Z. , & Kanwisher, N. (2000). Activation in human MT/MST by static images with implied motion. Journal of Cognitive Neuroscience, 12(1), 48–55. [PubMed] [Google Scholar]
  • Kravitz, D. J. , Saleem, K. S. , Baker, C. I. , Ungerleider, L. G. , & Mishkin, M. (2013). The ventral visual pathway: An expanded neural framework for the processing of object quality. Trends in Cognitive Sciences, 17(1), 26–49. [PMC free article] [PubMed] [Google Scholar]
  • Krickl, M. , Poser, U. , & Markowitsch, H. J. (1987). Interactions between damaged brain hemisphere and mode of presentation on the recognition of faces and figures. Neuropsychologia, 25(5), 795–805. [PubMed] [Google Scholar]
  • Laukka, P. (2005). Categorical perception of vocal emotion expressions. Emotion, 5(3), 277. [PubMed] [Google Scholar]
  • LeDoux, J. (1998). The emotional brain: The mysterious underpinnings of emotional life. New York, NY: Simon and Schuster. [Google Scholar]
  • Lee, L. C. , Andrews, T. J. , Johnson, S. J. , Woods, W. , Gouws, A. , Green, G. G. , & Young, A. W. (2010). Neural responses to rigidly moving faces displaying shifts in social attention investigated with fMRI and MEG. Neuropsychologia, 48(2), 477–490. [PubMed] [Google Scholar]
  • Lega, C. , Stephan, M. A. , Zatorre, R. J. , & Penhune, V. (2016). Testing the role of dorsal premotor cortex in auditory‐motor association learning using transcranial magnetic stimulation (TMS). PLoS One, 11(9), e0163380. [PMC free article] [PubMed] [Google Scholar]
  • Leonard, C. , Eckert, M. , Given, B. , Berninger, V. , & Eden, G. (2006). Individual differences in anatomy predict reading and oral language impairments in children. Brain, 129(12), 3329–3342. [PubMed] [Google Scholar]
  • Levelt, W. J. (1992). Accessing words in speech production: Stages, processes and representations. Cognition, 42(1–3), 1–22. [PubMed] [Google Scholar]
  • Levy, B. J. , & Wagner, A. D. (2011). Cognitive control and right ventrolateral prefrontal cortex: Reflexive reorienting, motor inhibition, and action updating. Annals of the New York Academy of Sciences, 1224(1), 40–62. [PMC free article] [PubMed] [Google Scholar]
  • Liakakis, G. , Nickel, J. , & Seitz, R. (2011). Diversity of the inferior frontal gyrus—A meta‐analysis of neuroimaging studies. Behavioural Brain Research, 225(1), 341–347. [PubMed] [Google Scholar]
  • Liberman, A. M. , Harris, K. S. , Hoffman, H. S. , & Griffith, B. C. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54(5), 358. [PubMed] [Google Scholar]
  • Lieberman, M. D. , & Cunningham, W. A. (2009). Type I and type II error concerns in fMRI research: Re‐balancing the scale. Social Cognitive and Affective Neuroscience, 4(4), 423–428. [PMC free article] [PubMed] [Google Scholar]
  • Linden, D. E. , Prvulovic, D. , Formisano, E. , Völlinger, M. , Zanella, F. E. , Goebel, R. , & Dierks, T. (1999). The functional neuroanatomy of target detection: An fMRI study of visual and auditory oddball tasks. Cerebral Cortex, 9(8), 815–823. [PubMed] [Google Scholar]
  • Lindquist, K. A. (2013). Emotions emerge from more basic psychological ingredients: A modern psychological constructionist model. Emotion Review, 5(4), 356–368. [Google Scholar]
  • Lindquist, K. A. , Barrett, L. F. , Bliss‐Moreau, E. , & Russell, J. A. (2006). Language and the perception of emotion. Emotion, 6(1), 125. [PubMed] [Google Scholar]
  • Lindquist, K. A. , & Gendron, M. (2013). What's in a word? Language constructs emotion perception. Emotion Review, 5(1), 66–71. [Google Scholar]
  • Liu, L. , Tao, R. , Wang, W. , You, W. , Peng, D. , & Booth, J. R. (2013). Chinese dyslexics show neural differences in morphological processing. Developmental Cognitive Neuroscience, 6, 40–50. [PMC free article] [PubMed] [Google Scholar]
  • Mainy, N. , Jung, J. , Baciu, M. , Kahane, P. , Schoendorff, B. , Minotti, L. , … Lachaux, J. P. (2008). Cortical dynamics of word recognition. Human Brain Mapping, 29(11), 1215–1230. [PMC free article] [PubMed] [Google Scholar]
  • Markowitsch, H. J. (1999). Differential contribution of right and left amygdala to affective information processing. Behavioural Neurology, 11(4), 233–244. [PubMed] [Google Scholar]
  • McLellan, T. , Wilcke, J. , Johnston, L. , Watts, R. , & Miles, L. (2012). Sensitivity to posed and genuine displays of happiness and sadness: A fMRI study. Neuroscience Letters, 531(2), 149–154. [PubMed] [Google Scholar]
  • Mechelli, A. , Josephs, O. , Lambon Ralph, M. A. , McClelland, J. L. , & Price, C. J. (2007). Dissociating stimulus‐driven semantic and phonological effect during reading and naming. Human Brain Mapping, 28(3), 205–217. [PMC free article] [PubMed] [Google Scholar]
  • Miceli, G. , Amitrano, A. , Capasso, R. , & Caramazza, A. (1996). The treatment of anomia resulting from output lexical damage: Analysis of two cases. Brain and Language, 52(1), 150–174. [PubMed] [Google Scholar]
  • Milesi, V. , Cekic, S. , Péron, J. , Frühholz, S. , Cristinzio, C. , Seeck, M. , & Grandjean, D. (2014). Multimodal emotion perception after anterior temporal lobectomy (ATL). Frontiers in Human Neuroscience, 8, 275. [PMC free article] [PubMed] [Google Scholar]
  • Mitchell, J. P. , Cloutier, J. , Banaji, M. R. , & Macrae, C. N. (2006). Medial prefrontal dissociations during processing of trait diagnostic and nondiagnostic person information. Social Cognitive and Affective Neuroscience, 1(1), 49–55. [PMC free article] [PubMed] [Google Scholar]
  • Mitchell, J. P. , Macrae, C. N. , & Banaji, M. R. (2005). Forming impressions of people versus inanimate objects: Social‐cognitive processing in the medial prefrontal cortex. NeuroImage, 26(1), 251–257. [PubMed] [Google Scholar]
  • Mitchell, R. L. , & Phillips, L. H. (2015). The overlapping relationship between emotion perception and theory of mind. Neuropsychologia, 70, 1–10. [PubMed] [Google Scholar]
  • Moors, A. (2017). Integration of two skeptical emotion theories: Dimensional appraisal theory and Russell's psychological construction theory. Psychological Inquiry, 28(1), 1–19. [Google Scholar]
  • Morris, J. S. , Öhman, A. , & Dolan, R. J. (1998). Conscious and unconscious emotional learning in the human amygdala. Nature, 393(6684), 467–470. [PubMed] [Google Scholar]
  • Morris, J. S. , Öhman, A. , & Dolan, R. J. (1999). A subcortical pathway to the right amygdala mediating “unseen” fear. Proceedings of the National Academy of Sciences of the United States of America, 96(4), 1680–1685. [PMC free article] [PubMed] [Google Scholar]
  • Mulder, M. J. , van Maanen, L. , & Forstmann, B. U. (2014). Perceptual decision neurosciences—A model‐based review. Neuroscience, 277, 872–884. 10.1016/j.neuroscience.2014.07.031 [PubMed] [CrossRef] [Google Scholar]
  • Müller, V. I. , Cieslik, E. C. , Laird, A. R. , Fox, P. T. , Radua, J. , Mataix‐Cols, D. , … Turkeltaub, P. E. (2018). Ten simple rules for neuroimaging meta‐analysis. Neuroscience & Biobehavioral Reviews, 84, 151–161. [PMC free article] [PubMed] [Google Scholar]
  • Müller, V. I. , Höhner, Y. , & Eickhoff, S. B. (2018). Influence of task instructions and stimuli on the neural network of face processing: An ALE meta‐analysis. Cortex, 103, 240–255. [PMC free article] [PubMed] [Google Scholar]
  • Murakami, T. , Kell, C. A. , Restle, J. , Ugawa, Y. , & Ziemann, U. (2015). Left dorsal speech stream components and their contribution to phonological processing. Journal of Neuroscience, 35(4), 1411–1422. [PMC free article] [PubMed] [Google Scholar]
  • Nee, D. E. , Brown, J. W. , Askren, M. K. , Berman, M. G. , Demiralp, E. , Krawitz, A. , & Jonides, J. (2013). A meta‐analysis of executive components of working memory. Cerebral Cortex, 23(2), 264–282. [PMC free article] [PubMed] [Google Scholar]
  • Nelson, N. L. , & Russell, J. A. (2016). Building emotion categories: Children use a process of elimination when they encounter novel expressions. Journal of Experimental Child Psychology, 151, 120–130. [PubMed] [Google Scholar]
  • Nesse, R. M. (2014). Comment: A general “theory of emotion” is neither necessary nor possible. Emotion Review, 6(4), 320–322. [Google Scholar]
  • Nichols, T. , Brett, M. , Andersson, J. , Wager, T. , & Poline, J.‐B. (2005). Valid conjunction inference with the minimum statistic. NeuroImage, 25(3), 653–660. [PubMed] [Google Scholar]
  • Norton, E. S. , Black, J. M. , Stanley, L. M. , Tanaka, H. , Gabrieli, J. D. , Sawyer, C. , & Hoeft, F. (2014). Functional neuroanatomical evidence for the double‐deficit hypothesis of developmental dyslexia. Neuropsychologia, 61, 235–246. [PMC free article] [PubMed] [Google Scholar]
  • O'Reilly, R. C. (2010). The what and how of prefrontal cortical organization. Trends in Neurosciences, 33(8), 355–361. 10.1016/j.tins.2010.05.002 [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • O'Toole, A. J. , Roark, D. A. , & Abdi, H. (2002). Recognizing moving faces: A psychological and neural synthesis. Trends in Cognitive Sciences, 6(6), 261–266. [PubMed] [Google Scholar]
  • Ochsner, K. N. , Ray, R. R. , Hughes, B. , McRae, K. , Cooper, J. C. , Weber, J. , … Gross, J. J. (2009). Bottom‐up and top‐down processes in emotion generation: Common and distinct neural mechanisms. Psychological Science, 20(11), 1322–1331. [PMC free article] [PubMed] [Google Scholar]
  • Öhman, A. (2002). Automaticity and the amygdala: Nonconscious responses to emotional faces. Current Directions in Psychological Science, 11(2), 62–66. [Google Scholar]
  • Ong, D. C. , Zaki, J. , & Goodman, N. D. (2015). Affective cognition: Exploring lay theories of emotion. Cognition, 143, 141–162. [PubMed] [Google Scholar]
  • Öngür, D. , Ferry, A. T. , & Price, J. L. (2003). Architectonic subdivision of the human orbital and medial prefrontal cortex. Journal of Comparative Neurology, 460(3), 425–449. [PubMed] [Google Scholar]
  • Palmer, S. (2000). Phonological recoding deficit in working memory of dyslexic teenagers. Journal of Research in Reading, 23(1), 28–40. [Google Scholar]
  • Panksepp, J. (2000). Emotions as natural kinds within the mammalian brain In M. Lewis & J. M. Haviland‐Jones (Eds.), Handbook of emotions (2nd ed., pp. 137–156). New York: Guilford. [Google Scholar]
  • Panksepp, J. (2007). Neurologizing the psychology of affects: How appraisal‐based constructivism and basic emotion theory can coexist. Perspectives on Psychological Science, 2(3), 281–296. [PubMed] [Google Scholar]
  • Pannese, A. , Grandjean, D. , & Frühholz, S. (2015). Subcortical processing in auditory communication. Hearing Research, 328, 67–77. [PubMed] [Google Scholar]
  • Pannese, A. , Grandjean, D. , & Frühholz, S. (2016). Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions. Cortex, 85, 116–125. 10.1016/j.cortex.2016.10.013 [PubMed] [CrossRef] [Google Scholar]
  • Papademetris, X. , Jackowski, M. P. , Rajeevan, N. , DiStasio, M. , Okuda, H. , Constable, R. T. , & Staib, L. H. (2006). BioImage suite: An integrated medical image analysis suite: An update. The Insight Journal, 2006, 209. [PMC free article] [PubMed] [Google Scholar]
  • Pape, H.‐C. , & Pare, D. (2010). Plastic synaptic networks of the amygdala for the acquisition, expression, and extinction of conditioned fear. Physiological Reviews, 90(2), 419–463. [PMC free article] [PubMed] [Google Scholar]
  • Partanen, M. , Siegel, L. S. , & Giaschi, D. E. (2018). Effect of reading intervention and task difficulty on orthographic and phonological reading systems in the brain. Neuropsychologia, 130, 13–25. [PubMed] [Google Scholar]
  • Peelen, M. V. , Atkinson, A. P. , & Vuilleumier, P. (2010). Supramodal representations of perceived emotions in the human brain. The Journal of Neuroscience, 30(30), 10127–10134. [PMC free article] [PubMed] [Google Scholar]
  • Peelen, M. V. , & Downing, P. E. (2005). Selectivity for the human body in the fusiform gyrus. Journal of Neurophysiology, 93(1), 603–608. [PubMed] [Google Scholar]
  • Pelphrey, K. A. , & Morris, J. P. (2006). Brain mechanisms for interpreting the actions of others from biological‐motion cues. Current Directions in Psychological Science, 15(3), 136–140. [PMC free article] [PubMed] [Google Scholar]
  • Perner, J. , & Esken, F. (2015). Evolution of human cooperation in Homo heidelbergensis: Teleology versus mentalism. Developmental Review, 38, 69–88. [Google Scholar]
  • Pernet, C. R. , McAleer, P. , Latinus, M. , Gorgolewski, K. J. , Charest, I. , Bestelmeyer, P. E. , … Valdes‐Sosa, M. (2015). The human voice areas: Spatial organization and inter‐individual variability in temporal and extra‐temporal cortices. NeuroImage, 119, 164–174. [PMC free article] [PubMed] [Google Scholar]
  • Petrides, M. (1985). Deficits in non‐spatial conditional associative learning after periarcuate lesions in the monkey. Behavioural Brain Research, 16(2–3), 95–101. [PubMed] [Google Scholar]
  • Petrides, M. (2005). Lateral prefrontal cortex: Architectonic and functional organization. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 360(1456), 781–795. [PMC free article] [PubMed] [Google Scholar]
  • Phan, K. L. , Wager, T. , Taylor, S. F. , & Liberzon, I. (2002). Functional neuroanatomy of emotion: A meta‐analysis of emotion activation studies in PET and fMRI. NeuroImage, 16(2), 331–348. [PubMed] [Google Scholar]
  • Phelps, E. A. , & LeDoux, J. E. (2005). Contributions of the amygdala to emotion processing: From animal models to human behavior. Neuron, 48(2), 175–187. [PubMed] [Google Scholar]
  • Phelps, E. A. , O'Connor, K. J. , Gatenby, J. C. , Gore, J. C. , Grillon, C. , & Davis, M. (2001). Activation of the left amygdala to a cognitive representation of fear. Nature Neuroscience, 4(4), 437–441. [PubMed] [Google Scholar]
  • Phillips, L. H. , Channon, S. , Tunstall, M. , Hedenstrom, A. , & Lyons, K. (2008). The role of working memory in decoding emotions. Emotion, 8(2), 184. [PubMed] [Google Scholar]
  • Pichon, S. , de Gelder, B. , & Grèzes, J. (2009). Two different faces of threat. Comparing the neural systems for recognizing fear and anger in dynamic body expressions. NeuroImage, 47(4), 1873–1883. [PubMed] [Google Scholar]
  • Poldrack, R. A. , Wagner, A. D. , Prull, M. W. , Desmond, J. E. , Glover, G. H. , & Gabrieli, J. D. (1999). Functional specialization for semantic and phonological processing in the left inferior prefrontal cortex. NeuroImage, 10(1), 15–35. [PubMed] [Google Scholar]
  • Price, C. J. (2010). The anatomy of language: A review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences, 1191(1), 62–88. 10.1111/j.1749-6632.2010.05444.x [PubMed] [CrossRef] [Google Scholar]
  • Price, C. J. (2012). A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. NeuroImage, 62(2), 816–847. [PMC free article] [PubMed] [Google Scholar]
  • Price, J. L. (2003). Comparative aspects of amygdala connectivity. Annals of the New York Academy of Sciences, 985(1), 50–58. [PubMed] [Google Scholar]
  • Prochnow, D. , Brunheim, S. , Steinhauser, L. , & Seitz, R. J. (2014). Reasoning about the implications of facial expressions: A behavioral and fMRI study on low and high social impact. Brain and Cognition, 90, 165–173. 10.1016/j.bandc.2014.07.004 [PubMed] [CrossRef] [Google Scholar]
  • Rahnev, D. , Nee, D. E. , Riddle, J. , Larson, A. S. , & D'Esposito, M. (2016). Causal evidence for frontal cortex organization for perceptual decision making. Proceedings of the National Academy of Sciences of the United States of America, 113(21), 6059–6064. 10.1073/pnas.1522551113 [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Rauschecker, J. P. (2011). An expanded role for the dorsal auditory pathway in sensorimotor control and integration. Hearing Research, 271(1–2), 16–25. [PMC free article] [PubMed] [Google Scholar]
  • Rauschecker, J. P. (2012). Ventral and dorsal streams in the evolution of speech and language. Frontiers in Evolutionary Neuroscience, 4, 7. [PMC free article] [PubMed] [Google Scholar]
  • Rauschecker, J. P. (2013). Processing streams in auditory cortex In Neural correlates of auditory cognition (pp. 7–43). New York, NY: Springer. [Google Scholar]
  • Rauschecker, J. P. (2017). Where, when, and how: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex, 98, 262–268. [PMC free article] [PubMed] [Google Scholar]
  • Raymer, A. M. , Foundas, A. , Maher, L. , Greenwald, M. , Morris, M. , Rothi, L. , & Heilman, K. (1997). Cognitive neuropsychological analysis and neuroanatomic correlates in a case of acute anomia. Brain and Language, 58(1), 137–156. [PubMed] [Google Scholar]
  • Reisenzein, R. (2009). Emotions as metarepresentational states of mind: Naturalizing the belief–desire theory of emotion. Cognitive Systems Research, 10(1), 6–20. [Google Scholar]
  • Morris, J. S. , Frith, C. D. , Perrett, D. I. , Rowland, D. , Young, A. W. , Calder, A. J. , & Dolan, R. J. (1996). A differential neural response in the human amygdala to fearful and happy facial expressions. Nature, 383(6603), 812–815. [PubMed] [Google Scholar]
  • Riès, S. K. , Dronkers, N. F. , & Knight, R. T. (2016). Choosing words: Left hemisphere, right hemisphere, or both? Perspective on the lateralization of word retrieval. Annals of the New York Academy of Sciences, 1369(1), 111–131. [PMC free article] [PubMed] [Google Scholar]
  • Robichon, F. , Levrier, O. , Farnarier, P. , & Habib, M. (2000). Developmental dyslexia: Atypical cortical asymmetries and functional significance. European Journal of Neurology, 7(1), 35–46. [PubMed] [Google Scholar]
  • Rossion, B. (2015). Face perception. Brain Mapping: An Encyclopedic Reference, 2, 515–522. [Google Scholar]
  • Rossion, B. , & Retter, T. L. (2015). Holistic face perception: Mind the gap! Visual Cognition, 23(3), 379–398. [Google Scholar]
  • Rottschy, C. , Langner, R. , Dogan, I. , Reetz, K. , Laird, A. R. , Schulz, J. B. , … Eickhoff, S. B. (2012). Modelling neural correlates of working memory: A coordinate‐based meta‐analysis. NeuroImage, 60(1), 830–846. [PMC free article] [PubMed] [Google Scholar]
  • Russell, J. A. (2005). Emotion in human consciousness is built on core affect. Journal of Consciousness Studies, 12(8–9), 26–42. [Google Scholar]
  • Russell, J. A. (2009). Emotion, core affect, and psychological construction. Cognition and Emotion, 23(7), 1259–1283. [Google Scholar]
  • Sabatinelli, D. , Fortune, E. E. , Li, Q. , Siddiqui, A. , Krafft, C. , Oliver, W. T. , … Jeffries, J. (2011). Emotional perception: Meta‐analyses of face and natural scene processing. NeuroImage, 54(3), 2524–2533. [PubMed] [Google Scholar]
  • Sakagami, M. , & Pan, X. (2007). Functional role of the ventrolateral prefrontal cortex in decision making. Current Opinion in Neurobiology, 17(2), 228–233. [PubMed] [Google Scholar]
  • Sallet, J. , Mars, R. B. , Noonan, M. P. , Neubert, F.‐X. , Jbabdi, S. , O'Reilly, J. X. , … Rushworth, M. F. (2013). The organization of dorsal frontal cortex in humans and macaques. The Journal of Neuroscience, 33(30), 12255–12274. [PMC free article] [PubMed] [Google Scholar]
  • Sander, D. , Grafman, J. , & Zalla, T. (2003). The human amygdala: An evolved system for relevance detection. Reviews in the Neurosciences, 14(4), 303–316. [PubMed] [Google Scholar]
  • Sauter, D. A. (2018). Is there a role for language in emotion perception? Emotion Review, 10(2), 111–115. [Google Scholar]
  • Sauter, D. A. , LeGuen, O. , & Haun, D. (2011). Categorical perception of emotional facial expressions does not require lexical categories. Emotion, 11(6), 1479. [PubMed] [Google Scholar]
  • Saxe, R. , Xiao, D.‐K. , Kovacs, G. , Perrett, D. , & Kanwisher, N. (2004). A region of right posterior superior temporal sulcus responds to observed intentional actions. Neuropsychologia, 42(11), 1435–1446. [PubMed] [Google Scholar]
  • Schall, J. D. (2001). Neural basis of deciding, choosing and acting. Nature Reviews. Neuroscience, 2(1), 33–42. 10.1038/35049054 [PubMed] [CrossRef] [Google Scholar]
  • Scherer, K. R. (2009). Emotions are emergent processes: They require a dynamic computational architecture. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 364(1535), 3459–3474. [PMC free article] [PubMed] [Google Scholar]
  • Scherer, K. R. , Banse, R. , & Wallbott, H. G. (2001). Emotion inferences from vocal expression correlate across languages and cultures. Journal of Cross‐Cultural Psychology, 32(1), 76–92. [Google Scholar]
  • Scherer, K. R. , Clark‐Polner, E. , & Mortillaro, M. (2011). In the eye of the beholder? Universality and cultural specificity in the expression and perception of emotion. International Journal of Psychology, 46(6), 401–435. [PubMed] [Google Scholar]
  • Scherer, K. R. , & Ellgring, H. (2007). Are facial expressions of emotion produced by categorical affect programs or dynamically driven by appraisal? Emotion, 7(1), 113. [PubMed] [Google Scholar]
  • Scherer, K. R. , Mortillaro, M. , & Mehu, M. (2013). Understanding the mechanisms underlying the production of facial expression of emotion: A componential perspective. Emotion Review, 5(1), 47–53. [Google Scholar]
  • Schirmer, A. , & Adolphs, R. (2017). Emotion perception from face, voice, and touch: Comparisons and convergence. Trends in Cognitive Sciences, 21(3), 216–228. [PMC free article] [PubMed] [Google Scholar]
  • Schirmer, A. , & Kotz, S. A. (2006). Beyond the right hemisphere: Brain mechanisms mediating vocal emotional processing. Trends in Cognitive Sciences, 10(1), 24–30. [PubMed] [Google Scholar]
  • Schlaffke, L. , Lissek, S. , Lenz, M. , Juckel, G. , Schultz, T. , Tegenthoff, M. , … Brüne, M. (2015). Shared and nonshared neural networks of cognitive and affective theory‐of‐mind: A neuroimaging study using cartoon picture stories. Human Brain Mapping, 36(1), 29–39. [PMC free article] [PubMed] [Google Scholar]
  • Schlegel, K. , Boone, R. T. , & Hall, J. A. (2017). Individual differences in interpersonal accuracy: A multi‐level meta‐analysis to assess whether judging other people is one skill or many. Journal of Nonverbal Behavior, 41(2), 103–137. [Google Scholar]
  • Schmithorst, V. J. , Holland, S. K. , & Plante, E. (2007). Object identification and lexical/semantic access in children: A functional magnetic resonance imaging study of word‐picture matching. Human Brain Mapping, 28(10), 1060–1074. [PMC free article] [PubMed] [Google Scholar]
  • Schultz, J. , Imamizu, H. , Kawato, M. , & Frith, C. D. (2004). Activation of the human superior temporal gyrus during observation of goal attribution by intentional objects. Journal of Cognitive Neuroscience, 16(10), 1695–1705. [PubMed] [Google Scholar]
  • Schurz, M. , Radua, J. , Aichhorn, M. , Richlan, F. , & Perner, J. (2014). Fractionating theory of mind: A meta‐analysis of functional brain imaging studies. Neuroscience & Biobehavioral Reviews, 42, 9–34. [PubMed] [Google Scholar]
  • Schusterman, R. J. , Reichmuth, C. J. , & Kastak, D. (2000). How animals classify friends and foes. Current Directions in Psychological Science, 9(1), 1–6. [Google Scholar]
  • Schwartz, C. E. , Wright, C. I. , Shin, L. M. , Kagan, J. , & Rauch, S. L. (2003). Inhibited and uninhibited infants "grown up": Adult amygdalar response to novelty. Science, 300(5627), 1952–1953. [PubMed] [Google Scholar]
  • Sedda, A. , & Scarpina, F. (2012). Dorsal and ventral streams across sensory modalities. Neuroscience Bulletin, 28(3), 291–300. [PMC free article] [PubMed] [Google Scholar]
  • Sergerie, K. , Chochol, C. , & Armony, J. L. (2008). The role of the amygdala in emotional processing: A quantitative meta‐analysis of functional neuroimaging studies. Neuroscience & Biobehavioral Reviews, 32(4), 811–830. [PubMed] [Google Scholar]
  • Shallice, T. , Stuss, D. T. , Alexander, M. P. , Picton, T. W. , & Derkzen, D. (2008). The multiple dimensions of sustained attention. Cortex, 44(7), 794–805. [PubMed] [Google Scholar]
  • Skerry, A. E. , & Saxe, R. (2014). A common neural code for perceived and inferred emotion. The Journal of Neuroscience, 34(48), 15997–16008. [PMC free article] [PubMed] [Google Scholar]
  • Snyder, H. R. , Banich, M. T. , & Munakata, Y. (2011). Choosing our words: Retrieval and selection processes recruit shared neural substrates in left ventrolateral prefrontal cortex. Journal of Cognitive Neuroscience, 23(11), 3470–3482. [PMC free article] [PubMed] [Google Scholar]
  • Soliunas, A. , & Gurciniene, O. (2007). Same‐different discrimination: Simultaneous versus successive presentation of stimuli. Studia Psychologica, 49(1), 27. [Google Scholar]
  • Sommer, M. , Döhnel, K. , Meinhardt, J. , & Hajak, G. (2008). Decoding of affective facial expressions in the context of emotional situations. Neuropsychologia, 46(11), 2615–2621. 10.1016/j.neuropsychologia.2008.04.020 [PubMed] [CrossRef] [Google Scholar]
  • Stevens, J. A. , Fonlupt, P. , Shiffrar, M. , & Decety, J. (2000). New aspects of motion perception: Selective neural encoding of apparent human movements. Neuroreport, 11(1), 109–115. [PubMed] [Google Scholar]
  • Summerfield, C. , & De Lange, F. P. (2014). Expectation in perceptual decision making: Neural and computational mechanisms. Nature Reviews Neuroscience, 15(11), 745–756. [PubMed] [Google Scholar]
  • Summerfield, C. , Egner, T. , Greene, M. , Koechlin, E. , Mangels, J. , & Hirsch, J. (2006). Predictive codes for forthcoming perception in the frontal cortex. Science, 314(5803), 1311–1314. [PubMed] [Google Scholar]
  • Summerfield, C. , & Koechlin, E. (2008). A neural representation of prior information during perceptual inference. Neuron, 59(2), 336–347. [PubMed] [Google Scholar]
  • Sundermann, B. , & Pfleiderer, B. (2012). Functional connectivity profile of the human inferior frontal junction: Involvement in a cognitive control network. BMC Neuroscience, 13(1), 1. [PMC free article] [PubMed] [Google Scholar]
  • Szameitat, D. P. , Kreifelts, B. , Alter, K. , Szameitat, A. J. , Sterr, A. , Grodd, W. , & Wildgruber, D. (2010). It is not always tickling: Distinct cerebral responses during perception of different laughter types. NeuroImage, 53(4), 1264–1271. [PubMed] [Google Scholar]
  • Takahashi, E. , Ohki, K. , & Kim, D.‐S. (2013). Dissociation and convergence of the dorsal and ventral visual working memory streams in the human prefrontal cortex. NeuroImage, 65, 488–498. [PMC free article] [PubMed] [Google Scholar]
  • Turkeltaub, P. E. , Eden, G. F. , Jones, K. M. , & Zeffiro, T. A. (2002). Meta‐analysis of the functional neuroanatomy of single‐word reading: Method and validation. NeuroImage, 16(3), 765–780. [PubMed] [Google Scholar]
  • Turkeltaub, P. E. , Eickhoff, S. B. , Laird, A. R. , Fox, M. , Wiener, M. , & Fox, P. (2012). Minimizing within‐experiment and within‐group effects in activation likelihood estimation meta‐analyses. Human Brain Mapping, 33(1), 1–13. [PMC free article] [PubMed] [Google Scholar]
  • Uleman, J. S. , Newman, L. S. , & Moskowitz, G. B. (1996). People as flexible interpreters: Evidence and issues from spontaneous trait inference In M.P. Zanna (Ed.), Advances in experimental social psychology (Vol. 28, pp. 211–279). San Diego: Elsevier. [Google Scholar]
  • van de Riet, W. A. , Grèzes, J. , & de Gelder, B. (2009). Specific and common brain regions involved in the perception of faces and bodies and the representation of their emotional expressions. Social Neuroscience, 4(2), 101–120. [PubMed] [Google Scholar]
  • Van Hout, D. , Hautus, M. J. , & Lee, H. S. (2011). Investigation of test performance over repeated sessions using signal detection theory: Comparison of three nonattribute‐specified difference tests 2‐AFCR, A‐NOT A and 2‐AFC. Journal of Sensory Studies, 26(5), 311–321. [Google Scholar]
  • Van Kleef, G. A. (2010). The emerging view of emotion as social information. Social and Personality Psychology Compass, 4(5), 331–343. [Google Scholar]
  • Van Overwalle, F. (2009). Social cognition and the brain: A meta‐analysis. Human Brain Mapping, 30(3), 829–858. [PMC free article] [PubMed] [Google Scholar]
  • Van Overwalle, F. (2011). A dissociation between social mentalizing and general reasoning. NeuroImage, 54(2), 1589–1599. [PubMed] [Google Scholar]
  • Venkatraman, V. , Rosati, A. G. , Taren, A. A. , & Huettel, S. A. (2009). Resolving response, decision, and strategic control: Evidence for a functional topography in dorsomedial prefrontal cortex. Journal of Neuroscience, 29(42), 13158–13164. 10.1523/jneurosci.2708-09.2009 [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Verbruggen, F. , Aron, A. R. , Stevens, M. A. , & Chambers, C. D. (2010). Theta burst stimulation dissociates attention and action updating in human inferior frontal cortex. Proceedings of the National Academy of Sciences of the United States of America, 107(31), 13966–13971. [PMC free article] [PubMed] [Google Scholar]
  • Vigneau, M. , Beaucousin, V. , Herve, P.‐Y. , Duffau, H. , Crivello, F. , Houde, O. , … Tzourio‐Mazoyer, N. (2006). Meta‐analyzing left hemisphere language areas: Phonology, semantics, and sentence processing. NeuroImage, 30(4), 1414–1432. [PubMed] [Google Scholar]
  • Völlm, B. A. , Taylor, A. N. , Richardson, P. , Corcoran, R. , Stirling, J. , McKie, S. , … Elliott, R. (2006). Neuronal correlates of theory of mind and empathy: A functional magnetic resonance imaging study in a nonverbal task. NeuroImage, 29(1), 90–98. [PubMed] [Google Scholar]
  • Vuilleumier, P. , Armony, J. L. , Driver, J. , & Dolan, R. J. (2001). Effects of attention and emotion on face processing in the human brain: An event‐related fMRI study. Neuron, 30(3), 829–841. [PubMed] [Google Scholar]
  • Wager, T. D. , Phan, K. L. , Liberzon, I. , & Taylor, S. F. (2003). Valence, gender, and lateralization of functional brain anatomy in emotion: A meta‐analysis of findings from neuroimaging. NeuroImage, 19(3), 513–531. [PubMed] [Google Scholar]
  • Watson, C. E. , Cardillo, E. R. , Ianni, G. R. , & Chatterjee, A. (2013). Action concepts in the brain: An activation likelihood estimation meta‐analysis. Journal of Cognitive Neuroscience, 25(8), 1191–1205. [PubMed] [Google Scholar]
  • Wiech, K. , Vandekerckhove, J. , Zaman, J. , Tuerlinckx, F. , Vlaeyen, J. W. , & Tracey, I. (2014). Influence of prior information on pain involves biased perceptual decision‐making. Current Biology, 24(15), R679–R681. [PMC free article] [PubMed] [Google Scholar]
  • Wilhelm, O. , Hildebrandt, A. , Manske, K. , Schacht, A. , & Sommer, W. (2014). Test battery for measuring the perception and recognition of facial expressions of emotion. Frontiers in Psychology, 5, 404. [PMC free article] [PubMed] [Google Scholar]
  • Woodhead, Z. , Barnes, G. , Penny, W. , Moran, R. , Teki, S. , Price, C. , & Leff, A. (2012). Reading front to back: MEG evidence for early feedback effects during word recognition. Cerebral Cortex, 24(3), 817–825. [PMC free article] [PubMed] [Google Scholar]
  • Woodhead, Z. V. , Penny, W. , Barnes, G. R. , Crewes, H. , Wise, R. J. , Price, C. J. , & Leff, A. P. (2013). Reading therapy strengthens top–down connectivity in patients with pure alexia. Brain, 136(8), 2579–2591. [PMC free article] [PubMed] [Google Scholar]
  • Wright, P. , & Liu, Y. J. (2006). Neutral faces activate the amygdala during identity matching. NeuroImage, 29(2), 628–636. 10.1016/j.neuroimage.2005.07.047 [PubMed] [CrossRef] [Google Scholar]
  • Yovel, G. (2016). Neural and cognitive face‐selective markers: An integrative review. Neuropsychologia, 83, 5–13. [PubMed] [Google Scholar]
  • Yvert, G. , Perrone‐Bertolotti, M. , Baciu, M. , & David, O. (2012). Dynamic causal modeling of spatiotemporal integration of phonological and semantic processes: An electroencephalographic study. Journal of Neuroscience, 32(12), 4297–4306. [PMC free article] [PubMed] [Google Scholar]
  • Zacks, J. M. , Braver, T. S. , Sheridan, M. A. , Donaldson, D. I. , Snyder, A. Z. , Ollinger, J. M. , … Raichle, M. E. (2001). Human brain activity time‐locked to perceptual event boundaries. Nature Neuroscience, 4(6), 651–655. [PubMed] [Google Scholar]
  • Zaki, J. (2013). Cue integration: A common framework for social cognition and physical perception. Perspectives on Psychological Science, 8(3), 296–312. [PubMed] [Google Scholar]

