NIH Blueprint: The Human Connectome Project

Task-fMRI 3T Imaging Protocol Details

Task-Evoked Functional Brain Activity

Our primary goals in including task-evoked functional MRI (tfMRI) in the HCP are to: 1) help identify as many “nodes” as possible that can guide, validate and interpret the results of the connectivity analyses that will be conducted on resting-state fMRI (rfMRI), resting-state MEG (rMEG) and diffusion data; 2) allow a comparison of network connectivity in a task context to connectivity results generated using rfMRI; and 3) relate signatures of activation magnitude or location in key network nodes to individual differences in performance, psychometric measures, or other phenotypic traits. To accomplish these goals, we developed a battery of tasks that can identify node locations in as wide a range of neural systems as is feasible within realistic time constraints (see Barch et al. 2013 for more detail).

We assessed seven major domains that we think sample the diversity of neural systems that will be of interest to a wide range of individuals in the field: 1) visual, motion, somatosensory, and motor systems; 2) category-specific representations; 3) working memory/cognitive control systems; 4) language processing (semantic and phonological processing); 5) social cognition (Theory of Mind); 6) relational processing; and 7) emotion processing. These tasks are described in more detail below and in Barch et al. 2013. Stimuli were projected onto a computer screen behind the subject’s head within the imaging chamber. The screen was viewed via a mirror positioned approximately 8 cm above the subject’s face.

tfMRI scripts and data files

Script files are run in E-Prime 2.0 Professional to present task fMRI stimuli and collect behavioral responses in the scanner. If you would like to run HCP tasks in your own research project, or examine the stimuli used in HCP tasks, the task stimulus script archives can be obtained from ConnectomeDB. (HCP user account and Aspera browser plugin required to download.)

Tab-delimited versions of the E-Prime data files (TAB.txt) are included in this release. TAB.txt files are named according to the task condition that they describe and are contained within the directories for each of the two runs within each task (each phase encoding direction). A brief description of the key variables in those files can be found in Appendix 6: Task fMRI E-Prime Key Variables. The original edat files will not be available, because they may contain identifying information.

In addition, we include EV .txt files derived from the TAB.txt files (defined above) in the released data. EV files are explanatory variables (predictors) in FSL format (3-columns: onset, duration, and amplitude). There is a separate EV directory for each of the two runs within each task. Examples of the EV files for each task are detailed below.
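As a minimal illustration of that 3-column format, the following sketch writes and reads an EV file. The file name and event values are hypothetical examples, not taken from an actual HCP run:

```python
import os
import tempfile

def write_ev(path, events):
    """Write rows of (onset_s, duration_s, amplitude), one per line,
    tab-separated -- the 3-column FSL EV layout."""
    with open(path, "w") as f:
        for onset, dur, amp in events:
            f.write(f"{onset:.3f}\t{dur:.3f}\t{amp:g}\n")

def read_ev(path):
    """Read an EV file back into a list of (onset, duration, amplitude)."""
    with open(path) as f:
        return [tuple(float(v) for v in line.split())
                for line in f if line.strip()]

# Round-trip demo with made-up block onsets:
events = [(8.0, 27.5, 1.0), (79.0, 27.5, 1.0)]
path = os.path.join(tempfile.mkdtemp(), "0bk_cor.txt")
write_ev(path, events)
print(read_ev(path))  # [(8.0, 27.5, 1.0), (79.0, 27.5, 1.0)]
```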

The preprocessed data also includes .fsf files for each task. The .fsf file is the setup or configuration file for running GLM-based fMRI analyses in FEAT (FMRIB's Expert Analysis Tool). The Lev1 .fsf files contain setup information necessary to run GLM analyses on the timeseries data for an individual scan run. Lev2 .fsf files contain setup information to run GLM analyses combining multiple scan runs for an individual participant. Lev3 .fsf files (not included in the release) can be created to set up GLM analyses across multiple participants.
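Because .fsf files are plain text, one common convenience (sketched here; the path fragment and subject IDs are hypothetical) is to retarget an existing Lev1 setup file at another subject by rewriting the embedded paths:

```python
def retarget_fsf(fsf_text, old_subject, new_subject):
    """Swap one subject ID for another everywhere it occurs in the
    .fsf text (input paths, output directories, etc.)."""
    return fsf_text.replace(old_subject, new_subject)

# Hypothetical fragment of a Lev1 .fsf file:
fsf = 'set feat_files(1) "/data/100307/tfMRI_WM_LR/tfMRI_WM_LR.nii.gz"'
print(retarget_fsf(fsf, "100307", "100408"))
```

A literal substitution is sufficient here only because subject IDs appear solely inside paths; a more careful tool would parse each `set` line.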

HCP’s results of individual (within-subject, level 2) tfMRI analysis for each task are available in download packages separate from the unprocessed and preprocessed datasets. These tfMRI datasets include a design.fsf file showing the setup information that was used in the FEAT GLM analysis for each task. See Appendix 3D for more details.

Here are examples of the EV files for each task and phase encoding direction, found in the appropriate unprocessed tfMRI_[task_phaseencodingdirection]/LINKED_DATA/EPRIME directory (e.g. tfMRI_WM_LR/LINKED_DATA/EPRIME) and preprocessed {Subject_ID}/MNINonLinear/Results/tfMRI_[task_phaseencodingdirection]/EVs directory (e.g. 100307/MNINonLinear/Results/tfMRI_WM_LR/EVs):

Working Memory

EVs/0bk_body.txt: Onset of 0-back body block condition

EVs/0bk_faces.txt: Onset of 0-back faces block condition

EVs/0bk_places.txt: Onset of 0-back places block condition

EVs/0bk_tools.txt: Onset of 0-back tools block condition

EVs/2bk_body.txt: Onset of 2-back body block condition

EVs/2bk_faces.txt: Onset of 2-back faces block condition

EVs/2bk_places.txt: Onset of 2-back places block condition

EVs/2bk_tools.txt: Onset of 2-back tools block condition

EVs/0bk_cor.txt: Onset of correct trials in 0-back blocks

EVs/0bk_err.txt: Onset of error trials in 0-back blocks

EVs/0bk_nlr.txt: Onset of trials in 0-back blocks with no response

EVs/2bk_cor.txt: Onset of correct trials in 2-back blocks

EVs/2bk_err.txt: Onset of error trials in 2-back blocks

EVs/2bk_nlr.txt: Onset of trials in 2-back blocks with no response

EVs/all_bk_cor.txt: Onset of correct trials in both 0-back and 2-back blocks

EVs/all_bk_err.txt: Onset of error trials in both 0-back and 2-back blocks

Gambling

EVs/win.txt: Onset of mostly reward blocks

EVs/loss.txt: Onset of mostly loss blocks

EVs/win_event.txt: Onset of reward trials

EVs/loss_event.txt: Onset of loss trials

EVs/neutral.txt: Onset of neutral trials

Motor

EVs/cue.txt: Onset of task cues

EVs/lf.txt: Onset of left foot blocks

EVs/rf.txt: Onset of right foot blocks

EVs/lh.txt: Onset of left hand blocks

EVs/rh.txt: Onset of right hand blocks

EVs/t.txt: Onset of tongue blocks

Language Processing

EVs/story.txt: Onset of story blocks

EVs/math.txt: Onset of math blocks

Social Cognition

EVs/mental.txt: Onset of mental interaction blocks

EVs/rnd.txt: Onset of random interaction blocks

EVs/mental_resp.txt: Onset of trials rated as mental interaction

EVs/other_resp.txt: Onset of trials not rated as mental interaction

Relational Processing

EVs/relation.txt: Onset of relational blocks

EVs/match.txt: Onset of match blocks

Emotion Processing

EVs/fear.txt: Onset of emotional face blocks

EVs/neut.txt: Onset of shape blocks


Details of tfMRI tasks

Working Memory

The category-specific representation task and the working memory task are combined into a single task paradigm. Participants were presented with blocks of trials consisting of pictures of places, tools, faces and body parts (non-mutilated parts of bodies with no “nudity”). Within each run, the 4 different stimulus types were presented in separate blocks. Also within each run, half of the blocks use a 2-back working memory task and half use a 0-back working memory task (as a working memory comparison). A 2.5-second cue indicates the task type (and the target for 0-back) at the start of each block. Each of the two runs contains 8 task blocks (10 trials of 2.5 seconds each, for 25 seconds) and 4 fixation blocks (15 seconds). On each trial, the stimulus is presented for 2 seconds, followed by a 500 ms inter-trial interval (ITI).
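The stated timings imply the following per-block and per-run task durations (a quick arithmetic check, covering only the cue, trial, and fixation periods described above):

```python
CUE_S = 2.5                 # task cue at the start of each block
STIM_S, ITI_S = 2.0, 0.5    # 2 s stimulus + 500 ms ITI per trial
TRIALS_PER_BLOCK = 10
TASK_BLOCKS, FIX_BLOCKS, FIX_S = 8, 4, 15.0

block_s = CUE_S + TRIALS_PER_BLOCK * (STIM_S + ITI_S)  # one task block
run_s = TASK_BLOCKS * block_s + FIX_BLOCKS * FIX_S     # task + fixation per run
print(block_s, run_s)  # 27.5 280.0
```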

Conditions (Blocked)


0-back faces

2-back faces

0-back places

2-back places

0-back tools

2-back tools

0-back body parts   

2-back body parts

Conditions (Event-Related)


0-back correct trials

2-back correct trials

0-back error trials

2-back error trials

0-back no response trials

2-back no response trials

Additional Contrasts. These event types can be combined to create two categories of contrasts.

Working Memory Contrasts


0-back contrast (activity combined across conditions 1-4)

2-back contrast (activity combined across conditions 5-8)

2-back versus 0-back contrast (2-back contrast minus 0-back contrast)

Category Contrasts


Faces contrast (0-back faces plus 2-back faces)

Places contrast (0-back places plus 2-back places)

Tools contrast (0-back tools plus 2-back tools)

Body contrast (0-back body plus 2-back body)

Potential Additional Event Related Contrasts: Researchers can also use the TAB.txt E-Prime data files to generate the following potential event-related contrasts:

  1. Targets
    1. For 2-back tasks, targets are 2-back repeats
    2. For 0-back tasks, targets match the cue stimulus
  2. Non-targets
    1. For 2-back tasks, non-targets are novel items
    2. For 0-back tasks, non-targets do not match the cue stimulus
  3. Lures
    1. For 2-back tasks, lures are 1-back or 3-back repeats
    2. For 0-back tasks, lures are repeated stimuli that do not match the cue stimulus
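The target/non-target/lure definitions above can be sketched as a small classifier over a trial's stimulus sequence. This is a hypothetical helper, not part of the released E-Prime scripts:

```python
def classify_2back(stims):
    """Label each 2-back trial per the rules above: target = 2-back
    repeat; lure = 1-back or 3-back repeat (that is not also a 2-back
    repeat); non-target = novel item."""
    labels = []
    for i, s in enumerate(stims):
        if i >= 2 and s == stims[i - 2]:
            labels.append("target")
        elif (i >= 1 and s == stims[i - 1]) or (i >= 3 and s == stims[i - 3]):
            labels.append("lure")
        else:
            labels.append("non-target")
    return labels

def classify_0back(stims, cue):
    """Label each 0-back trial: target = matches the cue stimulus;
    lure = repeat of an earlier non-cue stimulus; non-target otherwise."""
    labels, seen = [], set()
    for s in stims:
        if s == cue:
            labels.append("target")
        elif s in seen:
            labels.append("lure")
        else:
            labels.append("non-target")
        seen.add(s)
    return labels

print(classify_2back(["A", "B", "A", "A"]))
# ['non-target', 'non-target', 'target', 'lure']
```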


Gambling

This task was adapted from the one developed by Delgado and Fiez (Delgado et al. 2000). Participants play a card guessing game in which they are asked to guess the number on a mystery card (represented by a “?”) in order to win or lose money. Participants are told that potential card numbers range from 1-9 and that they should indicate whether they think the mystery card number is more or less than 5 by pressing one of two buttons on the response box. Feedback is the number on the card (generated by the program as a function of whether the trial was a reward, loss or neutral trial) and either: 1) a green up arrow with “$1” for reward trials; 2) a red down arrow next to “-$0.50” for loss trials; or 3) the number 5 and a gray double-headed arrow for neutral trials. The “?” is presented for up to 1500 ms (if the participant responds before 1500 ms, a fixation cross is displayed for the remaining time), followed by feedback for 1000 ms. There is a 1000 ms ITI with a “+” presented on the screen. The task is presented in blocks of 8 trials that are either mostly reward (6 reward trials pseudo-randomly interleaved with either 1 neutral and 1 loss trial, 2 neutral trials, or 2 loss trials) or mostly loss (6 loss trials pseudo-randomly interleaved with either 1 neutral and 1 reward trial, 2 neutral trials, or 2 reward trials). In each of the two runs, there are 2 mostly reward and 2 mostly loss blocks, interleaved with 4 fixation blocks (15 seconds each).
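The trial and block timings above imply the following durations per run (arithmetic check only; the full 1500 ms “?” window counts because a fixation cross fills any early-response remainder):

```python
QUESTION_S = 1.5   # "?" window (fixation fills the remainder after a response)
FEEDBACK_S = 1.0
ITI_S = 1.0
TRIALS_PER_BLOCK = 8
TASK_BLOCKS, FIX_BLOCKS, FIX_S = 4, 4, 15.0

trial_s = QUESTION_S + FEEDBACK_S + ITI_S            # seconds per trial
block_s = TRIALS_PER_BLOCK * trial_s                 # seconds per task block
run_s = TASK_BLOCKS * block_s + FIX_BLOCKS * FIX_S   # task + fixation per run
print(trial_s, block_s, run_s)  # 3.5 28.0 172.0
```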

Conditions (Blocked)


Mostly reward blocks


Mostly loss blocks


Conditions (Event-Related)


Reward trials


Loss trials


Neutral trials


References for Gambling Task: Reliable across subjects and robust activation in fMRI (Delgado et al. 2000; May et al. 2004; Tricomi et al. 2004; Forbes et al. 2009).


Motor

This task was adapted from the one developed by Buckner and colleagues (Buckner et al. 2011; Yeo et al. 2011). Participants are presented with visual cues that ask them to tap their left or right fingers, squeeze their left or right toes, or move their tongue, in order to map motor areas. Each block of a movement type lasts 12 seconds (10 movements) and is preceded by a 3-second cue. In each of the two runs, there are 13 blocks: 2 of tongue movements, 4 of hand movements (2 right and 2 left), 4 of foot movements (2 right and 2 left), and 3 15-second fixation blocks. This task contains the following events, each of which is computed against the fixation baseline.
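Given a block order and the timings above (3 s cue, 12 s movement block), the corresponding FSL-style EV rows can be generated. The helper and the block order below are hypothetical illustrations, not an actual HCP run:

```python
CUE_S, BLOCK_S = 3.0, 12.0

def motor_evs(block_order, fixation_after=(), fix_s=15.0):
    """Return {condition: [(onset, duration, amplitude), ...]} for a run.

    block_order: condition codes ('lf', 'rf', 'lh', 'rh', 't'), in order;
    fixation_after: block indices followed by a 15 s fixation block.
    Each movement block is preceded by a 3 s cue (condition 'cue')."""
    evs, t = {}, 0.0
    for i, cond in enumerate(block_order):
        evs.setdefault("cue", []).append((t, CUE_S, 1.0))
        t += CUE_S
        evs.setdefault(cond, []).append((t, BLOCK_S, 1.0))
        t += BLOCK_S
        if i in fixation_after:
            t += fix_s
    return evs

# Hypothetical 4-block excerpt of a run, with fixation after block 1:
evs = motor_evs(["lh", "rf", "t", "lf"], fixation_after={1})
print(evs["t"])  # [(48.0, 12.0, 1.0)]
```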

Conditions (Blocked)


Left finger blocks

Right finger blocks

Left toe blocks

Right toe blocks

Tongue movement blocks

References for Motor Task: Localizer (Morioka et al. 1995; Bizzi et al. 2008; Buckner et al. 2011; Yeo et al. 2011).

Language Processing 

This task was developed by Binder and colleagues (Binder et al. 2011) and uses the E-Prime scripts provided by those investigators. The task consists of two runs that each interleave 4 blocks of a story task and 4 blocks of a math task. The lengths of the blocks vary (average of approximately 30 seconds), but the task was designed so that the math task blocks match the length of the story task blocks, with some additional math trials at the end of the task to complete the 3.8-minute run as needed. The story blocks present participants with brief auditory stories (5-9 sentences) adapted from Aesop’s fables, followed by a 2-alternative forced-choice question about the topic of the story. For example (from p. 1466 of the original Binder paper), after a story about an eagle that saves a man who had done him a favor, participants were asked, “Was that about revenge or reciprocity?” The math task also presents trials auditorily and requires participants to complete addition and subtraction problems. The trials present a series of arithmetic operations (e.g., “fourteen plus twelve”), followed by “equals” and then two choices (e.g., “twenty-nine or twenty-six”). Participants push a button to select either the first or the second answer. The math task is adaptive, to maintain a similar level of difficulty across participants. For more details on the task, please see Binder et al. 2011.

Conditions (Blocked)


Story

Math

References for Language Task: Reliable across subjects and robust activation (Binder et al. 2011).

Social Cognition (Theory of Mind)

Participants were presented with short video clips (20 seconds) of objects (squares, circles, triangles) that either interacted in some way or moved randomly on the screen. These videos were developed by either Castelli and colleagues (Castelli et al. 2000) or Martin and colleagues (Wheatley et al. 2007). After each video clip, participants judge whether the objects had a mental interaction (an interaction that appears as if the shapes are taking into account each other’s feelings and thoughts), whether they are Not Sure, or whether there was No interaction (i.e., no obvious interaction between the shapes, with movement that appears random). Each of the two task runs has 5 video blocks (2 Mental and 3 Random in one run, 3 Mental and 2 Random in the other run) and 5 fixation blocks (15 seconds each).

Conditions (Blocked)


Random interaction

Mental interaction

References for the Social Cognition Task: Reliable across subjects and robust activation (Castelli et al. 2000; Castelli et al. 2002; Wheatley et al. 2007; White et al. 2011).

Relational Processing

This task was adapted from the one developed by Christoff and colleagues (Smith et al. 2007). The stimuli are 6 different shapes filled with 1 of 6 different textures. In the relational processing condition, participants are presented with 2 pairs of objects, with one pair at the top of the screen and the other pair at the bottom of the screen. They are told to first decide which dimension differs across the top pair of objects (shape or texture), and then to decide whether the bottom pair of objects also differs along that same dimension (e.g., if the top pair differs in shape, does the bottom pair also differ in shape?). In the control matching condition, participants are shown two objects at the top of the screen, one object at the bottom of the screen, and a word in the middle of the screen (either “shape” or “texture”). They are told to decide whether the bottom object matches either of the top two objects on that dimension (e.g., if the word is “shape”, is the bottom object the same shape as either of the top two objects?). For both conditions, participants respond yes or no by pressing one of two buttons. For the relational condition, the stimuli are presented for 3500 ms, with a 500 ms ITI, and there are 4 trials per block. In the matching condition, stimuli are presented for 2800 ms, with a 400 ms ITI, and there are 5 trials per block. Each type of block (relational or matching) lasts a total of 18 seconds. In each of the two runs of this task, there are 3 relational blocks, 3 matching blocks and 3 16-second fixation blocks.

Conditions (Blocked)


Relational processing

Matching


References for the Relational Processing Task: Localizer (Smith et al. 2007).

Emotion Processing

This task was adapted from the one developed by Hariri and colleagues (Hariri et al. 2002). Participants are presented with blocks of trials that ask them to decide either which of two faces presented at the bottom of the screen matches the face at the top of the screen, or which of two shapes presented at the bottom of the screen matches the shape at the top of the screen. The faces have either an angry or a fearful expression. Trials are presented in blocks of 6 trials of the same task (face or shape), with the stimulus presented for 2000 ms and a 1000 ms ITI. Each block is preceded by a 3000 ms task cue (“shape” or “face”), so that each block is 21 seconds long including the cue. Each of the two runs includes 3 face blocks and 3 shape blocks, with 8 seconds of fixation at the end of each run.

Conditions (Blocked)


Face

Shape

Note: A bug in the E-Prime script for the EMOTION task caused each run to stop short of the last three trials of the final task block. This bug was not discovered until data had been collected from several participants. Consequently, the BOLD images and E-Prime data for the EMOTION task are shorter than the original design described above.

References for the Emotion Processing Task: Localizer (Hariri et al. 2002); moderate reliability across time (Manuck et al. 2007).