
RESEARCH INTERESTS

How the brain encodes value and motivation

Several brain regions have been implicated in encoding the economic value of cues that predict reward and in guiding decisions based on anticipated outcomes. Early work by Dr. Roesch described increased activity in frontocortical regions when primates expect a high- versus a low-value reward. This value-encoding signal (i.e., higher firing to cues leading to acquisition of the better reward) was more prominent in areas strongly affiliated with the motor system (e.g., premotor cortex, PM) than in areas more strongly affiliated with the limbic/reward system (e.g., orbitofrontal cortex, OFC). This suggested that the 'value-encoding' signal might not reflect the value of the expected reward per se, but rather the motivated effort that the animal puts forth to gain more valued rewards. When value and motivation were independently manipulated, OFC was found to represent the value of the predicted reward, while PM reflected the degree of motivation driven by that value. These results call into question what information is actually encoded in many other brain areas reported to exhibit reward-related activity and are guiding current research within the lab.

Reward predictions, prediction errors and attention

Early in his career, Dr. Roesch developed a novel odor-guided decision-making task that varies the value of reward (larger/smaller, immediate/delayed) across several trial blocks, allowing the experimenter to investigate several different aspects of associative encoding and prediction errors. To date, this task has been employed in over 25 publications. For example, we have shown that orbitofrontal cortex (OFC) and ventral striatum (VS) signal reward expectancies, while dopamine (DA) neurons and neurons in basolateral amygdala (ABL) signal signed (Rescorla-Wagner) and unsigned (Pearce-Hall) errors in reward prediction, respectively. We suspect these signals inform mechanisms in anterior cingulate cortex (ACC) that increase attention on trials after reward contingencies are violated, so that learning can occur.
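To make the distinction between these two error terms concrete, below is a minimal sketch of a hybrid Rescorla-Wagner/Pearce-Hall update for a single cue. It is illustrative only: the variable names (V for cue value, alpha for associability) and the learning-rate values are assumptions chosen for the example, not parameters from our published models.

    # Minimal sketch: signed (Rescorla-Wagner) vs. unsigned (Pearce-Hall) prediction errors
    # for a single cue. Parameter values and variable names are illustrative assumptions.

    def simulate(rewards, lr_value=0.2, lr_assoc=0.3, kappa=0.5):
        V = 0.0      # learned value of the cue (Rescorla-Wagner term)
        alpha = 1.0  # associability / attention paid to the cue (Pearce-Hall term)
        for r in rewards:
            delta = r - V            # signed error: better (+) or worse (-) than expected
            surprise = abs(delta)    # unsigned error: how surprising, regardless of sign
            V += lr_value * alpha * delta                   # value update, scaled by attention
            alpha += lr_assoc * (kappa * surprise - alpha)  # attention tracks recent surprise
            print(f"r={r:.1f}  delta={delta:+.2f}  |delta|={surprise:.2f}  V={V:.2f}  alpha={alpha:.2f}")

    # Example: reward is delivered for five trials, then unexpectedly omitted for five trials.
    simulate([1.0] * 5 + [0.0] * 5)

In this toy run, the signed error turns negative when reward is omitted, whereas the unsigned error (and hence alpha) rises whenever outcomes deviate from expectation in either direction; this is the sense in which value-error signals and attention-for-learning signals can be dissociated.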

Animal models of addiction, schizophrenia, and aging

Drug-exposed animal models show decision-making deficits in tasks that require flexible behavior (e.g., reversal learning, gambling tasks). Early in his career, Dr. Matthew Roesch showed that cocaine-exposed rats are hypersensitive to changes in expected reward size and delay and demonstrated, for the first time, that chronic cocaine exposure impacts several nodes in the corticostriatal circuit. For instance, drug exposure impairs flexible encoding of outcomes during decision-making in orbitofrontal cortex (OFC) and basolateral amygdala (ABL). Cocaine exposure also reduces the degree and flexibility of cue-evoked firing in ventral striatum (VS), while enhancing cue-evoked firing in dorsal striatum (DS), consistent with the idea that long-term drug exposure makes behaviors more habit-like; most recently, we have shown that lesions of VS can also enhance stimulus-response encoding in DS. In addition to addiction, we have examined how aging, schizophrenia, and ADHD might disrupt corticostriatal activity. We have shown that aging reduces attention-for-learning and reward-related signals in amygdala and OFC, that prefrontal and amygdala functions are disrupted in animal models of schizophrenia, and that prenatal nicotine exposure (a model of ADHD) affects neural circuits critical for impulse control, learning, and attention. To test these ideas, we have developed rodent analogs of human attentional set-shifting and stop-signal paradigms.

Social recognition of reward and distress

Human imaging studies have implicated the amygdala, striatum, and orbitofrontal cortex in the assessment of and response to the mental states of others, a function that is disrupted in many psychological disorders, including autism and psychopathy. However, detailed work in animals at the single-unit and neurochemical level is missing. Our first contribution to this field was to show that phasic DA release can be modulated by the delivery of reward to a conspecific partner. Our data show that animals display a mixture of affective states during observation of conspecific reward, first exhibiting increases in appetitive calls (50 kHz), then increases in aversive calls (22 kHz). Mirroring these ultrasonic vocalizations (USVs), DA signals were also modulated by delivery of reward to the conspecific. Our results demonstrate that the positive and negative states associated with conspecific reward delivery modulate DA signals related to learning in social situations.

Conflict and Response Inhibition

How does the brain detect two competing behavioral actions (conflict; cognitive control), and how is unwanted behavior inhibited? Signals in anterior cingulate cortex (ACC) and the supplementary eye field (SEF) seem to reflect the manifestations of conflict (i.e., competing directional signals) rather than conflict monitoring, as previously suggested. We have also found that ACC and basolateral amygdala (ABL) are modulated by errors and attention during learning, and we have recently developed a stop-signal task to characterize firing in DS, orbitofrontal cortex (OFC), and medial prefrontal cortex (mPFC). We have shown that activity in DS reflects the manifestation of conflict and the miscoding of directional signals, that OFC signals conflict adaptation (i.e., increased executive control under heightened conflict), and that mPFC monitors the degree of conflict after decisions are made.

Neurophysiology of rule switching in the corticostriatal circuit

The ability to adjust behavioral responses to cues in a changing environment is crucial for survival, and the medial prefrontal cortex (mPFC) is thought to detect and resolve conflict between rules when contingencies change (set-shifting), though its mechanisms are still unclear. Medial dorsal striatum (mDS) receives major projections from mPFC, and neural activity in mDS is closely linked to action selection, making mDS a potential major player in enacting rule-guided action policies. In a set-shifting task that our lab developed, we have shown that inactivation of mDS impairs the ability to shift to a new rule and increases the number of regressive errors. In mPFC, we have shown that single neurons represent distinct rules and that a separate population of mPFC neurons fires more strongly on high-conflict trials. In addition, outcome-related firing was modulated by the current rule and by the degree of conflict associated with the previous decision. These results promote a greater understanding of the roles that mPFC and mDS play in switching between rules.

CURRENT TECHNIQUES

Single Unit Recording
