Research

Reliable neural representations

I have a broad interest in how the neural code can be made robust to the widespread response variability (i.e., noise) that exists in the brain.

Nonlinear mixed selectivity

With Stephanie Palmer and Dave Freedman, I showed that conjunctive nonlinear mixed selectivity produces metabolically efficient and reliably decoded neural representations. That is, having single neurons that code for particular combinations of stimulus feature values (i.e., a mixed code) produces a representation of the stimuli that is more robust to noise than one that keeps those stimulus features separate (i.e., a pure code). In this work, we also outlined several important tradeoffs between mixed codes that combine different numbers of distinct features, including the population sizes they require and their optimal receptive field sizes in feature space.

Due to the ubiquity of mixed codes, we believe this description of their reliability and its tradeoffs will provide important tools for understanding the nature of representations across many different brain regions, and even across different kinds of brains.
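
To make the core intuition concrete, here is a minimal toy sketch in Python (my own illustration, not the analysis code from the paper): four stimuli defined by two binary features are encoded either by a pure code, with one neuron per feature value, or by a mixed code, with one neuron per conjunction of feature values. Response amplitudes are chosen so that total activity, a crude proxy for metabolic cost, is matched between the two codes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two binary stimulus features -> four possible stimuli.
stimuli = [(f1, f2) for f1 in (0, 1) for f2 in (0, 1)]

def pure_code(f1, f2, amp=1.0):
    # One neuron per feature value; two neurons active per stimulus.
    r = np.zeros(4)
    r[f1] = amp        # neurons 0-1 code feature 1
    r[2 + f2] = amp    # neurons 2-3 code feature 2
    return r

def mixed_code(f1, f2, amp=2.0):
    # One neuron per conjunction; the amplitude is doubled so that
    # total activity (the crude metabolic proxy) matches the pure code.
    r = np.zeros(4)
    r[2 * f1 + f2] = amp
    return r

def decode_error_rate(code, noise_sd=0.6, n_trials=5000):
    templates = np.array([code(*s) for s in stimuli])
    errors = 0
    for _ in range(n_trials):
        idx = rng.integers(len(stimuli))
        noisy = templates[idx] + rng.normal(0, noise_sd, size=4)
        # Nearest-template decoding, which is maximum likelihood
        # under independent Gaussian noise.
        guess = np.argmin(np.sum((templates - noisy) ** 2, axis=1))
        errors += guess != idx
    return errors / n_trials

print("pure code error rate: ", decode_error_rate(pure_code))
print("mixed code error rate:", decode_error_rate(mixed_code))
```

At matched total activity, the mixed code's population responses sit farther apart, so they are decoded correctly more often; the paper treats these questions for larger populations, varying numbers of mixed features, and varying receptive field sizes.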

Further materials

  1. Paper: W. Jeffrey Johnston, Stephanie E. Palmer, David J. Freedman. Nonlinear mixed selectivity supports reliable neural computation. | PLOS Computational Biology, 2020 | Cover
  2. Code: Available in this repository.

Distributed representations of multiple stimuli

I am broadly interested in how different brain regions interact with each other to construct representations of stimuli that are integrated across different sensory systems and processing streams. I am particularly interested in how this set of problems is solved in the presence of multiple stimuli, a condition that is understudied in both laboratory and theoretical settings. I believe that the distributed nature of information in the brain and the necessity of simultaneously representing variable numbers of stimuli are both likely to constrain neural representations in interesting ways.

Solving the assignment problem

An observer watching a barking dog and a purring cat together in a field has distinct pairs of representations of the two animals in their visual and auditory systems. Without prior knowledge, how does the observer infer that the dog barks and the cat purrs? This binding of disparate representations is called the assignment problem, and it must be solved to integrate distinct representations both across and within sensory modalities. In my work on the assignment problem, I analyze perhaps the most straightforward solution: the representation of one or more common pieces of stimulus information in pairs of relevant brain regions -- in the example above, estimates of the spatial positions of the cat and the dog represented in both the visual and auditory systems. Proceeding from this solution, I describe a tradeoff between the frequency of assignment errors (e.g., perceiving a purring dog and a barking cat) and the fidelity of the stimulus representation within each region (e.g., the precision of the representation of the cat's face), mediated by a general tradeoff between redundancy and efficiency. Assignment errors reported in humans are broadly consistent with our model; we also make further, specific predictions that have yet to be tested.
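
The tradeoff can be caricatured with a small simulation (a sketch under assumed numbers, not the model itself): two regions split a fixed, Fisher-information-style coding budget between the shared position estimate and each region's unique features, and objects are bound across regions by matching the rank order of the noisy position estimates. Spending more of the budget on position reduces assignment errors but degrades each region's unique-feature representation.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(share_pos, total_info=20.0, n_trials=20000, sep=1.0):
    """share_pos: fraction of a fixed coding budget (a Fisher-
    information-style assumption of this sketch) spent on the shared
    position estimate; the remainder goes to the unique features."""
    sd_pos = 1.0 / np.sqrt(share_pos * total_info)
    sd_feat = 1.0 / np.sqrt((1.0 - share_pos) * total_info)

    pos = np.array([0.0, sep])  # two objects along the shared axis
    errors = 0
    for _ in range(n_trials):
        # Each region forms its own noisy position estimates.
        est_a = pos + rng.normal(0, sd_pos, 2)
        est_b = pos + rng.normal(0, sd_pos, 2)
        # Bind across regions by matching rank order; an assignment
        # error occurs whenever the two orderings disagree.
        if (np.argsort(est_a) != np.argsort(est_b)).any():
            errors += 1
    return errors / n_trials, sd_feat ** 2

for share in (0.2, 0.5, 0.8):
    err, mse = simulate(share)
    print(f"position share {share:.1f}: "
          f"assignment errors {err:.3f}, feature MSE {mse:.3f}")
```

As more of the budget goes to the redundant position variable, assignment errors become rare while the unique-feature error grows: the redundancy-efficiency tradeoff described above.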

I am currently preparing a manuscript describing these results.

Further materials

  1. Recent poster: W. Jeffrey Johnston and David J. Freedman. Solutions to the assignment problem balance tradeoffs between local and catastrophic errors. | presented at COSYNE 2020

Differences in dorsal and ventral stream representations of natural images

In my experimental work, I have recorded from the inferotemporal cortex (ITC), a region in the canonical ventral visual stream, and the lateral intraparietal area (LIP), a region in the canonical dorsal visual stream, during the performance of three behavioral tasks based on natural images. I am taking multiple approaches to analyzing this rich dataset.

As a first pass, I characterized the prevalence of single-neuron tuning for different task and stimulus factors and contrasted how the representation of those factors changes across tasks. Some of these analyses are shown in the poster below.
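
For readers curious what this style of tuning characterization looks like, here is a generic sketch on synthetic data (a plain one-way ANOVA per neuron; the factors, criteria, and data in the actual analyses are described in the poster below):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic stand-in for real recordings: trials x neurons spike
# counts plus one categorical factor per trial (all assumptions).
n_trials, n_neurons = 400, 50
factor = rng.integers(0, 4, n_trials)          # e.g., image category
rates = rng.poisson(5.0, (n_trials, n_neurons)).astype(float)
rates[:, :20] += 2.0 * (factor[:, None] == 1)  # give 20 neurons tuning

def fraction_tuned(rates, factor, alpha=0.05):
    """One-way ANOVA per neuron: does mean firing differ across
    factor levels? Returns the fraction of 'tuned' neurons."""
    levels = np.unique(factor)
    tuned = 0
    for n in range(rates.shape[1]):
        groups = [rates[factor == lv, n] for lv in levels]
        _, p = stats.f_oneway(*groups)
        tuned += p < alpha
    return tuned / rates.shape[1]

print(f"fraction of neurons tuned: {fraction_tuned(rates, factor):.2f}")
```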

Further materials

  1. Recent poster: W. Jeffrey Johnston, Krithika Mohan, and David J. Freedman. What goes where: Using stimulus representations from both visual streams to guide behavior. | presented at SfN Neuroscience 2018

Preferential engagement of LIP in trained tasks

In other experimental work, we directly compare the same population of LIP neurons during undirected free-viewing behavior and during a highly trained, directed task. While the animal's physical behavior and behavioral markers of engagement are very similar across the two contexts, LIP's apparent engagement is markedly different: LIP appears to be preferentially involved in stimulus selection during the highly trained task and much less involved when the animal is free to choose which stimulus to attend to. We connect this finding to other work showing extensive changes in LIP representations with task training. In the context of those results, we argue that LIP may be preferentially engaged by highly trained, reward-motivated behaviors and less involved in unconstrained, exploratory behavior.

I am currently preparing a manuscript describing these results.