Quantitative Neurobiology

Notes, assignments, and code for NEUROBIO 735 (Spring 2018).

Class details:

1/10 – 2/8:
Wednesday, Thursday
3:00 – 4:30
DIBS conference room


Image data

Image data in neurobiology can be broadly divided into two types:

Loading the data

This week’s data are available for download on GitHub. The data are recordings of calcium imaging experiments in mouse V1 that come to us courtesy of Ashley Wilson in Lindsey Glickfeld’s lab. The data are once again in .mat format. This week, they are large by the standards of what we’ve been working with (~200 MB), though far smaller than the datasets generated by real experiments. In the actual experiments, images are acquired at 30 Hz (30 images per second); the sample data have been downsampled to 3 Hz.

In the experiment that generated these data, the mice were exposed to a series of drifting grating stimuli, and recorded responses were images reflecting calcium fluorescence levels. The stimuli consisted of drifting gratings at a variety of orientations, probing the sensitivity of cells in V1 to both orientation and motion direction.

The variables in the data file are:

Each trial (and thus the dataset) begins with the stimulus off; the stimulus then switches on partway through the trial.

  1. Load the data.
  2. Based on principles of memory layout we’ve discussed, which dimension of the array should be time, if we’re mostly interested in performing analysis on the individual images?
  3. Plot sections of the data (as images) to determine which dimension of the data array is time.
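The steps above might be sketched as follows. This is a minimal example, not the assignment's solution; the file name and the variable name (here `stack`) are placeholders, since they depend on the actual download.

```matlab
% load the .mat file; variables appear in the workspace
load('calcium_data.mat');   % placeholder file name
whos                        % list variables and their sizes

% Check which dimension is time by plotting candidate slices:
% if time is the third dimension, a slice along it should look
% like an image of tissue rather than streaky noise.
figure;
imagesc(squeeze(stack(:, :, 1)));
axis image; colormap gray; colorbar;
```

For memory-layout reasons, Matlab stores arrays in column-major order, so slices along the *last* dimension are contiguous in memory; putting time last makes extracting whole images fast.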

Converting from images to time series

In addition to a sequence of images, we can also think of these data as a collection of time series, one per pixel.

  1. Extract the calcium time series for a few representative pixels. Plot them. Be sure your x-axis reflects the actual time between images/samples.
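A sketch of this extraction, again assuming the image stack is a variable `stack` with time as its third dimension (both assumptions, as above), and using the 3 Hz sample rate given earlier to build the time axis:

```matlab
fs = 3;                        % frames per second (downsampled rate)
nframes = size(stack, 3);      % assumes time is the third dimension
t = (0:nframes - 1) / fs;      % time axis in seconds

% extract and plot traces for a few arbitrary example pixels
pixels = [46 200; 100 100; 150 50];
figure; hold on;
for ii = 1:size(pixels, 1)
    trace = squeeze(stack(pixels(ii, 1), pixels(ii, 2), :));
    plot(t, trace);
end
xlabel('time (s)'); ylabel('fluorescence (a.u.)');
```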

Converting from arrays to movies

For spatiotemporal data, one of the best ways to gain qualitative insight is to use your eye as a pattern detector. For a sequence of images, Matlab, like many languages, will let us play them as a movie. Matlab’s movie command can store such a movie in standard formats, but a quick-and-dirty version that lets you inspect the data interactively takes only a few lines of code.

  1. Iterate through the frames of the movie, plotting each one, followed by the drawnow command. drawnow ensures the figure will be shown immediately and updated with each plot, instead of showing only the final image.
  2. Make sure the colors in your plot are appropriately normalized. Different image functions have different expectations about the range of values in the data you feed them, but it might help, for example, to make sure that the values in each pixel are between 0 and 1 across all images. More specifically, make sure you are normalizing across images, not just within images.
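One way to put these two steps together (a sketch, still assuming the stack variable and time-last layout from above): normalize using the minimum and maximum over the *entire* array, then fix the color limits so every frame is displayed on the same scale.

```matlab
% normalize across ALL frames, not frame by frame
lo = double(min(stack(:)));
hi = double(max(stack(:)));
norm_stack = (double(stack) - lo) / (hi - lo);   % values now in [0, 1]

figure;
for f = 1:size(norm_stack, 3)
    imagesc(norm_stack(:, :, f), [0 1]);  % fixed color limits across frames
    axis image; colormap gray;
    title(sprintf('frame %d', f));
    drawnow;   % force the figure to redraw on every iteration
end
```

Without the fixed `[0 1]` limits, imagesc rescales each frame to its own range, which can make flickering in overall brightness look like structure.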

Tuning curves: a first pass

For each neuron recorded in the movie, we might like to assess its sensitivity to both orientation and motion direction. To do this, we first need to find the locations of cells within the image (and they might move), then appropriately average the calcium fluorescence time series, and finally assess whether each cell is tuned and to what degree.

But for programming, we should start simple, with the most straightforward version we can think of: let’s try to assess the tuning of each pixel and do so with a back-of-the-envelope sort of calculation that we can refine as we go.

  1. Let’s start with a fixed point in the image (e.g., (46, 200)). Plot the calcium fluorescence time series for that point.
  2. For the stimulus-off baseline and each orientation, find the mean calcium activation. There are lots of ways to do this. Plot the tuning curve as a function of motion direction. Make sure to label the x-axis appropriately and indicate the baseline activation level.
  3. Find the orientation for which activation is maximal. Do this programmatically, since we’ll want to automate this for each pixel later.
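The three steps above might look like the sketch below. The stimulus-timing variables are in the data file, but their names vary, so this example assumes a hypothetical per-frame vector `stim_dir` holding the grating direction shown on each frame (NaN when the stimulus is off); adapt the indexing to the actual variables.

```matlab
px = squeeze(stack(46, 200, :));           % trace for the fixed point

baseline = mean(px(isnan(stim_dir)));      % mean over stimulus-off frames
dirs = unique(stim_dir(~isnan(stim_dir))); % directions actually shown
tuning = zeros(size(dirs));
for ii = 1:numel(dirs)
    tuning(ii) = mean(px(stim_dir == dirs(ii)));
end

figure;
plot(dirs, tuning, 'o-'); hold on;
plot(dirs([1 end]), [baseline baseline], 'k--');  % baseline level
xlabel('motion direction (deg)'); ylabel('mean fluorescence');

% preferred direction, found programmatically
[~, imax] = max(tuning);
preferred = dirs(imax);
```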

Unfortunately, this method for finding the preferred orientation has a few problems:

Next class, we’ll work to remedy some of these defects.

Solutions