Notes, assignments, and code for NEUROBIO 735 (Spring 2018).
1/10 – 2/8:
Wednesday, Thursday
3:00 – 4:30
DIBS conference room
This homework focuses on extending and speeding up our code for detecting tuned cells in calcium imaging data. As part of making our analysis more realistic, we'll walk through a lite version of the method used in Ohki et al.
In class, we used a statistical approach that simply averaged all baseline frames and all frames for each moving grating stimulus together. We also discussed some of the potential drawbacks of this method. For this homework, we'll use a different approach: calculating an activation minus baseline difference image for each trial. This gives us one effect estimate per trial, from which we can compute a more honest measure of variability across trials.
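As a minimal sketch of the per-trial calculation (the variable names here are placeholders, not part of the assignment): if the movie is a `height × width × frames` array and you know which frames belong to one trial's baseline and stimulus periods, the difference image for that trial might look like:

```matlab
% Placeholder variables: data is height x width x nframes,
% base_frames and stim_frames index one trial's baseline and stimulus frames.
base_img = mean(data(:, :, base_frames), 3);  % average over baseline frames
stim_img = mean(data(:, :, stim_frames), 3);  % average over stimulus frames
diff_img = stim_img - base_img;               % activation minus baseline for this trial
```

Stacking `diff_img` across trials then gives a sample of effects from which to compute a per-pixel variance.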
In class, we remarked that much better methods are available for detecting whether or not a particular pixel is tuned. One such method is detailed in Ohki et al., which boils down to performing a one-way ANOVA across stimulus conditions at each pixel. In Matlab, this is handled by anova1
. The default return value of the command is the p-value. You should use a false positive rate of \(\alpha = 0.05\) for the test.

Adjusting the colormap can be tricky. If your pixel values for the tuning image are in the range \([1, n_{stims}]\) (and untuned pixels have values < 1), then the following code snippet will handle plotting correctly:
figure
image(plot_img, 'CDataMapping', 'scaled');
colormap([zeros(1, 3); parula(nstims)])
cc = colorbar();
cc.TickLabels = {'None', stim_names};
where
plot_img
is the image to plot,
nstims
is the number of stimuli, and
stim_names
is a cell array of stimulus labels for the colorbar.

In many cases of interest, the order in which we loop over arrays can impact performance. Let's see if it makes a difference in our case.
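To see why loop order can matter (a made-up example for illustration, not part of the assignment): Matlab stores arrays in column-major order, so varying the first index in the inner loop touches memory contiguously, while varying the second index produces strided access.

```matlab
A = rand(2000);  % arbitrary test matrix

% Order 1: inner loop over the first (row) index -- column-major friendly
tic
s1 = 0;
for jj = 1:size(A, 2)
    for ii = 1:size(A, 1)
        s1 = s1 + A(ii, jj);
    end
end
toc

% Order 2: inner loop over the second (column) index -- strided access
tic
s2 = 0;
for ii = 1:size(A, 1)
    for jj = 1:size(A, 2)
        s2 = s2 + A(ii, jj);
    end
end
toc
```

Both loops compute the same sum; any timing difference comes purely from memory access patterns.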
Clearly, the pixel-by-pixel calculation of tuning is the most time-intensive step in our procedure. To get a better sense of where our program is spending its time, we’ll use the Matlab profiler to take a look:
Profile your code (either the tuning image generation itself or the entire homework).
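In its simplest form, the profiler workflow looks like this (the script name below is a placeholder for whatever code you want to profile):

```matlab
profile on          % start collecting timing data
my_tuning_script    % placeholder: run the code of interest
profile off         % stop collecting
profile viewer      % open the interactive report of where time was spent
```

The report breaks down time by function and by line, which is the quickest way to find the true bottleneck.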
In cases where the bottleneck in our code is one of Matlab's own functions, and we're not free to change our approach to the problem (e.g., the algorithm or approximation we're using), we can still gain some traction by using parallel computing. The simplest method for doing this is parfor
, which executes code in parallel processes on your laptop (or on a cluster you're connected to). In our case, because the calculation at each pixel is independent of every other, the problem is embarrassingly parallel and we can easily use parfor
.
Use parfor
to parallelize your pixel tuning calculation. Use parpool
to request 4 workers.
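A minimal sketch of what the parallel loop might look like (the image dimensions and the per-pixel function are placeholders for your own tuning calculation):

```matlab
parpool(4);  % request 4 workers; only needed once per session

pvals = zeros(height, width);  % height/width assumed defined for your images
parfor ii = 1:height
    row = zeros(1, width);     % build one row locally on each worker
    for jj = 1:width
        % compute_pixel_pvalue is a placeholder for your anova1-based test
        row(jj) = compute_pixel_pvalue(ii, jj);
    end
    pvals(ii, :) = row;        % valid sliced assignment inside parfor
end
```

Note that assigning whole rows keeps `pvals` a properly sliced variable, which `parfor` requires in order to split the work across workers.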
How big a speedup did you get? (Make sure not to time the parpool
setup step, which happens only once per session.) We would naively expect the computation time to be 4x smaller. Why might your answer differ from this expectation?