You know why nobody likes statisticians? Because we’re the lawyers of science.
— name withheld
I just finished teaching another short course in programming, this time for our undergraduate neuro majors at Duke. As mentioned elsewhere, I’m excited by the work Software Carpentry has been doing, especially in designing lessons for programming novices. However, when it came time to run my own class, I found myself doing some redesign.
Welcome to part 3 of my series on how to win at undergraduate research. Bottom line: I’ve been arguing that if you want to have a successful research project (or simply be remembered come recommendation letter time), you should consider not being a flake and spending time in lab. In this final installment, I’ll be focusing on the research side of the equation, dispensing all my accumulated wisdom on the process of scientific discovery.
Disclaimer: I received a free copy of this work under the O’Reilly Blogger Review Program.
I like R. At least, well enough. I find the common lore to be true: the language is inconsistent and crufty, objects are bizarre, and the data structures can be hard to work with. But the package system is excellent, with significantly better support than Python’s for advanced statistical methods and for the analysis of categorical data. It’s clear that several of my favorite Python packages, pandas and statsmodels in particular, deliberately borrow from the best R has to offer, and ggplot2 produces, in my opinion, the best-looking off-the-shelf plots available. So, collected here for the benefit of my friends learning R, is my shortlist of recommendations for learning the language.
I’m at Day 2 of the second Human Single Unit conference, this time held at Johns Hopkins. It’s been a strong program, and inspiring, given that I’m trying to get a related manuscript of my own out the door. Highlights from today:
The Kahana lab apparently posts its raw data online for others to analyze. Mad respect. Would love to do this eventually. Human data of this kind are so rare that we shouldn’t be hoarding.
Itzhak Fried saying, “Everyone knows that about the minimum it takes to publish one of these single unit human studies is 3–5 years.” Well, I didn’t know when I started, but that’s turning out to be a pretty accurate time frame. This is what you don’t read about in the papers: the time, technical challenges, and frustrations. But man, oh, man, is the good stuff in this vein cool.
Another gem from Dr. Fried (loosely paraphrased): “Typically, as a physiologist, you have a question, then you put an electrode into the brain. In this work, you have an electrode, and you have to ask a question. And not a boring question, or a stupid question, because it’s going to take you a long time to answer it.”
This is the second part of a series on how to have success as an undergraduate researcher. It’s somewhat in the spirit of the much better How to Be a Good Graduate Student, albeit a little more tongue-in-cheek. Mostly, I’m trying to be honest about two infrequently acknowledged truths about undergraduate research:
Yes, it’s rec letter season again. Grad school, internships, fellowships, and the great Hunger Games that is medical school admissions. As a senior member of a large lab with a higher-than-average contingent of undergraduate volunteers, I count somewhere in the neighborhood of a dozen students I’ve personally mentored over the last several years, and that means I even have to write a letter now and then on behalf of former students.
From an undergraduate at my institution who was selected as a BRAIN Initiative Grand Challenge Scholar and interviewed on whitehouse.gov:
Kevin: The best way to become a valuable member in nearly any scientific field is to learn to code. Software developers are needed across all types of industry (private, university, government). It has been my experience that nearly every lab could use another pair of hands to perform data analysis or machine learning techniques that you can learn with at least some coding background. There are practically an infinite number of ways to learn to code, most of them you can do in your free time.
As of today, this blog has moved to GitHub Pages, where I can take advantage of Jekyll to write posts in glorious low-tech Markdown. Plus equations \(\nabla \cdot \mathbf{B} = 0\)! And code:
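Something like this, say (a throwaway snippet purely to show off the highlighting; the little numerical divergence check is just for illustration):

```python
# Maxwell would approve: verify that a toy magnetic field is divergence-free.
import numpy as np

def divergence(field, spacing=1.0):
    """Numerical divergence of a vector field sampled on a regular grid."""
    return sum(np.gradient(component, spacing, axis=axis)
               for axis, component in enumerate(field))

field = np.zeros((3, 10, 10, 10))  # three components on a 10x10x10 grid
field[2] = 1.0                     # a uniform B_z
print(np.allclose(divergence(field), 0.0))  # True: no divergence anywhere
```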
This is almost embarrassingly exciting.
Over the last year or so, I’ve been porting all my analysis over to Python. Why, you ask? Perhaps because it’s a pleasure to use a language that’s clean and consistent. Perhaps because I’m tired of dying a little inside every time I have to process strings in Matlab. Either way, to borrow a concept used by Bible translators, one could almost say that I’ve discovered in Python my heart language for coding.
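A contrived example, but representative of the everyday string wrangling I have in mind (the file names below are made up):

```python
import re

# Pull subject, session, and trial numbers out of a pile of data file names.
pattern = re.compile(r"subj(?P<subject>\d+)_sess(?P<session>\d+)_trial(?P<trial>\d+)\.mat")

for name in ["subj01_sess2_trial003.mat", "subj02_sess1_trial045.mat"]:
    info = pattern.match(name).groupdict()
    print(int(info["subject"]), int(info["session"]), int(info["trial"]))
```

A few readable lines, no cell arrays, no dying a little inside.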
Disclaimer: I received a free review copy of this work through the O’Reilly Blogger Review Program.
Some people, when confronted with a problem, think, “I know, I’ll use regular expressions.” Now they have two problems.
— Old Regex Proverb
Given that I have proven almost wholly incapable of sustaining a blog, it might come as a surprise to my “legions” of readers that I would break a long silence just to start posting book reviews. Programming book reviews at that.
Courtesy of Randall Munroe’s xkcd and Theo Sanderson’s handy text editor, a description of what I do using only the ten hundred most common words:
Sean Taylor has a great post up about what your choice of stats software says about you. Some of it’s a little harsh, and it’s all tongue-in-cheek, but I do have to admit to doing this sort of profiling. Graphs made in Excel do not exactly inspire confidence in yours truly.
It’s high times in the land of neuroscience. In the last two weeks, we’ve had three high-profile papers from the wizards of optogenetics, all related to depression, all linked to dopamine. Now, by this point, everyone in the blogosphere has covered the gee-whiz aspect of this story, so I thought I’d delve into a couple of problematic issues these studies raise for those of us who think about neuroscience for a living.
Long, long ago, in a galaxy far, far away, I was a physicist. Or at least I trained as one. And even now, far removed from anything like what I did in grad school, I still wouldn’t trade that background for anything. But by this point, I’ve been a neuroscientist long enough that the hours I spent staring forlornly at equations on chalkboards have begun to seem like they happened to someone else.
I make no secret of the fact that I love to read The New Yorker. Truth be told, I’ve probably read more than 95% of the long features published in the last seven years. And of those many, many pieces, my favorite category is the profile. Maybe because I’m fixated on high-achievers and the rarefied air in which they seem to live. (The academic’s version of tabloid-reading? Bernard-Henri Lévy as Lindsay Lohan?) Maybe just because humans are the most intensely human primates on the globe, and we love nothing better than gossip. Either way, as I was walking back from lunch after having enjoyed Jane Kramer’s piece about Yotam Ottolenghi, a thought struck me: why don’t journalists write about scientists the way they write about chefs? Or visual artists, for that matter? Or musicians?
I’m uploading here three presentations for which I still get requests from time to time.
Aristotle teaches us that the most auspicious beginning for any story is in the middle of the thing. Elmore Leonard says leave out the parts readers tend to skip.