Does anyone want to discuss the neuroscience that we are trying to accomplish?
Yes, probably a million if you give me time. I’d actually love to see this completed cell in its full 3D glory (I may post something to this effect in the feature request area later). Here’s a good question: I have sometimes noticed dark lines, spots, or ‘inclusions’ (imagine inclusions in a diamond) in the axons and dendrites (surely axons and dendrites, since the cell body tasks are excluded). What might those be, and what do they do? Another thing I might ask is whether there are inter-neuronal connections within the optic nerve itself, or whether it is somewhat akin to a bus in a computer system: strictly a bundle of wires. Also, I can see a (somewhat) far-off application of this particular data set, namely direct delivery of visual data to the brain. I’d love to know what you (or anybody) think about any of these questions.
Actually, the full 3D display of neurons is what we are working on right at the moment, and it will be ready in a couple of days. You can count on it!
I’m definitely interested in finding out exactly how this neuron processes its inputs to produce its output. Are the inputs distributed in an oriented Gaussian way? What do the excitatory and inhibitory synapses look like (can we tell them apart)? Are there lateral inhibitory or excitatory connections? Are these modular: can other neurons be recognized as the same circuit yielding the same result when shifting along the retina? What is their density?
The 3D of the nearly(?) complete cell is so awesome. Also, the fact that you can see the units that all users are playing is super cool.
You guys should put that on t-shirts and sell them to raise money for this project. I’d buy one, maybe even two.
Another idea that comes to mind (and I’ll have to try this out) is to have the whole cell up on one screen and then use that as a supplement to the 3D and 2D views, to help sort out confusing sections. Also, if it’s an easy thing to implement, perhaps the sections in the bounding boxes could be colored differently from the rest of the cell, so they stand out from the tangle in the background.
A few questions:
Lots of great questions. I can answer #1.
One more general question I have is this - How much is known about processing of the retinal image that occurs locally in the retina? Is the data that arrives at the LGN like an image, or an edge enhanced image etc?
We know very little about the outputs of the retina, but one thing we do know for sure is that they are not like any kind of image at all. Lots of computations already happen in the retina, such as detection of object motion, its direction, the orientation of aligned patterns, and so forth (we know that such computations happen in the retina, but we don’t know the details), so the output must be some sort of ‘encoded’ information.
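To make the “encoded, not an image” point concrete, here is a toy sketch of one textbook idealization of a retinal computation: a center-surround receptive field modeled as a difference of Gaussians. This is a standard illustration, not something derived from this dataset, and all the function names and parameter values below are illustrative. The idea it shows is that uniform regions of the input produce almost no response, while edges produce strong responses, so what gets sent downstream is closer to an edge-enhanced encoding than a raw image.

```python
# Toy sketch (illustrative only): a 1D center-surround receptive field,
# modeled as a difference of Gaussians, applied to a luminance step edge.
import numpy as np

def dog_kernel(size=9, sigma_center=1.0, sigma_surround=3.0):
    """Excitatory center minus inhibitory surround, each lobe normalized
    so that a perfectly uniform input yields (near) zero net response."""
    x = np.arange(size) - size // 2
    center = np.exp(-x**2 / (2 * sigma_center**2))
    surround = np.exp(-x**2 / (2 * sigma_surround**2))
    return center / center.sum() - surround / surround.sum()

# A step edge: dark on the left, bright on the right.
signal = np.concatenate([np.zeros(20), np.ones(20)])
response = np.convolve(signal, dog_kernel(), mode="same")

# Flat regions give essentially zero response; the edge gives a strong one.
print("response in flat region:", response[5])
print("peak response near edge:", np.abs(response).max())
```

Running this, the flat-region response is zero (the kernel integrates to zero), while the response near the step is large, which is the usual cartoon of why retinal ganglion cell output emphasizes spatial contrast rather than transmitting raw pixel values.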
@turing8 T-shirts are a great idea! Do you or anyone else want to take charge of this?
@robertb wrote: 1. Right now, the overview shows us "Cell 5" and "Cell 6". Does that mean that cells 1 through 4 are done, or were they tests? If they are done, can we see them somewhere?
I believe the numbering system is somewhat arbitrary. It came from annotating the dataset from the two-photon imaging of activity. @jinseop can correct me if I'm wrong.
2. We're mapping only the retinal side of the ganglia, right? Because presumably there's only one axon on the other side that goes into the optic nerve?
Yes, you are correct. We are reconstructing the dendrites of ganglion cells, which are in the "inner plexiform layer" of the retina. On the other side of each ganglion cell body is an axon that goes into the optic nerve. Ideally, we would trace each axon all the way to its destination and find the neurons to which it connects. But they are really far away, far outside the boundaries of this dataset.
3. There are cells that are inputs to the ganglia: horizontal cells, bipolar cells, and amacrine cells. Are the functions of these cells sufficiently well understood and well mapped that we don't have to consider them when thinking about a computational model of vision?
Bipolar cells and amacrine cells are the direct inputs to the ganglion cells. We want to get to them also, but we are starting from the output and tracing our way backwards.
4. Are there any ganglia that take their inputs directly from the photoreceptive layer?
I don't think so, but amazingly there is a type of ganglion cell that is directly photosensitive and contains the photopigment melanopsin. This type of cell is important for synchronizing circadian rhythms to the light-dark cycle. http://en.wikipedia.org/wiki/Photosensitive_ganglion_cell
5. Were any of the ganglia found to be photoreceptive during the two-photon microscopy phase?
I'm not sure it was possible to distinguish between direct and indirect photoreceptivity.
6. How complete was the two-photon microscopy phase? That is, were all ganglia tested? Or only a subset? Are the ganglia that we're tracing only the ones that were tested? Or are we tracing all of them, including the ones that were not tested?
A large number (634) of ganglion cells were imaged. This is surely a significant fraction, but I don't know whether it was all of them. A more important limitation is that only eight visual stimuli were tested (bars moving in eight directions). The original paper is here: http://www.nature.com/nature/journal/v471/n7337/abs/nature09818.html
One more general question I have is this - How much is known about processing of the retinal image that occurs locally in the retina? Is the data that arrives at the LGN like an image, or an edge enhanced image etc?
We have the beginning of a page on orientation-selective ganglion cells.
Thanks @backupelk, you spared me a small burden.