What will we discover about the retina by EyeWiring?

Does anyone want to discuss the neuroscience that we are trying to accomplish?


Right now we are reconstructing ganglion cells. These neurons are the outputs of the retina, sending their axons through the optic nerve to the brain. 

Our first cell is almost complete. From experiments that were done by Kevin Briggman when the retina was still alive, we know that this cell responds selectively to stimuli of a particular orientation.

I could go on and describe more, but first does anyone have any questions?

Yes, probably a million if you give me time. I’d actually love to see this completed cell in its full 3D glory (I may post something to this effect in the feature request area later). Here’s a good question: I have sometimes noticed dark lines, spots, or ‘inclusions’ (as in a diamond) in the axons and dendrites (surely those, since the cell body tasks are excluded). What might those be and what do they do? Another thing I might ask is whether there are inter-neuronal connections in the optic nerve itself, or is it somewhat akin to a bus in a computer system, strictly a bundle of wires? Also, I think I can see a (somewhat) far-off application of this particular data set: direct delivery of visual data to the brain. I’d love to know what you (or anybody) thinks about any of those questions.

Actually, the full 3D display of neurons is what we are working on right at the moment, and it will be ready in a couple of days. You can count on it!


The ‘inclusions’ are an artifact of the tissue preparation. The staining technique used here is supposed to selectively stain only the cell membrane, but sometimes the ‘dye’ penetrates the cell membrane and also stains the organelles of neurons, such as vesicles, mitochondria, etc. This often confuses the AI, and it decides to stop coloring when there are inclusions. One of our goals is to make the AI smart enough to discriminate between this noise and the real boundaries of neurons using larger-context information. 

As far as I know, the optic nerve is rather akin to a bus. It is important that the axons not be short-circuited. Many kinds of visual information need to be separately handled until they can be properly processed in the visual cortex of the brain. 

The 3D display is now online.  You can check out our progress here.

I’m definitely interested in finding out exactly how this neuron processes its inputs to produce its output. Are the inputs distributed in an oriented Gaussian way? What do the excitatory and inhibitory synapses look like (can we tell them apart)? Are there lateral inhibitory or excitatory connections? Are these modular: can other neurons be recognized as the same circuit yielding the same result when shifting along the retina?  What is their density?


So yeah, I’ve got lots of questions that I would like to see answers to. 

The 3D view of the nearly(?) complete cell is so awesome. Also, the fact that you can see the units that all users are playing is super cool.

You guys should put that on t-shirts and sell them to raise money for this project. I’d buy one, maybe even two.

Another idea that comes to mind (and I’ll have to try this out) is to have the whole cell up on one screen and then use that as a supplement to the 3D view and the 2D views to help sort out confusing sections. Also, if it’s an easy thing to implement perhaps we could have the sections in the bounding boxes colored a different color than the rest of the cell to make it stand out from the tangle in the background.

@robertb

You make a very good point about excitatory and inhibitory synapses. Being able to map the distribution of inhibitory synapses by means of structure would be quite a contribution; I get the impression that inhibitory systems are somewhat of a ‘mystical’ area in neuroscience currently. I do not know what the prospects are for such a thing, though - it may be that they are indistinguishable from structure alone.

One more general question I have is this - How much is known about processing of the retinal image that occurs locally in the retina? Is the data that arrives at the LGN like an image, or an edge enhanced image etc?

A few questions:


1. Right now, the overview shows us “Cell 5” and “Cell 6”. Does that mean that cells 1 through 4 are done, or were they tests? If they are done, can we see them somewhere?

2. We’re mapping only the retinal side of the ganglia, right? Because presumably there’s only one axon on the other side that goes into the optic nerve?

3. There are cells that are inputs to the ganglia: horizontal cells, bipolar cells, and amacrine cells. Are the functions of these cells sufficiently well-understood and well-mapped that we don’t have to consider them when thinking about a computational model of vision?

4. Are there any ganglia that take their inputs directly from the photoreceptive layer?

5. Were any of the ganglia found to be photoreceptive during the two-photon microscopy phase?

6. How complete was the two-photon microscopy phase? That is, were all ganglia tested? Or only a subset? Are the ganglia that we’re tracing only the ones that were tested? Or are we tracing all of them including the ones that were not tested?

7. What is our rate of mapping? A graph of overall volume completed per unit time would be nice.

I think that’s all for now :slight_smile:

Lots of great questions.  I can answer #1.


Think of cells 1 to 4 the way you think of Heinz varieties 1 through 56: just trials getting us to the right recipe.

One more general question I have is this - How much is known about processing of the retinal image that occurs locally in the retina? Is the data that arrives at the LGN like an image, or an edge enhanced image etc?

We know very little about the outputs of the retina, but one thing we do know for sure is that they are not like any kind of image at all. Many computations already happen in the retina, such as detection of object motion, its direction, the orientation of aligned patterns, and so forth. (Again, we know that such computations do happen in the retina, but we don’t know the details.) The output must be some sort of ‘encoded’ information. 



and … to roughly answer robertb’s questions, 
The two cells (#5 and #6) are horizontally orientation-selective ganglion cells; they respond only to horizontally aligned patterns. This we know from the two-photon imaging. We have not yet decided what other types of cells we will work on next, but one possibility is the presynaptic partners of cell 6 in the inner plexiform layer (IPL), i.e., the amacrine cells and bipolar cells forming a neural circuit with cell 6. Although models for this circuit have been suggested, none of them is on firm ground; they are all loosely based on small pieces of evidence, the few facts that people know (for example, see Venkataramani 2010 at the bottom of the “Retina” page on the main EyeWire site). So once we have the circuit, it could be a starting point for all kinds of research. 

As far as is known, direct connections between photoreceptors and ganglion cells are not possible in general. Their neurites are located too far apart (IPL vs. OPL) to make contact. As you (seem to) know, some ganglion cells are said to be photoreceptive, but I believe our collaborators in Germany did not perform experiments on this. 

Maybe we will be able to provide some statistics, including the progress rate, in the near future. 

What do the excitatory and inhibitory synapses look like (can we tell them apart)? Are there lateral inhibitory or excitatory connections? Are these modular: can other neurons be recognized as the same circuit yielding the same result when shifting along the retina?  What is their density?
There is a more fundamental question: How can we identify any synapses at all?  Briggman, Helmstaedter, and Denk (our collaborators in Heidelberg) used an unconventional stain that marks the boundaries between neurons but mostly does not stain the intracellular organelles. This means that neurotransmitter vesicles (one of the telltale signs of a synapse) are not visible in this dataset. Instead, Briggman et al. hypothesized that amacrine cells wrap around ganglion cells when they are making a synapse. This looks as if the amacrine cell is "grasping" the ganglion cell. They verified their hypothesis by analyzing a smaller, conventionally stained dataset. (See their 2011 Nature paper, which we should get around to posting online.)
We are planning to use the same approach of looking at the shapes and sizes of contact points to guess where synapses are. 
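To make the contact-based idea concrete, here is a toy sketch (not the actual EyeWire pipeline; the data, thresholds, and the `wrap_fraction` measure are all hypothetical) of how one might flag candidate synapses by the size and "grasping" shape of membrane contacts:

```python
# Toy illustration of contact-based synapse detection.
# Each patch is (contact_area_um2, wrap_fraction), where wrap_fraction
# is a made-up estimate of how far one membrane curls around the other.
patches = [
    (0.05, 0.1),   # tiny incidental touch
    (0.40, 0.7),   # large, grasping contact
    (0.10, 0.2),
    (0.35, 0.8),   # large, grasping contact
]

AREA_THRESHOLD = 0.25   # assumed cutoff, um^2
WRAP_THRESHOLD = 0.5    # assumed cutoff

def candidate_synapses(patches):
    """Return patches whose size and wrapping suggest a synapse."""
    return [p for p in patches
            if p[0] >= AREA_THRESHOLD and p[1] >= WRAP_THRESHOLD]

print(candidate_synapses(patches))  # -> [(0.4, 0.7), (0.35, 0.8)]
```

In practice the classification would be validated against a conventionally stained dataset, as Briggman et al. did, rather than relying on fixed thresholds.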

Regarding the distinction between excitatory and inhibitory synapses, that should not be a problem. Physiologists have found general rules: synapses from bipolar cells to ganglion cells are excitatory, etc.

Yes, the retina has a repeating structure: cells of a given type are spaced quasiperiodically, and the spacing is different for each cell type.

@turing8  T-shirts are a great idea!  Do you or anyone else want to take charge of this?


I made some T-shirts for my book: http://connectomethebook.com/?page_id=1303
So I can give suggestions about how to do this, but I don’t have time to actually do it myself.
@robertb wrote:
1. Right now, the overview shows us "Cell 5" and "Cell 6". Does that mean that cells 1 through 4 are done, or were they tests? If they are done, can we see them somewhere?
I believe the numbering system is somewhat arbitrary. It came from annotating the dataset from the two-photon imaging of activity. @jinseop can correct me if I'm wrong.
2. We're mapping only the retinal side of the ganglia, right? Because presumably there's only one axon on the other side that goes into the optic nerve?
Yes you are correct. We are reconstructing the dendrites of ganglion cells, which are in the "inner plexiform layer" of the retina. On the other side of each ganglion cell body is an axon that goes into the optic nerve. Ideally, we would trace each axon all the way to its destination, and find the neurons to which it connects. But they are really far away, far outside the boundaries of this dataset.
3. There are cells that are inputs to the ganglia: horizontal cells, bipolar cells, and amacrine cells. Are the functions of these cells sufficiently well-understood and well-mapped that we don't have to consider them when thinking about a computational model of vision?
Bipolar cells and amacrine cells are the direct inputs to the ganglion cells. We want to get to them also, but we are starting from the output and tracing our way backwards.
4. Are there any ganglia that take their inputs directly from the photoreceptive layer?
I don't think so, but amazingly there is a type of ganglion cell that is directly photosensitive and contains the photopigment melanopsin. This type of cell is important for synchronizing circadian rhythms to the light-dark cycle. http://en.wikipedia.org/wiki/Photosensitive_ganglion_cell
5. Were any of the ganglia found to be photoreceptive during the two-photon microscopy phase?
I'm not sure it was possible to distinguish between direct and indirect photoreceptivity.
6. How complete was the two-photon microscopy phase? That is, were all ganglia tested? Or only a subset? Are the ganglia that we're tracing only the ones that were tested? Or are we tracing all of them including the ones that were not tested?
A large number (634) of ganglion cells were imaged. This is surely a significant fraction, but I don't know whether it was all of them. A more important limitation is that only eight visual stimuli were tested (bars moving in eight directions). The original paper is here: http://www.nature.com/nature/journal/v471/n7337/abs/nature09818.html
We will try to post this soon.  Ideally, we would like to trace all the cells, but we have to start somewhere, so we are tracing two cells that were orientation selective.


One more general question I have is this - How much is known about processing of the retinal image that occurs locally in the retina? Is the data that arrives at the LGN like an image, or an edge enhanced image etc?
The classic model is that the retinal ganglion cells produce an edge-enhanced image by convolving with a "Mexican hat" filter, or difference of Gaussians.  But if it were this simple, why would there be so many types of ganglion cells? (Researchers say 15-20 and the number is still growing.) It is becoming increasingly accepted that the retina is much more complicated, and each type of ganglion cell performs a distinct kind of visual computation.  For more information, see the review by Gollisch and Meister:
http://www.cell.com/neuron/retrieve/pii/S0896627309009994
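The classic center-surround model above can be sketched in a few lines. This is a minimal 1-D illustration of a difference-of-Gaussians ("Mexican hat") filter applied to a step edge; the sigmas and the signal are illustrative values, not measured retinal parameters:

```python
# 1-D difference-of-Gaussians (Mexican hat) edge enhancement.
import math

def gaussian(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def dog_kernel(radius, sigma_center=1.0, sigma_surround=3.0):
    """Narrow excitatory center minus wide inhibitory surround."""
    return [gaussian(d, sigma_center) - gaussian(d, sigma_surround)
            for d in range(-radius, radius + 1)]

def convolve(signal, kernel):
    """Plain convolution, treating values outside the signal as zero."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

# A step edge in "luminance": the DoG response is largest near the edge,
# i.e., the output is edge-enhanced rather than a copy of the input.
signal = [0.0] * 10 + [1.0] * 10
response = convolve(signal, dog_kernel(6))
peak = max(range(len(response)), key=lambda i: abs(response[i]))
print(peak)  # near index 10, the location of the edge
```

The point of the Gollisch and Meister review is precisely that this simple linear filter is only a first approximation: different ganglion cell types implement different, often nonlinear, computations.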

We have the beginning of a page on orientation-selective ganglion cells.

http://wiki.eyewire.org/wiki/Orientation_Selective_Ganglion_Cell

Does anyone want to be in charge of describing the scientific challenge on this page? This would require adding a new section, displaying and explaining the two-photon data that @jinseop can supply, and then posing the questions we are trying to answer. @jinseop and I can help.
@hsseung, @jinseop
Thanks for the answers & reference. If anyone wants to read it without violating the boycott of a certain publisher, or doesn't have an academic subscription, it is available from an MIT page on connectomics here:

http://hebb.mit.edu/courses/connectomics/

Oh ... guess who one of the organisers is... :)

Thanks @backupelk, you spared me a small burden. 

Still, I’m going to put those references somewhere on EyeWire soon. 

and … guess to whom the “hebb” site belongs… :slight_smile: