Are these the highest-resolution images available?

Being an IT person, I know transferring high-quality images takes a lot of bandwidth (money), but I was wondering: are the images we’re getting the best quality possible?


There are some sections where it’s really hard to tell where one cell ends and another begins, and it would help immensely to have just a tiny bit more resolution to work with in those areas. I wish CSI enhance were real.

Hi wschalle,


I don’t remember the resolution of the data set off the top of my head (as I recall, it’s around 15 nm or so), but one of the more sciency people can probably comment on that.  Depending on the setting, we’ll downsample the data to help save bandwidth.  Unfortunately for our bandwidth budget, Eyewire works with the full-resolution images.  We’re using JPEG compression for the images, so they might be a little grainier than the originals, but in our testing we didn’t find that using the uncompressed images helped all that much.
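If it helps to picture those two knobs, here’s a minimal sketch of downsampling plus JPEG compression using Pillow. The function name and the specific scale/quality numbers are made up for illustration; this isn’t our actual image pipeline.

```python
# Toy sketch of the two bandwidth knobs mentioned above (not the real pipeline):
# optional downsampling, then lossy JPEG compression.
from PIL import Image

def prepare_tile(path, scale=1.0, jpeg_quality=85):
    """Optionally downsample an EM tile, then save it as a JPEG."""
    img = Image.open(path).convert("L")                 # EM slices are grayscale
    if scale < 1.0:                                      # downsample to save bandwidth
        w, h = img.size
        img = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    out_path = path.rsplit(".", 1)[0] + ".jpg"
    img.save(out_path, "JPEG", quality=jpeg_quality)     # lossy, slightly grainier than the original
    return out_path

# For Eyewire: full resolution, just JPEG-compressed.
# prepare_tile("tile_0042.png", scale=1.0, jpeg_quality=85)
```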

We are looking at using other, higher-resolution data sets in the future.  In those cases, we may present you with downsampled images and let you zoom in (or CSI enhance, if you prefer) in the cases where it’s ambiguous what’s going on.

One way to look at it is that you guys are making judgement calls on the bleeding edge.  For the most part, the ambiguous bits are the parts that the computers fail on, and that’s why we’re relying on human judgement to make the best decisions possible given the data available.  Good luck!

Thanks!


I assume that you’re getting a good sample size on each of the traces.

If there’s a low rate of agreement on some of the neurons, does a neuroscientist go back and make the final determination about the structure of the neuron?

If a user finds a radically different solution from the computer’s, is there any kind of automated checking going on to see whether the solution is feasible or not?

Sorry, I’m full of questions – this application of crowdsourcing is really interesting!

No worries!  Currently, we’re sampling about 5 people per task, though sometimes more.  We don’t really have a sophisticated process of review, but we’re working on it.
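Just to give a rough picture of what combining answers could look like, here’s a hypothetical majority-vote sketch: each player submits the set of chunk IDs they think belong to the neuron, and a chunk is accepted once most players agree. The function name and the 60% cutoff are invented for illustration; this isn’t how we actually score tasks.

```python
from collections import Counter

def consensus(submissions, min_agreement=0.6):
    """submissions: one set of selected chunk IDs per player (roughly 5 per task)."""
    votes = Counter(chunk for s in submissions for chunk in s)
    needed = min_agreement * len(submissions)
    return {chunk for chunk, n in votes.items() if n >= needed}

# Example: 5 players, chunks 1 and 2 reach agreement.
# consensus([{1, 2, 3}, {1, 2}, {1, 2, 4}, {1, 3}, {1, 2}])  ->  {1, 2}
```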


In terms of differences between the humans and the computer, it’s not exactly as straightforward as all that.  Basically, what the computer generates is a great big map of the whole volume with probabilities that each pixel is connected to the pixel next to it.  We do what’s called watershedding in order to create the chunks that you guys are working with.  What that leaves us with is a probability that each chunk is connected to each of the adjacent chunks.
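If you’re curious what that looks like in code, here’s a toy 2D sketch (not our production pipeline) using NumPy, SciPy and scikit-image: start from a per-pixel boundary probability map, watershed it into chunks, then score each pair of touching chunks with a connection probability. The 0.3 seed cutoff is an arbitrary number picked for the example.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def chunks_and_edges(boundary_prob):
    """boundary_prob: 2D array in [0, 1], high where the net predicts a cell boundary."""
    # Seeds = connected blobs of low boundary probability (cell interiors).
    seeds, _ = ndimage.label(boundary_prob < 0.3)
    chunks = watershed(boundary_prob, markers=seeds)   # oversegmented "chunks"

    # Score each pair of touching chunks:
    # 1 - mean boundary probability along their shared border.
    border_probs = {}
    for a, b, p in [
        (chunks[:, :-1], chunks[:, 1:], (boundary_prob[:, :-1] + boundary_prob[:, 1:]) / 2),
        (chunks[:-1, :], chunks[1:, :], (boundary_prob[:-1, :] + boundary_prob[1:, :]) / 2),
    ]:
        touching = a != b
        for u, v, q in zip(a[touching], b[touching], p[touching]):
            key = (int(min(u, v)), int(max(u, v)))
            border_probs.setdefault(key, []).append(q)

    edge_probs = {k: 1.0 - float(np.mean(v)) for k, v in border_probs.items()}
    return chunks, edge_probs
```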

Now, if we want, we can threshold that graph of connections between chunks at a certain probability.  That would give us the “computer’s choices” to compare against a given human’s choices.  The big problem is choosing that threshold.  If we choose something too low, the computer will make lots of merging mistakes, until eventually it merges the whole volume into one big blob.  If we set the threshold too high, the computer will make splitting mistakes, where it doesn’t join things that should be joined (which is how it’s set right now).  In the middle, you get mistakes of both types.
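Continuing that toy sketch, here’s roughly what “the computer’s choices” at a given threshold would look like: merge every pair of chunks whose connection probability clears the threshold, using a simple union-find. The threshold values in the comments are just for illustration.

```python
def segment_at_threshold(edge_probs, threshold):
    """edge_probs: {(chunk_a, chunk_b): connection probability}, as in the sketch above."""
    parent = {}
    for u, v in edge_probs:
        parent.setdefault(u, u)
        parent.setdefault(v, v)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for (u, v), p in edge_probs.items():
        if p >= threshold:                  # lower threshold -> more merging
            parent[find(u)] = find(v)

    # Map each chunk to the object it ends up in.
    return {chunk: find(chunk) for chunk in parent}

# segment_at_threshold(edge_probs, 0.2)  # too low: merge errors, eventually one big blob
# segment_at_threshold(edge_probs, 0.9)  # too high: split errors (closer to how Eyewire is set)
```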

For Eyewire, we decided to set the threshold pretty conservatively.  The point being that it’s easier for people to fix splitting mistakes than merging mistakes.  You may run into a few mergers in the tasks, but they should be few and far between.  If you do, abort the task and say that you found a merger.