Image processing

I happen to have done a little machine-vision work on a computer vision product in the industrial automation & safety domain; as an EyeWire user, I’m sometimes impressed and sometimes frustrated by the decisions the edge-finder makes. It would be interesting to be able to work on that, either directly, by checking out the code and some sample datasets, or even just the raw image data. There could be other developers out there who would be interested in trying to optimize the code to cope with your data.

Hey edison!


Glad to hear you are interested in getting involved!  A large part of our lab is dedicated to the computer vision piece of the system.  We are always looking for better ways to segment the data.  This is definitely something that we’d be interested in getting feedback from the community on.  In fact, in the past we ran a competition to see who could provide us with the best results.  It was so successful that we are planning on running another one soon.  Definitely let me know if you’d like more details and we can keep you updated.

Matt

A tool within EyeWire that lets you identify a segment that jumps over a clear edge and mark where that edge is could give you a lot of data to train the AI on for that specific problem.

Actually, that’s kind of what everyone is doing right now.  The things you are selecting are the pieces as the AI sees them.  Once you’ve pieced together the cell the right way, we can retrain the AI on that.  Is that what you were suggesting?

Not really. I was talking about the sub-piece level: where one selection crosses over a cell wall and selects a significant portion of an adjacent cell, for instance. If leaving out that selection leaves out a significant chunk of the “good” cell, but including it brings in a significant chunk of a “bad” cell, then either way you’re training the AI on something that isn’t quite right. When I added that comment I didn’t know about the abort feature, which from my understanding is for that sort of situation. Out of curiosity, are you guys using an evolutionary algorithm, a learning algorithm, or both, to train the AI?

Ah, yeah.  Good point.  We don’t have a way of identifying segments that the AI merged.


The AI is all convolutional neural networks, plus a fancy watershed algorithm to create the chunks.  If you’d like more info, I’m sure someone can dig up a paper or two describing some of this stuff.
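
For anyone curious what “a watershed algorithm to create the chunks” means in practice, here is a minimal sketch (not the lab’s actual code) of a seeded, priority-flood watershed in plain Python/NumPy. The idea: labeled seed regions grow outward in order of increasing boundary strength (e.g. a CNN’s predicted boundary map), so regions meet along ridges and the volume falls apart into supervoxel-like chunks. The function name and toy setup are illustrative assumptions.

```python
import heapq
import numpy as np

def watershed(elevation, markers):
    """Minimal seeded priority-flood watershed (2D for simplicity).

    Grows each nonzero-labeled seed in `markers` outward across
    `elevation` (e.g. a predicted boundary map), always expanding
    the lowest-elevation frontier pixel first, so competing labels
    meet along high-elevation ridges.
    """
    labels = markers.copy()
    h, w = elevation.shape
    heap = []
    # Seed the priority queue with every labeled pixel.
    for y in range(h):
        for x in range(w):
            if labels[y, x] != 0:
                heapq.heappush(heap, (elevation[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        # Claim unlabeled 4-neighbors and enqueue them.
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]
                heapq.heappush(heap, (elevation[ny, nx], ny, nx))
    return labels
```

On a toy elevation map with a single high ridge and one seed on each side, the two labels flood their own basins and only meet at the ridge, which is exactly how a boundary map gets turned into disjoint chunks.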

Hi. Where can I download a sequence of raw micrographs to work on? 

Best,
Mos

I can ask around the lab and see what I can find for you.  What are you planning on doing with them?

I’ve been wondering whether it would be possible to get a couple cubes of data as well (in both raw and professionally traced form). I’m trying to get a project going with some friends to try to use an evolutionary algorithm to evolve an ANN for segmenting the data…
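
As a starting point for that kind of project, here is a minimal hill-climbing sketch of evolving a network’s weights (a toy, not a full neuroevolution setup, and not anything EyeWire uses): a tiny fixed-topology network whose parameters are perturbed with Gaussian noise, keeping whichever individual scores best. The network shape, fitness function, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(params, x):
    """Tiny one-hidden-layer network: inputs -> tanh -> sigmoid score."""
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def fitness(params, x, y):
    # Negative mean squared error: higher is better.
    return -np.mean((forward(params, x) - y) ** 2)

def evolve(x, y, pop_size=20, generations=150, sigma=0.1):
    """(1+lambda)-style evolutionary loop: mutate the current best
    individual's weights with Gaussian noise, keep any improvement."""
    best = [rng.normal(0, 1, (x.shape[1], 8)),  # w1
            np.zeros(8),                        # b1
            rng.normal(0, 1, (8,)),             # w2
            np.zeros(1)]                        # b2
    best_fit = fitness(best, x, y)
    for _ in range(generations):
        for _ in range(pop_size):
            child = [p + rng.normal(0, sigma, p.shape) for p in best]
            f = fitness(child, x, y)
            if f > best_fit:
                best, best_fit = child, f
    return best, best_fit
```

A real attempt at segmentation would need crossover, a population rather than a single parent, and per-pixel features from the image stack, but the select-mutate-keep loop is the same shape.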

So for everyone interested, you can still register and get the data for last year’s 2D segmentation challenge.  We are going to do a 3D challenge this year, and details for that should be posted soon.

And now for your enjoyment, here’s the 3D challenge website!