Hypothetically: if a higher resolution scanning technique were developed

If a higher resolution scanning technique were developed, and new pictures with a higher voxel resolution were used, would the currently trained neural network still work? Or would it have to be re-trained?

It’s not even hypothetical anymore. The Eyewire dataset has been around for several years, and it was made using older techniques. We have a new dataset that we are producing in the lab using newer hardware and newer staining techniques, and we are working on training networks for it.


The primary difference that causes problems with the new dataset is that it uses a staining technique which also shows the organelles inside the neurons: just more stuff that the network needs to recognize as not being cell membranes. The second biggest factor is that the new data is highly anisotropic, that is to say, the resolution is much higher in two of the dimensions than in the third. This makes the old 3D techniques, which worked on mostly isotropic data, much less effective.
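To make the anisotropy point concrete, here is a minimal sketch (not the lab's actual code) of why kernel shape matters. It uses a naive numpy 3D convolution; the volume, kernel shapes, and sizes are all hypothetical. A cubic 3x3x3 kernel assumes the three axes are comparable, while a flat 1x3x3 kernel treats the thick, low-resolution z-slices more like a stack of 2D images:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid'-mode 3D cross-correlation with explicit loops.
    Far too slow for real use; only here to illustrate kernel shapes."""
    kz, ky, kx = kernel.shape
    z, y, x = volume.shape
    out = np.zeros((z - kz + 1, y - ky + 1, x - kx + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+kz, j:j+ky, k:k+kx] * kernel)
    return out

# Hypothetical anisotropic EM volume: z is the low-resolution axis.
vol = np.random.rand(8, 32, 32)

# Averaging kernels standing in for learned filters.
iso_kernel = np.ones((3, 3, 3)) / 27.0    # suits roughly isotropic data
aniso_kernel = np.ones((1, 3, 3)) / 9.0   # avoids mixing across thick z-slices

print(conv3d_valid(vol, iso_kernel).shape)    # (6, 30, 30)
print(conv3d_valid(vol, aniso_kernel).shape)  # (8, 30, 30)
```

Note how the cubic kernel also eats two of the eight z-slices at each border, which is proportionally much more costly along the thin axis.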

It’s not that the old networks don’t work at all; they just don’t work as well as we’d like. On top of that, higher resolution data makes objects of the same physical size take up more pixels, so any mistakes the network makes are magnified. In order to work with the new dataset efficiently, we are trying to improve the machine learning techniques that we use.

Which machine learning techniques are you guys using?

It’s mostly convolutional neural networks, but we are always looking for new ideas and approaches that give us better results.
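For readers unfamiliar with the term: the core of a convolutional network is a learned filter slid over the image, a nonlinearity, and an output layer. The sketch below (assumptions, not the lab's pipeline: the filter weights are made up, and a real network would have many layers and learn its weights by gradient descent) shows that pattern producing a per-pixel boundary probability map, which is roughly the form of output a membrane detector gives:

```python
import numpy as np

def conv2d(img, kernel):
    # 'valid'-mode 2D cross-correlation with explicit loops (illustration only).
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

img = np.random.rand(16, 16)                 # hypothetical grayscale EM tile
edge_kernel = np.array([[1., 0., -1.]] * 3)  # made-up filter; real ones are learned

feature = relu(conv2d(img, edge_kernel))     # conv layer + nonlinearity
boundary_prob = sigmoid(conv2d(feature, np.full((3, 3), 0.1)))  # output layer

print(boundary_prob.shape)  # (12, 12): each value is a boundary probability
```

Stacking many such layers, and training the filter weights on human-traced volumes like the Eyewire data, is what turns this pattern into a usable boundary detector.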