Request adjacent image

Often when following a neuron that runs almost entirely along an edge of the volume, it will disappear from view very slowly. Sometimes it reappears, but you only have a few pixels to judge whether it's the same neuron that has come back.


I believe a feature which allows requesting an adjacent image would greatly help in determining how to act in those situations.

Hi whatthecode,


We are actually dealing with this issue already, though it takes a little explaining to understand how. We are tracing these neurons through a large number of volumes. As a neuron flows from one volume to another, we detect those transitions and spawn new tasks based on them. There are a few situations that might be causing the circumstance you are describing.

1)  A neuron may brush up against the edge and, as you said, leave and come back.
2)  A neuron may leave and a different neuron may come back in (also as you suggested).
3)  A neuron may have branched just before it entered the volume you are working on.

In cases 1 and 2, once the neuron leaves your volume and is cleanly separated from all the surrounding neurons, that task is over. A new task will be spawned for the neuron in the next volume over. If the neuron in that other volume continues back into the volume you are currently working on, we will spawn a third task just for the piece of the neuron after it has re-entered your current volume. If it never comes back, that third task will never be spawned.

In case 3, there will already be another task in your current volume for the other branch of the neuron.
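
To make those spawning rules a little more concrete, here is a minimal sketch of how a task spawner like this might behave. The names used here (Task, spawn_followup_tasks, the exit-point dictionaries) are purely illustrative assumptions, not the actual system's API.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names and fields are assumptions,
# not the real task spawner's data model.

@dataclass
class Task:
    volume_id: str                       # which image volume the tracer works in
    seed_segment: int                    # the piece of neuron the task starts from
    parent_volume: str | None = None     # volume the neuron came from, if any


def spawn_followup_tasks(finished: Task, exits: list[dict]) -> list[Task]:
    """Spawn follow-up tasks once a tracing task is finished.

    `exits` is assumed to describe each place the traced neuron touched a
    face of the volume, e.g. {"neighbor_volume": "V2", "segment": 17}.
    """
    new_tasks = []
    for ex in exits:
        # Cases 1 and 2: the neuron leaves the volume, so a task is created
        # for it in the adjacent volume. If that trace later re-enters the
        # original volume, completing *that* task spawns a third task back
        # in the original volume for the re-entering piece. If it never
        # re-enters, no such task is ever created.
        new_tasks.append(Task(volume_id=ex["neighbor_volume"],
                              seed_segment=ex["segment"],
                              parent_volume=finished.volume_id))
    # Case 3 needs no special handling here: a branch that split before
    # entering this volume already produced a separate task for the other
    # branch when the upstream volume was traced.
    return new_tasks
```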

Basically, this is a situation where we are all relying on each other to do the right thing. We know that you may not have enough context to always get it right, but with enough people doing the right thing given the context they have, we should be able to synthesize the right answer. The task spawner is an integral piece of the puzzle too, and one we are still actively developing (hence some of the issues we've had recently).

As a side note, we would like to work with larger volumes eventually. The current problem is bandwidth: it's already slow enough that scrolling through the volume is a little stuttery when you first start a task, and making the volume any larger would compound that. We are working on ways to improve this, and people are getting faster internet connections all the time, so this may be something we revisit in the future.

Making the volume larger wouldn’t resolve this issue.


But, as you explained, I do see how spawning a new task does solve it. For clarity, it might be worth mentioning in a tutorial or FAQ that when you are uncertain, it's better not to assume it's the same neuron. If enough of the neuron's surface touches the sides of the volume, I suppose a new task is triggered anyway?