Be able to view the image stack along all three axes

I’d love to be able to complete a task, then rotate the image stack 90 degrees and check my work from that perspective as well. From previous experience I’ve found that some features are much more obvious from one direction than another.

Hi lcampagn,


I’m a little curious as to what you’ve done before that’s similar.  In any case, our experience has led us to exactly the same conclusion.  The limitation we are bumping up against is that in order to view all three axes, you’d have to download three times the data.  It may be something that we add support for eventually, but it’s not a high priority at the moment.

Currently, when you are given a task, we pick the orientation at random.  This means that when multiple people do the same task, it gets viewed from all the different orientations by somebody.  We feel this gives reasonable accuracy without increasing the bandwidth requirements.  Unfortunately, it also means you’ll sometimes face difficult decisions, so just do the best that you can.  If this is very important to people, we may look at adding support for it sooner rather than later.

I’ve done some image segmentation on MRI images for a small-scale connectivity atlas I’m working on. The basic problem is the same there: surfaces that run parallel to the image plane are much more difficult to make out than those viewed orthogonally.


As I suggested in another thread, it might be useful to have the client do the JPEG decoding in JavaScript and then reconstruct the alternate-axis images before they are displayed. That way the images only need to be downloaded once. This would put a bit more load on the client’s CPU and memory, but it shouldn’t be too bad.
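A minimal sketch of that reconstruction idea (function and variable names here are hypothetical, and it assumes each slice has already been decoded to a flat grayscale byte array, with slices in z order):

```javascript
// Sketch: pack decoded 2D slices into one flat, z-major volume buffer so
// that views along the other axes can be rebuilt client-side without
// re-downloading anything. Assumes grayscale slices of width*height bytes.
function buildVolume(slices, width, height) {
  const depth = slices.length;
  const volume = new Uint8Array(width * height * depth);
  for (let z = 0; z < depth; z++) {
    volume.set(slices[z], z * width * height); // slice z starts at this offset
  }
  return { volume, width, height, depth };
}
```

From that single buffer, voxel (x, y, z) lives at index `z*width*height + y*width + x`, so slices along any orientation can be read back out without another download.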
Interesting - I had assumed we were only viewing imagery along the primary tomographic axis, but reconstruction of the alternate axis views does explain certain artifacts. 

Is there an indication of which axis we've been presented with?  Perhaps, as a middle ground to lcampagn's suggestion, there could be an option to switch between the 'presented' axis and the primary axis?

I am not sure if it's possible on weaker systems, but it could probably be computed client-side from a single-axis set of images. Once all the images from that single axis load, if you treat every pixel as a voxel (a small cube), you basically get a cube-shaped Minecraft level. The voxel data could then be read along any axis and, for speed, rasterized back to image data.
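As a rough sketch of reading that voxel cube along another axis (hypothetical names; it assumes a flat z-major buffer where voxel (x, y, z) sits at index `z*width*height + y*width + x`):

```javascript
// Sketch: read a y = const slice (an x/z view) out of a z-major volume
// buffer. Each output row is one z section; the result could then be
// rasterized back into ImageData for display.
function sliceXZ(volume, width, height, depth, y) {
  const out = new Uint8Array(width * depth);
  for (let z = 0; z < depth; z++) {
    for (let x = 0; x < width; x++) {
      out[z * width + x] = volume[z * width * height + y * width + x];
    }
  }
  return out;
}
```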

We are working on a feature to allow users to examine a cube from a different axis.  Hopefully we’ll roll it out soon.

Okay, I’ve added the ability to change your orientation. To do so, Ctrl+click (or Command+click on Mac) on the 2D image and drag it.


If you run into any bugs with this feature, please create a new thread in the bugs section of the forum. This feature will be added to the documentation at some point after we’ve verified it does not break Eyewire :slight_smile:

I like it!

This is a cool feature!  Tip: Sometimes it’s helpful to rotate the 2D view so that the neurite is perpendicular to the plane, because it can be difficult to trace when the neurite is close to parallel.

wow @echo !! this is stunning, thank you :slight_smile:


Interesting. I’d assumed that the vertical resolution would be lower than the 2D resolution, which would make x/z slices look oddly stretched. I guess I was wrong!
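If the section spacing had been coarser than the in-plane pixel size, a reconstructed x/z view would need stretching along z to keep its proportions. A minimal nearest-neighbour sketch of that correction (hypothetical names; assumes an integer thickness-to-pixel-size ratio):

```javascript
// Sketch: stretch an x/z slice along z by nearest-neighbour resampling so
// anisotropic voxels display with correct physical proportions. `scale` is
// the ratio of section thickness to in-plane pixel size.
function stretchZ(slice, width, depth, scale) {
  const out = new Uint8Array(width * depth * scale);
  for (let z = 0; z < depth * scale; z++) {
    const srcZ = Math.floor(z / scale); // source row this output row copies
    out.set(slice.subarray(srcZ * width, (srcZ + 1) * width), z * width);
  }
  return out;
}
```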

Do you also do the automatic segmentation step in multiple directions? Sometimes I come across artifacts where the segment has a spur where the automatic step included an extra bit for one slice only, leading to a thin wedge in 3D. I’ve only seen these running parallel to my current 2D slice, which would seem to suggest that edge detection is only performed along one axis?

Anyways, beautiful feature.

I want to work on mapping this evening

I like the way the 3D view starts out as a view of the entire cell and then zooms into the segment I am working on.  I think a toggle to allow viewing of the surrounding cubes would be helpful.