Access To Raw Data?

I’d like to experiment with the raw data for a single cube (256x256x256). I have in mind an approach to tracing the neural bodies automatically, but I need the data to play with.

Another thing I’d like to play with is making use of my nice new 3D flat panel display as an output device.

Thanks–
-Dave
dash@xdr.com

Arbitrary-angle section planes and the like? I’d also like to have some raw data to play around with. For your own approach you’d also need the trace results.

Hey guys,


Just check out the network requests when you play a cube. The data is there, we just ask that you give EyeWire attribution. If you need help, we can discuss it further. We haven’t really documented it for other people to use yet, but it’s something we think could be cool.

Will Silversmith
EyeWire Developer

What format is the data in? I got base64-encoded data that, when decoded, yields a 128x128 PNG file with no alpha channel, in which only the red component carries any data (no blue, no green).

Moreover, there are only around 40 of these small images. It’s not clear how they fit together to form a cube, as there is almost no similarity between images. Finally, how does that become 256x256x256?
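
For reference, this is roughly the check I ran (Python with Pillow; b64 here stands in for the payload copied out of the network tab, minus any data-URI prefix):

```python
import base64
import io

from PIL import Image  # pip install pillow

# b64 is the raw base64 string copied from one of the network responses
# (strip any "data:image/png;base64," prefix first, if present).
png_bytes = base64.b64decode(b64)
img = Image.open(io.BytesIO(png_bytes))

print(img.size, img.mode)  # e.g. (128, 128) and "RGB"
for name, band in zip(img.getbands(), img.split()):
    print(name, band.getextrema())
# For these tiles only the red channel varies; green and blue
# come back as (0, 0).
```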

Thanks!
-Dave

Hey Dave,


There are different kinds of requests. You’re correct that they are base64 encoded. The red images you see are the segmentation generated by our AI. There should be other base64 streams that provide the grayscale images.

Each image is transmitted in what we refer to as “chunks”, which are 128x128. Four chunks make a whole image; you might notice them load in that pattern as you use EyeWire. Each stream loads a chunk, so you need four URLs to make a stack of complete images.
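
As a rough sketch (not our client code), stitching one complete slice from four decoded chunk images might look like this; the chunk-to-quadrant mapping here is a guess you’d want to verify against what you see loading in the client:

```python
from PIL import Image

def assemble_slice(top_left, top_right, bottom_left, bottom_right):
    """Paste four 128x128 chunk images into one 256x256 slice.

    The quadrant assignment is an assumption; check it against
    the loading pattern you observe in EyeWire.
    """
    full = Image.new("L", (256, 256))
    full.paste(top_left, (0, 0))
    full.paste(top_right, (128, 0))
    full.paste(bottom_left, (0, 128))
    full.paste(bottom_right, (128, 128))
    return full
```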

There are two kinds of image data and two kinds of mesh data in EyeWire.

What kind of project are you working on? If you’re doing an art project I can get more specific about how the URLs work. If you want to try doing machine learning, check out this challenge: http://brainiac2.mit.edu/SNEMI3D/

Will

Well, that certainly was helpful. The main problem was that I had stopped after decoding the first base64 section. There are actually 32 sections in each file; the files come from URLs of the form:

http://cache.eyewire.org/volume/12474/chunk/0/0/1/0/tile/xz/0:32

But looking high and low, the only images I’ve found are the 128x128 red PNG files. They have various levels of red (not grey), but none of the noise/graininess of the b&w images. They do appear to be slice layers, though: there is a slight change from one to the next, as expected.
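
For anyone else digging, here’s how I’ve been dumping the sections for inspection (Python; the data-URI wrapper is my assumption from eyeballing the responses, so adjust the pattern to whatever your network tab actually shows):

```python
import base64
import re

import requests

url = "http://cache.eyewire.org/volume/12474/chunk/0/0/1/0/tile/xz/0:32"
body = requests.get(url).text

# Pull out every base64 PNG payload; this assumes the sections are
# embedded as data URIs -- tweak the pattern if the wrapper differs.
sections = re.findall(r"data:image/png;base64,([A-Za-z0-9+/=]+)", body)
print(len(sections))  # 32 sections per file, per the above

for i, b64 in enumerate(sections):
    with open("section_%02d.png" % i, "wb") as f:
        f.write(base64.b64decode(b64))
```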

Mesh files also get loaded, from URLs like this one:
http://cache.eyewire.org/volume/12474/chunk/0/0/0/0/mesh/130
but there is no data there (0 bytes).

I wouldn’t call anything I’m doing related to this a “project”. I just like playing around with algorithms. It seemed to me that the problem of delineating the 3D volume of an individual neuron (or a portion thereof) isn’t too hard, and there was an approach I wanted to try out.

Note that for one particular grab I got a total of 1032 PNG files. At 256 layers and four 128x128 PNGs per layer, that’s 1024, plus 8 extra. So it may be that this is the actual data. Maybe the RGB ought to be interpreted as YUV…

Thanks for responding.
-Dave

ETA: Never mind, I figured it out; the key thing is you’ve got to ask for the right number. 15602 is wrong (all red PNGs; as you say, probably the AI color mapping). 115395 is good: I got greyscale images. Well, I’m in business… Thanks again.

OK, I finished writing a handy Linux utility, eyewire_fetch, that pulls down an entire cube and saves out 256 PNG files, each 256x256 greyscale pixels.

http://www.linuxmotors.com/eyewire

I put a source tarball in the above directory. I also put a sample tarball of the PNG files of cube 115395, in case anyone wants to play with some raw data (which is the point of this whole exercise) without bothering to build the utility.
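
If anyone wants the gist without building it, here’s roughly what the utility does, sketched in Python. The chunk-coordinate ordering, the plane name, and the response parsing are all assumptions based on the URLs above, so double-check them against your own network traces:

```python
import base64
import io
import re

import requests
from PIL import Image

VOLUME = 115395  # the greyscale volume that worked above

B64_PNG = re.compile(r"data:image/png;base64,([A-Za-z0-9+/=]+)")

def fetch_chunk_stack(cx, cy, cz):
    """Fetch the 128 slices of one 128^3 chunk as greyscale PIL images."""
    slices = []
    for start in range(0, 128, 32):  # slices come 32 at a time, as above
        url = ("http://cache.eyewire.org/volume/%d/chunk/%d/%d/%d/0"
               "/tile/xz/%d:%d" % (VOLUME, cx, cy, cz, start, start + 32))
        body = requests.get(url).text
        for b64 in B64_PNG.findall(body):
            img = Image.open(io.BytesIO(base64.b64decode(b64)))
            slices.append(img.convert("L"))
    return slices

# 2x2x2 chunks of 128^3 make up the 256^3 cube; stitch four chunk
# stacks per half-cube into 256 full 256x256 slices.
for cz in (0, 1):
    stacks = {(cx, cy): fetch_chunk_stack(cx, cy, cz)
              for cx in (0, 1) for cy in (0, 1)}
    for i in range(128):
        full = Image.new("L", (256, 256))
        for (cx, cy), stack in stacks.items():
            full.paste(stack[i], (cx * 128, cy * 128))
        full.save("slice_%03d.png" % (cz * 128 + i))
```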

-Dave
ETA 20141209: Modified the link above; the xdr.com domain changed ownership.

I’d like to access the raw data the Seung Lab uses for EyeWire, but I doubt they will release the entire E2198 scan to the general public.

 
-whitefieldcat

whitefieldcat, you could always inquire with the corresponding author of the paper that reported the E2198 dataset. The authors may well be open to other collaborations or extensions of their research.

Yeah, E2198 was given to MIT as a result of other research done at the Max Planck Institute, so if you have a valid, documented idea they might be willing to share their data with you… I think.

Has anybody looked into using ImageJ to automate neuron tracing?


http://imagej.nih.gov/ij/

Here is an example ImageJ plugin named Cell Outliner… there are gobs of other routines…


Cell Outliner
Author: Mike Castleman (m at mlcastle.net)
Based on the original SegmentingAssistant plugin
by Mike Miller (miller5 at mailaps.org)
History: 2003/10/08: First version
Requires: ImageJ 1.31i or later
Limitations: Won’t work on single images; a workaround is simply to add an empty frame to the image.
Source: Cell_Outliner.java
Installation: Download Cell_Outliner.java to the plugins folder, or subfolder, and compile and run using Plugins/Compile and Run. Restarting ImageJ will add a “Cell Outliner” command to the Plugins menu or a submenu of the Plugins menu.
Description: Applies the magic wand to a picture with a specified threshold and at a specified point on a stack. Will process an entire stack with a specified starting frame and ending frame, using the magic wand outline to clear inside, then clear outside, and then draw a line with graylevel=255. The “Preprocess Image” option will apply a median filter with radius 1 and apply any cuts defined by “Beginning Slice” and “Ending Slice.” The plugin provides a nifty set of navigation buttons for scanning through the stack. It also provides a very nice little feature that has the program compare the size of the current outline with the previous outline and jog the “Vertical Centroid” value in order to find the full outline again. This is helpful when dealing with noisy data, where the magic wand will occasionally find the outline around a patch of above-threshold noise instead of the desired object.
Bugs: Sometimes draws the graylevel at a number less than 255, probably has to do with scale on conversion.


http://imagej.nih.gov/ij/plugins/cell-outliner.html
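
In case it’s useful, here’s a rough Python imitation of the magic-wand-plus-outline step described above (NumPy/SciPy; just a sketch of the idea, not the plugin’s actual code):

```python
import numpy as np
from scipy import ndimage

def outline_slice(img, seed, threshold):
    """Magic-wand-style outline for one slice.

    Flood-fill at the seed over pixels >= threshold, then clear inside
    and outside and draw the region boundary at graylevel 255, roughly
    as the plugin description reads. Assumes the seed pixel itself is
    above threshold.
    """
    mask = img >= threshold
    labels, _ = ndimage.label(mask)
    region = labels == labels[seed]  # connected component under the seed
    boundary = region & ~ndimage.binary_erosion(region)
    out = np.zeros_like(img)
    out[boundary] = 255
    return out
```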


Just a thought…

Thanks for any advice…

ksmith9109


Interesting program! Several labs in different areas of study (including neuroscience) are reported to use it. I know there are a few different methods of neuron tracing. One method another lab uses, called skeleton tracing, involves putting a dot in the area being traced, skipping ahead every 10 slides or so; this results in a more skeletal structure of a cell. The Seung Lab developed its own software (Omni) to get the results we needed for our research. With Omni, we quickly go through each slide coloring in the area being traced. We’ve found Omni to be more accurate at finding extensions, nubs, and branch topography; Eyewire is a version of Omni that has been scaled down so anyone on the internet can use it. We’ve found we can “flesh out” the cell structure better. If we wanted to quickly trace a cell, then ImageJ would probably be fairly similar to what our own AI does on Eyewire/Omni. The AI uses a pixel threshold to determine the continuation, but as you know, it gets stuck a lot and needs some extra help from a human.
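
For anyone curious what “uses a pixel threshold to determine the continuation” means in practice, the basic idea is region growing: start from a seed voxel and keep absorbing neighbors whose intensity clears the threshold. A toy 3D sketch in Python (an illustration of the general technique, not the actual Eyewire/Omni code):

```python
from collections import deque

import numpy as np

def grow_region(volume, seed, threshold):
    """Toy 3D region growing: flood-fill from seed over voxels >= threshold.

    Illustrates threshold-based continuation only; the real AI is more
    sophisticated, which is why it still needs human help.
    """
    selected = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if selected[z, y, x] or volume[z, y, x] < threshold:
            continue
        selected[z, y, x] = True
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]):
                queue.append((nz, ny, nx))
    return selected
```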


Very interesting to learn about different processes in science! Cheers!