Whose brain was this?

Just curious whose brain we are all rummaging through right now.  Or are we not allowed to know?

It's probably confidential. What I'm wondering is how the pictures were taken. How did we get layers?

It’s a mouse retina, not a brain.

It is indeed mouse retina, not brain.  We used serial block-face scanning electron microscopy to get the layers.  Basically, you take a tiny chunk of retina, turn it into plastic, and then use a machine with a very sharp knife (think of it as a really excellent deli slicer) to cut the chunk into layers.  Then each layer is imaged, aligned, and readied for EyeWire.


http://www.ted.com/talks/sebastian_seung.html  If you want, skip ahead to the 4:40 mark.  It’ll show you what we do with the layers and how we get 3D shapes from 2D images.

Why not use DICOM slices from an MRI or CT scan?

Because the SEM is a lot higher resolution than either MRI or CT.  I believe these scans are around 10 nm resolution, give or take.
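To put that in perspective, here's a rough back-of-the-envelope comparison (the MRI and CT voxel sizes below are typical ballpark figures, not the specs of any particular scanner):

```python
# Rough comparison of imaging resolutions (all values approximate).
sem_nm = 10            # ~10 nm: the SEM data discussed in this thread
mri_nm = 1_000_000     # ~1 mm: a typical clinical MRI voxel (assumed)
ct_nm = 500_000        # ~0.5 mm: a typical clinical CT voxel (assumed)

print(f"MRI is roughly {mri_nm // sem_nm:,}x coarser than the SEM data")  # ~100,000x
print(f"CT is roughly {ct_nm // sem_nm:,}x coarser than the SEM data")    # ~50,000x
```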

Do you generate a 3D pixel space from the slices, or do we get it just as the slices were cut? Does the AI get it in 3D? On some cubes it seems to me the AI could’ve done a better job if it had looked at perpendicular slices (like engineers’ plans and sections). Humans could do better if they could look from different angles at problematic (low-contrast) situations.

We take the 2D slices and do registration and alignment on them to create the 3D stack.  The AI gets the 3D data (which is part of why it takes so long to run).  Actually, we are currently looking into using a 2D machine learning piece plus a separate piece to stitch the results together in the z dimension.  That is showing promising results, but that is on a dataset with much higher resolution in X and Y than in Z.  This dataset is much closer to isotropic.
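For anyone curious what "registration and alignment" means in practice, here is a minimal sketch that only corrects simple x/y drift between consecutive slices using off-the-shelf scikit-image/scipy tools. The real pipeline is far more involved (it deals with distortions, tears, and so on), and the function and array names here are just for illustration:

```python
# Minimal sketch: align consecutive 2D EM slices by translation only,
# then stack them into a 3D volume along z.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation


def build_stack(slices):
    """slices: list of 2D numpy arrays (one per cut), all the same shape."""
    aligned = [slices[0].astype(float)]
    for img in slices[1:]:
        img = img.astype(float)
        # Estimate the x/y offset of this slice relative to the previous one.
        offset, _, _ = phase_cross_correlation(aligned[-1], img)
        # Shift the slice so it lines up with the one above it.
        aligned.append(nd_shift(img, offset))
    # Stack along z to get a 3D volume (z, y, x).
    return np.stack(aligned, axis=0)


# Toy usage with random arrays standing in for real EM slices:
rng = np.random.default_rng(0)
fake_slices = [rng.random((256, 256)) for _ in range(16)]
volume = build_stack(fake_slices)
print(volume.shape)  # (16, 256, 256)
```

Once the slices are stacked like this, z is just another array dimension, which is the 3D data the AI works from.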