IBM's Watson AI and brain mapping

Kof kof, it seems to me that Eyewire will be obsolete soon (2017) because of advancements in AI. I would like some serious discussion about this. This is the first question. Until recently, I thought that this kind of technology was still far, far away in the future.

If you don’t know about IBM’s Watson: it’s an AI that runs on a supercomputer. It’s very powerful; it won at Jeopardy! (an American TV show).

If I understood correctly, what they did is take many known AI algorithms and shove them all together in a single supercomputer, while still being able to respond in seconds… (it won the TV show with fraction-of-a-second response times)

This is not just a gimmick like Deep Blue, which won against Kasparov. They are running trials for use by doctors to help with diagnosis. They are currently developing its ability to process images (X-rays, MRIs, etc…). Yes, that thing will help doctors make diagnoses, that’s how “scary” it is… It’s like Skynet… O_O

I saw somewhere that they were planning to commercialize it in 2017; they would have an open API and a revenue-sharing system (because it’s been trained, not programmed) with interested parties (sorry if I sound like an advert XD, I’m not affiliated with IBM).

a very IBM-like video from IBM about Watson

Wikipedia

Well, it seems to me that something like Watson could reliably map the brain by itself. Seriously, diagnosing cancer and interpreting MRIs seems more difficult…

The second question:
Mapping the brain seems very interesting even for IBM/AI developers, since reverse engineering the human brain will help improve the AI that does the mapping… in a scary feedback loop… I think it’s mainly a matter of pouring billions into supercomputer time. And big industries have a strong incentive to do it. No??? Am I too naive/ignorant or something?
Seriously, I think the Eyewire people should contact IBM…

I hope people from Eyewire will answer my post; using something as powerful as Watson in brain mapping is baffling… The possibility of a feedback loop is… mind-blowing… Ahhhh, Skynet is coming O_O

thank you for reading :smiley:

Hi quantum_immortal,

Thank you for your inquiries! Personally, I’m an avid Jeopardy fan so I’m aware of Watson and its capabilities. It’s pretty interesting stuff. I’ll try my best to answer your thoughts.

First, Eyewire is a product of AI mapping and deep learning. We use image segmentation processes to assign the 3D segments based on the 2D images. You can read on what we use in Eyewire here. As you have experienced in the game, the AI is only so good at identifying connections (or mergers), so we need humans to help finish the job of mapping neurons. We take this data from our players and use it to help train a new, better AI to be used on a future dataset (yay machine learning!). In a sense, by playing Eyewire you are actively helping to make yourself obsolete. It’s a goal of Eyewire’s to one day have this be completely done by a computer; there are 86 billion neurons in a human brain, so we NEED computers!

I don’t think Eyewire will be considered obsolete for quite some time, mainly due to the issue of computer vision. Optical learning is a little bit different from informational learning such as Watson’s DeepQA. Watson has a variety of algorithms that apply to the field of question answering (2010 technical article on DeepQA). For image analysis, we have to train the computer to “see” and process those images. Our lab at Princeton works on machine/deep learning to help out with this optical training. They’re the ones who take the data from Eyewire’s human gameplay submissions and incorporate it into improving algorithms. The Princeton lab and Eyewire HQ are currently involved with a project with IARPA.

Our professor, Sebastian Seung is an active part of the AI world. He recently spoke at an AI symposium at NYU (at which IBM was one of the sponsors). If I recall correctly, he mentioned that yes, the future of AI being involved in our lives is imminent. If you’re interested in reading about Sebastian, connectomics and AI, I suggest this NYT article from last year.

Our office got pretty excited over this article about an AI computer defeating a pro human player at the game of go. I think you might be interested.

I hope this helps answer some of your questions about the use of AI and Eyewire.

1 Like

I hadn’t realized that… I thought it was a desperate attempt to do something with no money. XD (until something better comes along)

Wait, so you are telling me that we don’t even have enough example neural tissue mapped to properly do machine learning on it?

I think Watson is a significant development, more for what it signals than for its technical merits per se, because now AIs will be deployed at an industrial scale.

We do have plenty of neural tissue examples - Eyewire is developed/created from a machine learning process. Our Princeton lab builds convolutional neural networks (CNNs) that do image analysis. That image analysis lines up the 2D planes with each other (so when you switch between the x, y, and z axes they line up). By matching up the 2D planes, it allows us to create volume (and thus understand it in 3D space). It also makes it possible for you to see the 3D segments that match up with the 2D slides using image segmentation (it looks at the pixels and does its best to assign 3D segments to that area; however, it often undercolors or overcolors).
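As a rough illustration (a toy sketch with made-up numbers, not the lab's actual pipeline), turning a CNN-style boundary prediction into 3D segments boils down to something like connected components:

```python
import numpy as np
from scipy.ndimage import label

def segment_volume(boundary_prob, threshold=0.5):
    """Assign 3D segment IDs from a predicted boundary map.

    boundary_prob: 3D array in [0, 1]; high values mean "this voxel
    sits on a cell boundary" (what a trained CNN would output).
    Voxels below the threshold count as cell interior and are grouped
    into segments wherever they touch face-to-face.
    """
    interior = boundary_prob < threshold
    segments, n_segments = label(interior)  # 0 = boundary, 1..n = segments
    return segments, n_segments

# Toy volume: two interior blobs separated by a boundary plane at z = 4.
vol = np.ones((8, 8, 8))      # start out as all-boundary
vol[1:4, 1:7, 1:7] = 0.1      # blob 1: low boundary probability
vol[5:7, 1:7, 1:7] = 0.1      # blob 2
seg, n = segment_volume(vol)
print(n)  # 2: the boundary plane keeps the blobs in separate segments
```

The undercoloring/overcoloring mentioned above corresponds to picking the threshold too high or too low; the real system is far more sophisticated than a single global cutoff.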

We’re constantly striving to make our AI even better so that it can effectively build neurons. Right now, it’s still not very good so we’re continuing to train the computer on data we gather from human submissions (aka Eyewire submissions).

Yes, AI is already present in a lot of large companies’ ad-assignment algorithms so they are constantly expanding the applications, etc. The big contenders in developing AI include not only IBM, but Facebook and Google.

Google DeepMind just had a livestream of its AlphaGo AI playing a human at the game of go. Check it out here: https://www.youtube.com/watch?v=vFr3K2DORc8

1 Like

I’d like to add that the AI that was trained on the Eyewire dataset took months to train on 10 cubes of 100x100x100 voxels. For reference, an Eyewire cube is 256x256x256. We’ve sped up our machine learning by a few orders of magnitude, but it’s still too slow to train on enough info. We’re trying some variations of training techniques as well.
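To put those sizes in perspective, a bit of arithmetic on the numbers above:

```python
train_voxels = 10 * 100 ** 3    # the whole training set: ten 100^3 cubes
cube_voxels = 256 ** 3          # a single Eyewire play cube
print(train_voxels)             # 10000000
print(cube_voxels)              # 16777216: one play cube alone contains
                                # more voxels than the entire training set
```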

You can read the paper from the link below. Look for the methods section under “Segmentation”.

Helmstaedter et al. “Connectomic reconstruction of the inner plexiform layer in the mouse retina.” Nature 500 (2013). doi:10.1038/nature12346

http://sci-hub.io/10.1038/nature12346

(Sci hub is a kind of robin hood of scientific literature distribution, it looks sketchy but it’s not).

3 Likes

I meant having enough cube/solution pairs, not just the raw cubes… So you don’t even have enough problem/solution pairs to train on???

If you don’t, then why isn’t it done backwards? You generate fake cubes from fake neurons, of course done in a way that is indistinguishable from real cubes. This seems considerably simpler than gathering real data…

hum… Can’t you use BOINC for this? Neural nets love parallelism, right?

Ohh, yeah, I knew about sci-hub. The Pirate Bay of academics XD.

We are playing on real cubes from real cells; the only presolved “fake” cubes are in the tutorials. Beyond those, the AI doesn’t know the solution to the cubes. Players play the same cubes and form a consensus. If the consensus touches wall(s) of the cube, the AI spawns more cube(s). If the consensus is wrong, then scouts/scythes/admins fix it by adding or removing segments. I’m guessing after each cell is done they take the data generated by the consensus and feed it into the AI’s training?
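Hypothetically, that consensus step could be sketched like this (toy code and made-up threshold, not Eyewire's actual vote weighting):

```python
from collections import Counter

def cube_consensus(submissions, min_votes=2):
    """Merge several players' traces of the same cube.

    submissions: one set of selected segment IDs per player.
    A segment enters the consensus once at least `min_votes` players
    picked it (the real weighting/flagging is more involved).
    """
    votes = Counter()
    for segments in submissions:
        votes.update(segments)
    return {seg for seg, count in votes.items() if count >= min_votes}

# Three players trace the same cube; segment 7 is one player's merger.
players = [{1, 2, 3}, {1, 2, 3, 7}, {1, 2}]
print(sorted(cube_consensus(players)))  # [1, 2, 3]: the merger is voted out
```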

BOINC is good for crunching data, like Einstein@Home or finding asteroid trajectories, etc. Here I’m not sure how it would be able to help, since the AI doesn’t know the traces. If it did, yeah, it’d be a simple thing of using a crowd-sourced PC array like BOINC and crunching out the data, but it doesn’t, except for the tutorials that have been pre-solved in the lab.

I didn’t know about sci-hub so thanks for that @will. :smile:

I know how eyewire is played.

What I meant is to generate fake, realistic 3D neuronal networks in a computer. Since you are the one that generated them, you already know the solutions. You use the fake neurons to generate the fake cubes, with all the bad quality and distortions that the real data suffer from. You could generate a billion cube/solution pairs like this and train and test an AI on BOINC with them…

They are not fake neurons. They are real neurons from a mouse retina sample, and we don’t already know the solutions; that’s why we trace and/or reap the cubes when mistakes have been made.

So Eyewire exists in a two-fold way - it can reconstruct neurons for scientific analysis while at the same time gathering data to improve the AI’s training. The Princeton lab handles the analysis of all this data. If you’re interested in the technical papers, I highly suggest reading some of their recent publications listed here: http://seunglab.org/publications/ (the 2015 paper “Crowdsourcing the creation of image segmentation algorithms for connectomics” may be of interest to you). They also have links to their open-source software here: http://seunglab.org/software/ (there are some of our machine learning algorithms there).

There are many different endeavors out there with the goal of tracing neurons in the brain (all of them use AI to some degree). Different methods include skeleton reconstructions, full reconstructions (Eyewire), and dense reconstructions (they reconstruct everything inside a cube at once, rather than tracing only one neuron through the cube).

The Allen Institute is another big player in using AI to do brain science: https://www.alleninstitute.org/our-science/brain-science/. They just received a grant to do cell reconstruction today: http://www.bizjournals.com/seattle/blog/health-care-inc/2016/03/allen-institute-for-brain-science-receives-18-7m.html?ana=twt However, their 3D cells look a little bit different than ours (we have more textures/details ;-)).

The NIH has the BRAIN initiative which has one grant currently looking for a program to train neuroscientists in computational neuroscience. They’re involved in using AI to analyze data from the brain (this is a much wider scope than Eyewire’s and includes MRIs etc.).

I’ve only named a few of many recent advances into applying AI to neuroscience. We should start to see even more results in the coming years, but please remember science is slow!

Ps. Here’s an article about IARPA which I mentioned before (we’re a collaborator on the MiCRONS project): http://www.scientificamerican.com/article/the-u-s-government-launches-a-100-million-apollo-project-of-the-brain/

1 Like

:confused: sigh…
It completely flew over your head again? Yes, I know how Eyewire works! Please don’t explain again how Eyewire works -_-

What I meant: why aren’t you making fake ones that are like the real ones? You start from a realistic 3D model of neural nets, you slice them, and you degrade the image quality to be like the real electron microscope data (monochrome, mergers, dislocations, etc…)… Once you have the algorithm right, you can easily generate a billion solution/cube pairs to train neural nets on…
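For concreteness, a toy version of that proposal (everything here is made up for illustration; a real generator would need realistic morphology and EM-style artifacts):

```python
import numpy as np

def fake_cube(size=32, radius=3.0, noise=0.3, seed=0):
    """Make an (image, labels) pair from a synthetic 'neuron'.

    The neuron here is just a straight tube along the z axis, so the
    labels (the solution) are known by construction; the image is the
    labels plus additive Gaussian noise, loosely imitating noisy,
    monochrome EM data. A real generator would need branching
    morphology, mergers, slice dislocations, etc.
    """
    rng = np.random.default_rng(seed)
    z, y, x = np.mgrid[0:size, 0:size, 0:size]
    cy = cx = size / 2
    labels = ((y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2).astype(np.uint8)
    image = labels.astype(float) + rng.normal(0.0, noise, labels.shape)
    return image, labels

image, labels = fake_cube()
print(labels.sum())  # number of voxels inside the tube: the known solution
```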

hum, that’s too technical for me :confused:
I wasn’t asking for the details of how the AI algorithm works. I was asking about its training data.

The go AI needed millions of examples. I’m assuming AI for neuron tracing also needs something that heavy (or worse, since it’s not solved yet…). I’m assuming that indeed, there isn’t enough real data to train the AIs properly… in the whole world… I didn’t get a clear answer on that. Yes or no?

So again, my question is: why don’t you generate fake data that is like the real data? This way you can easily have a billion problem/solution pairs to train the AI on… Then it becomes a number-crunching exercise…

Sorry for repeating myself, but apparently I’ve been misunderstood twice now… -_-

No, it didn’t fly over my head. I just discarded it as a really bad idea, ignored it, and tried to get you to see how this works.

I’m sure that the admins will reply as to why Eyewire is the way it is and why the AI learns the way it does and not through fake data:

Because Dr. Seung and the scientists in his lab want and require real data from real neurons. Having the AI learn from billions of fake examples would probably teach it wrong?

Eyewire has the function of teaching the AI, so maybe the Seung Lab at Princeton could make a BOINC app for that purpose, but I’m guessing it’d still be using real data and not fake. But it also has an equally important (if not more important) purpose: teaching scientists about the various different neuron types, building accurate 3D models of them, and helping scientists understand how neurons work and connect (synapse) with each other. Neurons that fire together wire together, and all that.

Finally someone should tell you that you can catch a lot more bees with honey than with vinegar. :wink:

1 Like

No, there is not enough data out there with the correct solutions/reconstructions.

Based on the several projects I have posted above with the goal of mapping neurons/the brain, obviously there is a demand for good reconstructions. The only fully mapped connectome is C. elegans (a tiny roundworm). Right now, there is a lot of imaging data (EM images, etc.), but not enough manpower to perform reconstruction or correct analysis. (Btw, most of the technical papers I’ve posted have great understandable abstracts/summaries at the top that explain what current research the industry is working on.)

Now, I am aware of the work being done but I do not work directly with the AI. As I’ve said, our Princeton lab is in charge of that research. So perhaps some of the questions I’m raising below may be misguided, but they still are concerns that have been discussed.

In order to backwards-generate, I believe you would need to generate dense reconstructions. Dense reconstructions involve tracing everything in an entire chunk/cube/volume of a dataset. If you used cell reconstruction such as Eyewire’s, we wouldn’t be able to generate the little bits of glia and extra segments in a 2D slide that fill in between neurons. Those troublesome parts are part of what the AI currently has trouble with: deciding whether or not they belong to a branch. Currently, humans would still need to work on generating enough examples of dense reconstruction to create a training set for an AI.

For different sections of the brain, there are different kinds of cells and structural organization. As you’ve experienced on Eyewire, our section of the retina contains several different types of cells. Our scientists are working at classifying these neurons as they are reconstructed. There is another dataset at Seung Lab with neurons that have teeny tiny disconnected (but important) spines that our current retinal AI would have difficulty comprehending.

Another issue is cleaning up slides to the point where they can be traced well enough. If you notice in Eyewire, you don’t see the inner organelles inside cells. We have another dataset where we have the organelles present and the AI is getting trained to ignore those inner workings, but it’s still tricky for some areas. How would you backwards generate those organelles from 3D models so the AI could learn to ignore them?

Nseraf is correct in saying that we are interested in learning as we go with the reconstructions. If we’re going to have to generate reconstructions with partial human power, why not create real data we can analyze while creating more training? As I’ve stated before: “Eyewire exists in a two-fold way - it can reconstruct neurons for scientific analysis while at the same time gathering data to improve the AI’s training.”

1 Like

Bummer …
Let me guess, something like … 1000 in the whole world?

In your opinion, what is more difficult: the problem, or the reverse? You are trying to say that the reverse is even more difficult? I find it hard to believe.

Hum… About fake training data: are you already doing some cheap tricks, like rotating the images? Mirroring them? Tilting the cut plane? Redefining the cube boundaries? From the way our vision works, I don’t think the AI would be able to tell…

I’m afraid I can’t give you a good estimate on those numbers. In Eyewire alone, we’ve completed over 700 cell reconstructions. We estimate Eyewire’s tiny dataset to contain close to 10,000 cells. That’s just a tiny fraction of the 86 billion neurons we have in the brain. As I’ve stated before, Eyewire’s cells are specialized to the cells present in the retina. There are several other projects out there beyond Eyewire working on the same problem so I’m not even sure what those numbers are - just that we all still need to build more solutions.

Remember, the brain is mostly unmapped in terms of connectomics. We know where regions/parts are located but we don’t know the exact terrain (at a cellular/neuron level) and we’re trying to map that out. If we don’t even know what some maps look like, how do we know what type of fake dataset to generate for training an AI? For the moment, sample areas are being mapped to help train the AI. In Eyewire’s case, it is the retina. Also, an important part of connectomics is locating the synapses between cells so we can better learn about cell communication in the brain. We need detailed structures to find these synapses.

I’m sorry but I’m not sure what you’re asking about here. What problem are you referring to?

Yes, we already use a lot of those tricks in order to present Eyewire in its current state to you as a game. I will refer you again to some of the links posted above.
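Those mirror/rotation tricks can be sketched like this (a toy version with hypothetical arrays; the paper describes the actual augmentation scheme):

```python
import numpy as np

def augmentations(image, labels):
    """Yield extra (image, labels) training pairs via cheap symmetries:
    mirror flips and 90-degree rotations in the imaging plane. The same
    transform is applied to both arrays, so each degraded cube still
    matches its solution."""
    for k in range(4):  # rotations by 0, 90, 180, 270 degrees
        rot_img = np.rot90(image, k, axes=(1, 2))
        rot_lab = np.rot90(labels, k, axes=(1, 2))
        yield rot_img, rot_lab
        yield np.flip(rot_img, axis=2), np.flip(rot_lab, axis=2)

img = np.arange(8, dtype=float).reshape(2, 2, 2)
lab = (img > 3).astype(int)
pairs = list(augmentations(img, lab))
print(len(pairs))  # 8 transformed copies from a single cube
```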

The paper sapphiresun posted explaining our processes. Please search for the “Segmentation” section. It talks about the training of the neural network (and includes translations and rotations). It also talks about how we determine voxel connectedness (how likely one pixel is connected to another pixel to make boundaries, etc.).

And the wiki-link about our AI on Eyewire and how it is prepared.
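The "voxel connectedness" idea mentioned above can be illustrated with a toy affinity computation (hypothetical code, not the lab's implementation):

```python
import numpy as np

def affinities(labels):
    """Ground-truth nearest-neighbour affinities from a segmentation.

    For each axis, affinity is 1 where two adjacent voxels carry the
    same non-zero segment ID (connected) and 0 otherwise; this is the
    kind of target a boundary-detecting network is trained to predict.
    """
    affs = []
    for axis in range(labels.ndim):
        left = [slice(None)] * labels.ndim
        right = [slice(None)] * labels.ndim
        left[axis] = slice(None, -1)   # voxel i
        right[axis] = slice(1, None)   # voxel i + 1 along this axis
        a, b = labels[tuple(left)], labels[tuple(right)]
        affs.append(((a == b) & (a > 0)).astype(np.uint8))
    return affs

seg = np.array([[[1, 1, 0, 2, 2]]])  # two segments split by a boundary (0)
ax_affs = affinities(seg)
print(ax_affs[2].ravel())  # [1 0 0 1]: neighbours connect only within a segment
```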

1 Like